| Column | Type | Min length | Max length |
|---|---|---|---|
| id | string | 10 | 10 |
| title | string | 7 | 231 |
| abstract | string | 3 | 2.43k |
| authors | string | 5 | 21.5k |
| published_date | string | 20 | 20 |
| link | string | 33 | 34 |
| markdown | string | 133 | 1.92M |
2309.13496
Stratosphere: Finding Vulnerable Cloud Storage Buckets
Misconfigured cloud storage buckets have leaked hundreds of millions of medical, voter, and customer records. These breaches are due to a combination of easily-guessable bucket names and error-prone security configurations, which, together, allow attackers to easily guess and access sensitive data. In this work, we investigate the security of buckets, finding that prior studies have largely underestimated cloud insecurity by focusing on simple, easy-to-guess names. By leveraging prior work in the password analysis space, we introduce Stratosphere, a system that learns how buckets are named in practice in order to efficiently guess the names of vulnerable buckets. Using Stratosphere, we find wide-spread exploitation of buckets and vulnerable configurations continuing to increase over the years. We conclude with recommendations for operators, researchers, and cloud providers.
Jack Cable, Drew Gregory, Liz Izhikevich, Zakir Durumeric
2023-09-23T23:27:19Z
http://arxiv.org/abs/2309.13496v1
# Stratosphere: Finding Vulnerable Cloud Storage Buckets

###### Abstract.

Misconfigured cloud storage buckets have leaked hundreds of millions of medical, voter, and customer records. These breaches are due to a combination of easily-guessable bucket names and error-prone security configurations, which, together, allow attackers to easily guess and access sensitive data. In this work, we investigate the security of buckets, finding that prior studies have largely underestimated cloud insecurity by focusing on simple, easy-to-guess names. By leveraging prior work in the password analysis space, we introduce Stratosphere, a system that learns how buckets are named in practice in order to efficiently guess the names of vulnerable buckets. Using Stratosphere, we find widespread exploitation of buckets and vulnerable configurations continuing to increase over the years. We conclude with recommendations for operators, researchers, and cloud providers.

CCS Concepts: Information systems → Cloud based storage.
## 1. Introduction

We analyze the 2.1M buckets we find, including those that Stratosphere uncovers across Amazon S3, Google Cloud Storage, and Alibaba Object Storage. Using American Express EarlyBird (Gendez et al., 2018), we identify sensitive data being hosted in 10.6% of public buckets, including a bucket belonging to a Department of Defense contractor. We notify organizations of their exposure and detail the results of our disclosure process. Our improved perspective also allows us to unearth broader ecosystem patterns. We find that AWS S3 exhibits the worst--and continually worsening--security problems: buckets updated on AWS in the past year are on average 4 times more likely to be vulnerable than buckets updated in the last 10 years. In the worst case, 5% of _private_ AWS buckets with readable permissions allow for their permissions to be changed by any unauthenticated user, showing that misconfigurations remain a pressing problem in identified buckets. We further observe evidence that 3% of all vulnerable buckets have already been exploited. Our results highlight that cloud storage buckets continue to be misconfigured in 2021, but that the security community can help operators secure their cloud presence. By releasing Stratosphere as an open-source tool, we hope to enable operators and researchers to more accurately understand and improve cloud security.

## 2. Related Work

Security research has benefited from systems that dramatically increase the performance of Internet data collection and analysis. ZMap (ZMap, 2018) and Masscan (Masscan, 2018) have pushed Internet-wide scanning to the theoretical limit, allowing for the discovery of cryptographic weaknesses (Zalewski et al., 2019; Zalewski et al., 2019) and the uncovering of real-world attacks (Santori et al., 2019; Zalewski et al., 2019).
Snort (Snort, 2020) and Bro (Bro, 2020) have made packet parsing a lightweight and modular operation, allowing for the subversion of botnets (Zalewski et al., 2019; Zalewski et al., 2019) and the detection of malware infections in real time (Zalewski et al., 2019). We build on two bodies of literature to create a system that approaches the theoretical limit of cloud storage scanning to efficiently find vulnerable cloud storage buckets. To find valid bucket names used in cloud storage and evaluate the guessability of bucket names, our work leverages password cracking (Zalewski et al., 2019; Zalewski et al., 2019) and password-strength checking (Zalewski et al., 2019) techniques, which predict and characterize how humans use language to construct strings to protect their data. Furthermore, we leverage a sequence of studies that focus on efficiently scanning the IPv6 address space by developing target generation algorithms (TGAs) that draw upon sets of existing known IPv6 addresses to infer additional addresses (Zalewski et al., 2019; Zalewski et al., 2019; Zalewski et al., 2019). For example, Entropy/IP (Zalewski et al., 2019) uses a Bayesian Network to find common IPv6 address patterns in order to generate new IPv6 addresses that exhibit a similar address structure.

Most similar to the security problems we uncover in our work is a study by Continella et al. (2019), who found that misconfigured buckets could lead to code injection and defacement of websites that use S3 to load web resources. We compare our results to Continella when appropriate. Beyond identifying buckets, past work has investigated security vulnerabilities in other cloud infrastructure (Zalewski et al., 2019; Zalewski et al., 2019; Zalewski et al., 2019), such as Ristenpart et al., who mapped Amazon's internal IP address space in order to exploit VM co-residency, and Somorovsky et al., who found that Amazon control interfaces were vulnerable to XSS attacks. Short blog posts (Zalewski et al., 2019; Zalewski et al., 2019) and videos (Zalewski et al., 2019) discuss the process of setting up and evaluating cloud storage honeypots on AWS S3.

## 3. Evaluating Cloud Storage Scanners

Vulnerable cloud storage buckets can be found using active scanning and passive sources (e.g., passive DNS). While buckets found in passive sources are already exposed to the public, passive DNS is a privileged perspective that only uncovers a small sample of all buckets. Furthermore, passive DNS will likely become less effective in the future as DNS-over-HTTPS and DNS-over-TLS become ubiquitously deployed. Efficient cloud storage scanning allows for quickly discovering a greater number of buckets, such as older vulnerable buckets that do not show up in recent Internet traffic. It is imperative that the security community can not only better understand how and why buckets are left unsecured, but can also quickly notify affected organizations en masse, as cloud providers are inexplicably slow at notifying their customers. We deploy 35 vulnerable AWS storage buckets with varying name complexity (Appendix A) and receive unsolicited traffic within the first 24 hours. However, it takes AWS _4 months_ to notify us that our buckets are vulnerable.

Several groups have proposed solutions for guessing the names of storage buckets. The current state-of-the-art bucket generator is a scanner developed by Continella et al.
(2019), which guesses S3 buckets by generating random 3-4 character sequences and performing random operations of removing a character, duplicating a character, or concatenating a new word from a corpus. There also exist several open source scanners that generate candidate names based on a user-provided word list and template patterns, including Slup (Sleifer et al., 2019), s3enum (Sleifer et al., 2019), S3Scanner (Sleifer et al., 2019), and BucketStream (Bucquet et al., 2019). These open source scanners are "target generators" (Sleifer et al., 2019; Sleifer et al., 2019; Sleifer et al., 2019)--they require a target word to search for candidates, such as an organization name--and are not inherently built to scan globally. s3enum (Sleifer et al., 2019) no longer works, as AWS has removed the side-channel it relies upon. Lastly, there are two public search engines--Grayhat Warfare (Grayhat, 2018) and Public Cloud Storage Search (Grayhat, 2018)--the former being the most well known. There is little public information about how these services uncover buckets.

To understand the limits of existing work, we evaluate Continella et al. and Grayhat Warfare against a ground truth sample of real-world buckets found in passive DNS data sources. We show that while state-of-the-art active scanning methods are efficient at finding valid bucket names, they produce extremely short names and are biased against finding vulnerable buckets. We use our understanding of why and how prior work falls short of finding real-world bucket names to build a system tailored towards efficiently finding vulnerable buckets (Section 4).

### Evaluation Methodology

One approach for evaluating scanning algorithms would be to simply calculate the hit-rate of found buckets compared to generated candidate names. However, this naive metric does not accurately capture whether generated bucket names are characteristic of real-world buckets where security problems are likely to occur. Furthermore, it does not allow us to evaluate black-box solutions like Grayhat Warfare, where we only see found buckets and not any failed lookup attempts. Overall hit-rate also fails to determine whether algorithms are efficient at identifying publicly accessible and "vulnerable" buckets. As such, we further evaluate solutions based on publicly accessible hit rate and vulnerability hit rate.

**Definition of Vulnerable.** We define buckets to be _vulnerable_ if they have publicly accessible sensitive content or have misconfigured permissions. We use American Express EarlyBird (Li et al., 2020), a tool that supplies 60 common sensitive filename patterns (e.g., private keys and database dumps), to detect whether a bucket exposes sensitive data. We further analyze all publicly-available bucket Access Control Lists (ACLs) and classify misconfigured buckets as those with an ACL that allows public write, public delete, or public modification of the ACL itself.

**Passive Data Sources.** To investigate whether existing tools can generate real-world bucket names as well as find vulnerable buckets, we compare generated names against a sample of names found in passive DNS data sources. Amazon Web Services (AWS), Google Cloud Platform (GCP), and Alibaba storage buckets are accessible as cloud-specific subdomains (e.g., mybucket.s3.amazonaws.com). Since most DNS queries are made in cleartext, bucket names are sometimes inadvertently exposed to passive DNS sinks.
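For reference, the mapping from a candidate name to provider-specific endpoints can be sketched as follows; the exact hostname formats and the example Alibaba region are assumptions based on public provider documentation rather than details given in the paper:

```python
# Minimal sketch: map a candidate bucket name to provider-specific URLs.
# Hostname formats and the Alibaba region are assumptions; adjust as needed.

def candidate_urls(name: str, alibaba_region: str = "oss-cn-hangzhou") -> dict:
    return {
        "aws": f"https://{name}.s3.amazonaws.com/",
        "gcp": f"https://storage.googleapis.com/{name}/",
        "alibaba": f"https://{name}.{alibaba_region}.aliyuncs.com/",
    }

print(candidate_urls("mybucket"))
# {'aws': 'https://mybucket.s3.amazonaws.com/', 'gcp': ..., 'alibaba': ...}
```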
We collect 2.2M candidate bucket names from three well-known passive DNS sources--Farsight (Farsight, 2019), VirusTotal (Brandt et al., 2020), and Zetalytics (Zetalytics, 2020)--between October 2019 and June 2020. We additionally search for buckets through Bing via the Azure Cognitive Services API (Bing et al., 2020) using the query site:bucket_hostname, and we collect 11K candidate buckets that are exposed in public GitHub repositories by querying the GitHub BigQuery dataset (Zetalytics, 2020) for files that contain AWS, GCP, and Alibaba bucket URLs. To confirm that a bucket exists for each name we observe, we perform an HTTP GET request to each candidate bucket using ZGrab (Zetalytics, 2020). We decode response status codes: 404 as nonexistent, 403 as private, and 200 as public. In Alibaba Cloud, an unlistable bucket can return a 200 status code if it is a bucket website, so we only consider an Alibaba bucket public if it also allows listing its contents in addition to returning a 200 response. In total, we uncover 532,284 _unique_ valid buckets from passive DNS and search data sources. We do not make any argument that the passive data sources are complete--however, they provide a best-effort approximation of frequently accessed real-world buckets.

**Ethical Considerations.** We follow the best practices set forth by Durumeric et al. (2019) to minimize the impact of our research. We received three inquiries about our scans, but no party requested to opt out. In line with prior studies (Zetalytics, 2020; Zetalytics, 2020), we analyze file metadata (e.g., filename and extension), but never directly view or download files except for the index page of public Alibaba website buckets. In the course of our manual analysis, when we suspect exposed sensitive files based on filenames, we make a best effort to disclose to the bucket owner and/or cloud provider (Appendix B).

### Evaluating State-of-the-Art Scanners

To evaluate the state-of-the-art scanner designed by Continella et al. (2019), we implement the described algorithm, which guesses buckets by generating random 3-4 character sequences and performing random operations of removing a character, duplicating a character, or concatenating a new word from a corpus. We note that since Continella et al. omit all hyper-parameter values used in their algorithm, we assign equal probabilities to the algorithm's parameter of removing a word, concatenating a word, or stopping. Furthermore, we use a naive scanner--as a control experiment--that scans for random alphanumeric combinations between the lengths of 3 and 64. We also evaluate all publicly available buckets collected by the popular cloud storage collection repository Grayhat Warfare (Wamfare, 2019), which does not reveal its method of scanning and keeps a majority of found buckets private behind a paywall. We run the Continella and Random scanners for the month of December 2020 and collect buckets from Grayhat in December.

Using the existing scanning solutions, we discover an additional 189K unique buckets, of which 35K are publicly accessible (Table 1). Continella finds 540 valid public buckets per day, which is an order of magnitude faster than passive sources.
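As a concrete illustration of the validation step described in Section 3.1, a minimal sketch (using the requests library rather than ZGrab, and with a simplified Alibaba listing probe) might look like:

```python
import requests

def classify_bucket(url: str, provider: str) -> str:
    """Classify a candidate bucket as nonexistent, private, or public
    based on its HTTP status code (404 / 403 / 200, as in Section 3.1)."""
    resp = requests.get(url, timeout=10)
    if resp.status_code == 404:
        return "nonexistent"
    if resp.status_code == 403:
        return "private"
    if resp.status_code == 200:
        # Alibaba website buckets can return 200 without being listable, so
        # additionally require a listing response there (simplified check).
        if provider == "alibaba" and "<ListBucketResult" not in resp.text:
            return "private"
        return "public"
    return "unknown"
```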
Continella achieves an average hit-rate of 4.6%, primarily because over 99% of valid names guessed by Continella are 3 or 4 characters long, effectively focusing on only a small, high-density portion of the name space. However, these short valid bucket names are not representative of real-world buckets: 98% of valid buckets in passive data sources are longer than 4 characters. The median length, average length, and median Shannon entropy of Continella-generated names are half those of all names found in passive data sources (Figure 1). The names found by Grayhat are also on average an order of magnitude shorter than those in passive data sources. Furthermore, Continella finds 15 times and 6.4 times fewer sensitive and misconfigured buckets, respectively, than found in passive data sources, which is disproportionately lower when compared to the total number of buckets found.

\begin{table}
\begin{tabular}{l l l r r r r r r r}
\hline \hline
Type & Source & Collection Time & Candidates & Public & Total & Sensitive & Misconfigured & \% Valid & \% Unique Valid \\
\hline
Passive DNS & Farsight & 7 years & 1,254,682 & 95,648 & 773,595 & 3.3K & 4.1K & 61\% & 55\% \\
 & VirusTotal & 8 years & 889,176 & 16,940 & 91,414 & 2.3K & 1.1K & 10\% & 73\% \\
 & Zetalytics & 6 years & 83,104 & 71 & 1,892 & 1 & 6 & 2\% & 68\% \\
Search & Bing & \textgreater{}10 years & 36,093 & 4,180 & 16,998 & 1.1K & 194 & 47\% & 52\% \\
 & GitHub & 5 years & 10,864 & 850 & 4,444 & 115 & 37 & 41\% & 24\% \\
Repository & Grayhat & 2 years & 115,269 & 37,376 & 42,074 & 2.0K & 2.2K & 37\% & 31\% \\
Scanner & Continella & 1 month & 6,919,902 & 16,199 & 125,712 & 208 & 635 & 4.6\% & 21\% \\
 & Random & 1 month & 10,523,161 & 5,002 & 77,985 & 69 & 281 & 0.1\% & 2.0\% \\
\hline
New & LSTM & 1.5 months & 16,310,302 & 13,989 & 259,352 & 219 & 1.0K & 1.6\% & 49\% \\
New & Token PCFG & 1.5 months & 14,706,908 & 13,334 & 286,610 & 300 & 1.5K & 2.0\% & 50\% \\
New & Character PCFG & 1.5 months & 16,913,529 & 9,705 & 185,978 & 107 & 543 & 1.1\% & 20\% \\
New & Token Bigrams & 1.5 months & 10,302,097 & 3,022 & 65,864 & 66 & 327 & 0.64\% & 10\% \\
New & Character 5-Grams & 1.5 months & 40,383,614 & 18,396 & 215,525 & 110 & 528 & 0.53\% & 36\% \\
\hline \hline
\end{tabular}
\end{table}
Table 1. Data Sources—We extract buckets from different categories of data sources to empirically analyze the cloud storage space. At least 50% of valid buckets extracted from passive data sources are unique across all sources, thereby highlighting the need to combine many different data sources. Buckets collected by our active scanners are completely unique compared to all passive and search data sources.

We hypothesize that there may exist a correlation between vulnerable buckets and bucket names that are "harder to guess," which may be contributing to Continella's bias against finding vulnerable buckets. To properly quantify the guessability (i.e., complexity) of bucket names, we use zxcvbn (Zucchet et al., 2017), a popular password analysis tool that provides a fast and low-cost algorithm to match occurrences of words from large corpuses to parts of a string and estimates the minimum number of guesses an attacker needs to successfully guess a name via a dictionary-driven attack. Zxcvbn accounts for bucket names that might appear long and high in entropy due to containing a domain name or a common word which would otherwise be quickly found in a corpus.
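For illustration, this per-name analysis can be reproduced with the Python port of zxcvbn; the extra dictionary words and the example bucket name below are placeholders, and the paper's corpora are far larger:

```python
from zxcvbn import zxcvbn  # Python port of the zxcvbn password analyzer

def analyze_name(name, extra_words=None):
    """Estimate guessability and split a bucket name into corpus vs. random tokens."""
    result = zxcvbn(name, user_inputs=extra_words or [])
    tokens = [
        (m["token"], "random" if m["pattern"] == "bruteforce" else "corpus")
        for m in result["sequence"]
    ]
    return result["guesses"], tokens

guesses, tokens = analyze_name("acme-prod-logs-x7f2", extra_words=["acme"])
print(guesses, tokens)
# e.g. tokens like [('acme', 'corpus'), ('prod', 'corpus'), ..., ('x7f2', 'random')]
```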
We extend zxcvbn's corpus of known words by adding a well-known dictionary of 466K common words. We run zxcvbn against all public and private buckets found in our least-biased sources: the five passive DNS and search sources. We define any part of a bucket name matched to a corpus as a "corpus" token and any part not matched as a "random" token. For example, the bucket names "dogs" and "dogf" would be de-constructed as corpus and corpus+rand, respectively. Our results show that a password analysis tool is successful at finding popular naming conventions and that the ordering of tokens is predictable.

**Tokens.** We find that 60% of bucket names contain at least one corpus token, and 2.3% contain a l33t spelling variation. The majority of tokens (53%) are matched to a dictionary, while 14.2% are common technology names, 10% symbols, 9.4% passwords (an existing zxcvbn corpus), 8.4% human names (an existing zxcvbn corpus), and 4.7% domain names. Corpus tokens are reused, with the top 100 and top 1000 tokens appearing in 25% and 35% of all bucket names, respectively (Figure 3). The most popular corpus tokens are "-" (3.9%), "prod" (0.7%), and "test" (0.6%). However, 40% of bucket names contain at least one completely random token. Some of the tokens we label as random may simply be composed of words not found in any of our corpuses. For example, the most common random tokens are "s" (0.7%), "-2" (0.7%), and "-us" (0.5%). We manually investigate the 1,000 most common "random" tokens and identify that 10.6% contain technology-related terms that did not appear in our original corpus, while the remaining 89.4% are indeed random alphanumeric strings. Furthermore, more than 99.9% of random tokens each occur in fewer than 0.00001% of buckets and 93% occur in only a single bucket name, which suggests that additional dictionaries have limited utility for predicting valid bucket names.

**Naming Patterns.** Naming patterns (i.e., specific concatenations of corpus and random tokens produced by zxcvbn) are prevalent across bucket names and are surprisingly predictable. We uncover 3,440 unique naming patterns, but find that the top 5 account for 60% of buckets and the top 60 account for 90% of buckets. In general, most names are composed of random tokens concatenated with words appearing in one of our corpuses; 92% of buckets contain at least one random token, 60% contain at least one known word, and 53% of buckets contain both. Only 29% of all buckets (Table 2) are a single random character string. The location of tokens in a naming pattern is surprisingly predictable: 68% of tokens that appear more than once appear in the same location (e.g., the first word in a pattern) over 50% of the time, and 43% appear in the same location 100% of the time. For example, the random tokens "ules-" and "zcb_" that appear in 1,159 and 755 buckets, respectively, appear as the second and first token, respectively, in a naming pattern 100% of the time. Random tokens that appear in at least 10 bucket names are ten times more likely to always appear in the same index compared to corpus tokens.

### Summary

Existing active scanning methods are biased towards finding "easy" names because they (1) do not rely enough on existing corpuses--82% of bucket names found by Continella are random strings (i.e., do not contain a word from any of our provided corpuses)--and (2) are oblivious to the structures of real-world names.
Consequently, state-of-the-art scanners underestimate the fraction of vulnerable buckets by up to 3.2 times. However, bucket name patterns are prevalent--with the top 5 patterns accounting for 60% of bucket names--and the locations of tokens in naming patterns are stable: 43% of tokens appear in the same place 100% of the time. In the next section, we introduce an intelligent cloud storage scanning system that uses existing bucket names as training data to find bucket names following a distribution closer to those found in the wild.

\begin{table}
\begin{tabular}{l c c}
\hline \hline
Name Structure & \% Names & AVG \(\log_{10}\)(Guesses) \\
\hline
(rand) & 28.5\% & 13.7 \\
Length 18 & 24.1\% & 18.0 \\
Length 25 & 12.4\% & 25.0 \\
Length 4 & 7.2\% & 4.0 \\
(rand,corpus) & 12.0\% & 8.8 \\
(rand,"test") & 3.8\% & 8.1 \\
(rand,"static") & 3.1\% & 7.5 \\
(rand,"img") & 2.8\% & 7.1 \\
(rand,corpus,rand) & 9.8\% & 17.5 \\
(rand,"upload",rand) & 1.2\% & 11.1 \\
(rand,"asset",rand) & 1.1\% & 11.1 \\
(rand,"log",rand) & 0.9\% & 20.5 \\
(corpus,rand) & 6.5\% & 9.5 \\
("test",rand) & 1.9\% & 7.3 \\
("img",rand) & 1.4\% & 8.3 \\
("static",rand) & 1.3\% & 8.5 \\
(corpus,rand,corpus) & 3.3\% & 11.7 \\
("staging",rand,"appspot.com") & 0.9\% & 23.1 \\
(tech term,"s.",top domain) & 0.9\% & 9.6 \\
(tech term,"s.",dict) & 0.2\% & 8.2 \\
\hline \hline
\end{tabular}
\end{table}
Table 2. Top 5 Bucket Naming Patterns—The top 5 naming patterns account for 60% of buckets. Random strings are the most common naming structures, comprising 28.5% of all bucket names.

Figure 3. Token Popularity—Tokens in bucket names are often re-used, with the top 100 tokens appearing in 25% of all names. Random tokens are much more diffuse; the top 100 only appear in 9.8% of names.

## 4. Stratosphere: A System for Finding Real-World Buckets

In this section we introduce Stratosphere, a system that "looks down into the clouds" by efficiently predicting valid and vulnerable buckets with the same semantic complexity as buckets found in passive data sources. Stratosphere achieves its performance by leveraging existing research in the password space. Using bucket names found in passive DNS sources from Section 3 as training data, Stratosphere is able to achieve a 4 times higher long-term hit-rate, find names that are 4 orders of magnitude more complex, and find 2.4 times more misconfigured buckets than existing solutions. Stratosphere is available at [https://github.com/stanford-esrg/stratosphere](https://github.com/stanford-esrg/stratosphere) under the Apache 2.0 license. We note, and further elaborate on in this section, that its design naturally restricts its use to researchers with a privileged viewpoint to train against (e.g., passive DNS data), thereby limiting an attacker's ability to abuse Stratosphere.

### Leveraging Password Cracking

Cloud storage bucket names encapsulate the process of humans using language to construct strings to protect their data. While prior work has not previously investigated human-generated bucket names, there has been extensive work studying passwords, which are another example of human-named strings that can be guessed to reveal data (Song et al., 2018; Wang et al., 2019). The password space has excelled at using patterns, such as commonly used passwords, to predict ("crack") human-named strings (Wang et al., 2019; Wang et al., 2019; Wang et al., 2019).
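To make the analogy concrete, the following minimal sketch trains a character-level bigram (Markov) model on known bucket names and samples new candidates, in the spirit of the n-gram generators evaluated below; the training names are illustrative and the real models use larger contexts and far more data:

```python
import random
from collections import defaultdict

def train_char_model(names):
    """Build P(next char | current char) counts; '^'/'$' mark start/end."""
    counts = defaultdict(lambda: defaultdict(int))
    for name in names:
        padded = "^" + name + "$"
        for a, b in zip(padded, padded[1:]):
            counts[a][b] += 1
    return counts

def sample_name(counts, max_len=63):
    name, cur = "", "^"
    while len(name) < max_len:
        nxt = random.choices(list(counts[cur]),
                             weights=list(counts[cur].values()))[0]
        if nxt == "$":
            break
        name += nxt
        cur = nxt
    return name

model = train_char_model(["acme-prod-logs", "acme-test", "static-assets-2019"])
print(sample_name(model))
```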
We draw on the similarities between bucket naming patterns and password composition and evaluate three algorithms previously used to successfully guess passwords: Probabilistic Context Free Grammar (PCFG) (Wang et al., 2019), N-Grams (Wang et al., 2019), and Long Short-Term Memory (LSTM) (Wang et al., 2019). We describe below how we adapt each password generator to predict bucket naming patterns. We apply both the PCFG and N-Grams algorithms to characters and tokens, creating a total of five distinct bucket-name generators.

**Character-Level 5-Grams.** Consecutive sequences of characters are not randomly distributed. For example, 60% of bucket names contain at least one English word. Character-level n-grams use prior correlations of consecutive characters to predict the next character--\(P(X_{i+1}\mid X_{i})\) in the bigram case, where \(X_{i}\) is the \(i\)th character, computed over all consecutive character pairs. Similar to how prior work has used character-level n-grams for fast dictionary attacks on passwords (Wang et al., 2019), we create a distribution of bucket-name lengths from our ground truth set and independently sample a candidate bucket length to determine how many characters to generate; we then continuously generate characters until we have reached the given length. We choose \(N=5\) to balance performance and memory usage (Wang et al., 2019).

**Token-Level Bigrams.** Tokens co-occur across bucket names, with 9.9% occurring in at least two bucket names. Token-level bigrams extend character-level n-grams to generate sequences with distributions over consecutive tokens instead of consecutive characters. We split a bucket name into tokens by delimiting on "_", "-", and "." characters, since 39% of buckets from extracted sources contain at least one delimiter token and symbol-delimited tokens follow a distribution similar to zxcvbn's (Figure 3). We construct a distribution of \(P(X_{i+1}\mid X_{i})\), where \(X_{i}\) is the \(i\)th token, over all consecutive token pairs. We find \(N=2\) to achieve the best balance of token expressiveness and memory footprint. To generate sequences of a particular length, we create a distribution of token lengths from our ground truth set and independently sample a candidate bucket token length to determine how many successive tokens to generate. We insert a sampled non-alphanumeric delimiter between each consecutive token pair.

**Probabilistic Context Free Grammar (Character PCFG).** We use existing naming patterns (Table 2) to help predict new concatenations of random tokens (as 92% of bucket names contain at least one random token, and 40% contain only random tokens). A PCFG is a context free grammar that represents grammatical terminals as a distribution of strings. We leverage techniques used by Weir et al. in the password-cracking literature (Wang et al., 2019) to map bucket names to templates of tokens, where consecutive alphabetic characters are grouped by length, consecutive numeric characters are grouped by length, and remaining non-alphanumeric characters are left hard-coded. For example, the bucket name name1234-word-4- is represented as a rule of the form C4N4-C4-N1-. We then guess bucket names by first generating a bucket rule according to a global distribution of bucket rules and then generating each contiguous alphanumeric token according to a local distribution of tokens that conform to that terminal variable (e.g., C4).
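To make the rule-based generation concrete, here is a minimal sketch of the Character PCFG idea (rule extraction plus two-level sampling); the helper names and the tiny training set are illustrative, not Stratosphere's implementation:

```python
import random, re
from collections import Counter, defaultdict

def to_rule(name):
    """Map a name to a template: alpha runs -> C<len>, digit runs -> N<len>,
    other characters kept literally (e.g., 'name1234-word-4-' -> 'C4N4-C4-N1-')."""
    parts = re.findall(r"[a-zA-Z]+|[0-9]+|[^a-zA-Z0-9]", name)
    rule, terminals = "", defaultdict(list)
    for p in parts:
        sym = f"C{len(p)}" if p.isalpha() else f"N{len(p)}" if p.isdigit() else p
        rule += sym
        if p.isalnum():
            terminals[sym].append(p)
    return rule, terminals

def train(names):
    rules, fills = Counter(), defaultdict(Counter)
    for n in names:
        rule, terminals = to_rule(n)
        rules[rule] += 1
        for sym, toks in terminals.items():
            fills[sym].update(toks)
    return rules, fills

def generate(rules, fills):
    # First sample a rule by frequency, then fill each terminal by frequency.
    rule = random.choices(list(rules), weights=list(rules.values()))[0]
    out = ""
    for sym in re.findall(r"[CN]\d+|.", rule):
        if sym[0] in "CN" and sym[1:].isdigit():
            out += random.choices(list(fills[sym]),
                                  weights=list(fills[sym].values()))[0]
        else:
            out += sym
    return out

rules, fills = train(["name1234-word-4-", "acme-prod", "logs2020-bak"])
print(generate(rules, fills))
```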
**Token PCFG.** We use existing naming patterns (Table 2) to help predict new concatenations of existing tokens (as the top 1000 tokens appear in 35% of all bucket names). We modify the Character PCFG to create templates of tokens using the patterns in Section 3. For example, the bucket name "name1234-word-jpg-4-" is represented as a rule of the form "<other>-<dictionary word>-<file extension>-<number>-". The token types "<other>", "<dictionary word>", and "<file extension>" each store a frequency list of every seen token of that type. Each rule is also stored in a frequency list. To generate a bucket, we then generate a rule at random according to its frequency and, for each token type, sample a token of that type according to its frequency.

**LSTM (RNN).** A Long Short-Term Memory (LSTM) recurrent neural network (RNN) is a neural network that generates sequences of predicted elements and is able to find hidden patterns that might not have been explored in Section 3. RNNs have been used to generate password sequences in prior work (Wang et al., 2019). We use the LSTM to generate sequences of text that are similar to our ground-truth set of buckets. The LSTM, given a sequence of characters in the format described above, outputs a probability distribution over the following character. Our generator then draws a character from this multinomial distribution, appends the character to the current candidate name, and repeats the generation process. Finally, when a termination character is predicted, the generator submits this candidate name and checks that the name has not been validated before (Section 3). This model implicitly encodes more state than a character n-grams generator because it considers the full preceding sequence, rather than just the previous token, when making predictions. We leverage Keras (Keras, 2019), a library on top of TensorFlow, for building and training the LSTM.

### Stratosphere Design

Stratosphere is composed of a three-stage pipeline: extraction, validation, and generation. During extraction, all names observed in our passive DNS and search sources (Section 3.1) are fetched. During validation, all extracted names are checked across the AWS, GCP, and Alibaba clouds using the process described in Section 3.1. During generation, the user chooses a generation algorithm (or ensemble of algorithms). The chosen algorithm uses the previously validated names as training data to train itself. Once trained, batches of bucket candidates are generated, and the validation and generation steps are repeated. Stratosphere's performance is highly dependent on the quality of its training data. We note that two of our primary passive data sources (Farsight and VirusTotal) only provide free access to their data to verified academics. Consequently, the costs are higher for malicious parties to gain access to high-quality training data, thereby limiting an attacker's ability to abuse Stratosphere.

### Performance Evaluation

We evaluate five different versions of Stratosphere, each using a different individual generating algorithm from Section 4.1. We refer to each version of Stratosphere by its generator name. We evaluate each version of Stratosphere across three metrics: hit-rate, time (to generate valid candidates and train), and complexity of bucket names found. We discover that Token PCFG achieves the highest hit-rate and finds the most complex bucket names.
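The extract/validate/generate loop can be illustrated with a toy, self-contained sketch; the stub validator and the SuffixGenerator stand-in below are purely illustrative (a real deployment would plug in the HTTP validator from Section 3.1 and one of the generators described above):

```python
import random

def validate(name):
    """Stub validator: in Stratosphere this is an HTTP check against AWS, GCP,
    and Alibaba (Section 3.1); here we simulate it with a fixed 'existing' set."""
    existing = {"acme-prod", "acme-test", "acme-logs-2020"}
    return "public" if name in existing else "nonexistent"

class SuffixGenerator:
    """Toy stand-in for a real generator (Token PCFG, LSTM, ...): it learns
    prefixes from validated names and appends common suffixes."""
    def train(self, names):
        self.prefixes = {n.split("-")[0] for n in names} or {"test"}
    def generate(self, k):
        suffixes = ["prod", "test", "dev", "logs-2020", "backup"]
        return [f"{random.choice(sorted(self.prefixes))}-{random.choice(suffixes)}"
                for _ in range(k)]

def run_pipeline(seed_names, generator, batch_size=20, iterations=5):
    known = {n: validate(n) for n in seed_names}           # extraction + validation
    valid = {n for n, s in known.items() if s != "nonexistent"}
    for _ in range(iterations):
        generator.train(valid)                             # generation: retrain, ...
        for cand in generator.generate(batch_size):        # ... then guess a batch
            if cand in known:
                continue
            known[cand] = validate(cand)                   # validate fresh guesses
            if known[cand] != "nonexistent":
                valid.add(cand)
    return valid

print(run_pipeline(["acme-prod"], SuffixGenerator()))
```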
Figure 4. Stratosphere Design—Stratosphere consists of a three-stage pipeline that allows it to use existing found buckets to predict new buckets with similar semantic structure.

**Hit-rate.** We first consider how often each generator's guesses are correct (Figure 5). We note that, across all iterations, we do not allow bucket names to repeat, and thus all buckets collected by our scanners are unique compared to all passive and search data sources. During the first generation iteration, our Token Bigrams generator finds the most valid buckets (11%). However, the generator's accuracy immediately decays. Within 150 iterations, accuracy across all generators plateaus between 0.5% (Token Bigrams) and 2.5% (Token PCFG). Though the neural network (LSTM) is the most complex scanning algorithm, it does not seem to improve, implying that it learns all latent patterns within the first 10 iterations. In total, the five generators discover 599,016 buckets, of which 393,598 are found by only one generator across all passive, search, repository, and scanning sources (which we consider a "unique valid bucket"). Of those, 39,846 buckets (7%) are publicly accessible. The LSTM and Token PCFG discover the most unique buckets (50%), while the LSTM and the Character-Level 5-Grams have the most stable asymptotic behavior. Compared to Continella, all generators except Token Bigrams achieve a hit-rate 1.5 to 4 times larger once 3M candidates are guessed, implying that certain generators will be more successful at finding more buckets over time. Furthermore, Token PCFG finds 1.4 and 2.4 times more sensitive and misconfigured buckets, respectively, than Continella.

Figure 5. Generator Hit-Rate Over Time—During the first generation iteration, Token Bigrams finds the most valid buckets (11%). Over time, all generators' accuracy plateaus below 2.5% (i.e., fewer than 250 buckets per 10,000).

**Time.** We calculate the core time that each generator takes to generate 10K valid bucket names over the course of one month (Figure 6(a)). Character 5-Grams is the fastest at generating valid candidates, generating a total of 146K valid buckets in the algorithm's first 100,000 seconds. Unsurprisingly, the LSTM--being a neural network--takes the most time to generate a valid candidate, producing only 10,500 valid candidates in the same time period. We further evaluate the process time to generate valid bucket guesses _with_ re-training and updating the model every 10K candidates (Figure 6(a)), and find that it adds nearly an order of magnitude more time to generate the same number of candidates (i.e., Character 5-Grams generates 15.2K valid buckets in 1M seconds). The hit-rate, generation time, and training time provide tradeoffs around how often a generator's model should be updated and the number of guesses each generator should make--a lower hit-rate but faster generation rate can put greater stress on the network, for example.

**Bucket Name Complexity.** As discussed in Section 3.2, while Continella initially appears to have a high success rate at guessing buckets, this is because it guesses short, simple bucket names, effectively focusing on only a small, high-density portion of the name space. When running all scanners for one month, all five of the generators find more complex names, with an average and longest length 10 characters and 17 characters longer than Continella's, respectively.
Token PCFG generates the most complex valid bucket names of all generators and existing cloud-storage scanning methods (Figure 7(a)). It is not surprising that a generator that combines the use of a grammar and a corpus is able to find more complex names, as 60% of names found in passive sources make use of a corpus token. When our scanners fail to find complex names, it is typically due to their inability to guess the "random" component of bucket names; the median and 90th percentile of all random tokens (i.e., the part of a bucket name that was not matched to any corpus according to zxcvbn) found by active scanning are 5 and 15 orders of magnitude less complex than the random tokens found in passive data sources, respectively. To test if failing to guess the random component of a name is the primary reason active scanning methods are unable to find complex names, we replace all random tokens (i.e., Table 2) with a constant, while keeping the rest of the bucket name intact (i.e., <const>-"test"), thereby creating constant-random patterns. Token PCFG is able to find at least one valid bucket name for 76% of all constant-random patterns found in the passive data sources that occur more than once, and for 93% of constant-random patterns that occur at least 100 times (i.e., greater than 0.01% of all found bucket patterns). Continella and Grayhat, in comparison, are able to identify 61% and 67% of patterns that occur more than once, respectively.

Figure 6. Generation Time—Character 5-Grams is the fastest at generating valid candidates (without training), generating a total of 146,000 valid candidates in 100,000 seconds. Training time adds nearly an order of magnitude more time to produce the same number of valid buckets.

Figure 7. Guessability of Bucket Names—(a) The Token PCFG algorithm finds the most complex bucket names (over three orders of magnitude more complex than Continella). (b) When removing all random tokens in names, all five new scanning approaches find a similar number of complex names compared to Farsight, while existing scanning methods and black-box repositories still fail at guessing the complex structures of names.

We also analyze the guessability distribution of constant-random patterns (Figure 7(b)), and find that existing work fails to find a significant number of complex constant-random patterns (up to 4 orders of magnitude fewer). All five of our generators find nearly as many complex names as Farsight (the passive data source with the most complex names), and four of the five algorithms even find up to an order of magnitude more complex (i.e., guessability greater than \(10^{4}\)) constant-random patterns than Farsight. These results suggest that, with time, active scanning methods that closely model bucket name patterns (e.g., Token PCFG) are in fact searching for the correct bucket-name patterns, but require more time to accurately guess random components.

### Limitations of Predicting Bucket Names

To further understand the fundamental limitations of cloud-storage active scanning, such as the ability to find complicated bucket names that are likely to be more vulnerable, we investigate the theoretical limits of active scanning and compare them to Stratosphere with Token PCFG. Bucket names found in passive DNS and search sources are typically hard to guess randomly: bucket names are a median 13 characters long, the median Shannon entropy is 50 bits, and the median guessability is \(10^{11}\) guesses (using zxcvbn's guessability heuristic [71]).
Using a dictionary-driven attack with no prior knowledge of patterns, guessing a single name of median guessability would take approximately 7 hours on a single Intel i5 core (Zhou et al., 2017). Guessing the easiest 50% of names (Figure 1) would take 353.4 years, assuming no network overhead. However, understanding naming patterns can simplify the search space by 9 orders of magnitude, as only 29% of bucket names are truly random (Table 2). We recalculate the zxcvbn heuristic assuming that the pattern is known beforehand (i.e., removing the probability of a token occurring in a certain pattern index) and find that the median guessability becomes \(10^{4}\). In other words, understanding patterns would shorten the time to guess the easiest-to-guess 50% of names from 353.4 years to a mere 4 minutes. Using the hit-rate (0.02%) at which Stratosphere + Token PCFG finds bucket names equivalent to the median guessability of passive sources (\(10^{11}\)), it would take Stratosphere + Token PCFG 2.4 minutes, assuming no network overhead, to guess the easiest-to-guess 50% of names found in all of our passive data sources.

### Summary

We present Stratosphere, the first cloud-storage scanning system that uses previously discovered bucket naming patterns to search the entire cloud-storage name space and achieve a higher long-term hit-rate (up to 4 times) than existing active scanners. Though the random components of complex bucket names create a fundamental limitation for active scanning, Stratosphere + Token PCFG, unlike existing scanners, is still able to find 93% of all popular naming patterns and nearly the same number of hard-to-guess names as Farsight when ignoring random tokens, likely approaching the fundamental limits of what a scanner can achieve. Finding complex names also allows Stratosphere + Token PCFG to find 1.4 and 2.3 times more sensitive and misconfigured buckets, respectively, than Continella when scanning for an equivalent time.

## 5. Security Analysis of Found Buckets

We combine all 2.1M valid cloud-storage buckets found across our eight data sources and five variations of Stratosphere to analyze the security posture of the cloud storage ecosystem. By quantifying the exposure of sensitive files, misconfigured bucket permissions, and the active exploitation of storage buckets, we discover that previous work (Zhou et al., 2017) underestimates the vulnerability of cloud storage by up to 5.8 times.

### Exposed Sensitive Files

Of the 2.1M valid buckets we find, 173K (13%) have publicly listable files. However, this does not directly indicate a vulnerability--some buckets may intentionally allow public access. For example, buckets used to host public websites or to share public data sets are typically public. To better understand whether public buckets pose a security risk, we investigate the types of files exposed by collecting file metadata for the first 100K files in each bucket. We use American Express EarlyBird (Farsight, 2018) (Section 3) to detect whether sensitive data is exposed. In total, 10.6% of public buckets host sensitive data--5.3 times more than reported in previous work (Zhou et al., 2017)--of which 6% contain backup files, 4% contain database dumps, and 3.6% contain cryptographic keys. In aggregate, we find hundreds of thousands of sensitive files: 14.4K private keys (.p12, .key, or .pfx), 683K SQL dumps (.sql or .sql.gz), and 99.4K backups (.bak).
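A simplified version of this filename screening is sketched below; the patterns are illustrative examples in the spirit of EarlyBird's rules, not its actual rule set:

```python
import re

# Illustrative sensitive-filename patterns (EarlyBird's real rule set is larger).
SENSITIVE_PATTERNS = [
    (re.compile(r"\.(p12|pfx|key|pem)$", re.I), "private key"),
    (re.compile(r"\.(sql|sql\.gz)$", re.I), "database dump"),
    (re.compile(r"\.bak$", re.I), "backup"),
    (re.compile(r"(^|/)\.env$", re.I), "credentials file"),
    (re.compile(r"id_rsa$", re.I), "SSH key"),
]

def flag_sensitive(filenames):
    """Return (filename, category) pairs for names matching a sensitive pattern."""
    hits = []
    for name in filenames:
        for pattern, category in SENSITIVE_PATTERNS:
            if pattern.search(name):
                hits.append((name, category))
                break
    return hits

print(flag_sensitive(["backups/site.sql.gz", "img/logo.png", "certs/server.key"]))
# [('backups/site.sql.gz', 'database dump'), ('certs/server.key', 'private key')]
```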
By manually analyzing filenames, we discover 10 notably sensitive buckets that clearly belong to a private company or organization, including a travel agency publicly revealing over 8K travel documents (including passports), a popular messaging application's repository of 20K user uploads, a Point of Sale restaurant management app's collection of production RSA private keys, an identity protection company's customer invoices, and a Department of Defense contractor's internal documentation. We further detail the disclosure process for the 10 buckets in Appendix B.

### Misconfigured Bucket Permissions

Beyond simply allowing public access, buckets also frequently have vulnerable permissions that allow attackers to delete or read files in what would otherwise be a private bucket. We analyze all publicly available ACLs (23.54%, 21.37%, and 0.50% of AWS, GCP, and Alibaba's public buckets, respectively) and find that 13%, 6%, and 57% of AWS, GCP, and Alibaba buckets, respectively, allow writing new content or deleting existing files (Table 3). Further, 5% of private AWS buckets with readable permissions (636 buckets) allow their permissions to be changed by the public, thus making their private status irrelevant. Such unintentional exposure of files is only found in AWS, and is likely due to AWS's offering of over 100 unique actions/permissions (Farsight, 2018) compared to GCP's 14 (Zhou et al., 2017). Alibaba's ACL permission model prevents the problem by allowing only 3 permission states: private, public-read, and public-read-write (Farsight, 2018). We show the categories of buckets that are most often misconfigured in Figure 8, using a categorization method detailed in Appendix C. Overall, previous work (Zhou et al., 2017) underestimated the number of misconfigured buckets by 5.8 times.

Vulnerable permissions are a worsening problem for AWS--the fraction of vulnerable public buckets (i.e., buckets that allow writing new content, deleting files, or changing the bucket's ACLs) in our study increases over time for Amazon while remaining relatively stable for GCP (Figure 9). In AWS, buckets that were first or last updated within the past year are on average 1.6 and 4 times more likely to be vulnerable compared to buckets created or updated 5 and 10 years ago, respectively. This may be due in part to Amazon's permission scheme becoming more complex over that time period (Zhu et al., 2018). We notice a dip in vulnerable buckets in 2017-2018 that coincides with the launch of Amazon Macie, a service that applies machine learning to detect sensitive information exposed in public buckets (Brock et al., 2018), and a cluster of S3 security articles written on AWS's blog (Zhu et al., 2018; Zhu et al., 2018). Nonetheless, within a year the fraction of vulnerable buckets begins to rise again and becomes more prevalent in 2020, despite AWS's launch of other S3 security features in late 2018 (Brock et al., 2018; Brock et al., 2018). We note that many factors likely contribute to the apparent increase in vulnerability, including the decreased barrier of entry to using cloud services.

### Exploited Buckets in the Wild

Buckets with misconfigured permissions suffer active exploitation. To estimate the fraction of buckets that have been written to without authorization, we look for evidence of scanning campaigns that upload identical files across buckets.
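One way to surface such campaigns, sketched below, is to group listed objects by ETag (for simple, non-multipart uploads the ETag is typically just the MD5 digest of the object's contents) and look for identical files repeated across many unrelated buckets; the bucket names, keys, and hashes here are illustrative:

```python
from collections import defaultdict

def group_by_etag(listings):
    """listings: mapping of bucket name -> list of (object_key, etag) pairs,
    e.g. parsed from each bucket's ListBucketResult XML."""
    buckets_per_etag = defaultdict(set)
    for bucket, objects in listings.items():
        for key, etag in objects:
            buckets_per_etag[etag].add(bucket)
    # ETags that recur across many distinct buckets suggest a bulk upload campaign.
    return sorted(buckets_per_etag.items(), key=lambda kv: len(kv[1]), reverse=True)

listings = {
    "acme-prod":  [("README_TO_FIX.txt", "d41d8cd98f00b204e9800998ecf8427e")],
    "foo-backup": [("README_TO_FIX.txt", "d41d8cd98f00b204e9800998ecf8427e")],
    "bar-logs":   [("app.log", "0cc175b9c0f1b6a831c399e269772661")],
}
for etag, buckets in group_by_etag(listings)[:1]:
    print(etag, sorted(buckets))
# d41d8cd98f00b204e9800998ecf8427e ['acme-prod', 'foo-backup']
```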
Concretely, we manually explore the top 100 file hashes (appearing in 45K buckets), using ETags (a unique and deterministic file identifier that is often simply the MD5 digest of file contents (Zhu et al., 2018; Zhu et al., 2018)) that appear as a popular Google search result. We discover six unique messages left by scanners intended to alert owners that their bucket is publicly writable (i.e., "white-hat messages") and six unique messages containing generic content (e.g., an empty file or the word "test"). Message uploads are often clustered around specific dates--the top 5 days with the most warning messages uploaded across 10 years account for 54.6% of all warning messages--suggesting that messages are left as a result of bulk scans. We observe a total of 7,475 warning messages uploaded to 4,769 public buckets, with 99.8% belonging to AWS S3. In other words, 3% of all public buckets have already had their public-writability vulnerability exploited. Surprisingly, 46% of bucket owners do not notice the exploitation of their bucket and continue to upload files (ranging from 1 week to 8 years later).

Figure 8. Misconfiguration rates by bucket category—Buckets storing primarily logs are the most likely to be publicly-writable, while buckets containing documents are the most likely to allow reading objects.

Figure 9. Vulnerability of buckets over time—Buckets created or last updated on AWS within the past year are on average 4x more likely to be vulnerable than buckets created or updated 10 years ago. However, no such pattern exists on GCP.

\begin{table} \begin{tabular}{l c c c c c c} \hline \hline & \multicolumn{2}{c}{Amazon AWS} & \multicolumn{2}{c}{Google Cloud} & \multicolumn{2}{c}{Alibaba} \\ \cline{2-7} & \% Public & \% Private & \% Public & \% Private & \% Public & \% Private \\ \hline Read Permissions (Bucket) & 23.54\% (25,587) & 1.05\% (11,753) & 21.37\% (13,539) & 25.68\% (114,817) & 0.50\% (123) & 0.03\% (92) \\ \hline List (Objects) & 100.00\% (25,587) & 0.00\% (0) & 100.00\% (13,539) & 0.00\% (0) & 100.00\% (123) & 0.00\% (0) \\ Read (Objects) & 73.86\% (18,899) & 4.90\% (576) & 76.02\% (102,929) & 1.01\% (1,157) & 100.00\% (123) & 0.00\% (0) \\ Write (Objects) & 13.40\% (3,429) & 4.82\% (566) & 6.04\% (818) & 0.06\% (66) & 56.91\% (70) & 0.00\% (0) \\ Change Permissions (Bucket) & 8.42\% (2,154) & 5.41\% (636) & 3.46\% (469) & 0.00\% (0) & 0.00\% (0) & 0.00\% (0) \\ Delete (Objects) & 13.40\% (3,429) & 4.82\% (566) & 5.35\% (725) & 0.00\% (0) & 56.91\% (70) & 0.00\% (0) \\ \hline \hline \end{tabular} \end{table} Table 3. Vulnerable Permission Configurations—Buckets with readable permissions in AWS, compared to GCP and Alibaba, are most likely to contain vulnerable permissions (i.e., buckets that allow writing new content, deleting files, or changing the bucket’s ACLs). Most concerning, 5.41\% of private AWS buckets with readable permissions are configured to allow anyone to change the bucket ACL. (Note: Read permission percentages are out of the total number of public and private buckets, while other permission percentages are only out of buckets that allow read permissions.)
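The permission categories in Table 3 can be enumerated through the providers' own APIs. Below is a minimal sketch (for AWS) of how the anonymous AllUsers grants of a bucket ACL might be retrieved with boto3 and mapped onto the vulnerable configurations above. The bucket name is a placeholder, and the call only succeeds for the minority of buckets whose ACL is itself publicly readable.

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Unauthenticated client, so we only see what an anonymous attacker would see.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"

def anonymous_grants(bucket: str):
    """Return the set of permissions granted to everyone on `bucket`.

    Raises botocore.exceptions.ClientError (AccessDenied) when the ACL
    itself is not publicly readable.
    """
    acl = s3.get_bucket_acl(Bucket=bucket)
    return {
        grant["Permission"]  # READ, WRITE, READ_ACP, WRITE_ACP, FULL_CONTROL
        for grant in acl["Grants"]
        if grant["Grantee"].get("URI") == ALL_USERS
    }

# Hypothetical example bucket name.
perms = anonymous_grants("example-bucket-name")
if {"WRITE", "FULL_CONTROL"} & perms:
    print("publicly writable (objects can be added or deleted)")
if {"WRITE_ACP", "FULL_CONTROL"} & perms:
    print("bucket ACL can be rewritten by anyone")
```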
However, buckets warned in 2019 or years prior are statistically significantly less likely (up to 7 times) to turn private or be deleted than non-warned bucket. To further analyze bucket exploitation, we deploy bucket honeypots on the AWS, GCP, and Alibaba clouds (Appendix A); buckets see unsolicited traffic within the first 24 hours of being created. Notably, only four months into our study do we receive a warning--only from AWS--that our honeypots are publicly accessible. From October 2020 to January 2021, we receive a total of 563 requests to our honeypot buckets, with AWS buckets seeing over 4x the amount of traffic compared to GCP and Alibaba. The distribution of source traffic is different than the breakdown of IPv4 web scanning (Zhu et al., 2019): most notably, China leads large IPv4 scanning campaigns but is completely absent from scanning serverless storage. Further, 11% of bucket requests originate from known Tor exit nodes. Despite our honeypot buckets being writeable, no sources attempt to write to any of the buckets. Nonetheless, many sources do attempt to enumerate permissions, sending a total of 47 and 5 requests to the AWS and GCP buckets, respectively. Based on request user agents, we observe two instances in which sources attempt to manually view open accessible files after discovering a public bucket. ### Summary Across the three cloud providers, we discover the most popular cloud, AWS, to be the most vulnerable to insecure user configurations, and its vulnerability to be 5 times worse compared to previous work (Zhu et al., 2019). Moreover, buckets with vulnerable permissions are only becoming more prevalent on AWS, with buckets created in 2020 being 4 times more likely to be vulnerable compared to buckets created or updated 10 years ago. We see that security interventions (e.g., AWS Macie and warning buckets) do decrease the fraction of vulnerable buckets, but only in the short-term. ## 6. Discussion and Conclusion Misconfigured storage buckets continue to cause catastrophic data leaks for companies. Our results indicate that attackers are actively scanning for vulnerable buckets and that existing defensive solutions are insufficient for uncovering problems before attackers do. We show that while state-of-the-art solutions appear to initially have high hit-rates for guessing buckets, they do not effectively find the types of buckets that are misconfigured or contain sensitive data. Vulnerable buckets tend to have more complex names than previous scanners could uncover. Despite insecure buckets having more complex names, many buckets are composed of multiple tokens, incorporate common English words, and are "human-readable", which allows us to more efficiently predicting the names of buckets. We show that ML-driven approaches can uncover a significant number of buckets, but that all solutions are fundamentally limited by the frequent inclusion of opaque, random identifiers that are difficult to guess by any algorithm. Nonetheless, we stress that cloud storage security should not rely on hard-to-guess names, but rather rely on operators securely configuring bucket permissions. When considering a more comprehensive set of buckets, we find that insecure buckets are much more common than previously believed. Our perspective also helps identify that Amazon S3 buckets have considerably worse security than other providers. This is very likely due to Amazon's particularly complex permissions model. 
We encourage Amazon to consider either simplifying their permissions model or providing better tools to assist users in uncovering security problems and inconsistent configurations. We hope Stratosphere will help researchers more accurately and comprehensively understand and improve the cloud storage ecosystem, but emphasize that cloud providers need to step up in providing more effective means for keeping bucket contents secure.

###### Acknowledgements.

The authors thank Deepak Kumar, Kimberly Ruth, Katherine Izhikevich, Tatyana Izhikevich, and the Stanford University Empirical Security Research Group for providing insightful discussion and comments on various versions of this work. We also thank VirusTotal, Farsight, and Zetalytics for providing academic access to their APIs. This work was supported in part by Google, Inc., NSF Graduate Fellowship DGE-1656518, and a Stanford Graduate Fellowship.
2303.17873
Continuous Lebesgue measure-preserving maps on one-dimensional manifolds: a survey
We survey the current state-of-the-art about the dynamical behavior of continuous Lebesgue measure-preserving maps on one-dimensional manifolds.
Jozef Bobok, Jernej Činč, Piotr Oprocha, Serge Troubetzkoy
2023-03-31T08:09:43Z
http://arxiv.org/abs/2303.17873v1
# Continuous Lebesgue measure-preserving maps on one-dimensional manifolds: a survey

###### Abstract.

We survey the current state-of-the-art about the dynamical behavior of continuous Lebesgue measure-preserving maps on one-dimensional manifolds.

Key words and phrases: Interval map, circle map, Lebesgue measure-preserving, periodic points, generic properties
2020 Mathematics Subject Classification: 37E05, 37A05, 37B65, 37C20

## 1. Introduction

Let \(M\) denote a compact connected one-dimensional manifold, namely the _unit interval_, \(I:=[0,1]\), or the _unit circle_, \(\mathbb{S}^{1}\). Define \(C(M)\) to be the set of continuous maps of \(M\). Let \(\lambda\) denote the _Lebesgue measure_ on the underlying manifold. Our survey will focus on the discussion of topological and dynamical properties in the space \(C_{\lambda}(M)\subset C(M)\) of Lebesgue measure-preserving continuous maps of \(M\) with the metric of uniform convergence; the space \(C_{\lambda}(M)\) is a complete metric space, see [12, Proposition 4]. Our particular interest is in Lebesgue measure-preserving continuous maps on \(M\) which are not necessarily invertible, that is, the case not usually studied in Ergodic Theory. The class \(C_{\lambda}(M)\) contains a very large spectrum of maps; on the one hand nowhere differentiable ones, or even maps without a finite or infinite one-sided derivative [10], and on the other hand many piecewise monotone maps, many piecewise smooth maps, and of course the maps \(\operatorname{id}\) and \(1-\operatorname{id}\). Furthermore, \(C_{\lambda}(\mathbb{S}^{1})\) also contains all circle rotations. For a general compact manifold \(\mathbb{M}\), let \(H_{\lambda}(\mathbb{M})\) denote the space of Lebesgue measure-preserving homeomorphisms of \(\mathbb{M}\), which is again a complete metric space when equipped with the uniform metric. In the setting of volume preserving homeomorphisms in dimension \(1\), the dynamical behavior is simple and thus not of much interest. However, there are some similarities between the dynamics of higher-dimensional homeomorphisms and one-dimensional continuous maps, so we will mention them throughout the article. There is a survey book by Alpern and Prasad on the dynamics of generic volume preserving homeomorphisms [4], thus we only briefly mention some such results for comparison with \(C_{\lambda}(M)\). Our choice of \(C_{\lambda}(M)\) (and \(H_{\lambda}(\mathbb{M})\)) is motivated by the fact that they are one-dimensional versions of volume-preserving maps, or more broadly, conservative dynamical systems; ergodic maps preserving Lebesgue measure are the most fundamental examples of maps having a unique physical measure. Since generic maps in \(C_{\lambda}(M)\) are weakly mixing [17], the Ergodic Theorem implies that for a generic map in \(C_{\lambda}(M)\) the closure of a typical trajectory has full Lebesgue measure, thus the statistical properties of typical trajectories can be revealed by physical observations. Or as Karl Petersen says in the introduction of [47]: "Measure-preserving systems model processes in equilibrium by transformations on probability spaces or, more generally, measure spaces. They are the basic objects of study in ergodic theory."
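As a concrete, standard example of the breadth of \(C_{\lambda}(I)\) mentioned above, consider the full tent map \(T(x)=1-|2x-1|\). For any \(a\in[0,1]\) one has
\[T^{-1}([0,a])=[0,a/2]\cup[1-a/2,1],\qquad\lambda\big(T^{-1}([0,a])\big)=\frac{a}{2}+\frac{a}{2}=a=\lambda([0,a]),\]
and since the intervals \([0,a]\) generate the Borel \(\sigma\)-algebra, \(T\) preserves \(\lambda\); it is a piecewise affine, non-invertible (and in fact leo) member of \(C_{\lambda}(I)\).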
Not to disturb the flow of reading, we give in the appendix a proof of Lemma 4 which is needed to argue for Remark 1.1.

## 2. Denseness properties

First one needs to note that both spaces \(C_{\lambda}(I)\) and \(C_{\lambda}(\mathbb{S}^{1})\) equipped with the metric of uniform convergence are complete. This follows from Proposition 4 of [12] which is proven for \(M=I\); however, the analogous proof works also for \(\mathbb{S}^{1}\). We call a dense \(G_{\delta}\) set _residual_ and call a property _generic_ if it is attained on at least a residual set of the Baire space on which we work. In this section we will study three different notions of denseness. The strongest property that we verify is that a certain dynamical property holds for an open dense set of maps in \(C_{\lambda}(M)\). We verify the weaker notion of genericity for many other topological dynamical properties, while for certain properties we verify only the denseness of maps with certain properties in \(C_{\lambda}(M)\).
The study of generic properties in dynamical systems was initiated in the article by Oxtoby and Ulam from 1941 [46] in which they showed that for a finite-dimensional compact manifold with a non-atomic measure that is positive on open sets, the set of ergodic measure-preserving homeomorphisms is a generic set in the strong topology. Later Halmos in 1944 [32],[33] introduced approximation techniques to a purely metric setting. Namely, he studied interval maps which are invertible almost everywhere and preserve the Lebesgue measure. He showed that the generic invertible map is weakly mixing (i.e., has continuous spectrum). Subsequently, Rohlin in 1948 [50] showed that Halmos' result is optimal in a sense that the set of strongly mixing measure-preserving invertible maps is of the first category in the same underlying space. It took until 1967 that this line of research was continued when Katok and Stepin [37] introduced the notion of a speed of approximation. One of the most notable applications of their methods is the proof of genericity of ergodicity and weak mixing for certain classes of interval exchange transformations (IETs). More details on the follow-up history of approximation theory can be found in the surveys [26], [5] and [54]. Now we restrict to our particular context. The roots for studying generic properties on \(C_{\lambda}(I)\) come from the paper [10] and this line of study was continued recently in [17], [11], [12], [13]. The first observation we can make about maps from \(C_{\lambda}(I)\) is that they have dense set of periodic points. This follows directly from the Poincare Recurrence Theorem and the fact that in dynamical systems given by an interval map the closures of recurrent points and periodic points coincide [25]. Furthermore, except for the two exceptional maps \(\operatorname{id}\) and \(1-\operatorname{id}\), every such map has positive metric entropy. In fact, except for these two exceptional maps every map is non-invertible on a set of positive measure and thus by a well known theorem (see for example [55, Corollary 4.14.3]) has positive metric entropy and thus positive topological entropy as well. ### Main tools There are two main tools that are used in most of the results from this section. Building on the work of Bobok [10, Lemma 1] the following lemma was proven in [17, Proposition 12] for the interval case, for the definition of a leo map we refer the reader to Subsection 2.3. **Lemma 2.1.1**.: _The set of maps that are piecewise affine Markov and leo are dense in \(C_{\lambda}(M)\)._ Proof.: The proof in the circle case follows by combining various known results. First of all the density of a special collection of maps in \(PA_{\lambda}(\mathbb{S}^{1})\) was shown in Lemmas 13 and 14 from [11], we can assume all of the absolute values of slopes of these maps are at least 4. Then using Lemma 12 from [13] we can find a dense set of maps in \(PA_{\lambda}(\mathbb{S}^{1})\) all of whose critical values are distinct and again of whose slopes are at least 4. Finally using Lemma 12 from [17] whose proof also works without change for the circle we have that there is a dense set of Markov maps in \(PA_{\lambda}(\mathbb{S}^{1})\). In this proof it is implicitly left as an exercise to the reader to show that the resulting map is leo, which we do here. Using the notation of the proof of this lemma in [17], let \(J:=(p_{i-1},p_{i+1})\) and \(U\) an arbitrary non-empty open interval. 
Since the slopes of \(f\) and \(g_{1}\) are at least 4 we have \(\lambda(g_{1}(A))/\lambda(A)\geq 4/2=2\) for any non-empty interval \(A\) which contains at most one critical point, thus we can find an \(n\) such that \(V:=g_{1}^{n}(U)\) contains at least two consecutive critical points. From the construction of \(g_{1}\) we have \(f(V\setminus J)=g_{1}(V\setminus J)\), \(f(V\cap J)\subset g_{1}(V\cap J)\) provided that \(V\cap\{p_{i-1},p_{i+1}\}\neq\emptyset\), and if \(V\subset J\) then since \(V\) contains two consecutive critical points we have \(g_{1}(V)=g_{1}(J)=f(J)\supset f(V)\); and thus \(g_{1}\) is leo. The proofs in this section also use window perturbations as the other main tool. Let \(J\) be an arc in \(M\) (i.e., a homeomorphic image of \([0,1]\)). Let \(m\) be an odd positive integer and \(\{J_{i}\subset M:1\leq i\leq m\}\) a finite collection of arcs satisfying \(\cup_{i=1}^{m}J_{i}=J\) and \(\operatorname{int}(J_{i})\cap\operatorname{int}(J_{j})=\emptyset\) when \(i\neq j\). We will refer to this as a _partition_ of \(J\). Fix \(f\in C_{\lambda}(M)\), an arc \(J\subset M\), and a partition \(\{J_{i}\}\) of \(J\). A map \(h\in C(M)\) is _an \(m\)-fold window perturbation of \(f\) with respect to \(J\) and the partition_ \(\{J_{i}\}\) if

* \(h|_{J^{c}}=f|_{J^{c}}\)
* for each \(1\leq i\leq m\) the map \(h|_{J_{i}}\) is an affinely scaled copy of \(f|_{J}\) with the orientation reversed for every second \(i\), with \(h|_{J_{1}}\) having the same orientation as \(f|_{J}\).

The essence of this definition is illustrated by Figure 1.

### Measure-theoretic properties

Let \(\mathcal{B}\) denote the Borel sets in \(M\). The measure-preserving transformation \((f,M,\mathcal{B},\mu)\) is called

* _ergodic_ if for every \(A\in\mathcal{B}\), \(f^{-1}(A)=A\) \(\mu\)-a.e. implies that \(\mu(A)=0\) or \(\mu(A^{c})=0\).
* _weakly mixing_, if for every \(A,B\in\mathcal{B}\), \[\lim_{n\to\infty}\frac{1}{n}\sum_{j=0}^{n-1}|\mu(f^{-j}(A)\cap B)-\mu(A)\mu(B)|=0.\]
* _strongly mixing_ if for every \(A,B\in\mathcal{B}\), \(\lim_{n\to\infty}\mu(f^{-n}(A)\cap B)=\mu(A)\mu(B)\).

Figure 1. For \(f\in C_{\lambda}(M)\) shown on the left, we show on the right the graph of \(h\) which is a 3-fold piecewise window perturbation of \(f\) on the interval \([a,b]\).
**Theorem 2.2.2**.: _The set of mixing maps in \(C_{\lambda}(M)\) is dense and is of the first category._ Returning to the case when \(M=\mathbb{S}^{1}\), the following result was obtained in [13, Theorem 2]. For each \(\alpha\in[0,1)\)], let \(r_{\alpha}\colon\mathbb{S}^{1}\to\mathbb{S}^{1}\) be a circle rotation for the angle \(\alpha\). Define the operator \(T_{\alpha,\beta}\colon C_{\lambda}(\mathbb{S}^{1})\mapsto C_{\lambda}(\mathbb{ S}^{1})\) by \(T_{\alpha,\beta}(f)=r_{\alpha}\circ f\circ r_{\beta}\). **Theorem 2.2.3**.: _There exists a dense \(G_{\delta}\) subset \(\mathscr{G}\) of \(C_{\lambda}(\mathbb{S}^{1})\) such that_ 1. _each_ \(g\in\mathscr{G}\) _is weakly mixing with respect to_ \(\lambda\)_,_ 2. _each_ \(g\in\mathscr{G}\) _maps a set of Lebesgue measure zero onto_ \(\mathbb{S}^{1}\)_, and_ 3. _for each pair_ \(\alpha,\beta\in[0,1)\) _and each_ \(g\in\mathscr{G}\) _the map_ \(T_{\alpha,\beta}(g)\in\mathscr{G}\)_._ Point (2) was shown in [17, Cor. 22] for the interval, the proof holds without change in the case of the circle, and furthermore if \(g\) maps a set of Lebesgue measure zero onto \(\mathbb{S}^{1}\), then so does \(T_{\alpha,\beta}(g)\). In [12, Theorem 2] it was shown that there is a dense set of non-ergodic maps in \(C_{\lambda}(I)\); its proof works with natural modifications also in \(C_{\lambda}(\mathbb{S}^{1})\). Thus Theorem 2.2.3 is optimal since there is no nonempty open set of maps satisfying nice mixing properties for any \(\alpha,\beta\). For volume preserving homeomorphisms we have the following classical result on the unit cube \(I^{n}\) due to Oxtoby and Ulam [46] (see also[4, Theorem 7.1]): **Theorem 2.2.4**.: _The generic \(f\in H_{\lambda}(I^{n})\) is ergodic for all \(n\geq 2\)._ The improvement of Theorem 2.2.4 was shown in [2, Theorem 1.2]: **Theorem 2.2.5**.: _The generic \(f\in H_{\lambda}(\mathbb{M})\) is weakly mixing and periodic points of \(f\) are dense in \(\mathbb{M}\) for any compact manifold \(\mathbb{M}\) of dimension at least 2._ ### Topological expansion properties We call a map \(f\in C_{\lambda}(M)\) * _transitive_ if for all nonempty open sets \(U,V\subset M\) there exists \(n\geq 0\) so that \(f^{n}(U)\cap V\neq\emptyset\), * _totally transitive_ if \(f^{n}\) is transitive for all \(n\geq 1\). * _(topologically) weakly mixing_ if its Cartesian product \(f\times f\) is transitive, * _topologically mixing_ if for all nonempty open sets \(U,V\subset M\) there exists \(n_{0}\geq 0\) so that \(f^{n}(U)\cap V\neq\emptyset\) for every \(n\geq n_{0}\), * _leo_ (_locally eventually onto_, also known as _topologically exact_) if for every nonempty open set \(U\subset M\) there exists \(n\in\mathbb{N}\) so that \(f^{n}(U)=M\). By the usual hierarchy in topological dynamics, every leo map is topologically mixing, topologically weakly mixing, totally transitive and transitive. In [17, Theorem 9] the following theorem was shown: **Theorem 2.3.1**.: _The \(C_{\lambda}(I)\)-generic function is leo._ Theorem 2.3.1 was recently improved in [14] where the following result was shown: **Theorem 2.3.2**.: _There is an open dense set of leo maps in \(C_{\lambda}(I)\)._ If we take any continuous map of \(I\) with attracting fixed point in the interior of \(I\) then it is clear that all sufficiently close maps in \(C(I)\) cannot be transitive. For any map in \(C(I)\) we can blow up an attracting to an invariant set with attracting fixed point. 
Thus non-transitive continuous interval maps and also continuous interval maps with \(\overline{\operatorname{Per}(f)}\neq I\) form open dense sets. By the result of Blokh [9, Theorem 8.7], for interval maps topological mixing implies periodic specification property (cf. proof of Buzzi [20, Appendix A]). Therefore, we obtain the following corollary. **Corollary 2.3.3**.: _There is an open dense set of maps from \(C_{\lambda}(I)\) satisfying the periodic specification property._ Now we turn to the the circle case, where a more detailed study has been realized recently in [13]. **Theorem 2.3.4**.: _There is an open dense set of maps \(\mathcal{O}\subset C_{\lambda}(\mathbb{S}^{1})\) such that:_ 1. _each map_ \(f\in\mathcal{O}\) _is leo._ 2. _for each pair of_ \(\alpha,\beta\in[0,1)\) _and each_ \(f\in\mathcal{O}\) _the map_ \(T_{\alpha,\beta}(f)\in\mathcal{O}\)_._ In fact, Blokh's results about specification property mentioned above generalize to all topologically mixing maps on topological graphs, in particular to the circle (see [7], cf. [8, 3]; another approach for proving the generalization of Blokh's results on topological graphs can be found in [34] and was inspired by the techniques of Buzzi for interval maps [20, Appendix A]. **Corollary 2.3.5**.: _There is an open dense set of maps from \(C_{\lambda}(\mathbb{S}^{1})\) satisfying the periodic specification property._ By the same argument as for the interval maps, this result does not extend to the whole set \(C(\mathbb{S}^{1})\). We turn to the case of Lebesgue measure-preserving homeomorphisms. A map \(f:X\to X\) of a compact metric space \(X\) is called _maximally chaotic_ if 1. \(f\) is topologically transitive, 2. the periodic points of \(f\) are dense in \(X\), and 3. \(\limsup_{k\to\infty}\operatorname{diam}(f^{k}(U))=\operatorname{diam}(X)\) for any non-empty open set \(U\subset X\). Notice that maximal chaos implies the well known Devaney chaos. While the leo property is impossible for homeomorphisms, we have the following result in the setting of the \(n\)-dimensional cube \(I^{n}\), which summarizes the results found in [4, Theorems 4.5 and D]: **Theorem 2.3.6**.: _For \(n\geq 2\) the generic \(f\in H_{\lambda}(I^{n})\) is topologically weakly mixing and maximally chaotic._ It is not hard to see that conditions (1) and (3) in the definition of maximal chaos are immediate consequence of weak mixing. Condition (2) is not in general consequence of weak mixing, since there exist weakly mixing minimal systems. ### Periodic points and dimension properties The Hausdorff, lower and upper box dimension of the graph of a function is a way to describe the "roughness" of the function. The following theorem by Schmeling and Winkler [53] was stated for maps from \(C_{\lambda}(I)\), but it holds in any dimension1; in particular, in dimension one we have the following theorem. Footnote 1: Personal communication from Jörg Schmeling. **Theorem 2.4.1**.: _A graph of a generic map \(f\in C_{\lambda}(M)\) has Hausdorff dimension equal to the lower box dimension, both equal to \(1\) and the upper box dimension equal to \(2\)._ Understanding of the structure of the set of periodic points of the function under consideration is among the fundamental tasks in dynamical systems theory. Since generic maps from \(C_{\lambda}(I)\) are weakly mixing with respect to \(\lambda\) it follows that the Lebesgue measure of the set of periodic points is equal to \(0\). 
It is interesting to study the finer structure of the set of periodic points for generic maps, in particular its cardinality and dimension. The set of periodic points of period \(k\) for \(f\) is denoted \(\operatorname{Per}(f,k)\), the set of fixed points of \(f^{k}\) is denoted by \(\operatorname{Fix}(f,k)\) and of the union of all periodic points of \(f\) is denoted by \(\operatorname{Per}(f)\). In [12] the authors studied the cardinality and structure of the set of periodic points and its respective lower box, upper box and Hausdorff dimensions: **Theorem 2.4.2**.: _For a generic map \(f\in C_{\lambda}(I)\), for every \(k\geq 1\):_ 1. \(\operatorname{Fix}(f,k)\) _is a Cantor set,_ 2. \(\operatorname{Per}(f,k)\) _is a relatively open dense subset of the set_ \(\operatorname{Fix}(f,k)\)_,_ 3. _the set_ \(\operatorname{Fix}(f,k)\) _has lower box dimension and Hausdorff dimension zero. In particular,_ \(\operatorname{Per}(f,k)\) _has lower box dimension and Hausdorff dimension zero. As a consequence, the Hausdorff dimension of_ \(\operatorname{Per}(f)\) _is also zero._ 4. _the set_ \(\operatorname{Per}(f,k)\) _has upper box dimension one. Therefore,_ \(\operatorname{Fix}(f,k)\) _has upper box dimension one as well._ In the setting of \(C_{\lambda}(\mathbb{S}^{1})\), due to the presence of rotations, degree \(1\) maps need to be treated separately. Denote the set of degree \(d\) maps in \(C_{\lambda}(\mathbb{S}^{1})\) by \(C_{\lambda,d}(\mathbb{S}^{1})\). The proof of Theorem 2.4.2 shows: **Theorem 2.4.3**.: _Conclusions of Theorem 2.4.2 hold also for generic maps in \(C_{\lambda,d}(\mathbb{S}^{1})\) for each \(d\in\mathbb{Z}\setminus\{1\}\)._ The proofs of these two theorems can easily be adapted to show that the generic map in \(C(I)\) and degree \(d\) maps in \(C(\mathbb{S}^{1})\) (i.e., not necessarily measure preserving) have the same properties (see [12][Remark 14 and 18]). For \(C_{\lambda,1}(\mathbb{S}^{1})\) the situation is more complicated. A periodic point \(x\in\operatorname{Per}(f,k)\) is called _transverse_, if the graph of \(f^{k}\) crosses the diagonal at \(x\) (possibly coincides with the diagonal on an interval containing \(x\)). Consider the open set \[C_{p}:=\{f\in C_{\lambda,1}(\mathbb{S}^{1}):f\text{ has a transverse periodic point of period }p\}.\] In this setting the proof of Theorem 2.4.2 yields a similar result from [12]. **Theorem 2.4.4**.: _For any \(f\) in a dense \(G_{\delta}\) subset of \(\overline{C}_{p}\) we have for each \(k\in\mathbb{N}\)_ 1. \(\operatorname{Fix}(f,kp)\) _is a Cantor set,_ 2. \(\operatorname{Per}(f,kp)\) _is a relatively open dense subset of_ \(\operatorname{Fix}(f,kp)\)_,_ 3. \(\operatorname{Fix}(f,kp)\) _has lower box dimension and Hausdorff dimension zero. In particular,_ \(\operatorname{Per}(f,kp)\) _has lower box dimension and Hausdorff dimension zero. As a consequence, the Hausdorff dimension of_ \(\operatorname{Per}(f)\) _is zero as well._ _._ 4. _the set_ \(\mathrm{Per}(f,kp)\) _has upper box dimension_ \(1\)_. Thus,_ \(\mathrm{Fix}(f,kp)\) _and_ \(\mathrm{Per}(f)\) _both have upper box dimension also_ \(1\)_._ To interpret this result the set \(C_{\infty}:=C_{\lambda,1}(\mathbb{S}^{1})\setminus\cup_{p\geq 1}\overline{C_{p}}\) was studied in [12]. It turns out that a periodic point can be transformed to a transverse periodic point by an arbitrarily small perturbation of the map, thus the set \(C_{\infty}\) consists of maps without periodic points. 
Using the same argument one can see that \(\cup_{p\geq 1}\overline{C_{p}}\) contains an open dense set. Therefore, \(C_{\infty}\) is nowhere dense in \(C_{\lambda,1}(\mathbb{S}^{1})\). Furthermore, in [12] the following complete characterization of the set \(C_{\infty}\) was obtained: **Proposition 2.4.5**.: _The set \(C_{\infty}\) consists of irrational circle rotations._ Periodic points of generic homeomorphisms have been well studied also in other contexts. Akin et. al. proved that the set of periodic points is a Cantor set for generic homeomorphisms of \(\mathbb{S}^{1}\)[1, Theorems 9.1 and 9.2(a)]. On the other hand, Carvalho et. al. have shown that the upper box dimension of the set of periodic points is full (i.e., of the same dimension as the dimension of the underlying manifold) for generic homeomorphisms on compact manifolds of dimension at least one [21]2. Footnote 2: this statement only appears in the published version of [21]. Let us recall that generic maps from \(C_{\lambda}(I)\) will necessarily have Lebesgue measure \(0\) on the set of periodic points since \(\lambda\) is weakly mixing. Nonetheless the following somewhat surprising result is proven in [12, Theorem 2]. **Theorem 2.4.6**.: _The set of leo maps in \(C_{\lambda}(I)\) whose periodic points have full Lebesgue measure and whose periodic points of period \(k\) have positive Lebesgue measure for each \(k\geq 1\) is dense in the set \(C_{\lambda}(I)\)._ The following result about volume preserving homeomorphisms is proven in an unpublished sketch by Guiheneuf [30]. **Theorem 2.4.7**.: _The set of periodic points of a generic \(f\in H_{\lambda}(\mathbb{M})\) for a compact manifold \(\mathbb{M}\) of dimension at least two is a dense set of zero measure and for every \(\ell\geq 1\) the set of fixed points of \(f^{\ell}\) is either empty or perfect._ More generally Guiheneuf's result holds for homeomorphisms preserving a "good" measure in the sense of Oxtoby and Ulam [46]. ### Shadowing properties One of the classical notions from topological dynamics is the so-called shadowing property. It is of particular importance in systems possessing sensitive dependence on initial conditions. In such systems, very small errors could potentially lead to large divergence of orbits. Shadowing is a notion arising from computer science and is used as a tool for determining if any hypothetical orbit is indeed close to some real orbit of a topological dynamical system. It assures that the dynamics of maps which satisfy it can be realistically observed through computer simulations. Let us first give definitions that are important for this subsection. For \(\delta>0\), we call a sequence \((x_{n})_{n\in\mathbb{N}_{0}}\subset I\) a _\(\delta\)-pseudo orbit_ of \(f\in C(I)\) if \(d(f(x_{n}),x_{n+1})<\delta\) for every \(n\in\mathbb{N}_{0}\). A _periodic \(\delta\)-pseudo orbit_ is a \(\delta\)-pseudo orbit for which there is \(N\in\mathbb{N}_{0}\) so that \(x_{n+N}=x_{n}\), for every \(n\in\mathbb{N}_{0}\). The sequence \((x_{n})_{n\in\mathbb{N}_{0}}\) is called an _asymptotic pseudo orbit_ if \(\lim_{n\to\infty}d(f(x_{n}),x_{n+1})=0\). Provided a sequence \((x_{n})_{n\in\mathbb{N}_{0}}\) is a \(\delta\)-pseudo orbit and an asymptotic pseudo orbit we say it is an asymptotic \(\delta\)-pseudo orbit. 
**Definition 2.5.1**.: _We say that a map \(f\in C(M)\) has the:_ * shadowing property _if for every_ \(\varepsilon>0\) _there exists_ \(\delta>0\) _satisfying the following: given any_ \(\delta\)_-pseudo orbit_ \(\mathbf{y}:=(y_{n})_{n\in\mathbb{N}_{0}}\) _we can find a corresponding point_ \(x\in M\) _that_ \(\varepsilon\)_-traces_ \(\mathbf{y}\)_, i.e.,_ \[d(f^{n}(x),y_{n})<\varepsilon\text{ for every }n\in\mathbb{N}_{0}.\] * periodic shadowing property _if for every_ \(\varepsilon>0\) _there exists_ \(\delta>0\) _satisfying the following condition: given any periodic_ \(\delta\)_-pseudo orbit_ \(\mathbf{y}:=(y_{n})_{n\in\mathbb{N}_{0}}\) _we can find a corresponding periodic point_ \(x\in M\)_, which_ \(\varepsilon\)_-traces_ \(\mathbf{y}\)_._ * limit shadowing property _if for every_ asymptotic pseudo-orbit_, i.e. sequence_ \((x_{n})_{n\in\mathbb{N}_{0}}\subset M\)_, so that_ \[d(f(x_{n}),x_{n+1})\to 0\text{ when }n\to\infty\] _there exists_ \(p\in M\)_, so that_ \[d(f^{n}(p),x_{n})\to 0\text{ as }n\to\infty.\] * s-limit shadowing property _if for every_ \(\varepsilon>0\) _there exists_ \(\delta>0\) _so that_ 1. _for every_ \(\delta\)_-pseudo orbit_ \(\mathbf{y}:=(y_{n})_{n\in\mathbb{N}_{0}}\) _we can find a corresponding point_ \(x\in M\) _which_ \(\varepsilon\)_-traces_ \(\mathbf{y}\)_,_ 2. _for every asymptotic_ \(\delta\)_-pseudo orbit_ \(\mathbf{y}:=(y_{n})_{n\in\mathbb{N}_{0}}\) _of_ \(f\)_, there exists_ \(x\in M\) _which_ \(\varepsilon\)_-traces_ \(\mathbf{y}\) _and_ \[\lim_{n\to\infty}d(y_{n},f^{n}(x))=0.\] The following theorem was proved by the authors in [12, Theorem 3]. **Theorem 2.5.2**.: _The shadowing and periodic shadowing properties are generic for maps from \(C_{\lambda}(I)\)._ For comparison, in the larger space \(C(M)\) Mizera proved in [42] that the shadowing property is generic. Several other results that shadowing is generic in topology of uniform convergence in more general settings were established (see [43, 38]) using the techniques of Pilyugin and Plamenevskaya [48] initially developed for proving genericity of shadowing property of homeomorphisms on any smooth compact manifolds without a boundary. **Proposition 2.5.3**.: _The set of maps \(f\in C_{\lambda}(I)\) that have s-limit shadowing property is dense in the set \(C_{\lambda}(I)\)._ The main result in [11] for the setting \(C_{\lambda}(\mathbb{S}^{1})\) is even stronger than the preceding two statements. **Theorem 2.5.4**.: _The s-limit shadowing property is generic in \(C_{\lambda}(\mathbb{S}^{1})\)._ **Corollary 2.5.5**.: _The limit shadowing, periodic shadowing and shadowing property are generic in \(C_{\lambda}(\mathbb{S}^{1})\)._ Actually, Theorem 2.5.4 is somewhat surprising since \(C_{\lambda}(\mathbb{S}^{1})\) is the first environment in which s-limit shadowing was proven to be generic. Up to now, only denseness of s-limit shadowing was established in the setting of compact topological manifolds [41]. Theorem 2.5.4 also holds in the setting of \(C(\mathbb{S}^{1})\). However, the methods used in [11] do not work in the setting of \(C_{\lambda}(I)\). This motivates the following. **Question A**.: _Is s-limit shadowing generic in \(C_{\lambda}(I)\)?_ Possible positive answer to the above question will require some new techniques than the ones used in [11]. On the other hand, a standard technique to disprove that a condition is generic is to find an open set without the required property. Such approach is again impossible, because of Proposition 2.5.3. 
In the view of Theorem 2.5.4 and the result from [31] it is also natural to ask the following question. **Question B**.: _Is s-limit shadowing generic also for the volume preserving homeomorphisms on manifolds of dimension greater than \(1\)?_ In the context of volume preserving homeomorphisms on manifolds of dimension at least two (with or without boundary), the genericity of shadowing was recently proven by Guiheneuf and Lefeuvre [31]. ### Knot points We define the upper, lower, left and right _Dini derivatives_ of \(f\) at \(x\): \[D^{+}f(x) :=\limsup_{t\to x^{+}}\frac{f(t)-f(x)}{t-x} D_{+}f(x) :=\liminf_{t\to x^{+}}\frac{f(t)-f(x)}{t-x}\] \[D^{-}f(x) :=\limsup_{t\to x^{-}}\frac{f(t)-f(x)}{t-x} D_{-}f(x) :=\liminf_{t\to x^{-}}\frac{f(t)-f(x)}{t-x}.\] We call a point \(x\in M\) a _knot point_ of function \(f\in C(M)\) if suprema and infima of the right and left derivatives at point \(x\) satisfy \(D^{+}f(x)=D^{-}f(x)=\infty\) and \(D_{+}f(x)=D_{-}f(x)=-\infty\). The following theorem states a consequence of a more general result proved in [10] for the interval, the circle case can be treated analogously. **Theorem 2.6.1**.: _The \(C_{\lambda}(M)\)-generic function has a knot point at \(\lambda\)-almost every point._ The next result generalizes a classical result of Saks [51] saying that the set of Besicovitch functions is a meager set in \(C(I)\). Its circle version follows from the fact that a monotonicity result can be applied separately on arcs partitioning the circle. A _Besicovitch function_\(f\in C(M)\) is a map such that for every \(x\in M\), no unilateral finite or infinite derivative exists at \(x\). **Corollary 2.6.2**.: _The set of Besicovitch functions is a meager set in \(C_{\lambda}(M)\)._ Proof.: We use the following well known result (see [52, Theorem 7.3]): if for an arc \(A\subset M\) and \(f\colon A\to M\) we have \(D^{+}f(x)\geq 0\)_for a.e. \(x\in A\) and \(D^{+}f(x)>-\infty\) for every \(x\in A\), then \(f\) is non-decreasing._ By Theorem 2.6.1 there is a residual set \(K\subset C_{\lambda}(M)\) such that each element of \(K\) has a knot point at \(\lambda\) almost every point of \(M\). Fix \(f\in K\) and an arc \(A\subset M\); we have \(D^{+}f(x)=+\infty\geq 0\) a.e. on \(A\) hence \(f\) can not be non-decreasing. Applying the above result, we conclude that \(D^{+}(x_{0})=-\infty\) for at least one point \(x_{0}\in A\); in particular \(f\) is not a Besicovitch function. A _Morse function_\(f\in C(M)\) satisfies \[\max\{|D^{+}f(x)|,|D_{+}f(x)|\}=\max\{|D^{-}f(x)|,|D_{-}f(x)|\}=\infty,\ x\in M;\] in the interval case, if \(x\) is an endpoint of \(I\), the only max is taken over the derivatives from inside \(I\). There exists a function \(f\in C(M)\) which is Besicovitch and Morse. The following question remains open. **Question C**.: _Does there exist a Besicovitch-Morse function in \(C_{\lambda}(M)\)?_ ### Crookedness We begin by defining the crookedness property which is one of the central properties in Continuum Theory. We explain its importance later. **Definition 2.7.1**.: _Let \(f\in C(I)\), \(a,b\in I\) and let \(\delta>0\). We say that \(f\) is \(\delta\)-crooked between \(a\) and \(b\) if for any two points \(c,d\in I\) so that \(f(c)=a\) and \(f(d)=b\), there is a point \(c^{\prime}\) between \(c\) and \(d\) and there is a point \(d^{\prime}\) between \(c^{\prime}\) and \(d\) so that \(|b-f(c^{\prime})|<\delta\) and \(|a-f(d^{\prime})|<\delta\). 
We will say that \(f\) is \(\delta\)-crooked if it is \(\delta\)-crooked between any pair of points._ In [23, Theorem 1] and [24] the authors proved the following generic property of maps from \(C_{\lambda}(M)\), which might be the most surprising of the generic properties proven yet: **Theorem 2.7.2**.: _There is a dense \(G_{\delta}\) set \(\mathcal{T}\subset C_{\lambda}(M)\) such that if \(f\in\mathcal{T}\) then for every \(\delta>0\) there exists a positive integer \(n\) so that \(f^{n}\) is \((f,\delta)\)-crooked._ The \(\delta\)-crookedness condition is a topological condition that imposes strong requirements on values of the map. Piecewise smooth maps do not verify the crookedness condition, thus Theorem 2.7.2 cannot hold for any open collection of maps in \(C_{\lambda}(M)\). The pseudo-arc is a very curious object arising from Continuum Theory (see the survey of Lewis [39] and the introduction of [18] for the overview of results involving the pseudo-arc), which was first discovered by Knaster over a century ago. On one hand side, its complicated structure is reflected by the fact that it is _hereditarily indecomposable_, i.e., there are no proper subcontinua \(A,B\subset H\) such that \(A\cup B=H\) for every proper subcontinuum \(H\) of the pseudo-arc \(P\). On the other hand, the pseudo-arc is _homogeneous_, i.e., for every two points \(x,y\in P\) there exists a homeomorphism \(h:P\to P\) such that \(h(x)=y\). Homogeneity is a property possessed by the spaces with locally identical structure. Non-trivial examples of homogeneous spaces are the Cantor set, solenoids and manifolds without boundaries, for instance the circle. Let \(\{Z_{i}\}_{i\geq 0}\) be a collection of compact metric spaces. For a collection of continuous maps \(f_{i}:Z_{i+1}\to Z_{i}\) we define \[\varprojlim(Z_{i},f_{i}):=\{\hat{z}:=\big{(}z_{0},z_{1},\dots\big{)}\in Z_{0} \times Z_{1},\dots\big{|}z_{i}\in Z_{i},z_{i}=f_{i}(z_{i+1}),\forall i\geq 0\}.\] We equip inverse limit \(\varprojlim(Z_{i},f_{i})\) with the subspace metric which is induced from the _product metric_ in \(\widetilde{Z_{0}\times Z_{1}\times\dots}\), where \(f_{i}\) are called the _bonding maps_. **Corollary 2.7.3**.: _The inverse limit with any \(C_{\lambda}(I)\)-generic map as a single bonding map is the pseudo-arc._ This corollary is a direct consequence of Theorem 2.7.2 and a result of Minc and Transue [44, Proposition 4] connecting crookedness with pseudo-arc as inverse limit. ### Entropy The property that the topological entropy of generic maps on the Lebesgue measure-preserving maps is \(\infty\) can be deduced from the methods of the article of Yano [56]. Moreover, another way to see it from general theory is to combine Theorem 2.7.3 with results of [45]. The connection between [56] and \(C_{\lambda}(I)\) was explicitly done in [17, Proposition 26]: **Theorem 2.8.1**.: _The generic value of topological entropy in \(C_{\lambda}(I)\) is \(\infty\)._ The proof from [17] easily extends to \(\mathbb{S}^{1}\), by replacing the fixed point in the proof with a periodic point. The generic value of topological entropy for the volume preserving continuous maps seems not to have been studied for other manifolds. On the other hand Guiheneuf [29, Theoreme 3.17] proved the analogous result holds for generic homeomorphisms of compact, connected manifold of dimension at least \(2\) preserving a "good" measure in the sense of Oxtoby and Ulam [29]. 
The analogous result in the setting of homeomorphisms on manifolds of dimension greater than \(1\) was proven by Yano [56]. Recently, Yano's result was strengthened to show that the generic homeomorphism or continuous map in most settings has an ergodic measure of infinite entropy [22]. It would be interesting to know if this result holds also in \(C_{\lambda}(M)\). On the other hand the question about generic value of metric entropy is unclear. Let \(PAM_{\lambda}(M)\) denote the set of piecewise affine Markov maps that preserve Lebesgue measure. In [17, Proposition 24] the following theorem is proven for \(I\), the proof is analogous in the circle case: **Theorem 2.8.2**.: _For every \(c\in(0,\infty)\) the set \(PAM_{\lambda}(M)_{entr=c}\) is dense in \(C_{\lambda}(M)\)._ Furthermore, the following theorem is proven in [17, Proposition 25] for \(C_{\lambda}(I)\) and can be analogously done with the help of Lemma 2.1.1 for the circle case: **Theorem 2.8.3**.: _The set of maps from \(C_{\lambda}(M)\) having metric entropy \(\infty\) is dense in \(C_{\lambda}(M)\)._ **Question D**.: _Does there exist a generic value of metric entropy for maps from \(C_{\lambda}(M)\)? If such value indeed exists, what is it? If there is no such value, are all non-zero values attained by the metric entropy on every generic set?_ ## 3. Smoothness versus nowhere differentiability ### Connections between topological and measure-theoretic properties In [13] an effort has been made to understand natural conditions when topological dynamical properties imply the corresponding measure-theoretic properties in \(C_{\lambda}(M)\) (and vice versa). The assumptions in the next two theorems come directly from the articles of Li and Yorke [40] and Bowen [19] respectively. The first result was proven in [13, Theorem 3]. **Theorem 3.1.1**.: _Let \(f\in C_{\lambda}(M)\) be a piecewise \(C^{2}\) map with a slope strictly greater than \(1\). Then \(f\) is transitive if and only if \((f,M,\lambda)\) is ergodic._ It is well known that measure-theoretic exactness implies measure-theoretic strong mixing, and since \(\lambda\) is positive on open sets this furthermore implies topological mixing. The first three points of the next result were proven in [13, Theorem 2], which shows that these are equivalent in a smooth enough one dimensional settings. They are also an important ingredient of the proof of Theorem 2.2.3. **Theorem 3.1.2**.: _Let \(f\in C_{\lambda}(M)\) be a piecewise \(C^{2}\) map. Then the following conditions are equivalent:_ 1. \(f\) _is topologically mixing map,_ 2. \(f\) _is strongly mixing,_ 3. \((f,M,\lambda)\) _is measure-theoretically exact,_ 4. \(f\) _is leo._ To see that (4) is equivalent to (1)-(3) note that if \(f\in C_{\lambda}(M)\) and is piecewise \(C^{2}\) then it can have only a finite number of turning points (which in fact, must be endpoints of pieces on which the map is \(C^{2}\)). But by [34], a mixing interval map \(f\) which is not leo has infinitely many turning points. Theorems 3.1.1 and 3.1.2 are in strong contrast to Theorem 2.4.6, which displays a difference between piecewise smooth and non-differentiable settings. ### Nowhere differentiability, knot points and topological entropy In [15] Bobok and Soukenka studied continuous piecewise affine interval maps with countably many pieces of monotonicity that preserve the Lebesgue measure. By taking limits of such maps they proved the following theorem: **Theorem 3.2.1**.: _There exists a map \(g\in C_{\lambda}(I)\) such that:_ 1. 
\(g\) _is nowhere monotone,_ 2. _knot points of g are dense in_ \(I\) _and for a dense_ \(G_{\delta}\) _set_ \(Z\) _of_ \(z\)_'s, the set_ \(g^{-1}(z)\) _is infinite;_ 3. _topological entropy_ \(h_{\mathrm{top}}(g)\leq\log(2)+\varepsilon\)_._ Furthermore, the following two (yet unanswered) questions arose from their study: **Question E**.: _Does every continuous nowhere differentiable interval map from \(C_{\lambda}(I)\) have infinite topological entropy?_ The next question relates topological entropy with knot points (see Subsection 2.6). **Question F**.: _Does every map from \(C_{\lambda}(I)\) with a knot point \(\lambda\)-a.e. have infinite topological entropy?_ Bobok and Soukenka continued their study in [16] where they studied a special conjugacy class \(\mathcal{F}\) of continuous piecewise monotone interval maps with countably many laps (including Lebesgue measure-preserving maps), which are locally eventually onto and all have topological entropy \(\log(9)\). They show that there exist maps from \(\mathcal{F}\) with knot points in its fixed point \(1/2\). ## 4. Other topologies The aspects we discuss above are also interesting with other topologies on the spaces of Lebesgue measure-preserving maps on one-dimensional compact manifolds as well as higher dimensional analogues. de Faria et. al. showed that topological entropy is infinite for homeomorphisms in two different settings [27, 28]. As above, let \(\mathbb{M}\) be compact \(d\)-dimensional manifold and \(\mathcal{H}^{1}(\mathbb{M})\) be the space of homeomorphisms which are bi-Lipschitz in all local charts. Let \(\mathcal{H}^{1}_{\alpha}\) denote the closure of \(\mathcal{H}^{1}\) with respect to the \(\alpha\)-Holder-Whitney topology. Their first result is that topological entropy is generically infinite in \(\mathcal{H}^{1}_{\alpha}\) whenever \(d\geq 2\) and \(0<\alpha<1\). For \(1\leq p,p^{*}<\infty\) let \(S^{p,p^{*}}(\mathbb{M})\) denote the space of homeomorphisms on \(\mathbb{M}\) which in all local charts are of Sobolev class \(W^{1,p}\) and whose inverse is of Sobolev class \(W^{1,p^{*}}\) together with the \((p,p^{*})\)-Sobolev-Whitney topology. Their second result is that topological entropy is generically infinite in \(S^{p,p^{*}}(\mathbb{M})\) when \(d\geq 2\) and \(d-1<p,p^{*}<\infty\). In [35] Hazard constructed interesting examples of noninvertible maps with infinite topological entropy in these topologies, however he did not study generic behavior. These results are the first dynamical genericity results for intermediate smoothness. Generic values of topological entropy in these topologies has not yet been studied in the volume preserving case. In fact no other dynamical properties have been studied and it would be interesting to understand which of the results of this survey hold in analogous topologies for continuous Lebesgue measure preserving maps as well as simply for the continuous maps. ## Appendix Let \(\operatorname{Rec}(f)\) denote the set of recurrent points of a map \(f\in C(M)\). It is proved in [36] that for graph maps (in particular for \(\mathbb{S}^{1}\)) \(\overline{\operatorname{Rec}(f)}=\overline{\operatorname{Per}(f)}\cup \operatorname{Rec}(f)\). The following lemma confirms intuition that recurrence without dense periodicity is possible only for irrational rotations of \(\mathbb{S}^{1}\). The following lemma is the crucial step in the proof of Remark 1.1 in the case of \(\mathbb{S}^{1}\). **Lemma A.1**.: _Let \(f\in C(\mathbb{S}^{1})\). 
If \(\overline{\operatorname{Rec}(f)}=\mathbb{S}^{1}\) and \(\operatorname{Per}(f)\neq\emptyset\), then \(\overline{\operatorname{Per}(f)}=\mathbb{S}^{1}\)._ Proof.: Assume towards contradiction that there is a nonempty open set \(U:=\mathbb{S}^{1}\setminus\overline{\operatorname{Per}(f)}\). Take a point \(x\in U\cap\operatorname{Rec}(f)\) and its maximal omega limit set \(W\supset\omega(x)\). Then by [7] (cf. [49]) \(W\) is one of the following four types: a periodic orbit, a basic set, a solenoidal set or a circumferential set. We claim that none of these can occur, and so we have a contradiction, thus \(\overline{\operatorname{Per}(f)}=\mathbb{S}^{1}\). Recall that \(x\in\omega(x)\), so \(W\cap U\neq\emptyset\), thus \(W\) can not be a periodic orbit. Since orbit of \(x\) is infinite, there are nonnegative integers \(k<n<m\) such that \(f^{k}(x),f^{n}(x),f^{m}(x)\in U\). If \(W\) is a solenoidal set, then there is a periodic interval \(J\) such that each of these three points belongs to a different iterate of \(J\). In particular, there is a non-negative integer \(s\) such that \(f^{s}(J)\subset U\). But since \(J\) is a periodic interval, \(f^{s}(J)\cap\operatorname{Per}(f)\neq\emptyset\) which is again impossible. If \(W\) is a circumferential set, then there is a a connected set \(K\supset W\) and a monotone factor map \(\phi\colon K\to\mathbb{S}^{1}\) semi-conjugating \(f|_{K}\) with an irrational rotation and such that each non-singleton fiber of \(\phi\) is a wandering interval. In particular, if \(\phi\) is not one-to-one then \(\mathbb{S}^{1}\neq\overline{\operatorname{Rec}(f)}\) which is a contradiction. But if \(\phi\) is one-to-one then \(K\) is homeomorphic to \(\mathbb{S}^{1}\) which is again impossible, since \(W=K\) is a subset of \(\mathbb{S}^{1}\setminus\operatorname{Per}(f)\neq\mathbb{S}^{1}\) but interval is not homeomorphic to a circle. The last case is that \(W\) is a basic set. By repeating the argument from the circumferential set case, we can find a map \(\phi\) which conjugates \(f^{n}|_{W}\) with a mixing interval map, for some \(n\). But a consequence of this conjugacy is that periodic points are dense in \(W\), in particular there is a periodic point arbitrarily close in \(x\), so also in \(U\), which is again a contradiction. ## Acknowledgements The authors thank the European Regional Development Fund, project No. CZ 02.1.01/0.0/0.0/16_019/0000778 for financing their one week research meeting in Prague where this survey article was finalized.
2309.12788
Investigating high redshift short GRBs: signatures of collapsars?
The conventional classification of Gamma-Ray Bursts (GRBs) as short or long bursts based on their duration is widely accepted as arising from different progenitor sources identified as compact object mergers and collapsars, respectively. However, recent observational evidence has challenged this view, with signatures of collapsars in short GRBs and of mergers in long GRBs. We conduct a comparative analysis of the characteristics of short and long GRBs, both at low and high redshifts, taking into account the locations and environments of their host galaxies. Our analysis suggests that some short GRBs at higher redshifts exhibit features similar to long GRBs, indicating a possible collapsar origin. Further investigation, utilizing multi-messenger observations, could provide a resolution to this issue.
Dimple, Kuntal Misra, Lallan Yadav
2023-09-22T11:00:05Z
http://arxiv.org/abs/2309.12788v1
# Investigating high redshift short GRBs: signatures of collapsars? ###### Abstract The conventional classification of Gamma-Ray Bursts (GRBs) as short or long bursts based on their duration is widely accepted as arising from different progenitor sources identified as compact object mergers and collapsars, respectively. However, recent observational shreds of evidence challenged this view, with signatures of collapsars in short GRBs and mergers in long GRBs. We conduct a comparative analysis of the characteristics of short and long GRBs, both at low and high redshifts, taking into account the locations and environments of their host galaxies. Our analysis suggests that some short GRBs at higher redshifts exhibit features similar to long GRBs, indicating a possible collapsar origin. Further investigation, utilizing multi-messenger observations, could provide a resolution to this issue. G 1 Aryabhatta Research Institute of Observational Sciences (ARIES), Manora Peak, Nainital-263002, India. 2 Department of Physics, Deen Dayal Upadhyaya Gorakhpur University, Gorakhpur-273009, India. * Corresponding author: [email protected] ## 1 Introduction The bimodality in the duration distribution of GRBs suggested two broad classes as long and short determined by the \(T_{90}\) duration (the time interval of integrated counts between 5% to 95%) with a boundary at 2 seconds (Kouveliotou et al., 1993). The long-duration GRBs, with \(T_{90}>2\) sec, were postulated to stem from the death of the massive stars (Woosley, 1993; Meszaros, 2006), while the short-duration, with \(T_{90}\leq 2\) sec, were believed to stem from mergers involving compact objects (Paczynski, 1986; Meszaros and Rees, 1992). Multi-wavelength observations of GRBs provided evidence in support of these predictions, as several long GRBs have been discovered to be linked with Type Ic supernovae (Woosley, 1993; MacFadyen and Woosley, 1999; Hjorth et al., 2003; Woosley and Bloom, 2006; Cano et al., 2017), and the origin of short-duration GRBs from compact object mergers is supported by the coincident discovery of GW170817/GRB 170817A and the associated kilonova AT 2017gfo (Abbott et al., 2017; Valenti et al., 2017). The established theory of long and short GRBs origin became questionable after the detection of a supernova bump associated with a short-duration GRB in August 2020, GRB 200826A (Zhang et al., 2021; Ahumada et al., 2021; Rossi et al., 2022), and a long-duration GRB identified with a kilonova bump in December 2021, GRB 211211A (Rastinejad et al., 2022; Troja et al., 2022; Yang et al., 2022). Recently, another long GRB 230307A is found to be associated with a Kilonova (Levan et al., 2023). The dichotomous separation of GRBs based on duration has been questioned time and again (Fynbo et al., 2006; Zhang et al., 2009). In the past, numerous efforts have been undertaken to create novel classification systems utilizing criteria distinct from the traditional \(T_{90}\) classification. For instance, Zhang (2006) categorized GRBs into Type I (arising from compact binary mergers) and Type II (arising from death of massive stars) groups. Bromberg et al. (2013) employed a classification based on collapsar and non-collapsar probabilities. Additionally, Minaev and Pozanenko (2020) utilized the \(E_{\gamma,\rm iso}\) - \(E_{\rm p,i}\) correlation to divide GRBs into two distinct classes. 
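For orientation only (this sketch is ours, not part of the paper), the rest-frame quantities entering such \(E_{\gamma,\rm iso}\)–\(E_{\rm p,i}\) schemes follow from the observed peak energy, fluence, and redshift via \(E_{\rm p,i}=E_{\rm p,obs}(1+z)\) and \(E_{\rm iso}=4\pi d_{L}^{2}S_{\rm bolo}/(1+z)\); the example numbers and the choice of a Planck18 cosmology below are purely illustrative.

```python
# Illustrative sketch (ours): rest-frame quantities used in E_iso - E_p,i schemes.
# E_p,i = E_p,obs * (1+z); E_iso = 4*pi*d_L^2 * S_bolo / (1+z), with S_bolo the
# bolometric fluence.  A flat Planck18 cosmology is assumed here only as an example.
import numpy as np
from astropy.cosmology import Planck18
import astropy.units as u

def rest_frame_quantities(ep_obs_keV, fluence_erg_cm2, z):
    d_l = Planck18.luminosity_distance(z).to(u.cm)
    e_p_i = ep_obs_keV * (1.0 + z)                                   # keV, source frame
    e_iso = 4.0 * np.pi * d_l**2 * (fluence_erg_cm2 * u.erg / u.cm**2) / (1.0 + z)
    return e_p_i, e_iso.to(u.erg)

# Hypothetical burst: E_p,obs = 80 keV, fluence = 2e-6 erg/cm^2, z = 0.75
print(rest_frame_quantities(80.0, 2e-6, 0.75))
```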
Additional parameters such as hardness ratio, spectral lag, and variability time scales in light curves were identified to differentiate between distinct progenitors of GRBs (Fishman and Meegan, 1995; Bernardini et al., 2015; McInnes et al., 2018). However, these parameters have a significant overlap for the two classes, making classification challenging as argued by Dimple et al. (2022). Distinction between long and short GRBs still remains challenging. The redshift distributions of long and short GRBs provide essential clues about their progenitor systems (Guetta and Piran, 2005; Berger et al., 2007; Ghirlanda et al., 2009; D'Avanzo, 2015). The median redshifts observed for long and short GRBs strongly suggest that these events originate from different types of progenitors. The higher redshift of long GRBs agrees with the predictions of rapidly evolving massive star progenitors. In contrast, the lower redshift of short GRBs matches with the longer timescales of compact object mergers (Berger et al., 2013). However, a fraction of short GRBs is found to lie at high redshifts contradicting their proposed progenitors. Given the overlap observed between the two classes, it is reasonable to investigate the properties of short and long GRBs at both low and high redshifts. Our recent work on the comparison of low and high redshift short GRBs suggested that they could be arising from different progenitor systems (Dimple et al., 2022). We expand on this work with an updated sample and identify their position on the Amati plot (Amati, 2006) as well as compare their offset (The distance between the burst and the centre of its host galaxy) and number density (the density of the ambient medium) distributions. Both offset and number density aid in inferring the GRB progenitors. The description of the sample and comparison of the short GRB properties (Amati correlation, offset, number density, non-collapsar probability; \(F_{\rm nc}\) and star formation rate; SFR of GRB hosts) are given in Section 2. The key findings are summarised in Section 3. ## 2 Are short GRBs at high redshift similar to long GRBs? The redshift distribution (the left panel of Figure 1) shows that short GRBs are found at significantly lower redshifts as compared to long GRBs. The sample is compiled from Jochen Greiner's long GRB table1 with known spectroscopic redshifts and is complete upto April 2023. We estimated a median redshift of short GRBs to be \(z=0.51\), much lower than the median redshift of long GRBs at \(z=1.65\), consistent with the expectations for their progenitors. However, a notable proportion of short GRBs is observed to occur at higher redshifts (\(z>0.7\)). Out of these, GRBs 200826A (\(z=0.7481\); Rossi et al., 2022) and 090426 (\(z=2.609\); Antonelli et al., 2009), have been suggested to originate from collapsars rather than compact object mergers. It is crucial to investigate whether the progenitors of short GRBs at low and high redshifts differ and if the high redshift short GRB progenitors share similarities with those of long GRBs. As suggested by Berger et al. (2007) and Bromberg et al. (2013) that the short GRBs detected at higher redshifts may represent a distinct population of GRBs, our investigation explores the similarities and dissimilarities between short GRBs lying at low and high redshifts and that with long GRBs. Since, a short GRB has been observed to be associated with supernova at a redshift of 0.7481 (GRB 200826A) and around 43% of short GRB lie at \(z>0.7\). 
Therefore, we have put a cut at \(z=0.7\). For this, we plot the Amati correlation and examine their offset and number density distributions, and the \(F_{\rm nc}\) and SFR estimations of the hosts to investigate their progenitor systems. ### Amati correlation For a long time, correlations in prompt emission have been utilized to categorize GRBs. In the Amati correlation plane, which plots \(E_{\rm\gamma,iso}\) against \(E_{\rm p}\) (the peak energy in the source frame), two distinct classes of GRBs follow different tracks, and occupy different positions (Amati et al., 2002; Amati, 2006). To check the Amati correlation of GRBs lying at various redshifts, we took the isotropic and peak energy values from Minaev and Pozanenko (2020) and divided the sample according to the redshift of the data with a divider at a redshift of 0.7. Short GRBs lying at redshift \(>0.7\) are located on the long GRB track, while some of them are in the overlapping \(2\sigma\) correlation region (cyan shaded region) between short and long GRBs, as can be noticed from the right panel of Fig. 1. Similarly, some of the long GRBs lying at lower redshifts are located on the short GRB track. The presence of short GRBs at high redshifts aligning with the long GRB track and low-redshift long GRBs falling on the short GRB track can be due to the selection effects or these GRBs originate from progenitors that differ from their identified class based on duration. ### Offset Distribution The offset of a GRB from its host galaxy can provide meaningful insight into its progenitor system. Binary compact stellar remnants are expected to merge far from their birth sites due to the kicks imparted to the neutron stars during supernova explosions (Belczynski et al., 2002, 2006). As a result, GRBs arising from these progenitors are expected to have large offsets from their host galaxies (Berger et al., 2013). However, in the case of massive star collapse, GRBs are expected to occur near their birth site, likely in an active star-forming region (Fruchter et al., 2006). As a result, GRBs arising from collapsars are likely to have smaller offsets from their host galaxies which has also been confirmed observationally (Bloom et al., 2002). Figure 2 illustrates the offset distributions of long and short GRBs using data from Blanchard et al. (2016) and Fong et al. (2022), respectively. From the left panel of the figure, we deduce that the projected offsets of short GRBs range from 0.23 to 76.19 kpc, with a median offset of approximately 9.62 kpc. This median offset is approximately six times larger than the median offset observed for long GRBs, which is approximately 1.38 kpc. The distribution of short GRBs aligns with the predictions of population synthesis models for compact object mergers, especially regarding the fraction of events displaying significant offsets. Conversely, long GRBs exhibit significantly lower offsets, consistent with the expectations of massive star progenitors exploding in close proximity to their birth site within the host galaxy. The right panel of Figure 2 presents the variation of offset with the redshift of GRBs. It is apparent that short GRBs at higher redshifts tend to have slightly smaller offsets compared to those at lower redshifts, although they still remain noticeably larger than the offsets observed for long GRBs. 
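The projected offsets quoted in this section are physical (kpc) quantities derived from angular burst-host separations; a minimal sketch of that conversion is given below, assuming astropy's built-in Planck18 cosmology (the catalogues of Blanchard et al. 2016 and Fong et al. 2022 use their own measurements and cosmological parameters, so this is only an illustration, not their pipeline).

```python
# Illustrative only: convert an angular burst-host offset into a projected
# physical offset, assuming a flat Planck18 cosmology.
from astropy.cosmology import Planck18
import astropy.units as u

def projected_offset(theta_arcsec, z):
    """Projected offset (kpc) for an angular separation theta_arcsec at redshift z."""
    scale = Planck18.kpc_proper_per_arcmin(z)             # kpc per arcmin at redshift z
    return (theta_arcsec * u.arcsec).to(u.arcmin) * scale

# Hypothetical example: a 1.2 arcsec separation at z = 0.7 (of order 10 kpc)
print(projected_offset(1.2, 0.7))
```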
While a few short GRBs display offsets similar to those of long GRBs, the overall offset distribution strongly supports the existence of two distinct progenitor populations for long and short GRBs. Further investigation of short GRBs with lower offset values would be of great interest to understand their unique characteristics and progenitor systems. Figure 1: **Left:** The redshift distributions of GRBs up to April 2023. Short GRBs occupy the lower end of the distribution, with a median redshift of 0.47. In contrast, long GRBs span a wider range of redshifts, with a median of 1.68. **Right:** Short and long GRBs in the Amati correlation plane. The solid blue and grey lines in the plot depict the best-fit correlation for long and short GRBs, respectively, while the dotted lines illustrate the corresponding \(2\sigma\) regions. The shaded region shows the overlapping \(2\sigma\) regions between long and short GRBs. It can be observed that at redshifts z \(>\) 0.7, some short GRBs follow the long GRB track, while some fall within the overlapping \(2\sigma\) region. Similarly, some long GRBs at lower redshifts follow the short GRB track. ### GRB Environments: Number densities The two progenitor channels (collapsars and mergers) also differ in the number densities of their environments. Long-duration GRBs are often linked to star-forming regions exhibiting high gas densities, indicating the presence of dense environments surrounding these events. Conversely, short-duration GRBs are found in low-density environments (O'Connor et al., 2020), possibly due to natal kicks that propel them into the wider interstellar medium or even the intergalactic medium (Berger et al., 2013). In order to examine the variation of number density for long and short GRBs, we utilized a sample of long GRBs from Chrimes et al. (2022) and short GRBs from Fong et al. (2015). Both studies estimated the environmental number densities by fitting the afterglow data with the standard afterglow model (the forward-reverse shock model, Sari et al. 1998). The results are illustrated in Figure 3. The median number densities for long and short GRBs are 0.315 \(cm^{-3}\) and 0.0068 \(cm^{-3}\), respectively. There is substantial overlap between the number densities of long and short GRBs; however, the short GRB distribution is skewed towards low number densities and the long GRB distribution towards high number densities. The right panel of Figure 3 shows the variation of number densities with redshift. It can be seen that short and long GRBs show similar trends, with low-redshift short GRBs typically favouring lower densities, down to \(10^{-5}\,cm^{-3}\). Figure 2: **Left:** Offset distribution of GRBs. Short GRBs display a median offset of approximately 9.62 kpc, which is roughly six times larger than the median offset observed for long GRBs (1.38 kpc). **Right:** Variation of offsets with redshifts for GRBs with a cut at redshift = 0.7. ## 3 Discussions To investigate whether long and short GRBs with known redshifts have similar progenitor systems, we compared various properties between them in detail. We located low and high redshift GRBs on the Amati plane. Overall, short and long GRBs lie on two different tracks; however, some of the GRBs are peculiar and fall on the other track. Interestingly, these are high redshift short GRBs and low redshift long GRBs. Further, we found similarities between high-redshift short GRBs and long GRBs in terms of offsets and environmental densities with redshift.
Also, the low values of \(F_{\rm nc}\) and high values of SFR of short GRBs support the argument that high-redshift short GRBs could have progenitors similar to those of long GRBs (Dimple et al., 2023; Dichiara et al., 2021; Bromberg et al., 2013). However, this can also be an observational bias. More studies are needed to understand better the role of redshift in GRB classification and the reliability of inferring progenitors based on empirical correlations. Future multimessenger observations, including optical and near-infrared observations, will be instrumental in detecting bumps in GRBs' light curves, which can help in identifying supernovae/kilonovae associated with collapsars/compact binary mergers. These observations, accompanied by gravitational wave detections, will provide additional information on the properties of the progenitor. In addition, machine learning algorithms play a crucial role in clustering the GRBs based on fine structures present in their light curves. Jespersen et al. (2020) and Steinhardt et al. (2023) have used t-distributed Stochastic Neighbor Embedding (tSNE) and Uniform Manifold Approximation and Projection (UMAP), respectively, to identify clustering in the population of GRBs. Luo et al. (2022) also made use of supervised machine learning techniques to distinguish between different progenitor systems of GRBs. However, these studies focused more on the general classification of GRBs based on their light curves and did not survey any specific subpopulation, such as that of the KN-associated GRBs. Recently, Garcia-Cifuentes et al. (2023) has used tSNE to investigate the extended emission GRBs. We employ the Principal Component Analysis (PCA) in conjunction with tSNE and UMAP to the light curves to cluster the GRB population (Dimple et al., 2023). We will investigate the short GRBs at high redshifts using machine learning algorithms in near future. ## Acknowledgments The authors wish to thank Prof. K. G. Arun and Prof. L. Resmi for their useful discussions. Figure 3: **Left:** Number density distribution for long and short GRB environments. Median number densities are 0.315 \(cm^{-3}\) for long GRBs and 0.0068 \(cm^{-3}\) for short GRBs. **Right:** Variation of the number density of GRB environments with redshift with a cut at z = 0.7. ## Further Information ### ORCID identifiers of the authors 0000-0001-9868-9042 (Dimple ) 0000-0003-1637-267X (Kuntal Misra) ### Author contributions Toward the completion of this work, all authors have made significant contributions. ### Conflicts of interest The authors declare no conflict of interest.
2302.02819
Nonlinear electromagnetic response for Hall effect in time-reversal breaking materials
It is known that materials with broken time-reversal symmetry can have Hall response. Here we show that in addition to the conventional currents, a new type of nonlinear Hall current can occur in the time-reversal breaking materials within the second-order response to in-plane electric and vertical magnetic fields. Such a Hall response is generated by the oscillation of the electromagnetic field and has a quantum origin arising from a novel dipole associated with the Berry curvature and band velocity. We demonstrate that the massive Dirac model of LaAlO3/LaNiO3/LaAlO3 quantum well can be used to detect this Hall effect. Our work widens the theory of the Hall effect in the time-reversal breaking materials by proposing a new kind of nonlinear electromagnetic response.
Anwei Zhang, Jun-Won Rhim
2023-01-31T05:41:42Z
http://arxiv.org/abs/2302.02819v2
# Nonlinear electromagnetic response for Hall effect in time-reversal breaking materials ###### Abstract It is known that materials with broken time-reversal symmetry can have Hall responses. Here we show that in addition to the conventional currents, either linear or nonlinear in the electric field, another Hall current can occur in the time-reversal breaking materials within the second-order response to in-plane electric and vertical magnetic fields. Such a Hall response is generated by the oscillation of the electromagnetic field and has a quantum origin arising from a novel dipole associated with the Berry curvature and band velocity. We demonstrate that the massive Dirac model of LaAlO3/LaNiO3/LaAlO3 quantum well can be used to detect this Hall effect. Our work widens the theory of the Hall effect in the time-reversal breaking materials by proposing a new kind of nonlinear electromagnetic response. **Introduction** Since the discovery of the Hall effect [1], which is the phenomenon of the voltage drop perpendicular to the applied magnetic field and charge current, the transport properties under the magnetic field have been one of the essential solid-state research areas. As the magnetic field becomes strong enough, the system enters the quantum regime, where the Hall conductivity is quantized into \(\nu e^{2}/h\) with the filling factor \(\nu\) and the Planck constant \(h\)[2; 3; 4]. Depending on the strength of the electron-electron interaction, the quantum Hall effect is classified into the integer and fractional quantum Hall effects, where \(\nu\) is integral and rational number, respectively. The TKNN description of the integer quantum Hall effect [5] brought a new paradigm of the topological analysis in condensed matter physics and has led to many intriguing phenomena, such as the quantum anomalous Hall effect [6], spin Hall effect [7], and diverse topological phases such as topological insulators [8; 9] and Weyl semimetals [10; 11]. Most of the studies mentioned above have been performed within a linear response theory by expanding transport quantities with respect to the applied electric field. Recently, the second-order nonlinear Hall effects, i.e., the current is proportional to the square of the electric field: \(j_{a}\propto E_{b}E_{c}\), have attracted great interest due to their geometric aspects. The second-order responses can be divided into Ohmic-type and Hall-type [12]. The Ohmic-type refers to the nonlinear Drude current induced by the intraband effects and depends quadratically on the relaxation time \(\tau\)[13; 14]. The Hall-type contains two kinds of responses. One is attributed to the intrinsic mechanism of the band structure. It is independent of the relaxation time and the oscillating frequency \(\omega\) of the fields, and requires time-reversal broken [15; 16; 17]. And another comes from the Berry curvature dipole. It is related to the relaxation time and the oscillating frequency, and respects time-reversal symmetry [18; 19; 20; 21; 22; 23; 24; 25; 26; 27]. In this Letter, we predict a novel Hall effect by using the second-order nonlinear response theory and expanding the current with respect to the electric and magnetic fields. Here, the electromagnetic field is in the form of a plane wave, and we assume that the polarization of the electric and magnetic fields are along \(y\)- and \(z\)-direction, respectively. 
The Hall current in the \(x\)-axis is given by \[j_{x}(2\omega)=\alpha_{xyz}E_{y}B_{z}, \tag{1}\] where \(E_{y}\) and \(B_{z}\) are the amplitudes of the electromagnetic fields. Most importantly, we show that the electromagnetic coefficient \(\alpha_{xyz}\) is proportional to a new kind of dipole associated with the Berry curvature and band velocity. The Hall current found in this work also depends on the oscillating frequency of the external fields, which is similar to the current arising from the Berry curvature dipole [18]. However, for finite current, our system should be time-reversal broken, since in the presence of time-reversal symmetry, the Berry curvature and band velocity are all odd functions of momentum, and then the dipole vanishes. Such a Hall current can be detected in a massive Dirac model of LaAlO\({}_{3}\)/LaNiO\({}_{3}\)/LaAlO\({}_{3}\) quantum well. Our Letter widens the theory for the Hall effect in the time-reversal breaking system. **Two-dimensional model** In this Letter, we consider Figure 1: Schematic of the two-dimensional system under consideration. An electric field \(\mathbf{E}\) in the plane and a magnetic field \(\mathbf{B}\) perpendicular to the plane are applied. The induced Hall current \(\mathbf{j}\) is perpendicular to the direction of the electric field. The fields propagate in the same direction as the Hall current. a two-dimensional clean system and apply an in-plane electric field polarized along \(y\)-direction and propagating along \(x\)-direction, i.e. \({\bf E}=E_{y}\hat{y}e^{iqx-i\omega t}\)+c.c., and a magnetic field polarized along \(z\)-direction, i.e. \({\bf B}=B_{z}\hat{z}e^{iqx-i\omega t}\)+c.c., as illustrated in Fig. 1. Here, \(q\) and \(\omega\) are the moduli of the momentum and frequency of the fields, respectively. Under the presence of such external fields, the system can be described by the vector potential \[{\bf A}=A_{y}\hat{y}e^{iqx-i\omega t}+{\rm c.c.}, \tag{2}\] where \(A_{y}\) satisfies the relations \(A_{y}=E_{y}/i\omega=B_{z}/iq\). In this paper, we consider a generic linear continuum Hamiltonian targeting various two-dimensional Dirac systems in the limit of the low temperature. Then, the corresponding Hamiltonian of the sample \(H_{0}({\bf k})\) satisfies the restriction \(\partial_{k_{z}}\partial_{k_{z}}H_{0}({\bf k})=\partial_{k_{z}}\hat{v}_{j}=0\), where \(\hat{v}_{j}=\partial_{k_{j}}H_{0}({\bf k})\) is the velocity operator in \(j\)-direction and we have set \(\hbar=1\). As a result, the full Hamiltonian of the system in the presence of the external fields can be written as \(H({\bf k})=H_{0}({\bf k})+H^{{}^{\prime}}({\bf k})\). Here \(H^{{}^{\prime}}({\bf k})\) is the perturbed Hamiltonian, which is given by \[H^{{}^{\prime}}({\bf k})=e\hat{\bf v}\cdot{\bf A}, \tag{3}\] where \(-e\) is the charge of the electron. The second-order term of the vector potential in the perturbed Hamiltonian, i.e. \(e^{2}\partial_{\bf k}({\bf v}\cdot{\bf A})\cdot{\bf A}/2\), vanishes in the linear system. **Second-order nonlinear response theory** Now we consider the second-order response with respect to the vector potential. The current operator in our system is described by \(\hat{j}_{x}=-e\partial_{k_{x}}H({\bf k})=-e\hat{v}_{x}\). 
By using Fourier transformation and second quantization, the expectation value of the current (\(\hat{j}_{x}(2{\bf q},2i\omega)\)) in momentum-frequency space can be written as [28] \[j_{x}(2{\bf q},2i\omega)=\frac{1}{S\beta}\sum_{{\bf k},i\omega_{n}}{\rm Tr}[ \hat{j}_{x}\delta G({\bf k},i\omega_{n};{\bf k}-2{\bf q},i\omega_{n}-2i\omega)], \tag{4}\] where \(S\) is the area of the system, \(\beta=1/k_{B}T\) is the inverse temperature, \({\bf q}=q\hat{x}\), \(\omega_{n}=(2n+1)\pi/\beta\) and \(\omega=2m\pi/\beta\) are the fermionic and bosonic Matsubara frequencies, respectively. For second-order response, the perturbation of the Green function is given by \[\delta G({\bf k},i\omega_{n};{\bf k}-2{\bf q},i\omega_{n}-2i\omega)=e^{2}G({ \bf k},i\omega_{n})\hat{v}_{y}A_{y}G({\bf k}-{\bf q},i\omega_{n}-i\omega)\hat{ v}_{y}A_{y}G({\bf k}-2{\bf q},i\omega_{n}-2i\omega), \tag{5}\] where the Green function has the form \[G({\bf k},i\omega_{n})=\sum_{a}\frac{|u_{a}({\bf k})\rangle\langle u_{a}({\bf k })|}{i\omega_{n}+\mu-\varepsilon_{a}({\bf k})}. \tag{6}\] Here \(|u_{a}({\bf k})\rangle\) is the periodic part of Bloch wave functions for band \(a\), \(\mu\) is the chemical potential, and \(\varepsilon_{a}({\bf k})\) is the energy dispersion of band \(a\). For linear Hamiltonian, the second-order nonlinear response forms a triangle diagram, as shown in Fig. 2. Unlike the previous treatment of the second-order response [25; 29], here the momentum \({\bf q}\) of the external fields is taken into account. By taking the Matsubara sum over \(i\omega_{n}\) in Eq. (4) and performing the analytical continuation \(i\omega\rightarrow\omega+i0\), one obtains \[j_{x}(2{\bf q},2\omega)=\frac{-e^{3}}{S}\Pi({\bf q},\omega)A_{y}A_{y}, \tag{7}\] with \[\Pi({\bf q},\omega)=\sum_{{\bf k},a,b,c}M_{abc}({\bf k},{\bf q})F_{abc}({\bf k },{\bf q},\omega). \tag{8}\] Here \[M_{abc}({\bf k},{\bf q})=\langle u_{c}({\bf k}-2{\bf q})|\hat{v} _{x}|u_{a}({\bf k})\rangle\langle u_{a}({\bf k})|\hat{v}_{y}|u_{b}({\bf k}-{\bf q })\rangle\] \[\times\langle u_{b}({\bf k}-{\bf q})|\hat{v}_{y}|u_{c}({\bf k}-2{ \bf q})\rangle \tag{9}\] is the correlation function with band indices \(a\), \(b\), and \(c\), \(\Pi({\bf q},\omega)=\Pi(0,\omega)+q\Pi(\omega)\), where \(\Pi(\omega)=\partial_{q}\Pi({\bf q},\omega)|_{q=0}\). Then we expand \(\Pi(\omega)\) in terms of \(\omega\) to zero-order, i.e. \(\Pi(\omega)=\Pi(0)=\lim_{\omega\to 0}\Pi(\omega)\). As a result, \(\Pi({\bf q},\omega)A_{y}\) in Eq. (7) becomes \(-i\Pi(0)B_{z}\). Here we actually take the uniform limit, i.e. sending \({\bf q}\to 0\) before \(\omega\to 0\). The function \(\Pi(\mathbf{q},\omega)\) satisfies the following relation [30] \[\Pi(\mathbf{q},\omega)=\Pi^{*}(-\mathbf{q},-\omega). \tag{11}\] Then one can get \[\Pi(0)=-\Pi^{*}(0)=i\mathrm{Im}\Pi(0). \tag{12}\] The current thus becomes \[j_{x}(2\omega)=\frac{ie^{3}}{S\omega}\mathrm{Im}\Pi(0)E_{y}B_{z}. \tag{13}\] For two-band system, the energy bands \(a,b,c\) in \(M_{abc}(\mathbf{k},\mathbf{q})\) and \(F_{abc}(\mathbf{k},\mathbf{q},\omega)\) can only be upper or lower bands. If the bands \(a,b,c\) are the same band, \(\mathrm{lm}M_{aaa}(\mathbf{k},0)\) and \(F_{aaa}(\mathbf{k},0,\omega)\) will all be zero. Then \(\mathrm{lm}\Pi(\omega)=\partial_{q}\mathrm{lm}M_{aaa}(\mathbf{k},\mathbf{q}) |_{q=0}F_{aaa}(\mathbf{k},0,\omega)+\mathrm{lm}M_{aaa}(\mathbf{k},0)\partial _{q}F_{aaa}(\mathbf{k},\mathbf{q},\omega)|_{q=0}=0\), i.e. the full intraband term's contribution to the second-order response vanishes. 
Note that for the linear response in the uniform limit, the intra-band terms also give zero contributions [31; 32]. For nonzero response, the bands \(a,b,c\) should take \(a,b,b\); \(a,b,a\); \(a,a,b\). One can decompose \(\Pi(\mathbf{q},\omega)\) in Eq. (8) into the intraband part \(\Pi_{\mathrm{intra}}(\mathbf{q},\omega)\) and interband part \(\Pi_{\mathrm{inter}}(\mathbf{q},\omega)\). The intraband part \(\Pi_{\mathrm{intra}}(\mathbf{q},\omega)\) is further split into three terms such that \(\Pi_{\mathrm{intra}}(\mathbf{q},\omega)=\sum_{\alpha=1}^{3}\Pi_{\mathrm{intra }}^{(\alpha)}(\mathbf{q},\omega)\), where \[\Pi_{\mathrm{intra}}^{(1)}(\mathbf{q},\omega)=\sum_{\mathbf{k},a\neq b}\frac {-M_{abb}(\mathbf{k},\mathbf{q})}{\varepsilon_{a,b}^{\mathbf{0},\mathbf{q}}( \mathbf{k})-\omega}\frac{f_{a,b}^{\mathbf{q},2\mathbf{q}}(\mathbf{k})}{ \varepsilon_{b,b}^{\mathbf{q},2\mathbf{q}}(\mathbf{k})-\omega}, \tag{14}\] \[\Pi_{\mathrm{intra}}^{(2)}(\mathbf{q},\omega)=\sum_{\mathbf{k},a\neq b}\frac {M_{aba}(\mathbf{k},\mathbf{q})}{\varepsilon_{a,b}^{\mathbf{0},\mathbf{q}}( \mathbf{k})-\omega}\frac{f_{a,a}^{\mathbf{0},2\mathbf{q}}(\mathbf{k})}{ \varepsilon_{a,a}^{\mathbf{0},2\mathbf{q}}(\mathbf{k})-2\omega}, \tag{15}\] and \[\Pi_{\mathrm{intra}}^{(3)}(\mathbf{q},\omega)=\sum_{\mathbf{k},a\neq b}\frac {M_{abb}(\mathbf{k},\mathbf{q})}{\varepsilon_{a,b}^{\mathbf{q},2\mathbf{q}}( \mathbf{k})-\omega}\frac{f_{a,b}^{\mathbf{0},\mathbf{q}}(\mathbf{k})}{ \varepsilon_{a,a}^{\mathbf{0},\mathbf{q}}(\mathbf{k})-\omega}, \tag{16}\] Here we use the abbreviations \(f_{a,b}^{\mathbf{q},\mathbf{q}^{\prime}}(\mathbf{k})=f(\varepsilon_{a}( \mathbf{k}-\mathbf{q}))-f(\varepsilon_{b}(\mathbf{k}-\mathbf{q}^{\prime}))\) and \(\varepsilon_{a,b}^{\mathbf{q},\mathbf{q}^{\prime}}(\mathbf{k})=\varepsilon_{ a}(\mathbf{k}-\mathbf{q})-\varepsilon_{b}(\mathbf{k}-\mathbf{q}^{\prime})\). From these, we expand the imaginary part of \(\Pi_{\mathrm{intra}}(\mathbf{q},\omega)\), take the uniform limit and obtain [30] \[\mathrm{lm}\Pi_{\mathrm{intra}}(0)=-\sum_{\mathbf{k},a\neq b}F_{xy}^{ab}v_{ay} \partial_{k_{x}}f(\varepsilon_{a}(\mathbf{k})), \tag{17}\] where \(F_{xy}^{ab}=-2\mathrm{lm}\langle\partial_{k_{x}}u_{a}(\mathbf{k})|u_{b}( \mathbf{k})\rangle\langle u_{b}(\mathbf{k})|\partial_{k_{y}}u_{a}(\mathbf{k})\rangle\) is the Berry curvature which is the imaginary part of quantum geometric tensor [33; 34] and \(v_{iy}=\partial_{k_{y}}\varepsilon_{i}(\mathbf{k})\) is the band velocity along \(y\)-axis. The interband part \(\Pi_{\mathrm{inter}}(\mathbf{q},\omega)\) is also composed of three terms such that \(\Pi_{\mathrm{inter}}(\mathbf{q},\omega)=\sum_{\alpha=1}^{3}\Pi_{\mathrm{ inter}}^{(\alpha)}(\mathbf{q},\omega)\). 
Since the interband term is irrespective of the order of limits \(\mathbf{q}\to 0\) and \(\omega\to 0\), we take \(\omega=0\) before \(\mathbf{q}\to 0\) and have \[\Pi_{\mathrm{inter}}^{(1)}(\mathbf{q},0)=\sum_{\mathbf{k},a\neq b}\frac{M_{abb} (\mathbf{k},\mathbf{q})}{\varepsilon_{a,b}^{\mathbf{0},\mathbf{q}}(\mathbf{k} )}\frac{f_{a,b}^{\mathbf{0},2\mathbf{q}}(\mathbf{k})}{\varepsilon_{a,b}^{ \mathbf{0},2\mathbf{q}}(\mathbf{k})}, \tag{18}\] \[\Pi_{\mathrm{inter}}^{(2)}(\mathbf{q},0)=-\sum_{\mathbf{k},a\neq b}\frac{M_{ aba}(\mathbf{k},\mathbf{q})}{\varepsilon_{a,b}^{\mathbf{0},\mathbf{q}}( \mathbf{k})}\frac{f_{a,b}^{2\mathbf{q},\mathbf{q}}(\mathbf{k})}{\varepsilon_{ a,b}^{2\mathbf{q},\mathbf{q}}(\mathbf{k})}, \tag{19}\] and \[\Pi_{\mathrm{inter}}^{(3)}(\mathbf{q},0)=-\sum_{\mathbf{k},a\neq b}\frac{M_{ abab}(\mathbf{k},\mathbf{q})}{\varepsilon_{a,b}^{\mathbf{q},2\mathbf{q}}( \mathbf{k})}\frac{f_{a,b}^{0,2\mathbf{q}}(\mathbf{k})}{\varepsilon_{a,b}^{0,2\mathbf{q}}(\mathbf{k})}. \tag{20}\] From these, we expand \(\mathrm{lm}\Pi_{\mathrm{inter}}(\mathbf{q},0)\) and get [30] \[\mathrm{lm}\Pi_{\mathrm{inter}}(0)=\sum_{\mathbf{k},a\neq b}f(\varepsilon_{a}( \mathbf{k}))\partial_{k_{x}}\big{[}F_{xy}^{ab}(v_{ay}-v_{by})\big{]}. \tag{21}\] Combining the contribution of the intraband and interband terms, the Hall current becomes \[j_{x}(2\omega)=\frac{ie^{3}}{S\omega}\sum_{\mathbf{k},a\neq b}f(\varepsilon_{a} (\mathbf{k}))\partial_{k_{x}}\big{[}F_{xy}^{ab}(2v_{ay}-v_{by})\big{]}E_{y}B_{z}. \tag{22}\] Here we have taken integration by parts for the Fermi surface term in Eq. (17). Eq. (22) is the main result of this paper. It shows that in the Hall device, there is a second-order Hall response which is proportional to a dipole, i.e. \[D=\int\frac{d^{2}k}{(2\pi)^{2}}\sum_{a\neq b}f(\varepsilon_{a}(\mathbf{k})) \partial_{k_{x}}\big{[}F_{xy}^{ab}(2v_{ay}-v_{by})\big{]}. \tag{23}\] Under time-reversal symmetry, such a dipole vanishes, since the partial differential, the Berry curvature, and Figure 2: The triangle diagram of the second-order response current. The wavy lines refer to the external vector potential, the solid lines are the electron propagator, i.e., the Green function, the solid vertexes represent the operator \(e\hat{v}_{y}\) in the perturbed Hamiltonian, and the hollow vertex denotes the Hall current operator. the band velocity are all odd functions of momentum in such a case. It is worth noting that there is a similar second-order response to the electromagnetic field, i.e. the magnetononlinear anomalous Hall effect [15]. However, this effect comes from the intrinsic mechanism of the band structure, has time-reversal symmetry, and has no Drude-like dependency on \(\omega\tau\). Here for our clean system, the relaxation time \(\tau\) approaches infinity. **Massive Dirac model** As an example, let us consider a Dirac Hamiltonian with broken time-reversal symmetry motivated by the LaAlO\({}_{3}\)/LaNiO\({}_{3}\)/LaAlO\({}_{3}\) quantum well system with spin-orbit coupling [35]. The low-energy physics of the material around Dirac points can be described by the Hamiltonian \[H_{0}(\mathbf{k})=\alpha(k_{x}-k_{y})\sigma_{x}+\beta(k_{x}+k_{y})\sigma_{z}+ \Delta\sigma_{y}. \tag{24}\] Here \(\mathbf{k}\) is momentum defined near the Dirac point, \((\alpha,\beta)\) are expansion coefficients determined by the band parameters. The gap \(\Delta\) is generated by turning on the spin-orbit coupling. 
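As an independent cross-check (our own sketch, not part of the Letter), the model (24) can be handled symbolically: diagonalizing \(H_{0}(\mathbf{k})\) reproduces the dispersion quoted just below, and the dipole (25) obtained at the end of the Letter is indeed extremal at \(\mu=-\sqrt{2}\Delta\), where it takes the value \(3(\alpha^{2}-\beta^{2})/(32\pi\Delta)\).

```python
# Symbolic cross-check (ours) of the massive Dirac model, Eq. (24), and of the
# location and value of the maximum of the dipole, Eq. (25), quoted later in the text.
import sympy as sp

kx, ky, alpha, beta, Delta, mu = sp.symbols("k_x k_y alpha beta Delta mu", real=True)
sx = sp.Matrix([[0, 1], [1, 0]])
sy = sp.Matrix([[0, -sp.I], [sp.I, 0]])
sz = sp.Matrix([[1, 0], [0, -1]])

# Band energies: should come out as +/- sqrt(alpha^2 (kx-ky)^2 + beta^2 (kx+ky)^2 + Delta^2)
H0 = alpha*(kx - ky)*sx + beta*(kx + ky)*sz + Delta*sy
for ev in H0.eigenvals():
    print(sp.simplify(ev))

# Dipole of Eq. (25): D(mu) = 3 (alpha^2 - beta^2) (mu^2 - Delta^2) Delta / (8 pi mu^4)
D = 3*(alpha**2 - beta**2)*(mu**2 - Delta**2)*Delta/(8*sp.pi*mu**4)
mu_star = -sp.sqrt(2)*Delta
print(sp.simplify(sp.diff(D, mu).subs(mu, mu_star)))   # 0, i.e. an extremum at mu = -sqrt(2)*Delta
print(sp.simplify(D.subs(mu, mu_star)))                # expected: 3*(alpha**2 - beta**2)/(32*pi*Delta)
```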
The energy dispersion of upper band (\(+\)) and lower band (\(-\)) are respectively given by \(\varepsilon_{\pm}(\mathbf{k})=\pm\varepsilon_{0}\), where \(\varepsilon_{0}=\sqrt{k_{x}^{{}^{\prime}2}+k_{y}^{{}^{\prime}2}+\Delta^{2}}\), \(k_{x}^{{}^{\prime}}=\alpha(k_{x}-k_{y})\), and \(k_{y}^{{}^{\prime}}=\beta(k_{x}+k_{y})\). Then the band velocity is \(v_{\pm y}=\pm(-\alpha k_{x}^{\prime}+\beta k_{y}^{{}^{\prime}})/\varepsilon_ {0}\) and the Berry curvature is given by \(F_{xy}^{-+}=-\alpha\beta\Delta/\varepsilon_{0}^{3}\). In the case of zero temperature and the chemical potential \(\mu<-\Delta\), the integrand function in the dipole, i.e. Eq. (23), becomes \(3\alpha\beta\Delta[k_{x}^{{}^{\prime}2}(3\alpha^{2}+\beta^{2})-k_{y}^{{}^{ \prime}2}(\alpha^{2}+3\beta^{2})+\Delta^{2}(\beta^{2}-\alpha^{2})]/(4\pi^{2} \varepsilon_{0}^{6})\), \(dk_{x}dk_{y}=dk_{x}^{\prime}dk_{y}^{\prime}/(2\alpha\beta)\), and the integral is over the region \(\varepsilon_{-}(\mathbf{k})<\mu\), i.e. \(k_{x}^{{}^{\prime}2}+k_{y}^{{}^{\prime}2}>\mu^{2}-\Delta^{2}\). By using polar coordinate, the dipole is found to be \[D=\frac{3(\alpha^{2}-\beta^{2})(\mu^{2}-\Delta^{2})\Delta}{8\pi\mu^{4}}. \tag{25}\] At the chemical potential \(\mu=-\sqrt{2}\Delta\), the dipole takes its maximum \(D=3(\alpha^{2}-\beta^{2})/(32\pi\Delta)\), which is shown in Fig. 3. **Discussion** Here we mainly restrict our system to be linear. If we relax the restriction, the perturbed Hamiltonian and Hall current operator can contain second-order and first-order terms of vector potential \(\mathbf{A}\), respectively. As a consequence, in addition to the considered triangle term, two additional terms will appear in the second-response current. Up to the second order of the electric and magnetic fields, the total Hall current in time-reversal breaking materials can be expressed in the form \(j_{x}=\sigma_{xy}E_{y}+\gamma_{xij}E_{i}E_{j}+\alpha_{xij}E_{i}B_{j}\). Note that the first two terms do not depend on the frequency of the external fields. Therefore, the dipole in this nonlinear electromagnetic response can be extracted from the derivative of the current with respect to the oscillation period of the external fields. In this work, we consider the two-dimensional system where electric and magnetic fields are placed as shown in Fig. 1. Following a similar method, the results can be generalized to a three-dimensional system where electric and magnetic fields are placed in other ways. Besides, by replacing the Hall current operator in the response with the spin current operator, the research can be extended to the spin system too [36; 37]. **Acknowledgements** A.Z. and J.-W.R. were supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIT) (Grant No. 2021R1A2C1010572). J.-W.R. was supported by the National Research Foundation of Korea (NRF) Grant funded by the Korea government (MSIT) (Grant No. 2021R1A5A1032996) and Creation of the Quantum Information Science R&D Ecosystem (Grant No. 2022M3H3A106307411) through the National Research Foundation of Korea (NRF) funded by the Korean government (Ministry of Science and ICT).
2305.19520
Covariant description of the colloidal dynamics on curved manifolds
Brownian motion is a universal characteristic of colloidal particles embedded in a host medium, and it is the fingerprint of molecular transport or diffusion, a generic feature of relevance not only in Physics but also in several branches of Science and Engineering. Since its discovery, Brownian motion or colloid dynamics has been important in elucidating the connection between the molecular details of the diffusing macromolecule and the macroscopic information of the host medium. However, colloid dynamics is far from being completely understood. For example, the diffusion of non-spherical colloids and the effects of geometry on the dynamics of either passive or active colloids are a few representative cases that are part of the current challenges in Soft Matter Physics. In this contribution, we take a step forward to introduce a covariant description of the colloid dynamics in curved spaces. This formalism will allow us to understand several phenomena, for instance, the effects of curvature on the kinetics during spinodal decomposition and the thermodynamic properties of the colloidal dispersion, just to mention a few examples. This theoretical framework will also serve as the starting point to highlight the role of geometry on colloid dynamics, an aspect that is of paramount importance to understanding more complex phenomena, such as the diffusive mechanisms of proteins embedded in cell membranes.
Pavel Castro-Villarreal, César O. Solano-Cabrera, Ramón Castañeda-Priego
2023-05-31T03:10:41Z
http://arxiv.org/abs/2305.19520v1
# Covariant description of the colloidal dynamics on curved manifolds ###### Abstract Brownian motion is a universal characteristic of colloidal particles embedded in a host medium, and it is the fingerprint of molecular transport or diffusion, a generic feature of relevance not only in Physics but also in several branches of Science and Engineering. Since its discovery, Brownian motion or colloid dynamics has been important in elucidating the connection between the molecular details of the diffusing macromolecule and the macroscopic information of the host medium. However, colloid dynamics is far from being completely understood. For example, the diffusion of non-spherical colloids and the effects of geometry on the dynamics of either passive or active colloids are a few representative cases that are part of the current challenges in Soft Matter Physics. In this contribution, we take a step forward to introduce a covariant description of the colloid dynamics in curved spaces. This formalism will allow us to understand several phenomena, for instance, the effects of curvature on the kinetics during spinodal decomposition and the thermodynamic properties of the colloidal dispersion, just to mention a few examples. This theoretical framework will also serve as the starting point to highlight the role of geometry on colloid dynamics, an aspect that is of paramount importance to understanding more complex phenomena, such as the diffusive mechanisms of proteins embedded in cell membranes. Diffusion, Brownian motion, Colloids, Smoluchowski equation, curved manifold [email protected]\({}^{1}\), [email protected]\({}^{2}\) [email protected]\({}^{3}\) Introduction Since the pioneering work of Einstein [16], Brownian motion has become the paradigm for the description and understanding of a large variety of diffusion processes that are present in a large variety of physical, biological, and chemical systems. In recent years, the dynamics of macromolecules and nano-particles on surfaces or curved spaces has been the subject of intensive investigations, mainly due to the particle diffusion shows a richer dynamical behavior at different time scales [3, 30] than its counterpart in open and flat geometries; it can be either subdiffusive (\(\alpha<1\)) or superdiffusive (\(\alpha>1\)), i.e., the mean-square displacement \(\left\langle x^{2}\left(t\right)\right\rangle\) does not increase strictly linearly in time, but it behaves as \(\propto t^{\alpha}\). Particularly, diffusion plays a key role in the dynamics of molecular motors moving along heterogeneous substrates [21], in the transport of biomacromolecules in the cell due to crowding [5, 2], and in the lateral diffusion of proteins on fluctuating membranes [25, 1], just to mention a few examples. Most of the diffusion properties on surfaces depend strongly on the generic features of the surface or, strictly speaking, on the surface geometry [29]. Of course, the particle dynamics is not only influenced by geometrical constrictions but also local and thermodynamic properties experience the effects of the manifold where the particles are embedded [26, 24, 27]. A great effort for understanding Brownian motion on surfaces can be found in colloidal soft matter, where the dynamics of colloidal particles on quasi-two-dimensional geometries have been both experimentally and theoretically investigated by using optical techniques, like digital videomicroscopy, computer simulations, and theoretical approximations [28, 31]. 
Nonetheless, such investigations deal basically with (almost) flat surfaces, i.e., without including curvature effects. The interest in the use of colloids resides in the fact that they are tiny (nanometer- to micrometer-sized) particles and typically are considered model systems because, among other interesting features [6], their characteristic time and length scales are experimentally accessible, which allow us to follow the colloidal dynamics and transport processes in real-time [6]. Furthermore, since the colloidal interactions are relatively weak, colloids are highly susceptible to external forces, and hence their static and dynamical properties can be controlled through the application of external fields or by imposing geometrical restrictions, see, e.g., Ref. [6] and references therein. Then, colloids represent an ideal model system to account for the effects of geometry on the nature and dynamics of many-body systems. In particular, it has already been demonstrated, and experimentally corroborated, that the curvature dependence of a fluctuating membrane affects the diffusion processes of molecules on the membrane surface [20, 17, 32]. These geometrical effects, although important, are still difficult to interpret. The lack of a precise interpretation resides in the fact that, unfortunately, there is no formal or unique way to define diffusion on a curved surface (see, for instance, [20, 17]). In fact, the description of the colloid dynamics in curved spaces is a non-trivial task; it represents a formidable physical and mathematical challenge. Recently, one of us proposed the generalization of the Smoluchowski equation on curved spaces [7]. Furthermore, Castro-Villarreal also put forward different geometrical observables to quantify the displacement of a single colloidal particle [8]. Within this approach, it was shown that the geodesic mean-square displacement captures the intrinsic elements of the manifold, whereas the Euclidean displacement provides extrinsic information from the surface. An interesting extension of the theory now provides the description of the motion of active Brownian particles [11], where the mean-square geodesic displacement captures the relationship between the curvature and the activity of the active colloid. This theoretical framework provided evidence that an active Brownian particle experiences a dynamical transition in any compact surface from a monotonic to an oscillating behavior observed in the mean squared geodesic displacement [11]; a theoretical prediction of a dynamic transition of this type can be established using a _run-and-tumble_ active particle confined on a circle \(S^{1}\). Recently, this prediction was corroborated in experiments using a nonvibrating magnetic granular system, see, e.g., Ref. [22]. However, we still face challenges in colloid dynamics on curved manifolds, for example, the generalization of this approach to the situation where the colloids interact not only with other macromolecules, i.e., direct forces, but also the inclusion of all those geometrical mechanisms originating from the curvature. The aforementioned theoretical formalism has also allowed us to determine the equation of motion of interacting colloids in curved spaces; a generalized Ermack-McCammon algorithm has been developed to study a broader class of transport phenomena in curved manifolds [12]. 
Interestingly, the predictions of the particle transport in non-Euclidean spaces have been partially corroborated in a series of experiments [30, 32]; superparamagnetic colloids embedded in a circle and subjected to external magnetic fields [30] and polystyrene nanoparticles diffusing on highly curved water-silicone oil interfaces [32]. However, further experimental, computational, and theoretical studies are needed to better understand the rich diffusion mechanisms, particle distribution, and thermodynamic properties that emerge in colloidal dispersions when the curvature of the space plays an important role. To this end, we propose, as a starting point, the covariance description of the colloid dynamics, which is explicitly explained in the next section. ## 2 Covariance description of the colloid dynamics As discussed above, one of the main challenges to understanding the effects of geometry on the dynamics of colloids embedded in a curved space is to develop experimental tools and theoretical frameworks that account for the transport properties that occur on the manifold. Below, we then provide the first preliminary steps to build a covariant theoretical formulation of the dynamics of an interacting colloidal system based on the many-body Langevin equation in the so-called overdamped limit [12], which allows us to deduce a Smoluchowski equation [14] for the interacting system on the manifold. Before starting with the covariant formulation, let us introduce our notation. Let us consider the colloidal system confined on a \(d-\)dimensional manifold \(\mathbb{M}\) embedded in a \(d+1-\)dimensional Euclidean space \(\mathbb{R}^{d+1}\) described with the parameterization \(\mathbf{X}:U\subset\mathbb{R}^{d}\rightarrow\mathbb{R}^{d+1}\), where a particular point in \(\mathbb{M}\) is given by \(\mathbf{X}(x)\), being \(x\equiv\left(x^{1},x^{2},\cdots,x^{d}\right)\in U\) local coordinates of the neighborhood \(U\). Using the embedding functions \(\mathbf{X}(x)\), one can define a Riemannian metric tensor by \(g_{\alpha\beta}=\mathbf{e}_{\alpha}\cdot\mathbf{e}_{\beta}\), where \(\mathbf{e}_{\alpha}=\frac{\partial}{\partial x^{\alpha}}\mathbf{X}\left(x\right)\), with \(\alpha=1,\cdots,d\). Further notions like normal vector, extrinsic curvature tensor, and Weingarten-Gauss equations are introduced in Appendix A from [8]. Typically, spatial dimensions of interest are \(d=1\) and \(d=2\). As we have pointed out above, our starting point to describe the dynamics of colloids confined in a curved manifold is based on a previous contribution [12], where it is posed the many-body Langevin stochastic equations in the overdamped regime, _i.e._, the diffusive time scale, in local coordinates, \[\dot{x}_{i}^{\alpha}=\frac{1}{\zeta}\mathbf{e}^{\alpha}\left(x_{i}\right)\cdot \left[\mathbf{f}_{i}\left(t\right)+\sum_{i\neq j}\mathbf{F}_{ij}\left(x_{i},x _{j}\right)\right], \tag{1}\] where \(\zeta\) is the friction coefficient, and with \(x_{i}^{\alpha}\) being the \(i-\)th particle position with \(i=1,\cdots,N\) and \(\dot{x}_{i}^{\alpha}\equiv\frac{dx_{i}^{\alpha}}{dt}\). 
The quantity \(\mathbf{f}_{i}\left(t\right)\) represents the collective effects of the solvent molecules on the colloid, and it is expressed by a stochastic force over the \(i\)-th particle, which satisfies the fluctuation-dissipation theorem in the Euclidean space \(\mathbb{R}^{d+1}\), that is, \(\left\langle\mathbf{f}_{i}\left(t\right)\right\rangle=0\) and \(\left\langle\mathbf{f}_{i}(t)\mathbf{f}_{j}(\tau)\right\rangle=2\zeta k_{B}T \mathbf{1}\delta_{ij}\delta\left(t-\tau\right)\), where \(k_{B}T\) is the thermal energy being \(T\) the temperature and \(k_{B}\) the Boltzmann's constant. The term \(\mathbf{F}_{ij}\left(x_{i},x_{j}\right)\) is the force that the \(i\)-th particle experiences at the position \(x_{i}\) and is due to the interaction with the \(j\)-th particle located at the position \(x_{j}\). In Eq. (1), the tangent vector \(\mathbf{e}_{\alpha}\equiv\partial_{\alpha}\mathbf{X}\) projects the dynamics on the tangent space \(T_{X}\left(\mathbb{M}\right)\), since the dynamics is occurring intrinsically on the manifold. Note that rising and lowering indices are done by the standard fashion using the metric tensor to lowering indices and inverse metric tensor \(g^{\alpha\beta}\) for rising indices, for instance, \(v^{\alpha}=g^{\alpha\beta}v_{\beta}\) for certain vector \(v\). In the present exposition, we adopt the consideration that Eq. (1) is a set of \(N\) stochastic differential equations in the Stratonovich's sense [18], \[dx_{i}^{\alpha}=\frac{1}{\zeta}\mathcal{F}_{i}^{\alpha}dt+\sqrt{2D_{0}}e_{i,a }^{\alpha}dW_{i,a}(t), \tag{2}\] where \(\mathcal{F}_{i}^{\alpha}\equiv\sum_{j\neq i}F_{ij}^{\alpha}\), with \(F_{ij}^{\alpha}\) as the tangent projection of the interacting term \(\mathbf{F}_{ij}\), and \(D_{0}=k_{B}T/\zeta\) is the self-diffusion coefficient. Also, there is an implicit sum over the indices \(a=1,\cdots,d+1\), to take into account the tangent projection with the stochastic term in Eq. (1), which has been identified with a Wiener process for each particle \(d\mathbf{W}_{i}(t)=\left(dW_{i,1}(t),dW_{i,2}(t),\cdots,dW_{i,d+1}(t)\right)\), so that the total Wiener process \(d\mathbf{W}(t)\) is such that \(\dim[d\mathbf{W}(t)]=(d+1)N\). Since the dynamics occurs on the curved space, the Wiener process should also be projected on it. So, we introduce a block diagonal projection operator \(\mathbf{\hat{P}}=\mathrm{diag}(e_{1,a}^{\alpha},e_{2,a}^{\alpha},...,e_{N,a}^{ \alpha})\), with \(e_{i,a}^{\alpha}\equiv\left(\mathbf{e}^{\alpha}\left(x_{i}\right)\right)_{a}\), where the blocks are individual operators for each particle given by the tensorial product of the basis of the tangent space and the basis of the Euclidean space. It is a well-known fact that given a differential stochastic equation in the Stratonovich form such as Eq. (2), one can find its associate Chapman-Kolmogorov differential equation for the joint probability density function \(p:\mathbb{M}^{N}\times\mathbb{R}\rightarrow\mathbb{R}\)[18]. For this, we only have to identify the components of the drift vector, and the diffusion matrix, which in this case are, \(A_{i}^{\alpha}=\mathcal{F}_{i}^{\alpha}/\zeta\), and \(B_{i,a}^{\alpha}=\sqrt{2D_{0}}e_{i,a}^{\alpha}\), respectively. Then, we obtain the following expression, \[\partial_{t}p=-\frac{1}{\zeta}\sum_{i=1}^{N}\partial_{\alpha}\left(\mathcal{F }_{i}^{\alpha}p\right)+D_{0}\sum_{i=1}^{N}\partial_{\alpha}\left[e_{i,a}^{ \alpha}\partial_{\beta}\left(e_{i,a}^{\beta}p\right)\right]. 
\tag{3}\] In this equation, let us note that the partial derivation \(\partial_{\alpha}=\frac{\partial}{\partial x_{i}^{\alpha}}\) depends on the index \(i\), which is associated to the particle label. Although this last equation has information on the geometry of the surface through the tangent vectors, it is not written in a covariant form yet. To this end, we define the probability density appropriately normalized with the volume element \(dV=\prod_{i=1}^{N}dv_{g}^{i}\), where \(dv_{g}^{i}\) is the Riemannian volume element defined by \(dv_{g}^{i}\equiv d^{d}x_{i}\sqrt{g(x_{i})}\) for each particle. Thus, it is convenient to define a covariant joint probability density function \(\rho\left(x_{1},\cdots,x_{N};t\right)\) such as \(p\left(x_{1},\cdots,x_{N};t\right)=\left(\prod_{i=1}^{N}\sqrt{g(x_{i})}\right) \rho\left(x_{1},\cdots,x_{N};t\right)\), where \(g(x_{i})\) is the determinant of the metric tensor \(g_{\alpha\beta}\left(x_{i}\right)\). After this change and using the Weingarten-Gauss equation mentioned above, Eq. (3) takes the following mathematical form, \[\partial_{t}\rho=-\frac{1}{\zeta}\sum_{i=1}^{N}\nabla_{\alpha,i}\left(\mathcal{ F}_{i}^{\alpha}\rho\right)+D_{0}\sum_{i=1}^{N}\frac{1}{\sqrt{g}}\partial_{ \alpha}\left[g^{\alpha\beta}\sqrt{g}\partial_{\beta}\rho+g^{\alpha\beta}\rho (\partial_{\beta}\sqrt{g}-\sqrt{g}\Gamma_{\nu\beta}^{\nu})\right],\] where the covariant derivative acting on a vector field \(v^{\alpha}\) is \(\nabla_{\alpha,i}v^{\alpha}=\frac{1}{\sqrt{g}}\partial_{\alpha}\left(\sqrt{g }\;v^{\alpha}\right)\) using the coordinates of the \(i-\)th particle, \(x_{i}^{\alpha}\), and \(\Gamma_{\nu\beta}^{\alpha}\) are the Christoffel symbols [23]. Additionally, applying the identity \(\Gamma_{\nu\beta}^{\nu}=\partial_{\beta}\log\sqrt{g}\), and identifying that the Laplace-Beltrami operator acts on scalars, \(\Delta_{g,i}=\left(\sqrt{g}\right)^{-1}\partial_{\alpha}(g^{\alpha\beta}\sqrt {g}\partial_{\beta})\) (also using local coordinates, \(x_{i}^{\alpha}\)), it is straightforward to obtain the desired covariant expression, \[\partial_{t}\rho=D_{0}\sum_{i=1}^{N}\Delta_{g,i}\rho-\frac{1}{\zeta}\sum_{i=1 }^{N}\nabla_{\alpha,i}\left(\mathcal{F}_{i}^{\alpha}\rho\right). \tag{4}\] Equation (4) represents the covariant formulation of the Smoluchowski equation of a colloidal system of interacting particles constrained to a curved space \(\mathbb{M}\), where all the geometrical features are included in the Laplace-Beltrami operator and the covariant derivative. This equation is reduced to the standard Smoluchowski equation when the manifold \(\mathbb{M}\) is the open Euclidean space \(\mathbb{R}^{d}\), where the metric tensor is \(g_{\alpha\beta}=\delta_{\alpha\beta}\). Furthermore, one can write down equation (4) in a more compact form that allows us to prove that both systems shown in figure (1), that is, the system of \(N\) interacting particles confined to a \(d\)-dimensional manifold \(\mathbb{M}\), and the system of a single particle in an external force confined to a \(\mathcal{D}\)-dimensional manifold \(\mathcal{M}\) represent equivalent systems. 
For this purpose, let us define a hyper-dimensional Riemannian geometry by taking \(N\) Cartesian products of the manifold \(\mathbb{M}\), that is, \(\mathcal{M}=\mathbb{M}\times\mathbb{M}\times\cdots\times\mathbb{M}\equiv\mathbb{M}^{N}\) of dimension \(\mathcal{D}=Nd\), where a local patch is described with the local coordinates \(\xi^{A}=\{x_{i}^{\alpha}\}\), obtained by running over the local indices \(\alpha\) and the particle indices \(i\), with \(A=1,\cdots,\mathcal{D}\). Now, this manifold \(\mathcal{M}\) is equipped with a Riemannian metric defined through the following line element, \[ds^{2}=\sum_{i=1}^{N}g_{\alpha\beta}\left(x_{i}\right)dx_{i}^{\alpha}dx_{i}^{\beta}, \tag{5}\] in terms of the metric tensor \(g_{\alpha\beta}\) of the coordinates of each particle. Thus, the metric tensor associated with the line element (5) for the manifold \(\mathcal{M}\) is given by the block diagonal matrix \(G_{AB}=\text{diag}\left(g_{\alpha\beta}\left(x_{1}\right),\cdots,g_{\mu\nu}(x_{N})\right)\). It is not difficult to see that the covariant derivative compatible with the metric \(G_{AB}\) for the manifold \(\mathcal{M}\) can be written as, \[\mathbf{\nabla}_{A}=\left(\nabla_{\alpha,1},\nabla_{\beta,2},\cdots,\nabla_{\mu,N}\right), \tag{6}\] and the corresponding Laplace-Beltrami operator acting on scalars is simply the sum of the single-particle Laplace-Beltrami operators, \[\mathbf{\Delta}_{G}=\mathbf{\nabla}_{A}\mathbf{\nabla}^{A}=\sum_{i=1}^{N}\Delta_{g,i}. \tag{7}\] Now, defining \(\mathbf{\mathcal{F}}^{A}=\left(\mathcal{F}_{1}^{\alpha},\mathcal{F}_{2}^{\beta},\cdots,\mathcal{F}_{N}^{\mu}\right)\) as the components of a vector field at the point \(\xi\in\mathcal{M}\), it is straightforward to write down the Smoluchowski equation for the full \(N\)-particle colloidal system confined on the curved space (4) as, \[\partial_{t}\rho=D_{0}\mathbf{\Delta}_{G}\rho-\frac{1}{\zeta}\mathbf{\nabla}_{A}\left(\mathbf{\mathcal{F}}^{A}\rho\right). \tag{8}\] By expressing the Smoluchowski equation in this compact manner, it is now clear in what sense one can interpret the problem of the interacting colloidal system as the Brownian motion of a single particle in an external field \(\mathbf{\mathcal{F}}\) but in a hyper-dimensional space \(\mathcal{M}\). This identification was already implemented in a previous contribution [9], where an interacting colloidal system was studied on the line. Figure 1: Left: Schematic representation of a set of particles embedded in a manifold \(\mathbb{M}\) of dimension \(d\). The position of the particles is given by the embedding function \(\mathbf{X}(x_{i})\), and the force of interaction depends on the Euclidean distance measured in \(\mathbb{R}^{d+1}\). Right: Schematic representation of a single particle in the manifold \(\mathcal{M}=\mathbb{M}^{N}\). The particle is carried by an external force given by the vector field \(\mathbf{\mathcal{F}}^{A}\). Although both situations, left and right, seem to represent different systems, they are exactly the same physical problem.
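To make the geometric objects entering Eqs. (1), (2), and (5) concrete, the following minimal sketch computes, for the 2-sphere embedding (chosen here purely for illustration; any parametrized surface \(\mathbf{X}(x^{\alpha})\) is handled the same way), the tangent vectors \(\mathbf{e}_{\alpha}=\partial_{\alpha}\mathbf{X}\), the induced metric \(g_{\alpha\beta}=\mathbf{e}_{\alpha}\cdot\mathbf{e}_{\beta}\), the Riemannian volume element \(\sqrt{g}\), and the tangent projector applied to the ambient Wiener increments.

```python
import sympy as sp

theta, phi, a = sp.symbols('theta phi a', positive=True)

# Embedding X(theta, phi) of the sphere of radius a in R^3 (illustrative choice)
X = sp.Matrix([a * sp.sin(theta) * sp.cos(phi),
               a * sp.sin(theta) * sp.sin(phi),
               a * sp.cos(theta)])

# Tangent vectors e_alpha = d_alpha X and induced metric g_{alpha beta} = e_alpha . e_beta
E = X.jacobian([theta, phi])            # 3x2 matrix whose columns are e_theta, e_phi
g = sp.simplify(E.T * E)                # diag(a**2, a**2*sin(theta)**2)
sqrt_g = sp.sqrt(sp.simplify(g.det()))  # Riemannian volume element a**2*sin(theta)

# Tangent projector P = e_alpha g^{alpha beta} e_beta^T; it annihilates the normal n = X/a
P = sp.simplify(E * g.inv() * E.T)
n = X / a
print(g, sqrt_g)
print(sp.simplify(P * n))       # zero vector: ambient noise is projected onto T_X(M)
print(sp.simplify(P * E - E))   # zero matrix: tangent vectors are left unchanged
```

Repeating this construction for each particle and assembling the results block-diagonally gives precisely the projector \(\mathbf{\hat{P}}\) and the hyper-metric \(G_{AB}\) used above.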
Moreover, if we suppose that the interaction forces encoded in \(\boldsymbol{\mathcal{F}}^{A}\) can be written as \(\boldsymbol{\mathcal{F}}_{A}=-\nabla_{A}\Phi\), where \(\Phi\) is a certain interaction potential, one can see that the expected equilibrium distribution is reached at long times, namely, \(\rho\left(\xi,t\right)=\frac{1}{\mathcal{Z}}e^{-\beta\Phi\left(\xi\right)}\), where \(\mathcal{Z}\) is the partition function for the particle system confined to the curved manifold, \[\mathcal{Z}=\int\left(\prod_{i=1}^{N}dv_{g}^{i}\right)e^{-\beta\Phi\left(\xi\right)}, \tag{9}\] where \(\beta=1/(\zeta D_{0})=1/(k_{B}T)\). Let us note that the expression of this partition function can also be obtained by integrating out the momentum \(p_{i}^{\alpha}\) variables from the Boltzmann weight using the Hamiltonian \(\mathcal{H}=\sum_{i=1}^{N}\frac{1}{2m}p_{i}^{\alpha}g_{\alpha\beta}(x_{i})p_{i}^{\beta}+\Phi\left(\xi\right)\). Usually, the potential \(\Phi\left(\xi\right)\) is considered pairwise additive; thus, one can carry out the usual cluster diagrammatic expansion for the colloidal system in the curved space in static conditions [29]. Consequently, equations (4) and (8) represent the starting point of a covariant description that allows us to study in detail the colloid dynamics in curved spaces. In the following paragraphs, we will discuss a simple application of this formulation, and highlight some challenges and future perspectives that can be tackled within this approach. ## 3 Application of the covariant formulation: general behavior of the short-time dynamics in a dilute colloidal system In this section, we study an application that exploits the advantage of writing down the Smoluchowski equation in curved spaces in the covariant formulation (8). This consists of providing the general behavior of the probability density function \(\rho\left(\xi,\xi^{\prime},t\right)\) in the short-time regime, or equivalently in a neighborhood around a point of the manifold \(\mathcal{M}\). Since \(\mathcal{M}\) is a Riemannian manifold with metric tensor \(G_{AB}\), one can explore the curvature effects on the colloidal interacting system using the Riemann normal coordinates (RNC), see, e.g., Refs. [19, 23], in the neighborhood of a point \(p\in\mathcal{M}\), in an entirely analogous manner to what has been performed for a single particle [7]. To derive an approximate expression for the probability density function (PDF) \(\rho\left(\xi,\xi^{\prime},t\right)\) at short times, it is common to write the Smoluchowski equation (8) as a heat-kernel equation, \[\left(\partial_{t}+\hat{\mathcal{O}}\right)\rho\left(\xi,\xi^{\prime},t\right)=\frac{1}{\sqrt{G}}\delta\left(\xi-\xi^{\prime}\right)\delta\left(t\right), \tag{10}\] where the operator \(\hat{\mathcal{O}}\) is defined as \(\hat{\mathcal{O}}=-D_{0}\boldsymbol{\Delta}_{G}+\frac{1}{\zeta}\boldsymbol{\nabla}_{A}\left(\boldsymbol{\mathcal{F}}^{A}\,\cdot\right)\). At the initial condition, \(t\to 0\), the PDF acquires the form of a Dirac delta: \(\rho(\xi,\xi^{\prime},t\to 0)=\frac{1}{\sqrt{G}}\delta\left(\xi-\xi^{\prime}\right)\). This initial condition establishes that the system is at the configuration \(\xi^{\prime}\) at the starting time. Then, by performing a Fourier transform on the time parameter, the above equation can be written as \(\left(iE+\hat{\mathcal{O}}\right)\rho\left(\xi,\xi^{\prime},E\right)=\delta\left(\xi-\xi^{\prime}\right)/\sqrt{G}\).
In the following, we use the De Witt procedure [13]; that is, we first separate the points to write the term \(\sqrt{G}\) in front of the delta Dirac as the expression \(\sqrt{G}\to G^{\frac{1}{4}}(\xi)G^{\frac{1}{4}}\left(\xi^{\prime}\right)\). Now, we redefine the PDF as \(\overline{\rho}\left(\xi,\xi^{\prime},t\right)=G^{\frac{1}{4}}\left(\xi\right) \rho\left(\xi,\xi^{\prime},t\right)G^{\frac{1}{4}}\left(\xi^{\prime}\right)\). Thus, after some algebraic rearrangements, the above equation (10) can be rewritten as, \[\left(iE+\hat{H}\right)\overline{\rho}\left(\xi,\xi^{\prime},E\right)=\delta \left(\xi-\xi^{\prime}\right), \tag{11}\] where \(\hat{H}=G^{\frac{1}{4}}\hat{\mathcal{O}}G^{-\frac{1}{4}}\), or explicitly, this operator is given by \[\hat{H}=-D_{0}\left[\partial_{A}G^{AB}\partial_{B}+G^{-\frac{1}{4}}\partial_ {A}\left(G^{\frac{1}{2}}G^{AB}\partial_{B}G^{-\frac{1}{4}}\right)-\beta\left( \partial_{A}\left(\mathbf{\mathcal{F}}^{A}\mathbf{\cdot}\right)+\frac{1}{4}G^{-1}( \partial_{A}G)\mathbf{\mathcal{F}}^{A}\right)\right]. \tag{12}\] Next, we choose Riemann normal coordinates \(y^{A}\) in a local neighborhood \(N_{\xi^{\prime}}\in\mathcal{M}\) centered at \(\xi^{\prime}\). In RNC, the neighborhood \(N_{\xi^{\prime}}\) looks like Euclidean space, so we choose \(\xi^{\prime}\) to be the origin of this Euclidean space. The advantage of these coordinates is that one can express the metric tensor as \(G_{AB}=\delta_{AB}+\frac{1}{3}\mathbf{\mathcal{R}}_{ACDB}\)\(y^{C}y^{D}+\cdots\), where \(\mathbf{\mathcal{R}}_{ACDB}\) is the Riemann curvature tensor of \(\mathcal{M}\) evaluated at \(\xi^{\prime}\). In addition, we express the interaction terms in a Taylor expansion around the origin of the neighborhood \(\mathbf{\mathcal{F}}^{A}\left(\xi\right)=\left(\mathbf{\mathcal{F}}^{A}\right)\left( \xi^{\prime}\right)+\left(\mathbf{\nabla}_{B}\mathbf{\mathcal{F}}^{A}\right)\left(\xi ^{\prime}\right)y^{B}+\frac{1}{2}\left(\mathbf{\nabla}_{B}\mathbf{\nabla}_{C}\mathbf{ \mathcal{F}}^{A}\right)\left(\xi^{\prime}\right)y^{B}y^{C}+\cdots\), where the coefficients are evaluated at the point \(\xi^{\prime}\). In the subsequent, we have all the pieces to split the operator (12) as \(\hat{H}=\hat{H}_{0}+\hat{H}_{I}\), where \[\hat{H}_{0}=D_{0}\hat{\mathbf{p}}^{2}-\frac{D_{0}}{6}\mathbf{\mathcal{R}}+D_{0} \beta\mathbf{\nabla}_{A}\mathbf{\mathcal{F}}^{A}, \tag{13}\] is a free "Hamiltonian" and \[\hat{H}_{I} = D_{0}\beta\left(\mathbf{\nabla}_{B}\mathbf{\nabla}_{A}\mathbf{\mathcal{F}}^ {A}-\frac{1}{6}\mathbf{\mathcal{R}}_{BA}\mathbf{\mathcal{F}}^{A}\right)y^{B}-\frac{D_{ 0}\beta}{6}\left(\mathbf{\mathcal{R}}_{BA}\mathbf{\nabla}_{C}\mathbf{\mathcal{F}}^{A} \right)y^{B}y^{C}+iD_{0}\beta\mathbf{\mathcal{F}}^{A}\hat{p}_{A} \tag{14}\] \[+ iD_{0}\beta\left(\mathbf{\nabla}_{B}\mathbf{\mathcal{F}}^{A}\right)y^{B} \hat{p}_{A}+i\frac{D_{0}\beta}{2}\left(\mathbf{\nabla}_{B}\mathbf{\nabla}_{C}\mathbf{ \mathcal{F}}^{A}\right)y^{B}y^{C}\hat{p}_{A}-\frac{D_{0}}{3}\mathbf{\mathcal{R}}_ {CABD}\ \hat{p}^{A}y^{C}y^{D}\hat{p}^{B},\] an interacting "Hamiltonian", where we have defined a "momentum operator" as \(\hat{p}_{A}=-i\partial_{A}\) in analogy with quantum mechanics. 
Now, the solution for the probability density function can be obtained by identifying \(\delta(\xi-\xi^{\prime})=\left\langle\xi\right|\left.\xi^{\prime}\right\rangle\) and solving equation (11) as follows \(\overline{\rho}\left(\xi,\xi^{\prime},E\right)=\left\langle\xi\left|\hat{K} \right|\xi^{\prime}\right\rangle\), where \(\hat{K}=1/(iE+\hat{H})\) is the resolvent operator. Now, we carry on a standard perturbation theory at first order in an entire analogy with quantum mechanics; thus, the approximation of the resolvent operator through the perturbation theory is \(\hat{K}=\hat{K}_{0}+\hat{K}_{0}\hat{H}_{I}\hat{K}_{0}+\cdots\). At this approximation, there are just 6 terms to evaluate corresponding to quantities of the form \(I_{i}\left(\xi,\xi^{\prime}\right)=\left\langle\xi\left|\hat{K}_{0}\hat{ \mathcal{O}}_{i}\hat{K}_{0}\right|\xi^{\prime}\right\rangle\), with \(i=1,\cdots,6\), where \(\hat{\mathcal{O}}_{i}\) is one of the six terms: \(y^{B}\), \(y^{B}y^{C}\) \(\hat{p}_{A}\), \(y^{B}\hat{p}_{A}\), \(y^{B}y^{C}\hat{p}_{A}\), and \(\hat{p}^{A}y^{C}y^{D}\hat{p}^{B}\), respectively. Since \(\hat{K}_{0}\) depends just on the "momentum operator" \(\hat{\mathbf{p}}\), it is convenient to introduce two completeness relations using the momentum basis \(\{\left|\mathbf{p}\right\rangle\}\) to compute the contributions from the interacting Hamiltonian. Thus, one can write \[I_{i}\left(\xi,\xi^{\prime}\right)=\int\frac{d^{\mathcal{D}}p}{ \left(2\pi\right)^{\mathcal{D}}}\int d^{\mathcal{D}}q\ K_{0}\left(p,\alpha_{* }\right)e^{i\xi\cdot p}\left\langle\mathbf{p}\left|\hat{\mathcal{O}}_{i}\right| \mathbf{q}\right\rangle K_{0}(q,\alpha_{*})e^{-i\xi^{\prime}\cdot q}, \tag{15}\] where \(\alpha_{*}=-\frac{D_{0}}{6}\boldsymbol{\mathcal{R}}+D_{0}\beta\boldsymbol{ \nabla}_{A}\boldsymbol{\mathcal{F}}^{A}\) and \(K_{0}(p,\alpha)=1/(iE+D_{0}p^{2}+\alpha)\) is simply a function of value of the "momentum" \(p=\sqrt{p_{A}p^{A}}\) and energy \(E\). In addition, we have used the transformation from the position to momentum basis as usual \(\left\langle\xi\,\right|\,\mathbf{p}\right\rangle=e^{i\xi\cdot p}/(2\pi)^{ \frac{\mathcal{D}}{2}}\). We should recall that we have chosen \(\xi^{\prime}=0\) as the origin of the neighborhood \(N_{\xi^{\prime}}\); this allows us to simplify the calculation of the integrals \(I_{i}\left(\xi,\xi^{\prime}\right)\). In the appendix, we explicitly explain the procedure implemented to evaluate these integrals. 
After a straightforward calculation, the short-time approximation for the probability density function \(\rho(\xi,0,t)\) of the full interacting system can be written as, \[\sqrt{G}\rho(\xi,0,t) = \frac{1}{\left(4\pi D_{0}t\right)^{\mathcal{D}/2}}e^{-\frac{\xi^ {2}}{4D_{0}t}}\left\{1+\tau^{(0)}+\tau_{B}^{(1)}\xi^{B}+\tau_{BC}^{(2)}\xi^{B} \xi^{C}+\cdots\right\}, \tag{16}\] where the terms \(\tau^{(0)}\), \(\tau_{B}^{(1)}\), \(\tau_{BC}^{(2)}\) are tensors given by \[\tau^{(0)} = \left(D_{0}t\right)\left[\frac{1}{6}\boldsymbol{\mathcal{R}}- \frac{1}{2}\beta\boldsymbol{\nabla}_{A}\boldsymbol{\mathcal{F}}^{A}\right], \tag{17}\] \[\tau_{B}^{(1)} = \frac{\beta}{2}\left[G_{BA}\left(1+D_{0}t\left(\frac{\boldsymbol{ \mathcal{R}}}{6}-\beta\boldsymbol{\nabla}_{C}\boldsymbol{\mathcal{F}}^{C} \right)\right)+\frac{D_{0}t}{6}\left(\boldsymbol{\mathcal{R}}_{BA}+G_{BA} \boldsymbol{\nabla}_{G}-16\boldsymbol{\nabla}_{B}\boldsymbol{\nabla}_{A} \right)\right]\boldsymbol{\mathcal{F}}^{A},\] (18) \[\tau_{BC}^{(2)} = \frac{\beta}{4}\left[\left(1+D_{0}t\left(\frac{\boldsymbol{ \mathcal{R}}}{6}-\beta\boldsymbol{\nabla}_{A}\boldsymbol{\mathcal{F}}^{A} \right)\right)\boldsymbol{\nabla}_{B}\boldsymbol{\mathcal{F}}_{C}-\frac{2D_{0 }t}{9}\boldsymbol{\mathcal{R}}_{BA}\boldsymbol{\nabla}_{C}\boldsymbol{\mathcal{ F}}^{A}\right]\] (19) \[- \frac{1}{12}\left(1+D_{0}t\left(\frac{\boldsymbol{\mathcal{R}}}{6 }-\frac{1}{2}\beta\boldsymbol{\nabla}_{C}\boldsymbol{\mathcal{F}}^{C}\right) \right)\boldsymbol{\mathcal{R}}_{BC}.\] Equation (16) represents the probability distribution function of the interacting particle system at the short-time regime1; it can be appreciated that the leading term, \(\rho_{0}(\xi,0,t)\equiv\exp\left[-\xi^{2}/(4D_{0}t)\right]/\left(4\pi D_{0}t \right)^{\mathcal{D}/2}\), is given by the Gaussian probability density valid for a very dilute system, while the subleading terms capture the corrections due to the curvature effects and interactions. Footnote 1: Note that one can show that \(\rho(\xi,0,t)\) is normalized order by order in the perturbation theory of powers of \((D_{0}t)^{2}\). Indeed, using the above expectation values, \(\left\langle 1\right\rangle=1+\tau^{(0)}+2D_{0}t\tau^{(2)}\), where \(\tau^{(2)}=G^{AB}\tau_{AB}^{(2)}\), thus at the first order \(D_{0}t\), one has \(\left\langle 1\right\rangle=1+(D_{0}t)\left[\frac{1}{6}\boldsymbol{\mathcal{R}} -\frac{1}{2}\beta\boldsymbol{\nabla}_{A}\boldsymbol{\mathcal{F}}^{A}\right]- \frac{1}{6}D_{0}t\boldsymbol{\mathcal{R}}+\frac{\beta}{2}D_{0}t\boldsymbol{ \nabla}_{A}\boldsymbol{\mathcal{F}}^{A}\approx 1\). Expectation values of observables can be calculated using the standard definition \(\left\langle O(\xi)\right\rangle=\int_{\mathcal{M}}d^{\mathcal{D}}\xi\sqrt{G} \rho(\xi,0,t)O(\xi)\). Within the above approximation (16), the expectation values can be estimated in the short time using expectation values \(\left\langle O(\xi)\right\rangle_{0}\) using the leading term \(\rho_{0}(\xi,0,t)\); that is \(\left\langle O(\xi)\right\rangle=\left\langle O(\xi)\right\rangle_{0}\left(1+\tau^{ (0)}\right)+\tau_{B}^{(1)}\left\langle\xi^{B}O(\xi)\right\rangle_{0}+\tau_{BC}^ {(2)}\left\langle\xi^{B}\xi^{C}O(\xi)\right\rangle_{0}+\cdots\). Expectation values of polynomial observables are particularly easy to compute due to the Gaussian structure of \(\rho_{0}\left(\xi,0,t\right)\). 
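Since the leading term \(\rho_{0}\) is an isotropic Gaussian of covariance \(2D_{0}t\) (in RNC at the origin, where \(G^{AB}=\delta^{AB}\)), the Gaussian moments quoted below can be verified with a few lines of Monte Carlo; the numerical values of \(D_{0}\), \(t\), and \(\mathcal{D}\) used here are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
D0, t, Dim = 1.0, 0.05, 4            # illustrative values of D_0, time, and D = N*d
var = 2.0 * D0 * t                   # covariance of rho_0 in Riemann normal coordinates
xi = rng.normal(0.0, np.sqrt(var), size=(1_000_000, Dim))

second = xi.T @ xi / len(xi)                 # <xi^A xi^B>_0  ->  2 D0 t * delta^{AB}
print(np.round(second, 4))

xi2 = (xi**2).sum(axis=1)
fourth = np.mean(xi[:, 0]**2 * xi2)          # <xi^A xi^B xi^2>_0 at A = B
print(fourth, 4 * (Dim + 2) * (D0 * t)**2)   # both ~ 0.060

print(round(np.mean(xi[:, 0] * xi2), 4))     # odd moments vanish within sampling noise
```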
Here, we are interested in the estimation of the mean-square geodesic displacement \(\left\langle s^{2}\right\rangle\), where \(s=\sqrt{\delta_{AB}\xi^{A}\xi^{B}}\) is the geodesic displacement in RNC [19]. Also, it is interesting to estimate the expectation value of the coordinate itself \(\xi^{B}\). For these expectation values, it is not a very difficult task to show by the standard calculation of the moments of a Brownian motion in a \(\mathcal{D}\) dimensional space that \(\left\langle 1\right\rangle_{0}=1\), meaning the normalization of the leading distribution \(\rho_{0}(\xi,0,t)\), the vanish of the odd products \(\left\langle\xi^{A_{1}}\xi^{A_{2}}\cdots\xi^{A_{2k+1}}\right\rangle_{0}=0\), for any positive integer \(k\), and for even products \(\left\langle\xi^{B}\right\rangle_{0}=\left\langle\xi^{B}\xi^{2}\right\rangle_ {0}=\left\langle\xi^{B}\xi^{C}\xi^{A}\right\rangle_{0}=0\), \(\left\langle\xi^{A}\xi^{B}\right\rangle_{0}=2D_{0}tG^{AB}\), and \(\left\langle\xi^{A}\xi^{B}\xi^{2}\right\rangle_{0}=4\left(\mathcal{D}+2 \right)\left(D_{0}t\right)^{2}G^{AB}\), where \(G^{BC}\) is evaluated at \(\xi^{\prime}\). Since the above approximation neglects quadratic curvature effects that correspond to prefactors of the order of \(\left(D_{0}t\right)^{3}\) in the mean-square displacement [7], we just present the result up to order of \((D_{0}t)^{2}\). This means that we basically neglect the linear terms of \(D_{0}t\) in \(\tau_{BC}^{(2)}\); thus, the mean-square displacement for the full \(N\)-particle system is given by, \[\left\langle s^{2}\right\rangle=2\mathcal{D}D_{0}t-\left[\frac{2}{3}\mathbf{\mathcal{R}}-2\beta\mathbf{\nabla}_{A}\mathbf{ \mathcal{F}}^{A}\right]\left(D_{0}t\right)^{2}+\cdots\,. \tag{20}\] One can notice that in absence of the interaction term, that is, when \(\mathbf{\mathcal{F}}^{A}=0\), the mean-square displacement reduces to the previous result at the order \((D_{0}t)^{2}\)[7]. In addition, it is not difficult to elucidate that the subsequent correction of the order of \((D_{0}t)^{3}\) involves pre-factors where curvature and interactions are coupled, for instance, the terms \(\mathbf{\mathcal{R}}\mathbf{\nabla}_{A}\mathbf{ \mathcal{F}}^{A}\) and \(\mathbf{\mathcal{R}}_{BA}\mathbf{\nabla}_{C}\mathbf{ \mathcal{F}}^{A}\) from the tensor \(\tau_{BC}^{(2)}\) appear as pre-factors; the cubic correction will be computed elsewhere in a future communication. Also, note that similar terms appear in the expectation value of \(\xi_{B}\), \(\left\langle\xi_{B}\right\rangle=2D_{0}t\tau_{B}^{(1)}\), explicitly \[\left\langle\xi_{B}\right\rangle=\beta D_{0}t\mathbf{ \mathcal{F}}_{B}+\beta\left(D_{0}t\right)^{2}\left[G_{BA}\left(\frac{\mathbf{\mathcal{R}}}{6}-\beta\mathbf{\nabla}_{C}\mathbf{ \mathcal{F}}^{C}\right)+\frac{1}{6}\left(\mathbf{\mathcal{R}}_{BA}+ G_{BA}\mathbf{\nabla}_{G}-16\mathbf{\nabla}_{B}\mathbf{ \nabla}_{A}\right)\right]\mathbf{\mathcal{F}}^{A}. \tag{21}\] One can notice that in absence of the curvature, \(\left\langle\xi_{B}\right\rangle\) reduces to the well-known term \(\beta D_{0}t\mathbf{\mathcal{F}}_{B}\), which establishes, on average, a preferential direction of the Brownian motion. Also, this equation shows how the curvature is coupled to the interaction term within the \((D_{0}t)^{2}\) approximation. 
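As an elementary consistency check of Eq. (20), one can simulate a single free Brownian particle (\(\boldsymbol{\mathcal{F}}^{A}=0\)) on a 2-sphere, for which the scalar curvature is \(\boldsymbol{\mathcal{R}}=2/a^{2}\), and compare the measured geodesic mean-square displacement with the curvature-corrected prediction \(2dD_{0}t-\tfrac{2}{3}\boldsymbol{\mathcal{R}}(D_{0}t)^{2}\). The time step, walker number, and the simple tangent-step-and-renormalize scheme below are illustrative choices, not the only possible discretization of Eq. (2).

```python
import numpy as np

rng = np.random.default_rng(1)
D0, a = 1.0, 1.0                     # diffusion coefficient and sphere radius (assumed)
dt, nsteps, nwalk = 1e-4, 1000, 50_000
R_scalar = 2.0 / a**2                # scalar curvature of the 2-sphere

x0 = np.zeros((nwalk, 3)); x0[:, 2] = a          # all walkers start at the north pole
x = x0.copy()
for _ in range(nsteps):
    dW = np.sqrt(2.0 * D0 * dt) * rng.normal(size=(nwalk, 3))
    dW -= (np.sum(dW * x, axis=1, keepdims=True) / a**2) * x   # tangent projection of the noise
    x = x + dW
    x *= a / np.linalg.norm(x, axis=1, keepdims=True)          # keep the walkers on the sphere

cos = np.clip(np.sum(x * x0, axis=1) / a**2, -1.0, 1.0)
s2 = np.mean((a * np.arccos(cos))**2)            # geodesic mean-square displacement

t = nsteps * dt
flat = 2 * 2 * D0 * t                            # 2*d*D0*t with d = 2
curved = flat - (2.0 / 3.0) * R_scalar * (D0 * t)**2   # Eq. (20) with F = 0
print(s2, curved, flat)   # s2 should lie close to `curved`, a few percent below `flat`
```

With interactions switched on, the same scheme only requires adding the tangent-projected drift \(\mathcal{F}_{i}^{\alpha}\,dt/\zeta\) to each displacement, as dictated by Eq. (2).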
Finally, given an interacting force \(\mathbf{F}_{ij}\) and specific manifold \(\mathbb{M}\), one can compute the mean-square displacement for a tagged particle of the colloidal system by defining \(\mbox{MSD}(t)=\frac{1}{N}\left\langle\xi^{2}\right\rangle\), which is a quantity calculated in several computer simulations [6]. ## 4 Colloid dynamics in non-Euclidean spaces: some challenges and perspectives The covariant form of the Smulochowski equation (4) opens up the possibility of developing a theoretical framework to study different interesting phenomena that cannot be understood with the standard Statistical Mechanics approximations based on a Euclidean formulation. For example, one of the topics that can be tackled with this approach is the initiation of spinodal separation of particles interacting with short-ranged attractive forces and constrained to curved space, in analogy with the procedure presented by Dhont in the case of a Euclidean space [14]. Following these ideas, we need to convert Eq. (4) into an expression for the probability density of one particle instead of the joint probability of all the particles. To this end, it is necessary to perform a hierarchy of equations that allows us to marginalize the joint probability. Once obtained the reduced Smoluchowski equation, it is necessary to take advantage of the short-range interactions to relate out-of-equilibrium phenomena with their counterparts in equilibrium. The connection between both cases, as usual, is made through approximations concerning the equilibrium values; at this point, there exists a wide range of ways to proceed. For instance, a perturbation approach can be combined with the Riemann normal coordinates formalism, Monge's parameterization, or covariant Fourier series to calculate all the relevant observables. On the other hand, a covariant Taylor expansion [4] approach can also be performed to compare the results with their flat counterparts [15]. In addition, the covariant formalism provided by equation (4) can be straightforwardly employed to highlight the role of the geometry on the equilibrium equation of the state of colloidal dispersions embedded in a curved space, to elucidate the geometrical contributions during the onset of non-equilibrium states, such as gels and glasses, to study the dynamics of either passive or active colloidal particles on manifolds and to investigate the curvature affects on the structural, kinetic and phase transitions of attractive colloids, just to mention a few examples of interest in the Colloidal Soft Matter domain. As mere speculation and motivated by the recent contribution presented in Ref. [22], the formalism here presented can also be considered to study the dynamics of granular matter in curved manifolds. Furthermore, the covariant compact form of the Smoluchowski equation (8) allowed us to obtain an expression for the joint probability density function for the full system at the short-time regime. The method implemented can be extended to capture corrections of the order of \((D_{0}t)^{3}\). The short-time expression of the PDF can be used to give the curvature effects in the mean-square displacement and the search role of the coupling between the curvature and the interactions; for instance, using this procedure, we can choose specific interaction force and specific manifold \(\mathbb{M}\) and give an estimation of the mean-square geodesic displacement of a tagged particle of the colloidal system at short times. 
Moreover, the short-time expression of the PDF (16) can also serve to define a computational scheme to study the behavior of the full system using a modified Monte Carlo simulation that considers curvature effects. Additionally, the covariant compact form (8) allows us to formulate the \(N\)-particle system using a Feynman path integral representation following the steps already implemented in [9]. Last but not least, the study of some limiting cases of equation (4) and (8) will also serve as a benchmark to computational or molecular simulation schemes adapted to study the behavior of colloids in non-Euclidean spaces. ## Author Contributions All authors contributed equally to this work. ## Funding Authors acknowledge financial support from CONACyT (Grants Nos. 237425, 287067 and A1-S-9098), PRODEP (Grant No. 511-6/17-11852), and the University of Guanajuato (Grant No. 103/2023). ## Acknowledgments The authors acknowledge interesting and stimulating scientific discussions with Dr. Alejandro Villada-Balbuena and Prof. Jose M. Mendez-Alcaraz. ## Appendix In this section, we present the calculations that allowed us to present the general behavior of the probability distribution function in the short-time regime discussed in section 3. It is not difficult to show that \(I_{0}(\xi,0):=\left\langle\xi\left|\hat{K}_{0}\right|0\right\rangle=\mathcal{ J}_{1}\left(\xi\right)\) and the first five integrals (15) are given by \[I_{1}\left(\xi,0\right) = -\frac{1}{2}\xi^{B}\mathcal{J}_{2}\left(\xi\right), \tag{22}\] \[I_{2}\left(\xi,0\right) = 8D^{2}\partial_{B}\partial_{C}\mathcal{J}_{4}\left(\xi\right)+2 D\delta_{BC}\mathcal{J}_{3}\left(\xi\right),\] (23) \[I_{3}\left(\xi,0\right) = -i\partial_{A}\mathcal{J}_{2}\left(\xi\right),\] (24) \[I_{4}\left(\xi,0\right) = \frac{i}{2}\xi^{B}\partial_{A}\mathcal{J}_{2}\left(\xi\right)- \frac{i}{2}\delta_{A}^{B}\mathcal{J}_{2}\left(\xi\right),\] (25) \[I_{5}\left(\xi,0\right) = \frac{i}{2}\left[\xi^{C}\delta_{A}^{B}+\xi^{B}\delta_{A}^{C} \right]\mathcal{J}_{2}\left(\xi\right)+i\partial_{A}I_{2}\left(\xi,0\right), \tag{26}\] where \(\mathcal{J}_{n}=\frac{\left(-1\right)^{n-1}}{\left(n-1\right)!}\frac{\partial ^{n-1}}{\partial\alpha^{n-1}}\mathcal{J}_{1}\Big{|}_{\alpha=\alpha*}\), and \[\mathcal{J}_{1}(\mathbf{y})=\int\frac{d^{\mathcal{D}}p}{\left(2\pi\right)^{ \mathcal{D}}}K_{0}(p,\alpha)e^{\mathbf{y}\cdot\mathbf{p}}. \tag{27}\] Also, we have not included \(I_{6}\left(\xi,0\right)\), since it is possible to prove that contracting with Riemann curvature tensor \(\boldsymbol{\mathcal{R}}_{CABD}\) the resulting term vanishes as it is proved in the appendix of [10]. Now, returning to the expression of the probability distribution function \(\overline{\rho}(\xi,\xi^{\prime},t)\), one can write as follows, \[\overline{\rho}(\xi,0,t)=\int_{-\infty}^{\infty}\frac{dE}{2\pi}e^{iEt}\left(\sum_{i =0}^{5}I_{i}\left(\xi,0\right)\right). \tag{28}\] Now, each integration on \(E\) can be carried out by promoting \(E\) to a complex variable \(z\). Thus, the \(E\) integration can be carried on using Cauchy theorem in a complex variable after identifying the integral with \[\oint_{\gamma}\frac{dz}{2\pi i}\frac{e^{zt}}{z+A}=e^{-At}, \tag{29}\] where \(\gamma\) is a clockwise semi-circular contour in the left half complex plane enclosed the pole \(z_{0}=-A\) on the negative real line. 
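The identity (29), which is what converts each resolvent term back to the time domain, can also be checked numerically. Written as the inverse Fourier transform \(\int_{-\infty}^{\infty}\frac{dE}{2\pi}\,e^{iEt}/(iE+A)=e^{-At}\) for \(t>0\), it reduces to an oscillatory quadrature; the values of \(A\) and \(t\) below are arbitrary.

```python
import numpy as np
from scipy.integrate import quad

A, t = 1.3, 0.7   # arbitrary positive constants

# Re[e^{iEt}/(iE+A)] = [A*cos(Et) + E*sin(Et)] / (A^2 + E^2); the imaginary part is odd
# in E and integrates to zero, so only the (even) real part contributes.
even = quad(lambda E: A / (A**2 + E**2), 0, np.inf, weight='cos', wvar=t)[0]
odd = quad(lambda E: E / (A**2 + E**2), 0, np.inf, weight='sin', wvar=t)[0]
lhs = (2.0 * even + 2.0 * odd) / (2.0 * np.pi)
print(lhs, np.exp(-A * t))   # both ~ 0.4025
```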
Now, by using the approximation for \(G^{\frac{1}{4}}\left(\xi\right)\simeq 1-\frac{1}{12}\mathbf{\mathcal{R}}_{AB} \xi^{A}\xi^{B}\) in expression \(\overline{\rho}(\xi,\xi^{\prime},t)=G^{\frac{1}{4}}\left(\xi\right)\rho(\xi, \xi^{\prime},t)G^{\frac{1}{4}}\left(\xi^{\prime}\right)\), one can get the final form of the probability density function \(\rho(\xi,0:t)\). Then, by putting all terms in powers of \(\xi\), we finally get the short-time approximation given by Eq. (16).
2310.20117
Refined Equivalent Pinhole Model for Large-scale 3D Reconstruction from Spaceborne CCD Imagery
In this study, we present a large-scale earth surface reconstruction pipeline for linear-array charge-coupled device (CCD) satellite imagery. While mainstream satellite image-based reconstruction approaches perform exceptionally well, the rational functional model (RFM) is subject to several limitations. For example, the RFM has no rigorous physical interpretation and differs significantly from the pinhole imaging model; hence, it cannot be directly applied to learning-based 3D reconstruction networks and to more novel reconstruction pipelines in computer vision. Hence, in this study, we introduce a method in which the RFM is equivalent to the pinhole camera model (PCM), meaning that the internal and external parameters of the pinhole camera are used instead of the rational polynomial coefficient parameters. We then derive an error formula for this equivalent pinhole model for the first time, demonstrating the influence of the image size on the accuracy of the reconstruction. In addition, we propose a polynomial image refinement model that minimizes equivalent errors via the least squares method. The experiments were conducted using four image datasets: WHU-TLC, DFC2019, ISPRS-ZY3, and GF7. The results demonstrated that the reconstruction accuracy was proportional to the image size. Our polynomial image refinement model significantly enhanced the accuracy and completeness of the reconstruction, and achieved more significant improvements for larger-scale images.
Hong Danyang, Yu Anzhu, Ji Song, Cao Xuefeng, Quan Yujun, Guo Wenyue, Qiu Chunping
2023-10-31T01:30:57Z
http://arxiv.org/abs/2310.20117v1
# Refined Equivalent Pinhole Model for Large-scale 3D Reconstruction from Spaceborne CCD Imagery ###### Abstract In this study, we present a large-scale earth surface reconstruction pipeline for linear-array charge-coupled device (CCD) satellite imagery. While mainstream satellite image-based reconstruction approaches perform exceptionally well, the rational functional model (RFM) is subject to several limitations. For example, the RFM has no rigorous physical interpretation and differs significantly from the pinhole imaging model; hence, it cannot be directly applied to learning-based 3D reconstruction networks and to more novel reconstruction pipelines in computer vision. Hence, in this study, we introduce a method in which the RFM is equivalent to the pinhole camera model (PCM), meaning that the internal and external parameters of the pinhole camera are used instead of the rational polynomial coefficient parameters. We then derive an error formula for this equivalent pinhole model for the first time, demonstrating the influence of the image size on the accuracy of the reconstruction. In addition, we propose a polynomial image refinement model that minimizes equivalent errors via the least squares method. The experiments were conducted using four image datasets: WHU-TLC, DFC2019, ISPRS-ZY3, and GF7. The results demonstrated that the reconstruction accuracy was proportional to the image size. Our polynomial image refinement model significantly enhanced the accuracy and completeness of the reconstruction, and achieved more significant improvements for larger-scale images. keywords: CCD imagery, 3D reconstruction, multi-view stereo, Rational Function Model, Rational Polynomial Coefficients, polynomial refinement + Footnote †: journal: ISPRS Journal of Photogrammetry and Remote Sensing ## 1 Introduction Numerous sources are available for producing digital surface models (DSMs), including UAV, aerial, and satellite imagery, and point data can be obtained from laser scanners. Despite the large number of sources that can be used to obtain DSMs, extracting DSMs from satellite images is the most cost-effective option given the constant influx of terabyte-scale image data. Since satellites with very high-resolution (VHR) sensors, such as the WorldView-3 and Gaofen-7 series, were first launched, optical satellite images have achieved a resolution of better than 1 m. VHR satellite imagery has the potential to enhance the precision of DSM reconstruction and facilitate the 3D reconstruction of urban areas Bosch et al. (2017); Poullis (2020); Stucker et al. (2022). Large-scale reconstruction of the earth's surface from satellite images for the purpose of obtaining complete and accurate DSM results remains a challenge. Typically, a linear-array pushbroom sensor is used to acquire satellite imagery, and a generalized rational functional model (RFM) Tao and Hu (2000, 2001) is used for the imaging equation. Consequently, the imaging method and imaging equations of linear-array charge-coupled device (CCD) images differ significantly from those of pinhole camera images. However, traditional methods for 3D reconstruction Bosch et al. (2016); de Franchis et al. (2014a); Michel et al. (2020) of optical satellite images often rely on RFM. The 3D reconstruction of satellite optical images based on RFM involves several essential steps, such as bundle adjustment Mari et al. (2019); Tang and Tan (2019), epipolar rectification de Franchis et al. (2014b); Liao et al. (2022); Tatar and Arefi (2019), dense matching Xia et al.
(2020); Zhang et al. (2019), point cloud generation Gong and Fritsch (2018), and DSM fusion filtering Gomez et al. (2023); Wang and Frahm (2017). Many studies have focused on improving these steps. Additionally, numerous commercial software packages (e.g., ERDAS Imagine LPS Leica (2023), RSP Qin (2016), MicMac Rupnik et al. (2017), Pixel Factory Factory (2023), SURE nFrames (2023), SOCET SET BAESystem (2023), Agisoft Metashape Agisoft (2023), and Catalyst professional Catalyst (2023)) and open source pipelines (e.g., S2P de Franchis et al. (2014a), CARS Michel et al. (2020), and ASP Beyer et al. (2018)) have been developed to photogrammetrically process satellite images using RFM. ERDAS Imagine LPS (Leica Photogrammetry Suite) Leica (2023) is a well-established and robust photogrammetric processing package for aerial and orbital imagery. Nearly every orbital sensor is supported by rigorous information describing the camera model. For most other sensors, rational polynomial coefficient (RPC) processing is also supported. Pixel Factory Factory (2023) generates 3D mesh models from satellite images. By using multiple images for each model, it is able to process very large areas. One feature exclusive to Pixel Factory is that homogeneity and consistency are guaranteed throughout the world. SURE Software nFrames (2023) transforms imagery from classic aerial cameras, multi-head oblique systems, drone cameras, and most consumer-grade terrestrial cameras into 2.5D or 3D data, including point clouds, photorealistic textured meshes, and true orthophotos, via a streamlined fully automatic and integrated image processing technique. MicMac Rupnik et al. (2017) is free open-source photogrammetric software for 3D reconstruction, and it solves multi-view fusion with a multi-directional dynamic programming technique for dense matching of VHR satellite images (Rupnik et al., 2017, 2018). S2P de Franchis et al. (2014a) is a fully automated modular pipeline designed for affine reconstruction of line-array satellite images. Furthermore, the NASA Ames Research Center proposed the NASA Ames Stereo Pipeline Beyer et al. (2018), which is a suite of free and open-source automated geodesy and stereogrammetry tools for processing stereo images captured from satellites, in which rigorous physical sensor models are obtained by querying ephemerides and interpolating camera poses. The fully automatic and modular stereo pipeline S2P de Franchis et al. (2014a) utilizes an affine model in the image space to optimize positioning based on the RPC model. Facciolo et al. (2017) proposed a method that relies on local affine approximation Grodecki and Dial (2003) and considers multi-date images, making it a multi-modal technique for reconstructing 3D models. Michel et al. (2020) designed a new scalable, robust, high-performance stereo pipeline for satellite images called CARS. Wang et al. (2022) proposed a hierarchical reconstruction framework that consists of an affine dense reconstruction stage and an affine-to-Euclidean upgrading stage based on multiple optical satellite images, which needs only four ground control points (GCPs). To attain a simple and speedy dense matching outcome, the 3D reconstruction pipelines detailed above execute stereo correction before the dense matching stage. However, it is difficult to accurately stereo-correct large-scale satellite stereo image pairs de Franchis et al. (2014); Jannati et al. (2018); Liao et al.
(2022); Tatar and Arefi (2019) because spaceborne optical sensors always follow the linear-array pushbroom imaging process, during which there are differences between the epipolar geometries of different linear-array images. Most importantly, the imaging model for linear-array CCD images is complicated and relies on a generic RPC model fitted with polynomials, which introduces difficulties in constraining the reconstruction results with the original rigorous imaging model. Zhang et al. (2019) proposed the Adapted COLMAP to fit the RFM to the pinhole camera model (PCM), which fundamentally solves the problem of relying on RFM for linear-array CCD images. After resolving the disparity between the imaging models of linear array CCDs and pinhole cameras, Zhang et al. successfully accomplished 3D reconstruction of line-array CCD images utilizing the first-rate open-source computer vision software, COLMAP. To this end, in this study we propose the RFM is equivalent to PCM (REPM) pipeline, which is based on the Adapted COLMAP pipeline Zhang et al. (2019) and relies heavily on the idea that the RFM can be made equivalent to the PCM in order to expand its applicability to linear-array CCD imagery. To enhance the reconstruction accuracy of the REPM pipeline, we construct an image refinement model that minimizes equivalent errors. We also incorporate an image-partitioning module and improve the DSM fusion module by enabling it to process large-scale images. In summary, our main contributions are as follows: * We introduce the RFM is equivalent to PCM model and mathematically derive the error formula of the equivalent pinhole model, which enables large-scale 3D reconstruction with linear array imagery using most existing 3D reconstruction pipelines. In addition, we further propose the image refinement model to improve the accuracy of 3D reconstruction. * We present a multi-view 3D reconstruction pipeline for large-scale linear-array CCD imagery based on REPM, which encompasses the whole process from image intake to DSM product output. * We have proven through formula derivation and experiments that the error of the equivalent pinhole model is directly proportional to the image size. Additionally, our pipeline shows excellent potential on four datasets. Remarkably, incorporating the polynomial image refinement model yields a 15% accuracy improvement on large-format images. ## 2 Methods for 3D reconstruction of satellite images In this section, we focus on three satellite image-based 3D reconstruction methods: RFM-based, PCM-based, and REPM-based 3D reconstruction. ### RFM-based 3D reconstruction Satellite images are commonly captured by using linear-array CCD sensors in a pushbroom manner, the projection method of which is significantly different from that of the traditional pinhole camera model.
Most satellite optical images reconstruct DSMs through the RPC models, which include 80 polynomial coefficients (78 polynomial coefficients to be solved) and 10 normalized constants (for a total of 90 parameters in the RPC file corresponding to each image), which are defined as \[\begin{cases}u=\mu_{u}+\sigma_{u}g(\frac{lat-\mu_{lat}}{\sigma_{lat}},\frac{lon- \mu_{lon}}{\sigma_{lon}},\frac{alt-\mu_{alt}}{\sigma_{alt}})\\ v=\mu_{v}+\sigma_{v}h(\frac{lat-\mu_{lat}}{\sigma_{lat}},\frac{lon-\mu_{lon}}{ \sigma_{lon}},\frac{alt-\mu_{alt}}{\sigma_{alt}})\end{cases}, \tag{1}\] where \(u,v\) represent the row and column pixel coordinates, respectively; \(lat,lon,alt\) denote the latitudes, longitudes, and altitudes of the locations within the WGS-84 coordinate system, respectively; \(\mu_{i}(i=u,v,lat,lon,alt)\) denotes five translation normalization parameters; \(\sigma_{i}(i=u,v,lat,lon,alt)\) denotes five scaling normalization parameters; and the functions \(g(\cdot)\) and \(h(\cdot)\) are the cubic polynomial functions of the RPC model, each with 40 parameters. Because the numerators and denominators of the \(g(\cdot)\) functions are simultaneously divided by a polynomial coefficient, the ratio remains unchanged. The same applies to the \(h(\cdot)\) functions. Therefore, out of the 80 polynomial coefficients, there are two constant values of 1. The traditional RFM-based 3D reconstruction method Tao and Hu (2002) involves initially acquiring the transformation between image pairs with homologous points using matching techniques. Then a 3D model of the reconstructed ground information is derived by accounting for various coordinate standardization parameters based on the RFM of stereo image pairs. However, the complexity of the RFM renders the entire reconstruction process extremely cumbersome. The S2P pipeline de Franchis et al. (2014a) offers a solution for decoupling the 3D reconstruction process from the intricacies of satellite imaging. The S2P pipeline utilizes the relative pointing error correction between RPC models to replace the complicated nonlinear bundle adjustment. This process recovers the 3D structure of the paired satellite images using a simple RPC-based elevation iteration. Michel et al. (2020) presented a new stable and efficient pipeline for multi-view stereo called CARS. In this pipeline, a colocalization function that employs the epipolar constraint, is fitted with a geometry based on a nonrigid iterative approximation. Then it jointly and recursively estimates two resampling grids mapped from the estimated epipolar geometry to the input images. Wang et al. (2022) proposed a hierarchical reconstruction framework based on multiple optical satellite images called AE-Rec, which reconstructs the affine and Euclidean scene structures sequentially. In the first stage, an affine dense reconstruction approach is used to obtain the 3D affine structure from the input satellite images, and local small-sized tiles in the satellite images are approximately subject to an affine camera model. This affine approach is performed under an incremental reconstruction strategy and does not use any GCP. In the second stage, the obtained 3D affine structure is upgraded to a Euclidean structure by fitting a global transformation matrix with at least four GCPs. ### PCM-based 3D reconstruction Most 3D reconstructions based on the PCM adopt the idea that the overlapping image first performs feature point matching. 
The matched feature points are then used to obtain the ground point coordinates via space resection. However, because of the diversity of constraints, it is difficult to unify many 3D reconstruction methods. For example, 3D reconstruction based on local stereo matching Bleyer et al. (2011); Loghman and Kim (2013) uses the consistency of parallax in a small range for constraints, 3D reconstruction based on global stereo matching Felzenszwalb and Huttenlocher (2004) explicitly uses smooth assumption constraints to solve the matching results of all pixels as a whole, and 3D reconstruction based on semiglobal stereo matching Hirschmuller (2005) uses mutual information as the method for computing the similarity measure. Additional prior knowledge can be used as constraints to improve the accuracy of the 3D reconstruction. Furthermore, the combination of structure from motion (SfM) Schonberger and Frahm (2016) and multi-view stereo (MVS) Schonberger et al. (2016) is considered a favorable vision-based reconstruction framework for 3D scene restoration. SfM estimates the camera positions and orientations and reconstructs a sparse point cloud. Subsequently, MVS generates dense point clouds based on the SfM results to reconstruct the 3D scene. However, linear-array CCD images do not provide rigorous camera position and orientation constraints in the way pinhole camera images do, because of the significant differences between the imaging models of pinhole cameras and those of satellite linear-array sensors. Thus, this framework primarily focuses on the 3D reconstruction of images captured using pinhole cameras. ### REPM-based 3D reconstruction Statistical analyses in the literature Zhao et al. (2023) show that traditional 3D reconstruction methods are still significant in the field of 3D reconstruction of satellite images. However, these satellite-based 3D reconstruction methods rely on RFM. Because of the RPC function's complexity, incompatibility with many novel 3D reconstruction pipelines, and lack of a rigorous epipolar correction model, a recent innovative approach suggested fitting the RFM to the PCM. The Adapted COLMAP Zhang et al. (2019) method approximates a weak perspective projection model using a well-established RFM and enhances the original COLMAP Schonberger and Frahm (2016); Schonberger et al. (2016) visual reconstruction pipeline for satellite images with a depth reparameterization technique, thereby improving the accuracy of the depth values. However, the Adapted COLMAP method is only applicable to small-scale images. In this pipeline, the equivalent error is larger for large-scale images, the reconstruction quality is poorer, and the output may even be NOT AVAILABLE (NA). To reduce the equivalent error, we correct the image using a polynomial function, which we call the image refinement model. In addition, to address the NA phenomenon in large-scale images, we added an image partition module and improved the DSM generation module. ## 3 REPM-based 3D reconstruction pipeline This section presents the REPM pipeline framework, the algorithmic and error formulas of the equivalent pinhole model, and the principle of the image refinement model. The design of the REPM pipeline includes the approximation of the RPC model by the pinhole camera model and the image refinement model, and relies heavily on features included in Adapted COLMAP Zhang et al. (2019), including the SfM Schonberger and Frahm (2016) and MVS Schonberger et al. (2016) frameworks.
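Before describing the pipeline, it is useful to recall what evaluating the RFM of Eq. (1) actually involves: normalizing the ground coordinates, evaluating four 20-term cubic polynomials (80 coefficients in total, matching the count given above), and denormalizing the result to pixel coordinates. The sketch below illustrates this; the dictionary keys and, in particular, the monomial ordering are illustrative assumptions only, since real RPC files (e.g., the RPC00B convention) fix a specific coefficient order that must be matched exactly.

```python
import numpy as np

def cubic_monomials(x, y, z):
    # The 20 monomials of total degree <= 3 in (x, y, z); the ordering here is
    # illustrative and does NOT follow any particular vendor convention.
    return np.array([x**i * y**j * z**k
                     for i in range(4) for j in range(4) for k in range(4)
                     if i + j + k <= 3])

def rpc_project(lat, lon, alt, rpc):
    """Forward RFM of Eq. (1): ground (lat, lon, alt) -> pixel (row, col).

    `rpc` is a dict holding the ten normalization constants and the four
    20-element coefficient vectors; the field names are hypothetical.
    """
    P = (lat - rpc["lat_off"]) / rpc["lat_scale"]
    L = (lon - rpc["lon_off"]) / rpc["lon_scale"]
    H = (alt - rpc["alt_off"]) / rpc["alt_scale"]
    m = cubic_monomials(P, L, H)
    row_n = np.dot(rpc["row_num"], m) / np.dot(rpc["row_den"], m)
    col_n = np.dot(rpc["col_num"], m) / np.dot(rpc["col_den"], m)
    return (rpc["row_off"] + rpc["row_scale"] * row_n,
            rpc["col_off"] + rpc["col_scale"] * col_n)
```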
### REPM pipeline An overview of the reconstruction process is presented in Fig.1. Under the assumption of weak perspective projection, the REPM pipeline performs multi-view image processing by using \(n\) source images and their RPC parameters to compute the internal matrix, rotation matrix, and translation vectors \(\{\mathbf{K}_{i},\mathbf{R}_{i},\mathbf{t}_{i}\}_{i=0}^{N}\) corresponding to the input views through the equivalent pinhole model. The corrected image is then obtained using an image refinement model that minimizes the error of the equivalent pinhole model. The SfM framework performs sparse reconstruction with the given \(\{\mathbf{K}_{i},\mathbf{R}_{i},\mathbf{t}_{i}\}_{i=0}^{N}\) and corrected images. The sparse point cloud and optimized camera poses jointly participate in the MVS phase to estimate the depth maps, which are then fused to generate the dense point cloud and DSM. **REPM phase.** In the REPM phase, we introduce the RFM is equivalent to PCM algorithm and implement an image refinement model to rectify the image by minimizing the equivalent error. We first perform image partitioning and image enhancement operations; the accuracy of the equivalent pinhole model is influenced by the size of the satellite image, and the equivalent error is decreased by cropping the image. Image enhancement is necessary because the long-tail distribution of the brightness values of the satellite image is detrimental to image feature matching. We set a brightness threshold and perform the image enhancement technique when the brightness values exceed this threshold. After image partitioning and enhancement, the images and RPC parameters are fed into the equivalent pinhole model described in Section 3.2 and the image refinement model described in Section 3.3. **SfM and MVS phases.** For the SfM and MVS phases, we employ the same approach as that used in Adapted COLMAP Zhang et al. (2019). In the field of computer vision, the SfM and MVS frameworks are well-developed 3D reconstruction frameworks, which are used in the excellent reconstruction pipeline COLMAP. During the SfM stage, feature extraction and matching are initially performed to match the homologous points of the stereo images. Next, triangulation is conducted to calculate the 3D point coordinates from the homologous points. Finally, global optimization is executed using bundle adjustment to generate an optimal camera parameter model. SfM produces the internal parameters, camera poses, the sparse point cloud, and the co-visibility relationships of the points. Using this information, MVS executes pixel-by-pixel dense matching to create a depth map that matches the corresponding source images. In addition, to address the issue of inaccuracy caused by large depth values, the reparameterization approach is adopted and then the depth map is estimated using the classical PatchMatch Stereo (PMS) Bleyer et al. (2011) algorithm. **DSM aggregation.** We restructured the DSM generation model to align it with the processing of the image partition module. Because the image is partitioned into multiple image tiles, the reconstructed DSMs from the same viewpoint are first mosaicked, and then DSMs from different perspectives are aggregated. Because the image-matching process does not guarantee 100% accuracy, the reconstructed DSM results contain incorrect height values.
Therefore, outlier removal is incorporated into the process of aggregating DSMs, including two methods: 1) the median absolute deviation (MAD), which is used to filter out DSM outliers, and 2) radius point cloud filtering, which removes noise. However, a disadvantage of radius point cloud filtering is the absence of a standard parameter, which necessitates combining the reconstruction results to determine the parameter. Figure 1: Overview of the reconstruction process of the REPM pipeline. The procedure involves four phases: REPM, SfM, MVS, and DSM generation. The inputs consist of several overlapping images and their corresponding RPC models. The outputs include height maps, georeferenced 3D point clouds, and digital surface models. ### Equivalent pinhole model #### 3.2.1 Introduction to the equivalent pinhole model The equivalent pinhole model is based on the principle of weak perspective projection. First, let the range of the ground altitude variation be denoted by \(Z_{range}\) and the distance from the satellite sensor to the ground point be denoted by the depth \(D\). For remote sensing images, the satellite sensor is far from the ground point (i.e., \(D\gg Z_{range}\)). In this case, the average scene depth can be used in the projection calculation instead of the depth; this substitution is the theoretical basis for approximating the perspective camera as a weak perspective camera. Furthermore, Zhang et al. (2019) proved that a linear pushbroom camera can be reduced to a weak perspective camera under the same conditions. Therefore, a linear pushbroom camera can be approximated as a perspective camera, which we refer to as the equivalent pinhole model. The procedure of the equivalent pinhole model algorithm is presented in **Algorithm 1**. Given the RPC parameters of the images, the projection matrix of the perspective camera model (i.e., the internal matrix \(K\) and the external matrices \(R\) and \(t\)) can be estimated using the equivalent pinhole model algorithm. #### 3.2.2 Error formula of the equivalent pinhole model The equivalent pinhole model algorithm equates the RPC parameters of the RFM with the internal and external parameters of the PCM via \[\begin{pmatrix}u\\ v\end{pmatrix}=F(lat,lon,alt)\Rightarrow\] \[Z_{cam}\begin{pmatrix}u\\ v\\ 1\end{pmatrix}=P_{3\times 4}\begin{pmatrix}X\\ Y\\ Z\\ 1\end{pmatrix}=K_{3\times 3}\left[R|t\right]_{3\times 4}\begin{pmatrix}X\\ Y\\ Z\\ 1\end{pmatrix}, \tag{2}\] where \(F(\cdot)\) denotes the cubic polynomial function of the RPC model; \(u,v\) denote the pixel coordinates' columns and rows, respectively; \(lat,lon,alt\) indicate the latitude, longitude, and altitude, respectively, in the WGS-84 coordinate system; \(\begin{pmatrix}u&v&1\end{pmatrix}^{T}\) is the homogeneous coordinate in the pixel coordinate system; \(\begin{pmatrix}X&Y&Z&1\end{pmatrix}^{T}\) is the homogeneous coordinate in the east-north-up (ENU) coordinate system; \(Z_{cam}\) denotes the \(Z\)-coordinate of the object point in the camera coordinate system (which can also be interpreted as the depth value of the object point); \(P_{3\times 4}\) is the projection matrix that converts the object point coordinates into pixel coordinates; and \(K,R,t\) are obtained by factorizing the projection matrix \(P\). Both the RFM and PCM are utilized to model the relationship between 2D pixel coordinates and 3D object point coordinates. The RFM uses a cubic polynomial function, which is computationally complex.
In contrast, the PCM employs homogeneous coordinates and matrix multiplication, which significantly simplify operations such as rotation and translation in 3D space. The RFM and PCM use different world coordinate systems, and the equivalent pinhole model algorithm utilizes the ENU coordinate system because of its relative compatibility with the conventional PCM (compared to the WGS-84 coordinate system). The weak perspective projection principle is utilized to approximate a linear pushbroom camera as a perspective camera. Accordingly, a weak perspective projection formula is used to derive the equivalent error formula. The weak perspective projection formula is \[\begin{cases}x=\frac{f_{x}X_{cam}}{\bar{Z}}\\ y=\frac{f_{y}Y_{cam}}{\bar{Z}}\end{cases}, \tag{3}\] where \(x,y\) are the image coordinates of the projected object point in the image-space coordinate system; \(f_{x},f_{y}\) represent the mapping of the sensor focal length on the \(x\) and \(y\) axes, respectively; (\(X_{cam}\), \(Y_{cam},Z_{cam}\)) denote the object point coordinates in the image space coordinate system; and \(\bar{Z}\) denotes the average depth of all object points within the image area. We consider the perspective projection formula for the x-axis as an example for further derivation. For digital images, the multiple-order derivatives of the perspective projection formula can be obtained; thus, the formula satisfies the conditions for Taylor expansion. Because we approximate the projection model as a weak perspective projection and \(Z_{cam}\) is approximated as \(\bar{Z}\), we obtain the Taylor expansion at \(\bar{Z}\): \[\begin{split} x&=\frac{f_{x}X_{cam}}{Z_{cam}}\\ &=\frac{f_{x}X_{cam}}{\bar{Z}}+\frac{x^{\prime}(\bar{Z})}{1!}(Z_{cam}-\bar{Z})+o(Z_{cam}-\bar{Z})^{2}\end{split}, \tag{4}\] where \(o(\cdot)\) is the Peano remainder term of the Taylor expansion, representing the higher-order infinitesimals of \((Z_{cam}-\bar{Z})\). Next, by comparison with Eq.(3), we obtain an error formula approximated as a weak perspective projection: \[E=\frac{x^{\prime}(\bar{Z})}{1!}(Z_{cam}-\bar{Z})=-f_{x}\frac{X_{cam}}{\bar{Z}^{2}}(Z_{cam}-\bar{Z}). \tag{5}\] The higher-order terms of the Taylor expansion are not considered. The factors that influence the equivalent error can be derived from Eq.(5). As \(f_{x}\frac{X_{cam}}{\bar{Z}}\approx x\), we can derive \(|E|\approx\left|-x(Z_{cam}-\bar{Z})/\bar{Z}\right|\). When reconstructing the 3D information of a particular area, the schematic shown in Fig.2 helps to illustrate the situation. As \(Z_{cam}\gg Z_{cam}-Z_{min}\) and \(\bar{Z}=(Z_{max}+Z_{min})/2\), we propose that \(\bar{Z}\) is similar in image regions \(A\) and \(B\). Consequently, we demonstrate that in identical spatial areas, the equivalent error is determined by the image size and the disparity in ground elevation \(Z_{cam}-\bar{Z}\). For images \(A\) and \(B\), \(E_{A}\leq\left|-x_{A}(Z_{A}^{max}-\bar{Z})/\bar{Z}\right|\), \(E_{B}\leq\left|-x_{B}(Z_{B}^{max}-\bar{Z})/\bar{Z}\right|\). Because \(x_{A}>x_{B}\) and \(Z_{A}^{max}>Z_{B}^{max}\), we conclude that \(E_{A}>E_{B}\). Based on Fig.2, it can be inferred that, in general, the size of an image can indirectly determine the height difference between the ground and the corresponding object, which is proportional to the size of the image. Therefore, to control the equivalent error in the experiments, suitable image sizes were routinely obtained via cropping.
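This scaling is easy to see numerically: comparing the exact perspective projection with its weak-perspective approximation for a point raised \(Z_{cam}-\bar{Z}\) above the mean depth reproduces the first-order error of Eq. (5) and shows it growing linearly with the image coordinate, i.e., with the tile size. All numbers below (focal length, orbit height, height offset) are illustrative assumptions only.

```python
import numpy as np

f_x = 1.0e4        # focal length in pixels (assumed)
Z_bar = 5.0e5      # mean scene depth, roughly the orbit height in metres (assumed)
dZ = 200.0         # Z_cam - Z_bar: object height above the mean depth (assumed)

for x_img in (1_000.0, 4_000.0, 16_000.0):     # image coordinate ~ half the tile size, px
    X_cam = x_img * Z_bar / f_x                # ground coordinate that maps to x_img
    x_persp = f_x * X_cam / (Z_bar + dZ)       # exact perspective projection
    x_weak = f_x * X_cam / Z_bar               # weak-perspective approximation
    E_first_order = -x_img * dZ / Z_bar        # Eq. (5): E ~ -x (Z_cam - Z_bar) / Z_bar
    print(f"x = {x_img:>7.0f} px   exact error = {x_persp - x_weak:8.3f} px"
          f"   Eq.(5) estimate = {E_first_order:8.3f} px")
```

Doubling the tile size roughly doubles the worst-case equivalent error, which is why the pipeline partitions large images before fitting the equivalent pinhole cameras.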
### Image refinement model In Section 3.2.2, we explained that the error in the equivalent pinhole model arises from using RFM and PCM to calculate the positional deviation from the object point to the pixel point. To reduce this bias, we minimize the equivalent error by introducing a polynomial image refinement model, as shown in Fig.3. The pixel points calculated via the RFM and PCM form \(n\) sets of corresponding points, which are substituted into the polynomial correction function: \[\begin{cases}x^{\prime}=m_{0}+m_{1}x+m_{2}y+m_{3}xy+m_{4}x^{2}+m_{5}y^{2}\\ y^{\prime}=m_{6}+m_{7}x+m_{8}y+m_{9}xy+m_{10}x^{2}+m_{11}y^{2}\end{cases}, \tag{6}\] where \(m_{i}(i=0,1\cdots 11)\) are the coefficients of the polynomial function. The parameters of the polynomial image refinement model are computed using the least squares method: \[\underset{M(p)}{min}\sum_{j}\left\|M(p_{j})-p_{j}^{\prime}\right\|_{2}^{2}. \tag{7}\] Finally, the polynomial image refinement model is used to resample the original image to obtain the corrected image. Initially, our aim was to correct the image by minimizing the equivalent error through the homography transformation. However, the results of the homography correction model were unsatisfactory. Instead, we selected a more complex second-order polynomial transformation to correct the images and reduce the equivalent pinhole model error. Through experiments, we found that the polynomial correction was more effective (see Section 5.3 for a description of the experiments). Therefore, we added the polynomial image refinement model before starting the downstream reconstruction task. The algorithm used for correcting the model is presented in **Algorithm 2**. ### Reconstruction of the DSM A major factor that distinguishes satellite stereo pipelines from typical vision pipelines is the camera model (RPC vs. pinhole). Once we have obtained the corrected images and the internal and external parameter matrices of the equivalent pinhole camera, we can feed them into the SfM and MVS frameworks in the Adapted COLMAP pipeline Zhang et al. (2019) to reconstruct the 3D information. The goal of SfM is to recover accurate camera parameters for use in subsequent MVS steps. In addition, the authors identified and resolved key issues in the MVS framework that prevented the direct application of standard MVS pipelines tailored for ground-level images to the satellite domain. ## 4 Experiments In this section, we describe the datasets we used and the experiments we conducted on them. We also present an analysis of the experimental results. Figure 2: Diagram of the error formula of equivalent model. The brown region represents the earth’s surface, and the blue and green regions represent the projection of images A and B for two different image sizes, respectively, for the same surface on the earth. ### Experimental setup #### 4.1.1 Datasets We assessed our pipeline using three publicly accessible image datasets: the WHU-TLC test set, the DFC2019 dataset, and the ISPRS-ZY3 image data. We also present the reconstruction outcomes of applying the REPM pipeline to GF7 image data. * **WHU-TLC test set.** The WHU-TLC test set, including three-view images and RPC parameters that have been refined in advance to achieve sub-pixel reprojection accuracy, is provided by Gao et al. (2021). The ground-truth DSMs were prepared using both high-accuracy LiDAR observations and GCP-supported photogrammetric software. 
The DSM is stored as a regular grid with 5-m resolution using the WGS-84 geodetic coordinate system and the UTM projection coordinate system. * **DFC2019 dataset.** The four sites of the DFC2019 dataset Bosch et al. (2019) from the 2019 IEEE Geoscience and Remote Sensing Society (GRSS) Data Fusion Competition were used in this study. The dataset was obtained using the WorldView-3 satellite and comprises multi-view satellite images captured on multiple dates between 2014 and 2016. Each RGB image measures \(2048\times 2048\) pixels, and the ground-truth DSM is \(512\times 512\) pixels. * **ISPRS-ZY3 data.** China's first high-resolution civilian stereo mapping satellite, ZY3, has provided reliable high-resolution stereo image data. The experimental data from the International Society for Photogrammetry and Remote Sensing (ISPRS-ZY3) ISPRS (2018) cover Sainte-Maxime, France, and include three line-array stereo panchromatic images with resolutions of 2.1 m for the nadir view and 2.5 m for the forward and backward views. The ISPRS-ZY3 data include 12 ground control points for checking the absolute accuracy of the DSM. * **GF7 data.** The Gaofen-7 (GF7) satellite has a dual-line array stereo camera that delivers high-precision remote sensing imagery with a panchromatic stereo resolution better than 0.8 m (0.8 m for the forward view and 0.64 m for the backward view). The experimental data cover an area in Zhengzhou, China, with an image size of \(35864\times 40000\) pixels. Furthermore, a local reference DSM measuring \(5717\times 6043\) pixels is available as a benchmark in the comparison experiments. Figure 3: Image refinement model. An object point \(P\) is taken as an example, and \(p\) and \(p^{\prime}\) denote the positions before and after correction, respectively. #### 4.1.2 Implementation details The experiments were conducted in a computing environment featuring an NVIDIA A100-PCI graphics card with 40 GB of video memory. Python was used as the programming language, and VS Code served as the development environment. Four distinct 3D reconstruction methods were employed for experimental testing, and the implementation details are provided below. **S2P.** Satellite Stereo Pipeline (S2P) is an automatic and modular stereo pipeline for push-broom images. The images are divided into small tiles and processed in parallel using multiple processes to improve efficiency. In our experiment, the tile size was set to 500-1000 pixels, and the dense matching method in S2P was the default More Global Matching (MGM) algorithm Facciolo et al. (2015). In the fusion step, an outlier removal threshold of 25 m was chosen (the same operation as in Gao et al. (2023)), and the height-map outlier cleaning was set to false (otherwise, the completeness rate was meager). The other settings were left at their default values. **LPS.** The Leica Photogrammetry Suite (LPS) Leica (2023) is a collection of digital software for processing photogrammetry and remote sensing data. The software performs binocular (pairwise) stereo reconstruction; when multi-view images are used as input, the most accurate pairwise result is reported. **Adapted COLMAP.** Adapted COLMAP (AC) Zhang et al. (2019) fits the pinhole camera model to the image-based RPC model, and then employs a computer vision reconstruction pipeline for 3D reconstruction. However, Adapted COLMAP cannot handle large-scale images, and its output for such images is reported as NA. Therefore, the pipeline was only run on the WHU-TLC and DFC2019 datasets. 
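For context on the "+Ref." configurations described below, the polynomial image refinement model of Section 3.3 (Eqs. (6) and (7)) amounts to an ordinary least-squares fit of twelve coefficients to the corresponding points produced by the RFM and the equivalent PCM. The following sketch is only illustrative: the function and variable names are our assumptions, and which projection plays the role of \(p\) versus \(p^{\prime}\) is a choice made here, not taken from the authors' implementation.

```python
import numpy as np

def fit_polynomial_refinement(p_src, p_dst):
    """Fit the coefficients m_0..m_11 of Eq. (6) by least squares (Eq. (7)).
    p_src, p_dst: (n, 2) arrays of corresponding pixel coordinates of the same
    object points, e.g. projections under the RFM and the equivalent PCM."""
    x, y = p_src[:, 0], p_src[:, 1]
    # Design matrix of the second-order polynomial basis [1, x, y, xy, x^2, y^2].
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    mx, *_ = np.linalg.lstsq(A, p_dst[:, 0], rcond=None)   # m_0 .. m_5
    my, *_ = np.linalg.lstsq(A, p_dst[:, 1], rcond=None)   # m_6 .. m_11
    return mx, my

def apply_refinement(mx, my, pts):
    """Map pixel coordinates through the fitted polynomial; in the pipeline this
    mapping is used to resample the original image (cf. Algorithm 2)."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
    return np.column_stack([A @ mx, A @ my])
```

A homography could be fitted in the same way, but, as reported in Section 5.3, the second-order polynomial reduces the equivalent error more effectively.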
**Sat-MVSF.** **Ours.** We used our REPM pipeline to reconstruct the DSM for the WHU-TLC and DFC2019 datasets; these results are referred to as "Ours" in the experiments. The REPM pipeline was constructed based on Adapted COLMAP, which we improved by incorporating the image partition module and an improved DSM generation module; this configuration is referred to as "REPM" in the experiments. When the image refinement model is additionally introduced, it is called "REPM+Ref." in the experiments. The crop size represents both length and width in all experiments. #### 4.1.3 Accuracy metrics We utilized the assessment metric codes established in the literature Zhang et al. (2019), and the equations employed to calculate the assessment metrics are as follows. (1) The root-mean-square error (RMSE), which is the standard deviation of the residuals between the ground truth and the estimation, is defined as \[RMSE=\sqrt{\frac{\sum_{i}(\hat{h}_{i}-h_{i})^{2}}{N_{i}}}\quad(i\in(\hat{h}\cap h)), \tag{8}\] where \(\hat{h},h\) denote the predicted and true values, respectively, and \(N_{i}\) denotes the number of values over which the metric is computed. For example, when calculating the RMSE accuracy of the reconstructed DSM, \(\hat{h},h\) denote the heights of the generated DSM and true DSM, respectively, and \(N_{i}\) denotes the number of pixels. (2) The median error (ME), which is the median of the absolute values of the residuals between the ground truth and the estimation, is defined as \[ME=median\left|\hat{h}_{i}-h_{i}\right|(i\in(\hat{h}\cap h)). \tag{9}\] (3) The mean absolute error (MAE), which is the mean of the absolute values of the residuals between the ground truth and the estimation, is defined as \[MAE=\frac{1}{n}\left(\sum_{i=1}^{n}\left|\hat{h}_{i}-h_{i}\right|\right)(i\in(\hat{h}\cap h)). \tag{10}\] (4) Completeness, which is the percentage of points with a height error less than a certain threshold. In this study, the completeness is denoted as \(Comp_{threshold}\) and defined as \[Comp_{threshold}=\frac{N_{|\hat{h}_{i}-h_{i}|<threshold}}{N_{i}}(i\in h). \tag{11}\] (5) Time consumption, which is the time that elapses between the input of an image and the generation of a DSM product. The unit of time is min. ### Comparative results on benchmark datasets We evaluated the proposed pipeline on four datasets, and compared its performance to the experimental results obtained from the image reconstruction pipeline for classic linear-array satellite CCDs de Franchis et al. (2014) and from commercial software Leica (2023). #### 4.2.1 Results for the WHU-TLC test set To evaluate the accuracy and completeness of our proposed reconstruction pipeline, we compared its performance to that of other reconstruction methods using the WHU-TLC test set. To ensure fairness, we used more straightforward point-cloud filtering for postprocessing. Table 1 compares the experimental results for the WHU-TLC test set contained in the literature Gao et al. (2023) to our results. The table shows that our method outperformed other methods in terms of accuracy and completeness and that there was no significant decrease in the running time of our method. Furthermore, compared to the Adapted COLMAP approach, the addition of our image refinement model led to a 5.52% improvement in the RMSE accuracy and a 2.57% improvement in the completeness for a threshold of 2.5 m. Table 1 shows that our method improves both the accuracy and completeness. 
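For concreteness, the per-DSM metrics of Section 4.1.3 (Eqs. (8)-(11)) can be written in a few lines; the sketch below is our own assumed NumPy implementation operating on two aligned DSM grids (it is not the evaluation code of Zhang et al. (2019)):

```python
import numpy as np

def dsm_metrics(dsm_pred, dsm_true, threshold=2.5):
    """RMSE, ME, MAE (Eqs. 8-10) over jointly valid cells, and completeness
    (Eq. 11) relative to all valid reference cells. NaN marks cells that were
    not reconstructed or have no reference height."""
    both = ~np.isnan(dsm_pred) & ~np.isnan(dsm_true)
    diff = dsm_pred[both] - dsm_true[both]
    rmse = np.sqrt(np.mean(diff**2))
    me = np.median(np.abs(diff))
    mae = np.mean(np.abs(diff))
    n_ref = np.count_nonzero(~np.isnan(dsm_true))
    comp = np.count_nonzero(np.abs(diff) < threshold) / n_ref
    return rmse, me, mae, comp
```

Completeness thresholds of 2.5 m (WHU-TLC), 1 m (DFC2019), and 2 m (GF7) are used in the corresponding tables.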
The poor RMSE accuracy of LPS can be attributed to the WHU-TLC test set, which comprises 46 image sets, some of which are occluded by clouds, whereas others have weakly textured regions such as water bodies. Because of LPS's inability to densely match such regions, a triangulation mesh is used to fill in the gaps, and the DSM outliers cannot be removed by point-cloud filtering, eventually resulting in an overall poor RMSE accuracy. A comparison between the DSM reconstruction output and the error maps for the WHU-TLC test set is shown in Fig. 4. S2P had more pixels with reconstruction failures than LPS, and all the pixels from LPS were successfully reconstructed. This discrepancy may be attributed to the provision of a low-resolution digital elevation model (DEM) by the ERDAS LPS software. A comparison of the error maps demonstrates that the image refinement model significantly enhanced the reconstruction accuracy and that our method had the highest reconstruction accuracy. Figure 4: Examples of DSMs and error plots for different solutions of the WHU-TLC test set. The color bar is expressed in units of meters. #### 4.2.2 Results for the DFC2019 dataset We conducted a comparative study using four prevalent sites in the DFC2019 dataset. For each of the four sites, we performed simple artificial masking of water bodies based on pixel values. Each site includes various satellite images obtained from different dates, and S2P and our pipeline could perform multi-view 3D reconstruction. LPS generated a DSM for every two images, and we selected the one with the highest accuracy, as listed in Table 2. In the DFC2019 dataset, the accuracies of our method and Adapted COLMAP were essentially the same, but the completeness of our method was significantly higher than that of the other methods. Table 2 shows that S2P achieved a higher accuracy. However, compared to our method, the S2P reconstruction failed in more areas, and the completeness at a threshold of 1 m was worse, causing a maximum drop of approximately 40% and a minimum drop of approximately 4%. The DSM reconstruction results shown in Fig. 5 and the error maps presented in Fig. 6 show that our method performed exceptionally well for vegetation, roads, and the edges of buildings. Fig. 6 displays the error maps of the DSM against the true DSM. Our method exhibited superior accuracy in estimating the heights of buildings and roads, but its accuracy was relatively low for vegetation and building shadows. The DSMs reconstructed by LPS exhibited a lower accuracy overall, possibly because the LPS method's 3D reconstruction of multi-date VHR imagery without GCPs is poor. #### 4.2.3 Results for the ISPRS-ZY3 dataset Because the data for the Sainte-Maxime region in the ISPRS-ZY3 satellite imagery contained GCPs, we assessed the absolute positioning accuracy of these data. We first calculated the altitude accuracy of the stereo positioning of the checkpoints for the RPC model's image space compensation scheme, as shown in Table 3. We then examined the corresponding altitudes of the GCPs in the reconstructed DSM and calculated the median, RMSE, and maximum of the altitude errors. For this large-scale satellite image, Adapted COLMAP produced no result (reported as NA). Table 3 demonstrates that the addition of the image refinement model dramatically improved the precision of our method, which exhibited an accuracy comparable to S2P. In addition, our method attained optimality regarding both the median and maximum errors. 
The final results of our REPM pipeline are presented in Fig. 7 (water is filtered out in the pipeline). #### 4.2.4 Results for the GF7 dataset In the experiments with the Gaofen-7 (GF7) image of the Zhengzhou region, we used the reference DSM as a benchmark to calculate the accuracy and completeness of the reconstructed DSM. The quantitative results are presented in Table 4, and Fig. 8 displays the DSM reconstruction results and error maps for partial areas. As shown in Table 4, our method achieved the highest reconstruction accuracy and completeness, and the incorporation of the image refinement model significantly boosted the accuracy and completeness with a threshold of 2 m. Fig. 8 shows the DSM reconstruction results for local areas. For high-resolution images of urban areas, we mainly compared the reconstruction results of buildings and roads. Our method was optimal in terms of the completeness and accuracy for buildings. In contrast, S2P was unsuccessful in reconstructing tall buildings (white buildings). Furthermore, LPS lost detailed building information, and the overall height estimation accuracy was poor. Figure 5: DSM results for the DFC dataset. The color bars of JAX_004, JAX_068, JAX_214, and JAX_260 are represented by (f), (g), (h), and (i), respectively. The color bar is expressed in units of meters. Figure 6: Error maps on the DFC dataset. The color bar is expressed in units of meters. Figure 7: Results produced by our REPM pipeline for the Sainte-Maxime dataset: (a) true color image and (b) DSM product. Figure 8: Partial DSM results and error plots for the GF7 image data. The color bar is expressed in units of meters. ## 5 Discussion ### Summary of the reconstruction results From the DSM reconstruction experiments on the four datasets, we can draw the following conclusions. (1) The proposed REPM pipeline significantly outperforms all the other methods, including both the commercial software and the open-source solutions, in terms of the two accuracy metrics of RMSE and Comp. When comparing the reconstructed DSMs and error maps, it can be observed that the pipeline proposed in this paper yielded the highest accuracy and completeness. (2) The incorporation of a polynomial image refinement model has proven to further enhance the accuracy and completeness of the DSM reconstruction. Additionally, it is worth noting that for large-scale imagery, the DSM reconstruction accuracy and completeness are significantly improved. ### Influence of the equivalent pinhole model According to the equivalent error formula presented in Section 3.2, the equivalent error is directly proportional to the size of the image and the difference in terrain altitude. In the experiment, we primarily explored the effect of the image size on the equivalent error. We defined the equivalent error \(L_{E}\) as the distance between the position of pixel \(p\) computed using the RPC parameter and the position of pixel \(p^{\prime}\) computed using the KRT parameter for the same object point, as displayed in Eq. (13). We used Eq. (8) to calculate the RMSE accuracy of the equivalent error. \[RMSE=\sqrt{(Samp\ RMSE)^{2}+(Line\ RMSE)^{2}} \tag{12}\] \[L_{E}=\sqrt{(x_{p}-x_{p^{\prime}})^{2}+(y_{p}-y_{p^{\prime}})^{2}} \tag{13}\] For satellite CCD images within a specific dataset, the error is directly proportional to the image size. For the same image size, as the image resolution increases, the equivalent error increases. 
The impact of image resolution on the equivalent error can be viewed as follows: as the image resolution increases, the terrain area corresponding to the same image size decreases, which indirectly affects the altitude difference and decreases it. According to Eq. (5), the equivalent error is proportional to the altitude difference. In summary, the image resolution indirectly affects the equivalent error for the same image size; as the image resolution increases, the equivalent error increases. Additionally, Fig. 9 displays the distribution of pixel-level equivalent errors on the image, providing a visual representation of the impact of image size on the equivalent error. Evidently, the equivalent error decreases as the image size decreases, eventually resulting in subpixel precision. Next, we examine the impact of the image size on the reconstructed DSMs in the WHU-TLC dataset, as presented in Table 5. The reduction in image size improved the ME accuracy and DSM completeness, but decreased the RMSE accuracy. Nevertheless, based on the results shown in Fig. 10, the altitude error shown in the error maps decreased as the image size decreased. In the second group of images, which had substantial differences in altitude, the reconstruction precision declined significantly when the image size was reduced from 1024 pixels to 512 pixels. This underscores the importance of selecting an optimal image size. We conclude that reducing the image size leads to an exponential increase in the number of images required, which creates a more complex stereo-matching view-selection problem in the pipeline reconstruction process, resulting in a decrease in the reconstruction accuracy. Figure 10: DSM results for different image sizes in the WHU-TLC test set: (a) REPM@5120, (b) REPM@2048, (c) REPM@1024, and (d) REPM@512. The color bar is expressed in units of meters. Figure 9: Visualizations of the distribution of the equivalent error on the image and the relationship between the equivalent error and the image size. The color bar is expressed in units of pixels. ### Influence of the polynomial image refinement model We validated the effectiveness of the polynomial image refinement model on large-scale ISPRS-ZY3 satellite images and GF7 satellite images. We compared the homography correction (H-Cor.) and polynomial correction (P-Cor.) for the equivalent error in images of different sizes. According to Table 6, the polynomial correction was more effective than the homography correction, and the effect of the correction was more significant for large-scale satellite images. In summary, the polynomial image refinement model was found to be more effective based on the metrics of the equivalent error. Subsequently, we evaluated the impact of the polynomial image refinement model on the DSM reconstruction accuracy. The ISPRS-ZY3 data indicated the absolute positioning accuracy calculated by the GCPs, whereas the GF7 data highlighted the accuracy and completeness of the DSM reconstruction. Based on our experimental findings, the addition of a polynomial image refinement model ensured that the reconstruction accuracy did not deteriorate substantially. However, for large images, the reconstruction accuracy and completeness were significantly enhanced. ### Future and outlook The approach of converting the RFM into an equivalent PCM for 3D reconstruction provides new ideas and possibilities for the 3D reconstruction of linear-array satellite images. In this study, the proposed REPM pipeline exhibited excellent potential. 
However, weakly textured regions (such as bodies of water) remain a significant challenge for the reconstruction method. For instance, the reconstruction results of high-resolution images were more precise in contour. In contrast, the roofs of buildings were prone to larger holes, as shown in Fig. 5. Although high-resolution images are favorable for including more details in the reconstruction, this causes certain patches in the images to lack texture, which causes the matching computation and depth estimation to fail in weakly textured regions. Low-resolution images reflect the structural information contained in images more effectively Xu et al. (2023). Consequently, it may be worthwhile to introduce a multi-scale reconstruction framework that enhances the reconstruction effect in weakly textured regions. Furthermore, deep learning could be utilized to address the challenge of reconstructing regions with weak textural features and to enhance the precision and comprehensiveness of DSM reconstruction. Although we can process VHR images with sizes up to \(5120\times 5120\) pixel, satellite images are generally large. If they are cropped (especially considering the overlap), the number of images, the memory they would occupy, and the time they would take to process would all increase exponentially. The number and size of the images determine the efficiency of the pipeline for a given input. Therefore, it is imperative to enhance the pipeline framework and increase the number of parallel operations to improve the operational efficiency of the pipeline. In addition, because we determined the optimal image processing size based on the equivalent error, in the future we aim to create an adaptable model for choosing the most suitable image size. ## 6 Conclusion In this study, we proposed an REPM pipeline for large-scale satellite CCD imagery. The pipeline is equipped with an equivalent pinhole model that converts the RFM to the PCM and a polynomial image refinement model that back-calculates the polynomial correction function to remap the image based on the least squares method. The experimental results showed that the REPM pipeline outperformed the state-of-the-art pipelines on four satellite image datasets. In addition, we assessed the effectiveness of the polynomial image refinement model in enhancing the precision of the DSM reconstruction for large-scale images. This model can be used for 3D reconstruction of large-scale CCD imagery. However, the model has many limitations, such as weak texture region processing methods and low efficiency in handling large-scale satellite images. In the future, we intend to further improve our pipeline by significantly enhancing the stereo matching of low-texture regions and increasing the reconstruction efficiency. ## Acknowledgment This study was supported by the National Natural Science Foundation of China (grant numbers: 41971427, 42371459, 42101458, 42130112, and 42201513).
2306.00247
A Relationship Between Spin and Geometry
In a recent paper, algebraic descriptions for all non-relativistic spins were derived by elementary means directly from the Lie algebra $\mathfrak{so}(3,\mathbb{R})$, and a connection between spin and the geometry of Euclidean three-space was drawn. However, the details of this relationship and the extent to which it can be developed by elementary means were not expounded. In this paper, we will reveal the geometric content of the spin algebras by realising them within a novel, generalised form of Clifford-like algebra. In so doing, we will demonstrate a natural connection between spin and non-commutative geometry, and discuss the impact of this on the measurement of hypervolumes and on quantum mechanics.
Peter T. J. Bradshaw
2023-05-31T23:44:09Z
http://arxiv.org/abs/2306.00247v1
# A Relationship Between Spin and Geometry ###### Abstract In a recent paper, algebraic descriptions for all non-relativistic spins were derived by elementary means directly from the Lie algebra \(\mathfrak{so}(3,\mathbb{R})\), and a connection between spin and the geometry of Euclidean three-space was drawn. However, the details of this relationship and the extent to which it can be developed by elementary means were not expounded. In this paper, we will reveal the geometric content of the spin algebras by realising them within a novel, generalised form of Clifford-like algebra. In so doing, we will demonstrate a natural connection between spin and non-commutative geometry, and discuss the impact of this on the measurement of hypervolumes and on quantum mechanics. ## 1 Introduction ### The Spin Algebras \(A^{(s)}\) A recent paper[1] derived real associative algebras that completely describe the spin structure for non-relativistic systems of arbitrary spin. These "spin algebras" \(A^{(s)}\), are derived from the universal enveloping algebra[2]\(U(\mathfrak{so}(3,\mathbb{R}))\) of the Lie algebra \(\mathfrak{so}(3,\mathbb{R})\), \[\mathfrak{so}(3,\mathbb{R}) \coloneqq\mathrm{span}_{\mathbb{R}}(\{S_{1},S_{2},S_{3}\}) \tag{1a}\] \[S_{a} \!\times\!S_{b}=\sum_{c}\varepsilon_{abc}S_{c}, \tag{1b}\] by quotient, \[A^{(s)}\coloneqq\frac{U(\mathfrak{so}(3,\mathbb{R}))}{I\left(\mathrm{Im}(M^{(2 s+1)})\right)}, \tag{2}\] where \(I\left(\mathrm{Im}(M^{(k)})\right)\) is the two-sided ideal generated by the totally-symmetric and contractionless "multipoles", \[M^{(k)}:T(\mathfrak{so}(3,\mathbb{R}))^{\otimes k}\to U( \mathfrak{so}(3,\mathbb{R})) \tag{3a}\] \[\mathrm{ad}(S^{2}+k(k+1)) \!\circ\!M^{(k)}=0\] (3b) \[\forall\tau\in S_{k},\;M^{(k)}\!\circ\!\tau=M^{(k)}\] (3c) \[\forall m\neq n\in\{1,...,k\},\;\sum_{a_{m},a_{n}=1}^{3}\delta_{a _{m}a_{n}}M^{(k)}\Big{(}\bigotimes_{j=1}^{k}S_{a_{j}}\Big{)}=0, \tag{3d}\] with \(T(\mathfrak{so}(3,\mathbb{R}))\) the tensor algebra of \(\mathfrak{so}(3,\mathbb{R})\)[3]. The multipoles are defined recursively in terms of the adjoint action \(\mathrm{ad}\), \[\mathrm{ad}(u)\coloneqq v\mapsto\begin{cases}uv&u\in\mathbb{R}\\ u\!\otimes\!v-v\!\otimes\!u&u\in\mathfrak{so}(3,\mathbb{R})\\ \mathrm{ad}(a)\!\circ\!\mathrm{ad}(b)(v)&u=a\!\otimes\!b,\end{cases} \tag{4}\] the left multiplication \(\forall A\in U(\mathfrak{so}(3,\mathbb{R}))\), \[L(A)\coloneqq B\mapsto A\otimes B, \tag{5}\] and the Casimir element of \(U(\mathfrak{so}(3,\mathbb{R}))\), \[S^{2}\coloneqq\sum_{a=1}^{3}S_{a}\otimes S_{a}, \tag{6}\] as \(\forall k\in\mathbb{Z}^{+}\), \(\alpha\in\mathbb{R}\), \(v\in\mathfrak{so}(3,\mathbb{R})\), \(B_{k}\in\mathfrak{so}(3,\mathbb{R})^{\otimes k}\), \[\begin{split} M^{(0)}(\alpha)&=\alpha\\ M^{(1)}(v)&=v\\ M^{(k+1)}(v\otimes B_{k})&=\frac{\operatorname{ ad}(S^{2}+k(k-1))\circ\operatorname{ad}(S^{2}+k(k+1))}{4(k+1)(2k+1)}\circ L(v) \circ M^{(k)}(B_{k}).\end{split} \tag{7}\] The multipoles are important to the structure of \(U(\mathfrak{so}(3,\mathbb{R}))\), since \(\forall A_{k}\in U(\mathfrak{so}(3,\mathbb{R}))\), \(\forall k\in\mathbb{N}\), for which \(\operatorname{ad}(S^{2}+k(k+1))(A_{k})=0\) may be written as an \(\mathbb{R}[S^{2}]\)-linear combination of objects from \(\operatorname{Im}(M^{(k)})\), and all elements of \(U(\mathfrak{so}(3,\mathbb{R}))\) are linear combinations of such \(A_{k}\). 
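These definitions can be checked in concrete matrix representations. The sketch below is our own illustrative Python (not code from [1]): it builds anti-Hermitian spin matrices \(S_{a}=-i\hat{J}_{a}\) (with \(\hbar=1\) and \(\hat{J}_{a}\) the conventional Hermitian spin matrices) so that the structure constants of (1b) are real, evaluates the Casimir element, and computes the quadrupole, i.e. the totally symmetric, contractionless part of \(S_{a}S_{b}\):

```python
import numpy as np

def spin_matrices(s):
    """Anti-Hermitian spin-s matrices S_a = -i * J_a in the |s, m> basis."""
    dim = int(round(2 * s)) + 1
    m = np.arange(s, -s - 1, -1)          # magnetic quantum numbers s, s-1, ..., -s
    Jz = np.diag(m)
    Jp = np.zeros((dim, dim))             # ladder operator J+
    for k in range(dim - 1):
        Jp[k, k + 1] = np.sqrt(s * (s + 1) - m[k + 1] * (m[k + 1] + 1))
    Jx, Jy = (Jp + Jp.T) / 2, (Jp - Jp.T) / (2j)
    return [-1j * J for J in (Jx, Jy, Jz)]

for s in (0.5, 1.0):
    S = spin_matrices(s)
    eps = lambda a, b, c: (a - b) * (b - c) * (c - a) / 2     # Levi-Civita symbol
    # Commutation relations (1b): [S_a, S_b] = sum_c eps_abc S_c.
    ok = all(np.allclose(S[a] @ S[b] - S[b] @ S[a],
                         sum(eps(a, b, c) * S[c] for c in range(3)))
             for a in range(3) for b in range(3))
    casimir = sum(Sa @ Sa for Sa in S)                        # = -s(s+1) * identity
    # Quadrupole: the symmetric, traceless (contractionless) part of S_a S_b.
    quad = [[(S[a] @ S[b] + S[b] @ S[a]) / 2 - (a == b) * casimir / 3
             for b in range(3)] for a in range(3)]
    print(s, ok, np.allclose(casimir, -s * (s + 1) * np.eye(len(S[0]))),
          all(np.allclose(q, 0) for row in quad for q in row))
```

For \(s=\tfrac{1}{2}\) the quadrupole vanishes identically, while for \(s=1\) it does not; this is one way to see why the quotient (2) by \(I\left(\operatorname{Im}(M^{(2s+1)})\right)\) leaves different sets of multipoles surviving for different spins.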
For compactness, let us define \(\forall k\in\mathbb{Z}^{+}\), \[\begin{split} M\coloneqq M^{(0)}(1)\\ M_{a_{1}a_{2}\dots a_{k}}\coloneqq M^{(k)}(S_{a_{1}}\otimes S_{a_{2}}\otimes...\otimes S_{a_{k}}).\end{split} \tag{8}\] The spin algebras \(A^{(s)}\) are real unital associative algebras of multipoles, \(\forall k\in\mathbb{Z}^{+}\), \[\begin{split} A^{(0)}&\cong\operatorname{span}_{\mathbb{R}}(\{M\})\\ A^{(k)}&\cong\operatorname{span}_{\mathbb{R}}(\{M,M_{a_{1}},...,M_{a_{1}\dots a_{2k}}\}),\end{split} \tag{9}\] within which, \[S^{2}=-s(s+1). \tag{10}\] Since all multipoles \(M_{a_{1}\dots a_{2k}}\) are algebraic combinations of the \(\{S_{a}\}\), the \(A^{(s)}\) encode the spin structures for arbitrary spins \(s\) entirely in terms of \(\mathfrak{so}(3,\mathbb{R})\). The Lie algebra \(\mathfrak{so}(3,\mathbb{R})\) generates the Lie group \(\operatorname{SO}(3,\mathbb{R})\), which is the connected symmetry group of Euclidean three-space. In this way, the \(A^{(s)}\) are connected to the geometry of Euclidean three-space; however, the extent and consequences of this connection are unclear. ### A Geometric Realisation of \(\mathfrak{so}(3,\mathbb{R})\) through Clifford Algebra To understand the extent of the relationship between the \(A^{(s)}\) and the geometry of Euclidean three-space, let us first attempt to understand the underlying geometric content of \(\mathfrak{so}(3,\mathbb{R})\), with which any geometric account of \(A^{(s)}\) must be compatible. Towards this, we will explore the geometric structure of the more general \(\mathfrak{so}(p,q,\mathbb{R})\), the Lie algebra of the connected symmetry group \(\operatorname{SO}^{+}(p,q,\mathbb{R})\), which preserves the geometry of a \((p{+}q)\)-dimensional space with indefinite signature. Let \((V,g)\) denote such a non-trivial finite-dimensional vector space \(V\) over \(\mathbb{R}\) equipped with a symmetric, non-degenerate, bilinear map \(g:V{\times}V\to\mathbb{R}\), which we shall follow relativity by referring to as a "metric". We may identify the Lie algebra \(\mathfrak{so}(p,q,\mathbb{R})\) with the set of all linear maps \(A\in\operatorname{End}(V)\) satisfying, \(\forall v,w\in V\), \[g(A(v),w)+g(v,A(w))=0. \tag{11}\] Such maps are closed under commutators, and the commutator serves as the Lie product. It has long been known that \(\mathfrak{so}(p,q,\mathbb{R})\) is in bijection[4] with \(\Lambda^{2}(V)\subset T(V)\), the space of second-order antisymmetric tensors on \(V\)[3], \[\Lambda^{2}(V)\coloneqq\operatorname{span}_{\mathbb{R}}(\{a\wedge b\,|\,a,b\in V\}), \tag{12}\] where \(\wedge\) is the multilinear, totally antisymmetric, associative "wedge" \(\forall k\in\mathbb{Z}^{+}\),\(\forall v_{j}\in V:j\in\{1,\dots,k\}\), \[v_{1}\wedge v_{2}\wedge\dots\wedge v_{k}=\frac{1}{k!}\sum_{\sigma\in S_{k}}\operatorname{sgn}(\sigma)\bigotimes_{j=1}^{k}v_{\sigma(j)}, \tag{13}\] where \(S_{k}\) is the set of all permutations of \(k\) objects, and \(\operatorname{sgn}(\sigma)\) is the sign of the permutation \(\sigma\). Explicitly, this bijection may be given, up to a scalar, as \(\forall v,w,x\in V\), \[v\wedge w\mapsto\big{(}x\mapsto g(v,\!x)w-g(w,\!x)v\big{)}. \tag{14}\] This bijection grants us an immediate geometric interpretation for the objects of \(\mathfrak{so}(p,q,\mathbb{R})\): linear combinations of planar elements. More generally, the "\(k\)-blade"[5] (13) can be interpreted as a hypervolume element of dimension \(k\). For \(\mathfrak{so}(3,\mathbb{R})\), a 2-blade encodes both the plane and angle of the rotation it generates. 
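The correspondence (14) is easy to check numerically. The sketch below (our own illustrative NumPy code, with an arbitrarily chosen indefinite metric) realises \(v\wedge w\) as the endomorphism of (14) and verifies the defining property (11), which in matrix form reads \(A^{T}G+GA=0\) for the metric matrix \(G\):

```python
import numpy as np

def bivector_to_generator(v, w, G):
    """Realise the bivector v ^ w as the endomorphism of Eq. (14):
    x -> g(v, x) w - g(w, x) v, written as a matrix acting on column vectors."""
    return np.outer(w, G @ v) - np.outer(v, G @ w)

rng = np.random.default_rng(0)
G = np.diag([1.0, 1.0, 1.0, -1.0])      # a non-degenerate metric of signature (3, 1)
v, w = rng.normal(size=4), rng.normal(size=4)
A = bivector_to_generator(v, w, G)

# Eq. (11): g(A(x), y) + g(x, A(y)) = 0 for all x, y, i.e. A^T G + G A = 0.
print(np.allclose(A.T @ G + G @ A, 0))  # True
```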
We refer to an arbitrary element of \(\Lambda^{k}(V)\) as a "\(k\)-vector"[5] or "_prefix_-vector" e.g. 2-vector and bivector are identical. For completeness, we consider 0-vectors and 0-blades to be the scalars of \(V\). With the objects of \(\mathfrak{so}(p,q,\mathbb{R})\) algebraically identified as bivectors \(\Lambda^{2}(V)\), we may find their Lie product by constructing the Clifford algebra[6]\(\operatorname{Cl}(V,g)\): \[\operatorname{Cl}(V,g)\cong\frac{T(V)}{I\left(v\otimes w+w\otimes v-2g(v,\!w )\right)}. \tag{15}\] This quotient reduces all tensors of \(T(V)\) to linear combinations of \(k\)-blades. The survival of the \(k\)-blades in \(T(V)\) mark them as objects of geometric significance. \(\operatorname{Cl}(V,g)\) is finite-dimensional, as all \(k\)-blades with \(k>\dim(V)\) are 0 by antisymmetry. Since the field of scalars of \(V\) is not of characteristic 2, this is equivalent to the construction of \(\operatorname{Cl}(V,g)\) using a quadratic form[6]. The structure of the Clifford algebra reveals the Lie product between bivectors, \[(a\wedge b)\otimes(c\wedge d)-(c\wedge d)\otimes(a\wedge b)=2g(b,\!c)(a\wedge d )-2g(b,\!d)(a\wedge c)-2g(a,\!c)(b\wedge d)+2g(a,\!d)(b\wedge c), \tag{16}\] turning \(\Lambda^{2}(V)\) into a Lie algebra. This Lie product is related to the usual one[7] by a scaling. The Clifford algebra also naturally defines an \(\mathfrak{so}(p,q,\mathbb{R})\)-action on vectors \(\forall a,b,c\in V\), \[m(a\wedge b)(c)=\frac{1}{2}\big{(}c\otimes(a\wedge b)-(a\wedge b)\otimes c \big{)}=g(a,\!c)b-g(a,\!b)c, \tag{17}\] which is identical to (14). This enables a natural action of the symmetry group \(\operatorname{SO}^{+}(p,q,\mathbb{R})\) to be defined algebraically on \(\operatorname{Cl}(V,g)\)[5]. Restricting our attention to three-dimensional Euclidean space \((E,\delta)\), we may introduce the transformation, \[S^{\prime}_{p}\coloneqq-\frac{1}{4}\sum_{a,b=1}^{3}\!\varepsilon_{abp}\,e_{a }\wedge e_{b}, \tag{18}\] where \(\{e_{1},e_{2},e_{3}\}\) are a basis for \(E\) satisfying \(\delta(e_{a},\!e_{b})=\delta_{ab}\). Then, on basis bivectors the Lie product (16) becomes, \[[S^{\prime}_{p},S^{\prime}_{q}]=\sum_{r=1}^{3}\varepsilon_{pqr}S^{\prime}_{r}, \tag{19}\] consistent with (1b). Thus, we see that \(\operatorname{Cl}(E,\delta)\) algebraically realises \(\mathfrak{so}(3,\mathbb{R})\) in a geometrically meaningful way using bivectors. ### Limitations of the Clifford Algebra Approach Despite this natural emergence of \(\mathfrak{so}(3,\mathbb{R})\) within \(\operatorname{Cl}(E,\delta)\), this realisation is severely limited. To see this, we note that in \(\operatorname{Cl}(E,\delta)\), we have \(\forall a,b,c,d\in E\), \[\frac{1}{2}\big{(}(a\wedge b)\otimes(c\wedge d)+(c\wedge d)\otimes(a\wedge b) \big{)}=g(a,\!d)g(b,\!c)-g(a,\!c)g(b,\!d). \tag{20}\] Applying (18), we find in \(\operatorname{Cl}(E,\delta)\), \[\frac{1}{2}(S^{\prime}_{p}\otimes S^{\prime}_{q}+S^{\prime}_{q}\otimes S^{ \prime}_{p})=-\frac{1}{4}\delta_{pq}, \tag{21}\] and the Casimir element \(S^{2}\) of \(\mathfrak{so}(3,\mathbb{R})\) is, \[S^{\prime 2}\coloneqq\sum_{p=1}^{3}S^{\prime}_{p}\otimes S^{\prime}_{p}=-\frac{3} {4}. \tag{22}\] Together, (21) and (22) imply that the spin quadrupole \(M^{\prime}_{pq}=0\) in \(\mathrm{Cl}(E,\delta)\). By the multipole recurrence relationship (7), we also conclude that all spin multipoles \(M_{p_{1},\ldots,p_{k}}=0\) for \(k>2\). 
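This realisation can also be verified numerically. In the sketch below (our own illustrative code, not part of the paper), the Pauli matrices serve as a standard matrix model of \(\operatorname{Cl}(E,\delta)\) — they satisfy \(\sigma_{a}\sigma_{b}+\sigma_{b}\sigma_{a}=2\delta_{ab}\) — and (18) and (19) are checked directly:

```python
import numpy as np

# Pauli matrices as a matrix model of Cl(E, delta).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
e = [sx, sy, sz]

wedge = lambda a, b: (a @ b - b @ a) / 2           # a ^ b for vectors inside Cl(E, delta)
eps = lambda a, b, c: (a - b) * (b - c) * (c - a) / 2

# Eq. (18): S'_p = -(1/4) sum_{a,b} eps_abp e_a ^ e_b.
S = [-sum(eps(a, b, p) * wedge(e[a], e[b]) for a in range(3) for b in range(3)) / 4
     for p in range(3)]

# Eq. (19): [S'_p, S'_q] = sum_r eps_pqr S'_r.
print(all(np.allclose(S[p] @ S[q] - S[q] @ S[p],
                      sum(eps(p, q, r) * S[r] for r in range(3)))
          for p in range(3) for q in range(3)))    # True
```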
This shows that, unsurprisingly[5], the unital subalgebra \(\{\mathbb{R},\Lambda^{2}(E)\}\subset\mathrm{Cl}(E,\delta)\) has spin-\(\frac{1}{2}\) structure, and is algebra isomorphic to \(A^{(\frac{1}{2})}\). This is a direct result of the defining algebraic structure (15) of \(\mathrm{Cl}(E,\delta)\). Thus, \(\mathrm{Cl}(E,\delta)\) cannot support an arbitrary spin structure within it, and cannot be used to explore the geometric content of \(A^{(s)}\) for \(s\neq\frac{1}{2}\). Finding an algebra which can will be the focus of this paper. In section 2, we will define a "Spinless Weak Clifford Algebra" compatible with the structure of an arbitrary \(A^{(s)}\). We will present the "Spin-\(s\) Weak Clifford Algebras" derived from these in section 3, and show they may naturally entail spin-dependence in the measured sizes of hypervolumes. Finally, in section 4, we will discuss the connection between the spin-\(s\) Clifford algebras and non-commutative geometries, and contrast these new algebras with other higher-spin models. We will also consider the implications of these algebras for quantum mechanics. ## 2 Method ### Towards a weaker Clifford Algebra As we saw in section 1.3, the incompatibility of the Clifford algebra with arbitrary spin algebras \(A^{(s)}\) is inherent to its algebraic structure. This is unfortunate, since this algebraic structure enabled us to find both a natural Lie algebra action (17) and the geometric structure of \(\mathfrak{so}(3,\mathbb{R})\) (16). To study the \(A^{(s)}\), we will construct a weaker algebra with both of these features by elementary means. Such an algebra will have no spin structure at all, enabling any \(A^{(s)}\) to be embedded within. We expect the Lie product of \(\mathfrak{so}(p,q,\mathbb{R})\) to follow from the \(\mathfrak{so}(p,q,\mathbb{R})\)-action on our algebra, as (16) follows from (17). Thus, to proceed we require only: an elementary derivation of an \(\mathfrak{so}(p,q,\mathbb{R})\)-action on vectors; and a determination of how this action must be implemented within an associative algebra. #### 2.1.1 Elementary Derivation of the \(\mathfrak{so}(p,q,\mathbb{R})\)-action on Vectors Recall that \(V\) is a finite-dimensional real vector space and that \(g\) is a non-degenerate, symmetric, bilinear map on \(V\). For any non-null element \(a\in V\), i.e. \(g(a,a)\neq 0\), we may always find a unique direct sum decomposition of \(V\), \[V\cong\mathrm{span}_{\mathbb{R}}(\{a\})\oplus W_{a}, \tag{23}\] where \(\forall w\in W_{a}\), \(g(a,w)=0\). Specifically, we can write \(\forall v\in V\) uniquely as, \[v=\frac{g(a,v)}{g(a,a)}a+b \tag{24}\] where \(g(a,b)=0\). We notice from (24) that the decomposition (23) is scale-invariant in two ways: \(\forall\alpha\in\mathbb{R}/\{0\}\), \(a\) and \(a^{\prime}=\alpha a\) give the same decomposition, as do \(g\) and \(b=\alpha g\). This suggests that some of the structure imparted on \(V\) by \(g\) is independent of scale. Thus, let us explore this structure more easily by accepting scaling of the metric \(g\) in our arguments. Doing so will enable us to establish results using more mathematically convenient objects. From (23), we may use \(g\) to alter the scale of the component of \(v\in V\) parallel to a non-null vector \(a\in V\), \(\forall k\in\mathbb{R}\), \[S(k,a)\coloneqq v\mapsto g(a,a)v+(k-1)g(a,v)a, \tag{25}\] accepting the overall scaling by \(g(a,a)\) that occurs. Note that \(\forall v,w\in V\), \[g(S(k,a)(v),S(k,a)(w))=g^{2}(a,a)g(v,w), \tag{26}\] precisely when \(k_{\pm}=\pm 1\). 
Recognising the \(k_{+}\) case as simply an overall scaling, we use the \(k_{-}\) solution to define the "conformal reflection", \[R(a)\coloneqq S(-1,a)=g(a,a)v-2g(a,v)a. \tag{27}\] The conformal reflections are a superset of the traditional reflections, but are similarly not closed under composition. To see this, let us first define a "\(g\)-adjoint" of an endomorphism \(A\in\mathrm{End}(V)\) as an endomorphism \(B\in\mathrm{End}(V)\) such that \(\forall v,w\in V\), \[g(A(v),w)=g(v,B(w)). \tag{28}\] Since \(g\) is symmetric and non-degenerate, \(B\) is unique and its \(g\)-adjoint is \(A\); accordingly, we shall denote the \(g\)-adjoint of \(A\) as \(\bar{A}\) with \(\bar{A}=A\). We also define "self-\(g\)-adjoint" to mean \(\bar{A}=A\), "anti-self-\(g\)-adjoint" to mean \(\bar{A}=-A\), and two maps \(a_{+}\) and \(a_{-}\) that respectively yield the self-\(g\)-adjoint and anti-self-\(g\)-adjoint parts of an endomorphism \(A\), \[a_{\pm}\coloneqq A\mapsto\frac{1}{2}(A\pm\bar{A}). \tag{29}\] We find all conformal reflections are self-\(g\)-adjoint, but the \(g\)-adjoint of \(R(a)\!\circ\!R(b)\) is \(R(b)\!\circ\!R(a)\) which is different in general. This indicates that there is structure in the product of two conformal reflections that is not itself a conformal reflection. To identify this additional structure, we decompose \(R(a)\!\circ\!R(b)(v)\), \(\forall v\in V\) into self-\(g\)-adjoint and anti-self-\(g\)-adjoint parts \(\forall v\in V\), \[a_{-}\big{(}R(a)\!\circ\!R(b)\big{)}(v)=-2g(a,\!b)\big{(}g(a,\!v)b-g(b,\!v)a \big{)} \tag{30a}\] \[a_{+}\big{(}R(a)\!\circ\!R(b)\big{)}(v)=g(a,\!a)g(b,\!b)v+2g(a,\!v )\big{(}g(a,\!b)b-g(b,\!b)a\big{)}-2g(b,\!v)\big{(}g(a,\!a)b-g(b,\!a)a\big{)}. \tag{30b}\] Noticing that there is a repeated pattern within (30a) and (30b), we define \(\forall a,b\in V\), \[t(a,\!b)\coloneqq v\mapsto g(a,\!v)b-g(b,\!v)a, \tag{31}\] enabling us to write, \[a_{-}\big{(}R(a)\!\circ\!R(b)\big{)}=-2g(a,\!b)t(a,\!b) \tag{32a}\] \[a_{+}\big{(}R(a)\!\circ\!R(b)\big{)}=g(a,\!a)g(b,\!b)\mathrm{id} +2t(a,\!b)\!\circ\!t(a,\!b). \tag{32b}\] Since \(a_{+}(A)+a_{-}(A)=A\), we find that the map \(t(a,\!b)\) controls binary products of conformal reflections. It is also antisymmetric, \(t(b,\!a)=-t(a,\!b)\), and anti-self-\(g\)-adjoint. Furthermore, given (32a), and \(\forall v\in V\), \[a_{-}\big{(}R(a)\!\circ\!t(b,\!c)\big{)}=\frac{1}{2}\big{(}t(R(a)(b),\!c)+t( b,\!R(a)(c))\big{)} \tag{33a}\] \[g(b,\!c)\,a_{+}\big{(}R(a)\!\circ\!t(b,\!c)\big{)}=\frac{1}{2} \big{(}g^{2}(a,\!c)R(b)-g^{2}(a,\!b)R(c)-R(t(a,\!b)(c))+R(t(a,\!c)(b))\big{)}, \tag{33b}\] we see the \(\{R(a),t(b,c)\}\) forms a generating set for the algebra of conformal reflections. Therefore, we have derived a map with central importance to the conformal reflections, and whose image agrees with the image of (14). So far our derivation has only accounted for non-null vectors \(a,b\in V\) with \(g(a,\!b)\neq 0\), so, \[t(a,\!b)=\frac{R(a)\!\circ\!R(b)-R(b)\!\circ\!R(a)}{-4g(a,\!b)}. \tag{34}\] Let us extend this definition to the whole of \(V\). Since \(t(a,\!b+\epsilon a)=t(a,\!b)\), \(\forall\epsilon\in\mathbb{R}\), \(t(a,\!b)\) for non-null vectors is defined without the need for limits. To define \(t(a,\!b)\) when \(b\) is null and \(g(a,\!b)=0\), by the non-degeneracy of \(g\) we may find a null \(c\in V\) such that \(g(b,\!c)\neq 0\), and use this pair to construct a pair of vectors \(\{p,n\}\) such that, \[g(p,\!p)=-g(n,\!n)>0 \tag{35a}\] \[g(p,\!n)=0\] (35b) \[b=p+n\] (35c) \[c=p-n. 
\tag{35d}\] Thus, we may define, \[t(a,\!b)=t(a,\!p)+t(a,\!n), \tag{36}\] by the bilinearity of \(t\). Similar arguments yield \(t(a,\!b)\) for \(a\) and \(b\) both null, regardless of the value of \(g(a,\!b)\). Thus, we have defined \(t(a,\!b)\)\(\forall a,b\in V\) and completed our derivation of the \(\mathfrak{so}(p,q,\mathbb{R})\)-action on \(V\); this was achieved in a coordinate-free, elementary way, without appealing to differential geometry or the theory of Lie groups[8]. #### 2.1.2 The Algebraic Form of \(t(a,b)\) Having acquired \(t(a,b)\), the \(\mathfrak{so}(p,q,\mathbb{R})\)-action on \(V\), \(\forall a,b\in V\), we must consider how to implement it algebraically within an associative algebra. More precisely, we seek a third-order tensor \(f(a,b,c)\) whose properties match those of \(t(a,b)(c)\), so that a quotient of \(T(V)\) by the two-sided ideal \(\forall a,b,c\in V\), \[I\left(f(a,b,c)-t(a,b)(c)\right) \tag{37}\] yields the most general non-trivial algebra possible. We first note that \(t(a,b)(c)\) is antisymmetric in its first two arguments. The most general third-order tensor sharing this property is, \[f(a,b,c)=k_{1}\big{(}(a\!\otimes\!b-b\!\otimes\!a)\!\otimes\!c\big{)}+k_{2} \big{(}c\!\otimes\!(a\!\otimes\!b-b\!\otimes\!a)\big{)}+k_{3}\big{(}a\!\otimes \!c\!\otimes\!b-b\!\otimes\!c\!\otimes\!a\big{)}. \tag{38}\] We may constrain (38) even further by considering the commutator between two \(t\) maps, which is closed, \[t(a,b)\!\circ\!t(c,\!d)-t(c,\!d)\!\circ\!t(a,\!b)=t(t(a,b)(c),\!d)+t(c,\!t(a, \!b)(d)). \tag{39}\] From (39), we see \(\forall a,b,c,d,e\in V\), \[t(t(a,\!b)(c),\!d)(e)+t(c,\!t(a,\!b)(d))(e)=-t(t(c,\!d)(a),\!b)(e)-t(a,\!t(c, \!d)(b))(e), \tag{40}\] from which we require, \[f(f(a,b,c),d,e)+f(c,f(a,b,d),e)+f(f(c,d,a),b,e)+f(a,f(c,d,b),e)=0. \tag{41}\] The only non-trivial solution to (41) is, \[f(a,b,c)=k_{1}\big{(}(a\!\otimes\!b-b\!\otimes\!a)\!\otimes\!c-c\!\otimes\!( a\!\otimes\!b-b\!\otimes\!a)\big{)},\] defining \(f(a,b,c)\) up to an arbitrary scaling, which was expected. ### The Spinless Weak Clifford Algebra We now have everything we need to construct the "Spinless Weak Clifford Algebra" \(\mathrm{Cl}_{\!\!\infty}^{\bullet}\!(V,g)\): choosing \(k_{1}=\frac{1}{2}\) in (2.1.2), we define \(\forall a,b,c\in V\), \[\mathrm{Cl}_{\!\!\infty}^{\bullet}\!(V,g)\cong\frac{T(V)}{I\left((a\!\wedge\! b)\!\otimes\!c-c\!\otimes\!(a\!\wedge\!b)-g(a,\!c)b+g(b,\!c)a\right)}, \tag{42}\] whose defining identity, \[(a\!\wedge\!b)\!\otimes\!c-c\!\otimes\!(a\!\wedge\!b)=g(a,\!c)b-g(b,\!c)a, \tag{43}\] agrees with the Clifford algebra's \(\mathfrak{so}(p,q,\mathbb{R})\)-action (17) up to a scaling. We use the term "weak" Clifford algebra to contrast the "strong" Clifford algebra in the sense used in logic: the defining relationship of \(\mathrm{Cl}(V,g)\) is stronger than the defining relationship of \(\mathrm{Cl}_{\!\!\infty}^{\bullet}\!(V,g)\). The bivector \(a\!\wedge\!b\) necessarily appearing whole in (43) demonstrates its significance to the properties of \(g\), and that \(t(a,\!b)\) is truly a bivector-action on a vector. We may capture this by defining, \[\mathrm{u}(a\!\wedge\!b)\coloneqq c\mapsto(a\!\wedge\!b)\!\otimes\!c-c\! \otimes\!(a\!\wedge\!b)=t(a,\!b). \tag{44}\] As the unique embedding of \(t(a,\!b)\) in \(\mathrm{Cl}_{\!\!\infty}^{\bullet}\!(V,g)\), we may consider the properties of \(\mathrm{u}(a\!\wedge\!b)\) to be the natural extension of \(t(a,\!b)\) to \(\mathrm{Cl}_{\!\!\infty}^{\bullet}\!(V,g)\). 
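Before turning to the properties of \(\mathrm{u}\), the maps constructed in Section 2.1 can be sanity-checked numerically. The sketch below (our own illustrative NumPy code with an arbitrarily chosen indefinite metric; not part of the derivation) verifies the decomposition (32a) and the closure relation (39):

```python
import numpy as np

rng = np.random.default_rng(1)
G = np.diag([1.0, 1.0, -1.0])                       # any non-degenerate symmetric metric

g = lambda u, v: u @ G @ v
R = lambda a: g(a, a) * np.eye(3) - 2 * np.outer(a, G @ a)   # conformal reflection, Eq. (27)
t = lambda a, b: np.outer(b, G @ a) - np.outer(a, G @ b)     # Eq. (31)
adj = lambda M: np.linalg.inv(G) @ M.T @ G                   # g-adjoint, Eq. (28)

a, b, c, d = (rng.normal(size=3) for _ in range(4))

# Eq. (32a): the anti-self-g-adjoint part of R(a) o R(b) equals -2 g(a,b) t(a,b).
print(np.allclose((R(a) @ R(b) - adj(R(a) @ R(b))) / 2, -2 * g(a, b) * t(a, b)))
# Eq. (39): the t maps close under commutators.
lhs = t(a, b) @ t(c, d) - t(c, d) @ t(a, b)
rhs = t(t(a, b) @ c, d) + t(c, t(a, b) @ d)
print(np.allclose(lhs, rhs))                        # both True
```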
In particular, we find that \(\mathrm{u}(a\!\wedge\!b)\) is naturally a derivation[3] on \(\mathrm{Cl}_{\!\!\infty}^{\bullet}\!(V,g)\), \(\forall A,B\in T(V)\), \[\mathrm{u}(a\!\wedge\!b)(A\!\otimes\!B)=\big{(}\mathrm{u}(a\!\wedge\!b)(A) \big{)}\!\otimes\!B+A\!\otimes\!\big{(}\mathrm{u}(a\!\wedge\!b)(B)\big{)}, \tag{45}\] since, \[(a\!\wedge\!b)\!\otimes\!\big{(}A\!\otimes\!B\big{)}-\big{(}A\!\otimes\!B \big{)}\!\otimes\!(a\!\wedge\!b)=\big{(}(a\!\wedge\!b)\!\otimes\!A-A\! \otimes\!(a\!\wedge\!b)\big{)}\!\otimes\!B+A\!\otimes\!\big{(}(a\!\wedge\!b) \!\otimes\!B-B\!\otimes\!(a\!\wedge\!b)\big{)}. \tag{46}\] This agrees with the standard prescription for a Lie algebra-action on a tensor product of representations, so that exponentiation yields a representation of a Lie group[9]. To ensure consistency with the structure of the \(A^{(s)}\), we shall extend \(\mathrm{u}\) to \(T(\Lambda^{2}(V))\) as an associative algebra-action, \(\forall\alpha\in\mathbb{R}\), \(A,B\in T(\Lambda^{2}(V))\), \[\mathrm{u}(\alpha) \coloneqq A\mapsto\alpha A \tag{47a}\] \[\mathrm{u}(A\!\otimes\!B) \coloneqq\mathrm{u}(A)\!\circ\!\mathrm{u}(B). \tag{47b}\] As with (17), (43) determines the Lie product of \(\mathfrak{so}(p,q,\mathbb{R})\), which in \(\mathrm{Cl}_{\overline{\mathsf{w}}}^{\tau}(V,g)\) is, \(\forall a,b,c,d\in V\), \[(a\wedge b)\otimes(c\wedge d)-(c\wedge d)\otimes(a\wedge b)=g(a,c)(b\wedge d)-g (b,c)(a\wedge d)-g(a,d)(b\wedge c)+g(b,d)(a\wedge c), \tag{48}\] consistent with the standard Lie product[7] with \(\hat{J}^{\mu\nu}=iJ^{\mu\nu}=i(e^{\mu}\wedge e^{\nu})\), and \(\hat{J}^{\mu\nu}\) the generators in the physics convention. This is also consistent with the spin generator convention \(\hat{S}_{a}=iS_{a}\) in [1]. By properties (47b) and (45), we may use (48) to write the commutator of \(\mathrm{u}\) compactly, \[\mathrm{u}(a\wedge b)\circ\mathrm{u}(c\wedge d)-\mathrm{u}(c\wedge d)\circ \mathrm{u}(a\wedge b)=\mathrm{u}(\mathrm{u}(a\wedge b)(c\wedge d)), \tag{49}\] showing that \(\mathrm{u}\) is indeed an \(\mathfrak{so}(p,q,\mathbb{R})\)-action. ### The Spinless Weak Clifford Algebra for \((E,\delta)\) Restricting our attention to the present problem, we consider the three-dimensional Euclidean space \((E,\delta)\) from earlier, and its spinless weak Clifford algebra \(\mathrm{Cl}_{\overline{\mathsf{w}}}^{\tau}(E,\delta)\). Using an orthonormal basis \(\{e_{a}\}\), we define, \[S_{p}\coloneqq\frac{1}{2}\sum_{a,b=1}^{3}\varepsilon_{abp}\,e_{a}\wedge e_{b}, \tag{50}\] which in \(\mathrm{Cl}_{\overline{\mathsf{w}}}^{\tau}(E,\delta)\) turns (48) into the standard Lie product of \(\mathfrak{so}(3,\mathbb{R})\) (1b). Therefore, \(U(\mathfrak{so}(3,\mathbb{R}))\subset\mathrm{Cl}_{\overline{\mathsf{w}}}^{ \tau}(V,g)\), and we identify, \[\mathrm{u}|_{T(\Lambda^{2}(V))}=\mathrm{ad}, \tag{51}\] where \(\mathrm{ad}\) was defined in (4). We note the inverse transformation of (50), \[e_{a}\wedge e_{b}=\sum_{p=1}^{3}\varepsilon_{abp}S_{p}, \tag{52}\] and recognise that it enables us to write the multipoles of the \(A^{(s)}\) in the language of bivectors, \(\forall k\in\mathbb{Z}^{+}\), \[M^{(k)}\Big{(}\bigotimes_{j=1}^{k}e_{a_{j}}\wedge e_{b_{j}}\Big{)}=\!\!\!\! \sum_{p_{1},\dots,p_{k}=1}^{3}\prod_{j=1}^{k}\varepsilon_{a_{j}b_{j}p_{j}}M^ {(k)}\Big{(}\bigotimes_{m=1}^{k}S_{p_{m}}\Big{)}, \tag{53}\] with, \[S^{2}=\frac{1}{2}\sum_{a,b=1}^{3}(e_{a}\wedge e_{b})\otimes(e_{a}\wedge e_{b}). 
\tag{54}\] Significantly, though \(U(\mathfrak{so}(3,\mathbb{R}))\subset\mathrm{Cl}_{\overline{\mathsf{w}}}^{ \tau}(E,\delta)\), \(\mathrm{Cl}_{\overline{\mathsf{w}}}^{\tau}(E,\delta)\) has no spin structure whatsoever. This makes it the ideal basic structure with which to realise the \(A^{(s)}\) and explore their geometric content. ### Measures of \(k\)-Volumes #### 2.4.1 \(k\)-Volumes in \(\mathrm{Cl}_{\overline{\mathsf{w}}}^{\tau}(V,g)\) In \(\mathrm{Cl}(V,g)\), geometric measurements about its objects are conveyed by the scalar part[5]\(\langle\cdot\rangle\), for example: \(|((v_{1}\wedge\dots\wedge v_{k})^{2})|\) is the square of a \(k\)-blade's \(k\)-volume; and \(\langle(v_{1}\wedge\dots\wedge v_{k})(w_{1}\wedge\dots\wedge w_{k})\rangle\) describes the projected overlap of two \(k\)-blades. In \(U(\mathfrak{so}(3,\mathbb{R}))\subset\mathrm{Cl}_{\overline{\mathsf{w}}}^{ \tau}(E,\delta)\), there is a similar notion. Recall from section 1.1, that all elements \(A_{0}\in U(\mathfrak{so}(3,\mathbb{R}))\) for which \(\mathrm{ad}(S^{2})(A_{0})=0\) are \(\mathbb{R}[S^{2}]\)-linear combinations of the monopole \(M\). Thus, given an element \(A\in U(\mathfrak{so}(3,\mathbb{R}))\) on which \(\mathrm{ad}(S^{2})\) has minimal polynomial \(m(x)\), we define its "Monopole Part" \(\mathrm{Mon}(A)\), \[\mathrm{Mon}(A)\coloneqq\begin{cases}\frac{n(\mathrm{ad}(S^{2}))}{n(0)}(A)&m(x)= xn(x)\\ 0&\text{otherwise.}\end{cases} \tag{55}\] We may use \(\mathrm{u}\) to identify geometrically meaningful objects within \(\mathrm{Cl}_{\overline{\mathsf{w}}}^{\tau}(V,g)\) with objects forming simple \(\mathrm{u}(U(\mathfrak{so}(3,\mathbb{R})))\)-modules; this is consistent with how the multipole tensors were identified[1]. The \(k\)-blades are such modules, demonstrating their significance even in \(\mathrm{Cl}_{\overline{\mathsf{w}}}^{\tau}(V,g)\). #### 2.4.2 \(k\)-Volumes in \(\operatorname{Cl}^{\overline{\bullet}}_{\mathsf{w}}(E,\delta)\) In \(\operatorname{Cl}^{\overline{\bullet}}_{\mathsf{w}}(E,\delta)\), we wish to capture the sizes of bivectors and trivectors in some natural way. Accordingly, the invariant tensors at second-order, \[\operatorname{Mon}(S_{p}\!\otimes\!S_{q})=\frac{1}{3}\delta_{pq}S^{2} \tag{56}\] and third-order, \[\operatorname{Mon}(S_{p}\!\otimes\!S_{q}\!\otimes\!S_{r})=\frac{1}{6}\varepsilon _{pqr}S^{2} \tag{57}\] in \(U(\mathfrak{so}(3,\mathbb{R}))\) are of particular interest, where we have used that \(M=1\). In bivector form, these are respectively, \[\operatorname{Mon}((a\!\wedge\!b)\!\otimes\!(c\!\wedge\!d))=\frac{1}{3}\big{(} \delta(a\!,\!c)\delta(b\!,\!d)-\delta(a\!,\!d)\delta(b\!,\!c)\big{)}S^{2}, \tag{58}\] and, \[\operatorname{Mon}((a\!\wedge\!b)\!\otimes\!(c\!\wedge\!d)\! \otimes\!(e\!\wedge\!f))=\frac{1}{6}\left(\delta(a\!,\!c)\big{(}\delta(b\!,\!e )\delta(d\!,\!f)-\delta(b\!,\!f)\delta(d\!,\!e)\big{)}\right. 
\tag{59}\] \[\left.-\delta(a\!,\!d)\big{(}\delta(b\!,\!e)\delta(c\!,\!f)- \delta(b\!,\!f)\delta(c\!,\!e)\big{)}\right.\] \[\left.-\delta(a\!,\!e)\big{(}\delta(b\!,\!c)\delta(d\!,\!f)- \delta(b\!,\!d)\delta(c\!,\!f)\big{)}\right.\] \[\left.+\delta(a\!,\!f)\big{(}\delta(b\!,\!c)\delta(d\!,\!e)- \delta(b\!,\!d)\delta(c\!,\!e)\big{)}\right)\!S^{2}\] Since these objects are invariant under u, they are invariant under the action of \(\operatorname{SO}(3,\mathbb{R})\), and we may use them to extend the metric \(\delta\) of \((E,\delta)\) to a metric \(\Delta\) on the space of all antisymmetric tensors[3]\(\Lambda(E)\), \(\forall\alpha,\beta\in\mathbb{R}\), \(\forall a,b,c,d,e,f\in E\), \[\Delta(\alpha\!,\!\beta)\coloneqq\alpha\beta \tag{60a}\] \[\Delta(a\!,\!b)\coloneqq\delta(a\!,\!b)\] (60b) \[\Delta(a\!\wedge\!b,\!c\!\wedge\!d)\coloneqq\operatorname{Mon}(( a\!\wedge\!b)\!\otimes\!(c\!\wedge\!d))\] (60c) \[=\frac{1}{3}\left\langle a\!\wedge\!b,\!c\!\wedge\!d\right\rangle S ^{2}\] \[\Delta(a\!\wedge\!b\!\wedge\!c,\!d\!\wedge\!e\!\wedge\!f)\coloneqq \operatorname{Mon}((a\!\wedge\!b)\!\otimes\!(c\!\wedge\!d)\! \otimes\!(e\!\wedge\!f))\] \[\quad-\operatorname{Mon}((a\!\wedge\!b)\!\otimes\!(c\!\wedge\!e) \!\otimes\!(d\!\wedge\!f))\] (60d) \[\quad+\operatorname{Mon}((a\!\wedge\!b)\!\otimes\!(c\!\wedge\!f) \!\otimes\!(d\!\wedge\!e))\] \[=-\frac{1}{3}\left\langle a\!\wedge\!b\!\wedge\!c,\!d\!\wedge\!e \!\wedge\!f\right\rangle S^{2},\] with all other combinations zero, and \(\left\langle\cdot,\cdot\right\rangle\) is the usual Cauchy-Binet metric[3] for \((E,\delta)\), \(\forall k\in\mathbb{Z}^{+}\), \[\left\langle\bigwedge_{j=1}^{n}a_{j},\bigwedge_{k=1}^{n}b_{k}\right\rangle \coloneqq\det(\delta(a_{j},b_{k})). \tag{61}\] Note that \(\Delta\) is not (yet) scalar-valued, \[\Delta:\Lambda(E)\to\mathbb{R}[S^{2}]. \tag{62}\] ## 3 Results ### The Spin-\(s\neq 0\) Weak Clifford algebras Using the spinless weak Clifford algebra for Euclidean three-space \(\operatorname{Cl}^{\overline{\bullet}}_{\mathsf{w}}(E,\delta)\) of (42), we may define the "Spin-\(s\) Weak Clifford Algebra" \(\operatorname{Cl}^{(s)}_{\mathsf{w}}(E,\delta)\) for spin-\(s\neq 0\), \[\operatorname{Cl}^{(s)}_{\mathsf{w}}(E,\delta)\cong\frac{\operatorname{Cl}^{ \overline{\bullet}}_{\mathsf{w}}(E,\delta)}{I\left(\operatorname{Im}(M^{(2s+1) })\right)}. \tag{63}\] Since \(U(\mathfrak{so}(3,\mathbb{R}))\subset\operatorname{Cl}^{\overline{\bullet}}_{ \mathsf{w}}(E,\delta)\), this is equivalent to embedding the structure of the \(A^{(s)}\) within \(\operatorname{Cl}^{\overline{\bullet}}_{\mathsf{w}}(E,\delta)\). Within this algebra, we may positively identify the \(A^{(s)}\) as algebras of bivectors in general, whose action on \(E\) respects its Euclidean geometry \(\delta\). Thus, we have finally explicated the structure of the \(A^{(s)}\) in both geometric and algebraic terms. However, the embedding of their spin structures also has an impact on the geometry itself. As before, the quotient (63) entails, \[S^{2}=-s(s+1), \tag{64}\] within \(\mathrm{Cl}_{\mathrm{w}}^{(s)}(E,\delta)\). This ensures that the metric \(\Delta\) becomes scalar valued for bivectors and trivectors, \[\Delta(a\!\wedge\!b,a\!\wedge\!b)=-\frac{s(s+1)}{3}\left\langle a\!\wedge\!b, a\!\wedge\!b\right\rangle \tag{65a}\] \[\Delta(a\!\wedge\!b\!\wedge\!c,a\!\wedge\!b\!\wedge\!c)=\frac{s(s+1)}{3} \left\langle a\!\wedge\!b\!\wedge\!c,a\!\wedge\!b\!\wedge\!c\right\rangle, \tag{65b}\] and naturally _spin dependent_. The metric on vectors and on scalars remains spin independent. 
Besides this feature, the values of the metric \(\Delta\) are quite different from those of the usual Clifford algebra on \((E,\delta)\). It has more consistency with \(\mathrm{Cl}(E,-\frac{1}{2}\delta)\), for example the bivectors have the same size and sign, and the trivector has the same sign. However, the vectors and trivector are too large by a factor of 2, and the vectors are also positive-definite. There is freedom in scaling \(\Delta\) in these sectors for consistency, but the author can see no mathematical reason for doing so at time of writing. Despite this change to square norms, the algebraic structure imparted on \(\mathrm{Cl}_{\mathrm{w}}^{(s)}(E,\delta)\) does not affect the action of \(\mathrm{u}(a\!\wedge\!b)\) nor the rotational behaviour of \(E\subset\mathrm{Cl}_{\mathrm{w}}^{(s)}(E,\delta)\), since (63) constrains only totally symmetric combinations of bivectors. However, the structure of \(\mathrm{Cl}_{\mathrm{w}}^{(s)}(E,\delta)\) is significantly affected by the spin structure from \(A^{(s)}\). Recall that \(A^{(s)}\) is defined completely from \(U(\mathfrak{so}(3,\mathbb{R}))\) by \(\mathrm{Im}(M^{(2s+1)})=\{0\}\). Taking spin-\(\frac{1}{2}\) as an example, this is equivalent to a series of tensor identities, \(\forall a,b\in\{1,2,3\}\), \[\frac{1}{2}(S_{a}\!\otimes\!S_{b}+S_{b}\!\otimes\!S_{a})+\frac{1}{4}\delta_{ab }=0. \tag{66}\] In the language of bivectors this condition is equivalent to \(\forall a,b,c,d\in E\), \[\frac{1}{2}\big{(}(a\!\wedge\!b)\!\otimes\!(c\!\wedge\!d)+(c\!\wedge\!d)\! \otimes\!(a\!\wedge\!b)\big{)}=\Delta(a\!\wedge\!b,c\!\wedge\!d). \tag{67}\] Recognising \(\Delta(a\!\wedge\!b,c\!\wedge\!d)\) as a scalar, we may break up each bivector on the left-hand side according to (12), revealing (67) to be an constraint on fourth-order tensors in \(\mathrm{Cl}_{\mathrm{w}}^{(1/2)}(E,\delta)\). For spin-\(s\), these identities constrain order \(2(2s+1)\) tensors. Interpreting \(E\) as physical Euclidean space, these embeddings of the spin structures of \(A^{(s)}\) within \(\mathrm{Cl}_{\mathrm{w}}^{\pi}(E,\delta)\) constitute non-commutative geometries for \(E\), in the sense of non-commuting position observables[10, 11, 12]. ### The Spin-\(0\) Weak Clifford algebra The case of the spin-\(0\) algebra is an edge-case requiring separate treatment. \(A^{(0)}\) contains only the monopole \(M=1\), and is defined by \(\mathrm{Im}(M^{(1)})=\{0\}\), which entails \(S^{2}=0\). In the bivector language, this means that \(\forall a,b\in E\), \[a\!\wedge\!b=0. \tag{68}\] Trying to apply this identity to \(\mathrm{Cl}_{\mathrm{w}}^{\pi}(E,\delta)\) as in (63) results in the trivial algebra \(\mathbb{R}\). This is because in (42) we associate \(\big{[}\Lambda^{2}(E),E\big{]}\) with the whole of \(E\), so quotienting all bivectors in \(\mathrm{Cl}_{\mathrm{w}}^{\pi}(E,\delta)\) sets all vectors in the algebra to \(0\). Really, the identity (43) is only reasonable when the bivectors are non-zero in the algebra, otherwise we seek an action of zero mapping any vector to any other. To avoid this, we must impose the structure of \(A^{(0)}\) directly on \(T(E)\), \[\mathrm{Cl}_{\mathrm{w}}^{(0)}(E,\delta)\cong\frac{T(E)}{\mathrm{Im}(M^{(1)} )}. \tag{69}\] This algebra is unique amongst the spin-\(s\) weak Clifford algebras: \(\mathrm{Im}(M^{(1)})=\{0\}\) implies that \(\forall a,b\in E\), \[a\!\otimes\!b=b\!\otimes\!a, \tag{70}\] so \(\mathrm{Cl}_{\mathrm{w}}^{(0)}(E,\delta)\cong\mathrm{Sym}(E)\) is commutative. 
In fact, a spin-\(s\) weak Clifford algebra is commutative iff it has a spin-\(0\) structure. Additionally, \(S^{2}=0\) implies that, \[\Delta(a\!\wedge\!b,a\!\wedge\!b)=0 \tag{71a}\] \[\Delta(a\!\wedge\!b\!\wedge\!c,a\!\wedge\!b\!\wedge\!c)=0, \tag{71b}\] which is consistent with our expectations from the other spin-\(s\) weak Clifford algebras, and the commutative nature of \(\mathrm{Cl}_{\mathrm{w}}^{(0)}(E,\delta)\). ## 4 Discussion Interpreting the meaning of the spin-\(s\) weak Clifford algebras depends heavily on our interpretation of the Euclidean three-space \(E\). The simplest and most relevant view of \(E\) is as the non-relativistic configuration space for a point-like particle. We may then interpret each vector as a position in three-space, or as the underlying algebraic object for a position operator in a quantum mechanical system. In this setting, we see that each spin-\(s\) weak Clifford algebra describes a different algebra of position variables according to its spin structure: the spin-0 weak Clifford algebra is commutative and corresponds to the usual position operator algebra in quantum mechanics; and the higher spin algebras are all non-commutative. In the sense of non-commuting position operators, we see a correspondence between the spin-\(s\) weak Clifford algebras and non-commutative geometries whose structures are determined by their spin. Though the meaning of the spin dependence of area and volume is unclear, especially when identically zero, these phenomena further indicate that the spin structure affects the geometry of, or perhaps experienced by, the system. These non-commutative geometries are, in general, much weaker than those common to the literature[10, 11, 12], which typically place the position operators into a Heisenberg-like[13, 14] algebra. From these observations, we see the \(\mathrm{Cl}_{\mathrm{w}}^{(s)}(E,\delta)\) as a new way to incorporate spin into quantum mechanical theories: directly as certain non-commutative algebras of position (and perhaps, by symmetry, momentum) operators. It is viable to extend the \(\mathrm{Cl}_{\mathrm{w}}^{(s)}(E,\delta)\) to such a more phenomenologically complete model, since they contain the totally symmetric tensors, which are essential to algebraically perform dynamical (symplectic) transformations[4, 6]. Aside from these considerations, \(\mathrm{Cl}_{\mathrm{w}}^{(1/2)}(E,\delta)\) and \(\mathrm{Cl}_{\mathrm{w}}^{(1)}(E,\delta)\) are also weak enough that the Euclidean Clifford and Duffin-Kemmer-Petiau[15, 16, 17] algebras respectively may be derived from them. Thus, the \(\mathrm{Cl}_{\mathrm{w}}^{(s)}(E,\delta)\) may form the basis for a generalised spin-\(s\) theory of such algebras. Furthermore, relativistic versions of this formalism may prove useful in the construction of theories of quantum gravity which incorporate both non-commutative geometry and spin. With our interpretation of the \(\mathrm{Cl}_{\mathrm{w}}^{(s)}(E,\delta)\) laid out, it is instructive to compare it against other arbitrary spin models. The most relevant such comparison is with the standard tensor product of center of mass and "internal" spin degrees of freedom[18]. An immediate similarity is that both models include the spin algebra \(A^{(s)}\) as a subalgebra, originating from their spin structures. An immediate difference is that the traditional model incorporates the Heisenberg algebra[13, 14], and therefore the notion of momentum, whereas the spin-\(s\) weak Clifford algebras do not. 
However, the most significant difference is that in the traditional model the position and spin degrees of freedom are commuting, and thus independent of each other; they are not in \(\mathrm{Cl}_{\mathrm{w}}^{(s)}(E,\delta)\) by construction. This implies that there is phenomenology between position and spin in a dynamical model containing \(\mathrm{Cl}_{\mathrm{w}}^{(s)}(E,\delta)\) which the tensor product model cannot describe. The tensor product model should however be recoverable within this richer formalism as an approximation in some suitable setting. Another standard approach to higher spin in non-relativistic physics is to consider a subspace of \(\mathrm{Cl}(E,\delta)^{\otimes k}\) with the appropriate spin structure[19]. However, due to the strength of the algebra, it lacks totally symmetric tensors, and so cannot easily form part of a model which algebraically encodes symplectic transformations. Such algebras also have interpretational issues regarding the underlying substructure of \(\mathrm{Cl}(E,\delta)^{\otimes k}\) when applying it to fundamental particles; \(\mathrm{Cl}_{\mathrm{w}}^{(s)}(E,\delta)\) does not suffer from this. Beyond the realm of non-relativistic physics are the Bargmann-Wigner[20] and Joos-Weinberg[21, 22] equations. Since the former equations do not have definite spin in general[23], we shall focus on the latter. The \(\gamma^{\mu_{1}\ldots\mu_{2s}}\) for a particle of spin-\(s\) in the Joos-Weinberg equation are comprised of objects which bear close, but not exact, resemblance to the multipole tensors of order \(s\)[1], revealing a link to the spin structure of the theory. Much like the tensor product model however, the spin sectors of the Joos-Weinberg equations and their center of mass sectors commute, as do the position operators. In this way the comparisons made between \(\mathrm{Cl}_{\mathrm{w}}^{(s)}(E,\delta)\) and the tensor product model are valid for the Joos-Weinberg equations also. ## 5 Conclusion In this paper, we demonstrated the incompatibility of the Clifford algebra with arbitrary non-relativistic spin structures, and defined a family of generalised Clifford-like algebras which support arbitrary spin structures. To do this, we presented a novel algebraic derivation for the structure of the Lie algebra \(\mathfrak{so}(p,q,\mathbb{R})\), without the need to appeal to the theory of differential geometry or Lie groups. We also defined an even more general Clifford-like algebra with no spin structure at all, which underpins the structure of the arbitrary spin Clifford-like algebras. In so doing, we explicated the geometric structure of the spin algebras, and showed that they each define a unique non-commutative geometry on Euclidean three-space. We found that this structure induces a spin-dependence on the measured notions of area and volume, and compared these new arbitrary spin algebras with existing models of arbitrary spin. Some applications and avenues for further enquiry were discussed. ## 6 Acknowledgements The author would like to thank B. J. Hiley, M. Hajtanian, and D. Nellist for their insightful conversations and support.
2309.09380
Mitigating Shortcuts in Language Models with Soft Label Encoding
Recent research has shown that large language models rely on spurious correlations in the data for natural language understanding (NLU) tasks. In this work, we aim to answer the following research question: Can we reduce spurious correlations by modifying the ground truth labels of the training data? Specifically, we propose a simple yet effective debiasing framework, named Soft Label Encoding (SoftLE). We first train a teacher model with hard labels to determine each sample's degree of relying on shortcuts. We then add one dummy class to encode the shortcut degree, which is used to smooth other dimensions in the ground truth label to generate soft labels. This new ground truth label is used to train a more robust student model. Extensive experiments on two NLU benchmark tasks demonstrate that SoftLE significantly improves out-of-distribution generalization while maintaining satisfactory in-distribution accuracy.
Zirui He, Huiqi Deng, Haiyan Zhao, Ninghao Liu, Mengnan Du
2023-09-17T21:18:02Z
http://arxiv.org/abs/2309.09380v1
# Mitigating Shortcuts in Language Models with Soft Label Encoding ###### Abstract Recent research has shown that large language models rely on spurious correlations in the data for natural language understanding (NLU) tasks. In this work, we aim to answer the following research question: _Can we reduce spurious correlations by modifying the ground truth labels of the training data?_ Specifically, we propose a simple yet effective debiasing framework, named Soft Label Encoding (SoftLE). We first train a teacher model with hard labels to determine each sample's degree of relying on shortcuts. We then add one dummy class to encode the shortcut degree, which is used to smooth other dimensions in the ground truth label to generate soft labels. This new ground truth label is used to train a more robust student model. Extensive experiments on two NLU benchmark tasks demonstrate that SoftLE significantly improves out-of-distribution generalization while maintaining satisfactory in-distribution accuracy. ## 1 Introduction Large language models (LLMs), such as BERT Devlin et al. (2019), RoBERTa Liu et al. (2019), and GPT-3 Brown et al. (2020), have achieved remarkable performance in various natural language understanding (NLU) tasks. However, recent studies suggest that these LLMs heavily rely on shortcut learning and spurious correlations rather than developing a deeper understanding of language and semantic reasoning across multiple NLU tasks Niven and Kao (2019); Du et al. (2021); Mudrakarta et al. (2018). This reliance on shortcuts and spurious correlations gives rise to biases within the trained models, which results in their limited generalization capability on out-of-distribution (OOD) datasets. To mitigate shortcut learning and build more robust models free from biases, several debiasing methods have been proposed, following the framework of knowledge distillation Hinton et al. (2015). These methods involve training a teacher model with prior knowledge about the task to capture dataset bias and then training a student model to avoid learning the biases present in the teacher model He et al. (2019); Clark et al. (2019). However, most existing debiasing methods rely on manual annotation and require specific prior knowledge of bias about the dataset, making it challenging to cover the entire dataset with prior knowledge. Motivated by the crucial observation that the limited robustness of LLMs on NLU tasks arises from spurious correlations learned during training, we aim to improve generalization and robustness by decreasing the likelihood of learning such correlations through a data-centric perspective. One straightforward approach is to filter shortcut features in the input data. However, this method is time-consuming and labor-intensive. This motivates us to explore the following research question: _Can we reduce spurious correlations by modifying the ground truth labels of the training data?_ In this work, we propose a method called Soft Label Encoding (SoftLE) to address the issue of shortcut learning in NLU models through a straightforward data-centric perspective. We first train a teacher model with hard labels to determine each sample's degree of relying on shortcuts. We then add one dummy class to encode the shortcut degree, which is used to smooth other dimensions in the ground truth label to generate soft labels. This new ground truth label is used to train a more robust student model. 
The key idea of our method is to reduce spurious correlations between shortcut tokens and certain class labels in the training set. This can be leveraged to discourage models from relying on spurious correlations during model training. This also implicitly encourages the models to derive a deeper understanding of the task. This label smoothing method is efficient since it directly operates on labels and does not require manual feature filtering. The major contributions of this work can be summarized as: * We propose a data-centric framework, SoftLE, to debias NLU models. * The SoftLE framework is flexible and can be applied to address the shortcut learning issue of various NLU problems. * Experimental results demonstrate that SoftLE improves out-of-distribution generalization while maintaining in-distribution accuracy. ## 2 Proposed Method In this section, we introduce the proposed Soft Label Encoding (SoftLE) debiasing framework. ### SoftLE Debiasing Framework Problem Formulation.NLU tasks are usually formulated as a general multi-class classification problem. Consider a dataset \(D=\{(x_{i},y_{i})\}_{i=1}^{N}\) consisting of the input data \(x_{i}\in\mathcal{X}\) and the hard labels \(y_{i}\in\mathcal{Y}\), the goal is to train a robust model with good OOD generalization performance. **Teacher Model Training.** A biased teacher model \(f_{T}\) containing \(K\) classes is first fine-tuned on the corresponding NLU dataset. As shown in Figure 2, when this model starts to converge, the percentage of over-confident samples in the in-domain set exceeds 0.9, while this ratio is around 0.8 for the OOD sets, indicating there are more over-confident samples in in-domain set. The in-domain test set contains both shortcut samples and difficult samples, whereas the two OOD sets primarily contain difficult samples. Therefore, the inconsistency in confidence ratios indicates that samples utilizing more shortcut features will be predicted by the teacher model with a higher softmax confidence. In the following, we leverage the prediction confidence of the model to quantify the degree of shortcut for each training sample. Quantifying Shortcut Degree.We fix the parameters of teacher model \(f_{T}\) and calculate the logit value and softmax confidence of training sample \(x_{i}\) as \(z_{i}^{T}\) and \(\sigma(z_{i}^{T})\). Then, we set the threshold \(\xi\) and hyperparameters to calculate the shortcut degree for each over-confident sample(i.e., \(\sigma(z_{i}^{T})\) > \(\xi\)): \[s_{i,j}=\log_{\alpha}(\sigma(z_{i}^{T})+\beta). \tag{1}\] Soft Label Encoding.Equipped with the shortcut degree, we then transform a K-class classification problem into a K+1 class problem by introducing a new _dummy category_. The value of the dummy category is given as the shortcut degree value \(s_{i,j}\). The original label \(1\) in one-hot form \(y_{i}\) is transformed into smoothed label: \(1-s_{i,j}\). We illustrate this process using the MNLI task as example (see Figure 1). Here, we set \(\xi\) as 0.9. If the teacher model predicts a high softmax confidence \(\sigma(z_{i}^{T})\) > 0.9 for a sample, then the original three-class classification label \(y_{i}=[0,1,0]\) of this sample will be transformed into a four-class label \(y_{i}^{\prime}=[0,1-s_{i,j},0,s_{i,j}]\). For training samples with a large shortcut degree, more smoothed new labels will be obtained. In contrast, we will preserve the original hard labels for samples with a low shortcut degree. ### Overall Framework We present the overall framework in Algorithm 1. 
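To make the transformation of Equation (1) and Figure 1 concrete, the sketch below builds the \((K+1)\)-dimensional soft targets from a frozen teacher's softmax outputs. Only the threshold \(\xi=0.9\) is taken from the text; the values of \(\alpha\) and \(\beta\), and the reading of the "softmax confidence" as the teacher's maximum class probability, are placeholder assumptions made for illustration.

```python
import numpy as np

def soft_label_encode(teacher_probs, hard_labels, num_classes,
                      xi=0.9, alpha=2.0, beta=1.0):
    """Build (K+1)-class soft targets from the frozen teacher's outputs.

    xi is the over-confidence threshold (0.9 in the paper); alpha and beta
    are the hyperparameters of Eq. (1). Their values are not reported in the
    text above, so the defaults here are placeholders chosen only so that
    the shortcut degree s stays inside (0, 1).
    """
    n = len(hard_labels)
    soft = np.zeros((n, num_classes + 1))
    # Assumption: sigma(z_i^T) is read as the teacher's max class probability.
    confidence = teacher_probs.max(axis=1)
    for i, y in enumerate(hard_labels):
        if confidence[i] > xi:                               # over-confident
            s = np.clip(np.log(confidence[i] + beta) / np.log(alpha), 0.0, 1.0)
            soft[i, y] = 1.0 - s                             # smoothed gold label
            soft[i, num_classes] = s                         # dummy shortcut class
        else:
            soft[i, y] = 1.0                                 # keep the hard label
    return soft

# MNLI-style example: y = [0, 1, 0] for an over-confident sample
# becomes [0, 1 - s, 0, s].
print(soft_label_encode(np.array([[0.02, 0.96, 0.02]]), np.array([1]), 3))
```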
The dummy class is only required during the training stage and will be discarded during inference. **The Training Stage.** We use standard cross entropy loss to train the debiased model: \[\mathcal{L}_{SL}=-\sum_{i=1}^{N}\sum_{j=1}^{K+1}y_{ij}^{\prime}\log(p_{ij}), \tag{2}\] where \(p_{ij}\) is the predicted probability for input \(x_{i}\) to have label \(j\), and \(y^{\prime}_{ij}\) is the _transformed label_ for training example \(i\). Figure 1: An overview of the proposed Soft Label Encoding framework. Figure 2: Percentage of over-confident samples in FEVER and corresponding challenge sets during training the teacher model. We set the threshold \(\xi\) = 0.9. During training, we replace the proposed loss \(L_{SL}\) with the standard hard label loss \(L_{HL}\) for the first two epochs as a warming-up training: \[\mathcal{L}_{HL}=-\sum_{i=1}^{N}\sum_{j=1}^{K+1}y_{ij}\log(p_{ij}), \tag{3}\] where \(y_{ij}\) stands for the _one-hot label_ for (K+1)-class classification of the training example. In the last few epochs, we switch back to using \(L_{SL}\). This has been demonstrated to retain better ID performance while achieving similar debiasing performance. We give further analysis in Section 3.5. **The Inference Stage.** We ignore the extra class and predict based on the first \(K\) classes: \[\hat{y}_{i}=\text{argmax}_{j\in[1,\cdots,K]}p_{i,j}. \tag{4}\] ## 3 Experiments In this section, we evaluate the debiasing performance of the proposed SoftLE framework. ### Tasks and Datasets We explore two NLU tasks: natural language inference (NLI) and fact verification. For NLI, we use the MNLI dataset Williams et al. (2018) to train biased and de-biased models. We evaluate these models on the in-distribution (ID) MNLI-dev set and the out-of-distribution (OOD) HANS dataset McCoy et al. (2019) to test for generalization. For fact verification, we use the FEVER dataset Thorne et al. (2018) as our ID data. We then evaluate the model's OOD performance on the FEVER symmetric dataset Schuster et al. (2019). Further details are provided in Appendix B. For both tasks, we employ accuracy as the metric to evaluate the performance of the models on the ID and OOD sets. ### Comparing Baselines We compare our proposed method with four representative baseline methods: _Product of Experts (POE)_ Clark et al. (2019); Mahabadi et al. (2020), _Example Reweighting (ER)_ Schuster et al. (2019), _Regularized Confidence_ Utama et al. (2020), and _Debiasing Masks_ Meissner et al. (2022). More details of the baselines are given in Appendix C. ### Implementation Details We employ the Adam optimizer with its default hyperparameters to train the biased model for 5 epochs, where the learning rates for MNLI and FEVER are set to \(5\times 10^{-5}\) and \(2\times 10^{-5}\), respectively. In the debiasing model training process, empirical results show that training for 5 epochs with a learning rate of \(2\times 10^{-5}\) provides convergence on FEVER, while 6 epochs are needed to yield the best result on MNLI. Appendix D provides more details. In the following sections, we use BERT-base to test the effectiveness of the proposed debiasing framework (results for RoBERTa-base are given in Appendix E). ### Comparison with Baselines We compare our approach against baselines and report the results in Table 1. We observe that our SoftLE method consistently improves the performance on both challenge sets. However, the in-distribution performance on the MNLI dev set drops slightly. 
Figure 3 reveals that despite the biased teacher model assigning high softmax confidences for both Figure 3: We compared the distribution of softmax confidences for the samples misclassified by softLE and the original model (i.e., teacher model) on Fever and Symm.1. Y-axis denotes the ratio. in-domain and OOD samples, a larger proportion of high-confidence OOD samples are misclassified. It further illustrates that the SoftLE assigns lower softmax confidences for over-confident samples, thereby effectively reducing the probability of the model incorrectly predicting OOD samples. The results confirm SoftLE leads to a narrower discrepancy between the error rates of ID and OOD sets. ### Ablation Study In Section 2, we mentioned that a better trade-off between ID and OOD performance can be achieved by adjusting the loss function during training the debiasing model. To confirm that the combination of this adjustment strategy is necessary to achieve our results, we provide an ablation experiment where the debiasing loss \(L_{SL}\) is replaced with \(L_{HL}\) during different training epochs. Previous work has shown that shortcut features tend to be picked up by the NLU model in very early iterations Du et al. (2021). Our results on FEVER support this idea, as shown in Table 2, where we find that replacing training loss in the first 2 epochs outperforms other strategies. As we have observed, SoftLE prevents models from learning spurious correlations, resulting in a lower performance increase on ID and OOD sets during the early stages of training the debiasing model. Thus, this adjustment strategy leverages superficial features, while SoftLE prevents the model from solely relying on superficial features, ultimately achieving a delicate balance. ### Why Does Our Algorithm Work? For an over-confident input sample \(x=(x^{b},x^{-b})\), let \(x^{b}\) denote _biased features_ of the sample, and let \(x^{-b}\) represent the remaining features of the sample except for the biased ones. It is generally considered that a bias model only uses the biased features \(x^{b}\) to predict the original ground-truth label: \[p(y^{\text{truth}}|x)=p(y^{\text{truth}}|x^{b}). \tag{5}\] Over-confidence indicates that the predicted probability \(p(y^{\text{truth}}|x^{b})\) of the sample is very high. In other words, for over-confident samples, there is a relatively high spurious correlation between labels and bias features. In comparison, when we transform the label of the sample, _i.e._, altering \(y^{\text{truth}}\) to \(y^{\text{smooth}}\), it is proved in Clark et al. (2019) that the predictive probability \(p(y^{\text{smooth}}|x)\) can be computed as follows: \[p(y^{\text{smooth}}|x)\propto p(y^{\text{smooth}}|x^{-b})p(y^{\text{smooth}}|x ^{b}). \tag{6}\] For over-confident samples, we find that the label transformation actually mitigates the potential correlation between labels and biased features. In other words, it significantly lowers the predictive probability given biased features, _i.e._, \(p(y^{\text{smooth}}|x^{b})<p(y^{\text{truth}}|x^{b})\). Thus, to maximize \(p(y^{\text{smooth}}|x)\), our model has to depend more on unbiased features \(x^{-b}\) to obtain a higher \(p(y^{\text{smooth}}|x^{-b})\) value. ## 4 Conclusions Recently debiasing NLU tasks has attracted increasing attention from the community. In this paper, we proposed SoftLE, a simple yet effective method for mitigating the shortcuts in NLU tasks. 
We present a theoretical analysis of this approach and experimental results on different NLP benchmark tasks confirming its effectiveness. \begin{table} \begin{tabular}{l c c c|c c c c} \hline \hline & \multicolumn{3}{c}{MNLI(acc.)} & \multicolumn{3}{c}{FEVER(acc.)} \\ \cline{2-9} Models & DEV & HANS & Avg. & DEV & Symm.1 & Symm.2 & Avg. \\ \hline Original & **84.5** & 62.4 & 73.5 & 85.6 & 55.1 & 62.2 & 67.6 \\ Reweighting Schuster et al. (2019) & 81.4 & 68.6 & 75.0 & 84.6 & 61.7 & 64.9 & 70.4 \\ PoE Clark et al. (2019); Mahabadi et al. (2020) & 84.2 & 64.6 & 74.4 & 82.3 & **62.0** & 64.3 & 69.5 \\ Reg-conf Utama et al. (2020) & 84.3 & **69.1** & **76.7** & 86.4 & 60.5 & 66.2 & 71.0 \\ Debias-Mask Meissner et al. (2022) & 81.8 & 68.7 & 75.3 & 84.6 & - & 64.9 & - \\ **SoftLE** & 81.2 & 68.1 & 74.7 & **87.5** & 60.3 & **66.9** & **71.5** \\ \hline \hline \end{tabular} \end{table} Table 1: Model performance on in-distribution and OOD test set. We select the version that achieves the best performance in the original paper for the listed baseline methods. The Avg. columns report the average score on in-distribution and challenge sets. We highlight the best performance on each dataset. \begin{table} \begin{tabular}{l c c c} \hline \hline Method & FEVER & Symm.1 & Symm.2 \\ \hline Original & 85.6 & 55.1 & 62.2 \\ SoftLE w/o Replacing & 86.6 & 57.7 & 63.9 \\ SoftLE-F2 (Ours) & **87.5** & **60.3** & **66.9** \\ SoftLE-L2 & 87.1 & 57.8 & 64.4 \\ \hline \hline \end{tabular} \end{table} Table 2: Our experimental results comparing the original method against several loss function adjustment strategies. SoftLE-F2 denotes training with \(L_{HL}\) for the first 2 epochs, while SoftLE-L2 denotes training with \(L_{HL}\) for the last 2 epochs. ## 5 Limitations There are various ways to quantify the shortcut degree. Our debiasing framework only attempts one solution to generate the shortcut degree of each sample. Going forward, in order to better measure the shortcut degree of the training samples, a more comprehensive analysis is needed. Additionally, although our proposed debiasing framework is general, we have only applied it to two NLU tasks (MNLI and FEVER) and two types of LLMs (i.e., BERT and RoBERTa). In the future, we plan to extend our debiasing framework to more NLU tasks and additional types of LLMs.
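As a complement to Equations (2)-(4), the schematic sketch below illustrates the SoftLE-F2 schedule used above (hard-label warm-up for the first two epochs, then the soft-label loss) and the inference rule that discards the dummy class. This is not the authors' code: the model call, batch fields, and optimizer are placeholders.

```python
import torch
import torch.nn.functional as F

def soft_ce(logits, targets):
    """Cross entropy against (K+1)-dimensional (soft or one-hot) targets, Eqs. (2)-(3)."""
    return -(targets * F.log_softmax(logits, dim=-1)).sum(dim=-1).mean()

def train_student(student, loader, optimizer, num_epochs=5, warmup_epochs=2):
    for epoch in range(num_epochs):
        for batch in loader:  # batch fields below are placeholder names
            logits = student(batch["input_ids"], batch["attention_mask"])
            # SoftLE-F2: hard one-hot (K+1)-class targets during warm-up,
            # smoothed soft labels afterwards.
            targets = batch["hard_onehot"] if epoch < warmup_epochs else batch["soft_labels"]
            loss = soft_ce(logits, targets)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

@torch.no_grad()
def predict(student, batch, num_classes):
    logits = student(batch["input_ids"], batch["attention_mask"])
    # Eq. (4): ignore the dummy class and predict over the first K classes.
    return logits[:, :num_classes].argmax(dim=-1)
```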
2307.16718
An Efficient Shapley Value Computation for the Naive Bayes Classifier
Variable selection or importance measurement of input variables to a machine learning model has become the focus of much research. It is no longer enough to have a good model, one also must explain its decisions. This is why there are so many intelligibility algorithms available today. Among them, Shapley value estimation algorithms are intelligibility methods based on cooperative game theory. In the case of the naive Bayes classifier, and to our knowledge, there is no ``analytical" formulation of Shapley values. This article proposes an exact analytic expression of Shapley values in the special case of the naive Bayes Classifier. We analytically compare this Shapley proposal, to another frequently used indicator, the Weight of Evidence (WoE) and provide an empirical comparison of our proposal with (i) the WoE and (ii) KernelShap results on real world datasets, discussing similar and dissimilar results. The results show that our Shapley proposal for the naive Bayes classifier provides informative results with low algorithmic complexity so that it can be used on very large datasets with extremely low computation time.
Vincent Lemaire, Fabrice Clérot, Marc Boullé
2023-07-31T14:39:10Z
http://arxiv.org/abs/2307.16718v1
# An Efficient Shapley Value Computation for the Naive Bayes Classifier ###### Abstract Variable selection or importance measurement of input variables to a machine learning model has become the focus of much research. It is no longer enough to have a good model; one must also explain its decisions. This is why there are so many intelligibility algorithms available today. Among them, Shapley value estimation algorithms are intelligibility methods based on cooperative game theory. In the case of the naive Bayes classifier, and to our knowledge, there is no "analytical" formulation of Shapley values. This article proposes an exact analytic expression of Shapley values in the special case of the naive Bayes classifier. We analytically compare this Shapley proposal to another frequently used indicator, the Weight of Evidence (WoE), and provide an empirical comparison of our proposal with (i) the WoE and (ii) KernelShap results on real world datasets, discussing similar and dissimilar results. The results show that our Shapley proposal for the naive Bayes classifier provides informative results with low algorithmic complexity so that it can be used on very large datasets with extremely low computation time. Keywords: Interpretability, Explainability, Shapley value, naive Bayes ## 1 Introduction There are many intelligibility algorithms based on the computation of variables' contributions to classifier results, often empirical and sometimes without theoretical justifications. This is one of the main reasons why the Python SHAP library was created in 2017 by Scott Lundberg following his publication [16], to provide algorithms for estimating Shapley values, an intelligibility method based on cooperative game theory. Since its inception, this library has enjoyed increasing success, including better theoretical justifications and qualitative visualizations. It provides local explanations, like other methods such as LIME [17]. In the case of the naive Bayes classifier, we show in this paper that Shapley values can be computed accurately and efficiently. The key contributions are: * an analytical formula for the Shapley values in the case of the naive Bayes classifier, * an efficient algorithm for calculating these values, with algorithmic complexity linear with respect to the number of variables. The remainder of this paper is organized into three contributions: (i) in Section 2 we give our proposal for local Shapley values in the case of the naive Bayes (NB) classifier, with further discussion in Section 3; (ii) Section 4 then compares, in an analytic analysis, our Shapley proposal to another frequently used indicator in the case of the NB classifier: the Weight of Evidence (WoE); (iii) we then provide, in Section 5, an empirical comparison of the results of our Shapley formulation to those of (i) the WoE and (ii) KernelShap on real world datasets and discuss similar and dissimilar results. The last section concludes the paper. ## 2 Shapley for naive Bayes Classifier To our knowledge, there is no "analytical" formula of Shapley values for the naive Bayes classifier in the literature1. This first section is therefore devoted to a proposal for calculating these values, exploiting the conditional variable independence assumption that characterizes this classifier. Footnote 1: See the introduction of Section 4 for a very brief literature overview ### Reminders on the naive Bayes classifier The naive Bayes classifier (NB) is a widely used tool in supervised classification problems. 
It has the advantage of being efficient for many real data sets [9]. However, the naive assumption of conditional independence of the variables can, in some cases, degrade the classifier's performance. This is why variable selection methods have been developed [11]. They mainly consist of variable addition and deletion heuristics to select the best subset of variables maximizing a classifier performance criterion, using a wrapper-type approach [8]. It has been shown in [4] that averaging a large number of selective naive Bayes classifiers2, performed with different subsets of variables, amounts to considering only one model with a weighting on the variables. Bayes' formula under the assumption of independence of the input variables conditionally to the class variable becomes: Footnote 2: In this case, it is an assembly of models providing better results than a single classifier \[P(Y_{k}|X)=\frac{P(Y_{k})\prod_{i}P(X_{i}|Y_{k})^{w_{i}}}{\sum_{j=1}^{K}(P(Y_ {j})\prod_{i}P(X_{i}|Y_{j})^{w_{i}})} \tag{1}\] where \(w_{i}\in[0,1]\) is the weight of variable \(i\). The predicted class is the one that maximizes the conditional probability \(P(Y_{k}|X)\). The probabilities \(P(X_{i}|Y_{i})\) can be estimated by interval using discretization for numerical variables. Gaussian naive Bayes could be also considered. For categorical variables, this estimation can be done directly if the variable takes few different modalities, or after grouping (of values) in the opposite case. Note 1: in accordance with the naive Bayes model definition, our Shapley value proposal assumes that the variables of the model are independent conditionally to the class. In practice, we expect a variable selection method to result in a classifier relying on variables which are uncorrelated or only weakly correlated conditionally to the class. A posthoc analysis of our results shows that this is indeed the case in the experiments of this article with the parsimonious classifier used (see Section 5.1). Note 2: Even if in equation 1 the NB have transparent weights for all feature variables it is interesting to explain NB models in order to have local interpretations. ### Definition and notations The following notations are used: * the classifier uses \(d\) variables: \([d]=\{1,2,...,d\}\) * for a subset \(u\) of \([d]\), we note \(|u|\) the cardinality of \(u\) * for two disjoint sets \(u\) and \(r\) of \([d]\), let \(u+r\) be \(u\cup r\) * for a subset \(u\) of \([d]\), we denote by \(-u=[d]\backslash u\), the complement of \(u\) in \(d\) We define a "value function" \(v(.)\) indicating for each subset \(u\) of variables the maximum "contribution" they can obtain together, i.e. \(v(u)\), to the output of the classifier. The maximum value (or total gain) of the value function is reached when all the variables are taken into account, \(v([d])\). The Shapley value for variable \(j\) is denoted \(\phi_{j}\). Shapley's theorem [19] tells us that there is a unique distribution of Shapley values satisfying the following four properties: * Efficiency: \(v([d])=\sum_{j}\phi_{j}\); i.e. the total gain is distributed over all the variables * Symmetry: if \(\forall u\subset-\{i,j\}\), \(v(u+j)=v(u+i)\), then \(\phi_{j}=\phi_{i}\); i.e. if the variables \(i\) and \(j\) bring the same gain to any subset of variables, then they have the same Shapley value * Null player: if \(\forall u\subset-\{i\}\), \(v(u+i)=v(u)\), then \(\phi_{i}=0\); i.e. 
if the variable \(i\) contributes nothing to any subset of variables, then its Shapley value is zero * Additivity: if the \(d\) variables are used for two independent classification problems \(A\) and \(B\) associated with \(v_{A},v_{B}\), then the Shapley values for the set of two problems are the sum of the Shapley values for each problem ### Shapley Values for the naive Bayes Classifier #### 2.3.1 'Value Function': In the case of the NB we propose to take as 'Value Function' (case of a two-class classification problem) the log ratio (LR) of probabilities: \[LR = log\left(\frac{P(Y_{1}|X)}{P(Y_{0}|X)}\right) \tag{2}\] \[= log\left(\frac{P(Y_{1})\prod_{i=1}^{d}P(X_{i}|Y_{1})^{w_{i}}}{ \sum_{j=1}^{K}(P(Y_{j})\prod_{i=1}^{d}P(X_{i}|Y_{j})^{w_{i}})}\frac{\sum_{j=1} ^{K}(P(Y_{j})\prod_{i=1}^{d}P(X_{i}|Y_{j})^{w_{i}})}{P(Y_{0})\prod_{i=1}^{d}P( X_{i}|Y_{1})^{w_{i}}}\right)\] \[= log\left(\frac{P(Y_{1})\prod_{i=1}^{d}P(X_{i}|Y_{1})^{w_{i}}}{P( Y_{0})\prod_{i=1}^{d}P(X_{i}|Y_{1})^{w_{i}}}\right)\] \[= log\left(\frac{P(Y_{1})}{P(Y_{0})}\right)+\sum_{i=1}^{d}w_{i}log \left(\frac{P(X_{i}|Y_{1})}{P(X_{i}|Y_{0})}\right)\] The choice of the logarithm of the odd ratio as the "value function" is motivated by two reasons (i) the logarithm of the odd ratio is in bijection with the score produced by the classifier according to a monotonic transformation (ii) the logarithm of the odd ratio has a linear form that allows the derivation of an analytical formula. This value function differs from the usual value function, \(f(X)=P(Y|X)\), as mentioned and analyzed later in this document when comparing with KernelShap (see section 5.3). We stress here that the derivation above is only valid in the case of independent variables conditionally to the class variable, which is the standard assumption for the naive Bayes classifier. For a subset, \(u\), of the variables 1 given \(X_{u}=x_{u}\): Footnote 1: on the covariates in \(u\), we average over the conditional distribution of \(X_{-u}\) \[v(u)=\mathbb{E}_{X_{-u}|X_{u}=x_{u}}\left[LR(X_{u}=x_{u}^{*},X_{-u})\right] \tag{3}\] which we write in a "simplified" way afterwards \[v(u)=\mathbb{E}\left[(LR(X)|X_{u}=x_{u}^{*})\right] \tag{4}\] This is a proxy of the target information provided by \(u\) at the point \(X=x^{*}\). Thus, for a point (an example) of interest \(x^{*}\) we have: * \(v([d])=LR(X=x^{*})\), everything is conditional on \(x^{*}\) so we have the log odd ratio for \(X=x^{*}\) * \(v(\emptyset)=\mathbb{E}_{X}\left[LR(X)\right]=\mathbb{E}_{X}\left[log(\frac{ P(Y_{1}|X)}{P(Y_{0}|X)})\right]\), nothing is conditioned so we have the expectation of the log odd ratio #### 2.3.2 Shapley Values: By definition of the Shapley values [19], we have for a variable \(m\): \[\phi_{m}=\frac{1}{d}\sum_{u\in-m}\frac{v(u+m)-v(u)}{{d-1\choose|u|}} \tag{5}\] To obtain \(\phi_{m}\), we therefore need to calculate, for a subset of variables in which the variable \(m\) does not appear, the difference in gain \(v(u+m)-v(u)\). This makes it possible to compare the gain obtained by the subset of variables with and without the \(m\) variable, in order to measure its impact when it "collaborates" with the others. We therefore need to calculate \(v(u+m)-v(u)\) in the case of the naive Bayes classifier. If this difference is positive, it means that the variable contributes positively. Conversely, if the difference is negative, the variable is penalizing the gain. Finally, if the difference is zero, it indicates that the variable makes no contribution. 
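Before carrying out this calculation, the value function of Equation (2) can be made concrete with a small numerical sketch. The probability tables and weights below are invented toy values (not taken from the paper); the code evaluates the log-odds \(LR(x)\) of a weighted naive Bayes with discretized variables and its decomposition into per-variable terms:

```python
import numpy as np

# Toy weighted naive Bayes with two discretized variables and classes {Y0, Y1}.
# All numbers are invented for illustration only.
prior = {"Y0": 0.6, "Y1": 0.4}
cond = [  # cond[m][y][p] = P(X_m = part p | y)
    {"Y0": np.array([0.7, 0.2, 0.1]), "Y1": np.array([0.2, 0.3, 0.5])},
    {"Y0": np.array([0.5, 0.5]),      "Y1": np.array([0.1, 0.9])},
]
w = [1.0, 0.8]  # per-variable weights of the averaged NB (Eq. 1)

def log_odds(parts):
    """Value function of Eq. (2): LR(x) for an instance given as the list of
    discretized parts it falls into, plus its per-variable terms."""
    terms = [w[m] * np.log(cond[m]["Y1"][p] / cond[m]["Y0"][p])
             for m, p in enumerate(parts)]
    return np.log(prior["Y1"] / prior["Y0"]) + sum(terms), terms

lr, terms = log_odds([2, 1])  # instance in part 2 of X_0 and part 1 of X_1
print(lr, terms)
```

Centering each per-variable term by its expectation over the data distribution is exactly the closed form that the derivation below arrives at (Equation 10).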
Following the example of [16] and Corollary1 with a linear model whose covariates are the log odd ratio as a 'value function', one can decompose the subsets of variables into 3 groups \(\{u\},\{m\},-\{u+m\}\). **Calculation** of \(v(u):\) On \(\{u\}\), we condition on \(X_{u}=x_{u}\) while on \(\{m\}\), \(\{u+m\}\), we do an averaging. By consequent: \[v(u) = \mathbb{E}\left[LR(X)|X_{u}=x_{u}^{*}\right]\] \[= log(P(Y_{1})/P(Y_{0}))\] \[+ \sum_{k\in u}w_{k}log\left(\frac{P(X_{k}={x_{k}}^{*}|Y_{1})}{P(X_ {k}={x_{k}}^{*}|Y_{0})}\right)\] \[+ w_{m}\sum_{X_{m}}\left[P(X_{m}=x_{m})log\left(\frac{P(X_{m}=x_{m }|Y_{1})}{P(X_{m}=x_{m}|Y_{0})}\right)\right]\] \[+ \sum_{k\in-\{u+m\}}w_{k}\sum_{X_{k}}\left[P(X_{k}=x_{k})log\left( \frac{P(X_{k}=x_{k}|Y_{1})}{P(X_{k}=x_{k}|Y_{0})}\right)\right]\] **Calculation of \(v(u+m):\)** The only difference is that we also condition on \(X_{m}\) \[v(u+m) = \mathbb{E}\left[LR(X)|X_{u+m}=x_{u+m}^{*}\right)]\] \[= log(P(Y_{1})/P(Y_{0}))\] \[+ \sum_{k\in u}w_{k}log\left(\frac{P(X_{k}={x_{k}}^{*}|Y_{1})}{P(X _{k}={x_{k}}^{*}|Y_{0})}\right)\] \[+ w_{m}\left[log\left(\frac{P(X_{m}=x_{m}^{*}|Y_{1})}{P(X_{m}=x_{ m}^{*}|Y_{0})}\right)\right]\] \[+ \sum_{k\in-\{u+m\}}w_{k}\sum_{X_{k}}\left[P(X_{k}=x_{k})log\left( \frac{P(X_{k}={x_{k}}^{*}|Y_{1})}{P(X_{k}={x_{k}}^{*}|Y_{0})}\right)\right]\] The difference \(v(u+m)-v(u)\) is independent on \(u\) and therefore the combinatorial sum averaging over all \(u\in-m\) in equation 5 simply vanishes and finally \(\phi_{m}=v(u+m)-v(u)\) \[= w_{m}\left(log\left(\frac{P(X_{m}=x_{m}^{*}|Y_{1})}{P(X_{m}=x_{ m}^{*}|Y_{0})}\right)-\sum_{X_{m}}\left[P(X_{m}=x_{m})log\left(\frac{P(X_{m}=x_{ m}|Y_{1})}{P(X_{m}=x_{m}|Y_{0})}\right)\right]\right)\] \[= w_{m}\left(log\left(\frac{P(X_{m}=x_{m}^{*}|Y_{1})}{P(X_{m}=x_{ m}^{*}|Y_{0})}\right)-\mathbb{E}\left(log\left(\frac{P(X_{m}=x_{m}|Y_{1})}{P(X_{m}= x_{m}|Y_{0})}\right)\right)\right) \tag{10}\] Equation 10 provides the exact analytical expression of the Shapley value for our choice of the log odd ratio as the value function of the weighted naive Bayes. ## 3 Interpretation and Discussion We give here some interpretation details and discussion about the Shapley formulation which are interesting arguments for its use. \(\bullet\) The equation 10 is the difference between the information content of \(X_{m}\) conditionally on \(X_{m}=x_{m}^{*}\) and the expectation of this information. In other words, it is the information contribution of the variable \(X_{m}\) for the value \(X_{m}=x_{m}^{*}\) of the considered instance, contrasted by the average contribution on the entire database. \(\bullet\) The Equation 10 can be rewritten (we just omit the product by \(w_{m}\)) in the form: \[-\left[log\left(\frac{1}{P(X_{m}=x_{m}^{*}|Y_{1})}\right)-\sum_{X _{m}}\left(P(X_{m}=x_{m})log\left(\frac{1}{P(X_{m}=x_{m}|Y_{1})}\right)\right)\right]\] \[+\left[log\left(\frac{1}{P(X_{m}=x_{m}^{*}|Y_{0})}\right)-\sum_{X _{m}}\left(P(X_{m}=x_{m})log\left(\frac{1}{P(X_{m}=x_{m}|Y_{0})}\right)\right)\right]\] The terms in brackets \([\dots]\) in equation 11 are the difference between the information content related to the conditioning \(X_{m}=x_{m}^{*}\) and the entropy of the variable \(X_{m}\) for each class (\(Y_{0}\) and \(Y_{1}\)). This term measures how much conditioning on \(X_{m}=x_{m}^{*}\) brings information about the target classes. \(\bullet\) For a given variable, the expectation of our Shapley proposal is equal to zero, due to the conditional independence of the variables. 
The consequence is that high Shapley values in some parts of the data space must be exactly compensated by low values in other parts of the data space. \(\bullet\) For a given example if we return to our choice of value function (equation 2) and using the sum of equation 10 over the \(d\) variables we have: \[LR = log\left(\frac{P(Y_{1})}{P(Y_{0})}\right)+\sum_{m=1}^{d}w_{m} log\left(\frac{P(X_{m}|Y_{1})}{P(X_{m}|Y_{0})}\right) \tag{12}\] \[= log\left(\frac{P(Y_{1})}{P(Y_{0})}\right)+\sum_{m=1}^{d}\phi_{m} +\sum_{m=1}^{d}\mathbb{E}\left(log\left(\frac{P(X_{m}=x_{m}|Y_{1})}{P(X_{m}= x_{m}|Y_{0})}\right)\right)\] \[= \sum_{m=1}^{d}\phi_{m}+\mathrm{cste}\] We obtain a result consistent with the notion of a value function for the Shapley's formulation. Our value function consists of a constant plus the individual contribution of the \(d\) variables. The constant is the log ratio of class prior plus the sum of the average contribution of all variables. \(\bullet\) If we inverse the role of \(Y_{0}\) and \(Y_{1}\) in equation 10, we observe that the Shapley value is symmetric; i.e the positive contribution of the variable for \(Y_{0}\) is negative for \(Y_{1}\) (with the same absolute value). \(\bullet\) When the numerical (resp. categorical) variables have been previously discretized into intervals (resp. groups of values), the complexity of the equation 10 is linear in the number of discretized parts. For an input vector made up of \(d\) variables, this complexity is \(O(\sum_{i=1}^{d}P_{i})\) where \(P_{i}\) is the number of discretized parts of variable \(i\). \(\bullet\) In term of explainability, if the discretization method used for numerical attributes (resp. grouping method for categorical attributes) provides a reasonable number of intervals (resp. groups of values), then the number of potential "behaviors" of the individuals in the classification problem is small and therefore easy to understand. \(\bullet\) Extension to multiclass: We simply define the Shapley Value of an input variable as the sum of the absolute \(C\) Shapley values when choosing in equation 10 one of the \(C\) class of the problem as the "positive class" (\(Y_{1}\)) and all the others \(C-1\) class as the "negative class" (\(Y_{0}\)). For example in a 3 class problems where the class are'red', 'green', and 'yellow': \[\phi_{m} = |\phi_{m}(Y_{1}=\{red\},Y_{0}=\{green,yellow\})|\] \[+|\phi_{m}(Y_{1}=\{green\},Y_{0}=\{red,yellow\})|\] \[+|\phi_{m}(Y_{1}=\{yellow\},Y_{0}=\{green,red\})|\] In this way, we can find out which feature has the greatest impact on all classes. Note that there are other ways of measuring the impact of features in multiclassification problems (see, for example, the discussion in [1] on using the SHAP package for multi-classification problems). \(\bullet\) To conclude this discussion and prior to the experiments presented in Section 5, we give here an illustrative example on the Adult dataset (the experimental conditions are the same as those presented in Section 5). Figure 1 shows the Shapley values obtained for 3 examples which are respectively predicted as belonging to the class'more' with probabilities 0.99, 0.50 and 0.01. On this well-known data set, we find the usual results on the role of input variables for examples with high to low probabilities when considering the class'more'. ## 4 Analytic comparison with the Weight of Evidence In the case of the naive Bayes classifier, there are a number of "usual" methods for calculating the importance of input variables. 
We do not go into detail on all of them, but the reader can find a wide range of these indicators in [18, 12] for a brief literature overview but nonetheless quite exhaustive. This section focuses on presenting the "Weight of evidence" (WoE) [7] and its comparison with the Shapley values proposed in the previous section, since this indicator is (i) close to the equation presented above (equation 10) and (ii) among the most widely used indicators for the naive Bayes classifier. We give below the definition of the WoE (in the case with two classes) which is a log odds ratio calculated between the probability of the output of the model and the latter deprived of the variable \(X_{m}\): \[(WoE)_{m}=log\left(\frac{\frac{p}{1-p}}{\frac{q}{1-q}}\right)=w_{m}\left(log \left(\frac{\frac{P(Y_{1}|X)}{P(Y_{0}|X)}}{\frac{P(Y_{1}|X,X_{m})}{P(Y_{0}|X \setminus X_{m})}}\right)\right)=w_{m}\left(log\left(\frac{P(Y_{1}|X)P(Y_{0}|X \setminus X_{m})}{P(Y_{0}|X\setminus X_{m})}\right)\right) \tag{13}\] \[(WoE)_{m}=w_{m}\left(log\left(\frac{P(Y_{1})\left[\prod_{i=1}^{d}P(X_{i}|Y_{1} )\right]P(Y_{0})\left[\prod_{i=1,i\neq m}^{d}P(X_{i}|Y_{0})\right]}{P(Y_{0}) \left[\prod_{i=1}^{d}P(X_{i}|Y_{0})\right]P(Y_{1})\left[\prod_{i=1,i\neq m}^{d }P(X_{i}|Y_{1})\right]}\right)\right) \tag{14}\] by simplifying the numerator and denominator: \[(WoE)_{m}=w_{m}\left(log\left(\frac{P(X_{m}=x_{m}^{\star}|Y_{1})}{P(X_{m}=x_{ m}^{\star}|Y_{0})}\right)\right) \tag{15}\] **Link between \((WoE)_{m}\) and \(\phi_{m}\)**: If we compare the equations 15 and 10, we can see that it is the reference that changes. For the Shapley value ( equation 10), the second term takes the whole population as a reference whereas for the WoE (equation 15) the reference is zero. The averaging is not at the same place between the two indicators, as we will demonstrate just below. We can also observe that the expectation of our Shapley proposal is equal to zero, whereas the expectation of WoE is the second term of our Shapley proposal (second part, the expectation term, of equation 10). In case of the naive Bayes classifier, "depriving" the classifier of a variable is equivalent to performing a "saliency" calculation (as proposed in [13]) which takes into account the probability distribution of the variable \(X_{m}\). Indeed, to deprive the classifier of the variable \(X_{m}\), it is sufficient to recalculate the average of the classifier's predictions for all the possible values of the variable \(X_{m}\) as demonstrated in [18]. Indeed, if we assume that the variable \(X_{m}\) has \(k\) distinct values, Robnik et al. [18] have shown that the saliency calculation of [13] is exact in the naive Bayes case and amounts to "erasing" the variable \(X_{m}\). Denoting either \(Y=Y_{0}\) or \(Y=Y_{1}\) by \(Y\), we have \[P(Y.|X\setminus X_{m})=\sum_{q=1}^{k}P(X_{m}=X_{q})\frac{P(Y.|X,X_{m}=X_{q})}{ P(X,X_{m}=X_{q})} \tag{16}\] \[P(Y.|X\setminus X_{m})=\sum_{q=1}^{k}P(X_{m}=X_{q})\Bigg{(}P(Y.)\left(\prod_{i =1,i\neq m}^{d}\frac{P(X_{i}|Y.)}{P(X_{i})}\right)\frac{P(X_{m}=X_{q}|Y.)}{P( X_{m}=X_{q})}\Bigg{)}\] \[P(Y_{.}|X\backslash X_{m}) = P(Y_{.})\prod_{i=1,i\neq m}^{d}P(X_{i}|Y_{.})\Bigg{(}\sum_{q=1}^{k} \frac{P(X_{m}=X_{q})P(X_{m}=X_{q}|Y_{.})}{P(X_{m}=X_{q})}\Bigg{)} \tag{17}\] \[P(Y_{.}|X\backslash X_{m}) = P(Y_{.})\prod_{i=1,i\neq m}^{d}P(X_{i}|Y_{.}) \tag{18}\] with \(P(Y_{.}|X,X_{m}=X_{q})\) being \(P(Y_{.}|X)\) but where the value of the variable \(X_{m}\) has been replaced by another value of its distribution \(X_{q}\). 
This last result is interesting because with the help of the equation 17 we can rewrite the equation 13 in : \[(WoE)_{m} = w_{m}\left(log\left(\frac{P(Y_{1}|X)P(Y_{0}|X\backslash X_{m})} {P(Y_{0}|X)(Y_{1}|X\backslash X_{m})}\right)\right)\] \[(WoE)_{m} = w_{m}log\frac{\Bigg{(}P(Y_{1})\prod_{i=1}^{d}P(X_{i}|Y_{1}) \Bigg{)}\left(P(Y_{0})\prod_{i=1,i\neq m}^{d}P(X_{i}|Y_{0})\sum_{q=1}^{k}P(X_{ m}=X_{q}|Y_{0})\right)}{\Bigg{(}P(Y_{0})\prod_{i=1}^{d}P(X_{i}|Y_{0})\Bigg{)} \Bigg{(}P(Y_{1})\prod_{i=1,i\neq m}^{d}P(X_{i}|Y_{1})\sum_{q=1}^{k}P(X_{m}=X_{ q}|Y_{1})\Bigg{)}}\] \[(WoE)_{m} = w_{m}\left(log\left(\frac{P(X_{m}=x_{m}^{*}|Y_{1})}{P(X_{m}=x_{ m}^{*}|Y_{0})}\right)+log\left(\frac{\sum_{q=1}^{k}P(X_{m}=X_{q}|Y_{0})}{\sum_{q=1}^{k}P(X _{m}=X_{q}|Y_{1})}\right)\right) \tag{19}\] \[(WoE)_{m} = w_{m}\left(log\left(\frac{P(X_{m}=x_{m}^{*}|Y_{1})}{P(X_{m}=x_{ m}^{*}|Y_{0})}\right)+log\left(\frac{1}{1}\right)\right) \tag{20}\] This result allows to better understand why the WoE is referenced in zero. The comparison of the equation 10 and the equation 19 exhibits the difference in the localization of the averaging resulting in a reference in zero for the WoE. In the first case an expectation is computed on the variation of the log ratio \(log(P(Y_{1}|X)/P(Y_{0}|X))\) while in the second case this expectation is computed only on the variations of \(f(X)=P(Y_{1}|X)\) (or reciprocally \(P(Y_{0}|X)\)). This comparison shows the effect of choosing either the odds (our Shapley proposal) or the output of the classifier (WoE) as the 'value function'. Since both results are very consistent, and WoE does not suffer from calculation exhaustion, the two methods are very close. ## 5 Experiments The experiments carried out in this section allow us to compare our Shapley proposal with the Weight of Evidence and KernelShap to highlight similar or dissimilar behaviors. We focus below on two classes problems. The code and data used in this section are available in the GitHub repository at [https://tinyurl.com/ycxzkffk](https://tinyurl.com/ycxzkffk). ### Datasets and Classifier **Classifier** : The naive Bayes classifier used in the experiments exploits two main steps. A first step in which (i) the numerical variables are discretized, using the method described in [6], (ii) the modalities of the categorical variables are grouped using the method described in [5]. Then, variable weights are calculated using the method described in [4]. In the first and second steps, uninformative variables are eliminated from the learning process. In this paper, we have used the free Khiops software [3] in which the whole process is implemented. This software produces a preparation report containing a table of the values of \(P(X_{m}=x_{m}|Y)\) for all classes and all variables, enabling us to easily implement the two methods described earlier in the article. Note: below, the same classifier and preprocessing are used for comparing the different methods used to calculate the variable importance, so that the differences in the results will be only due to those different methods. **Dataset** : Ten datasets have been selected in this paper and are described in the Table 1. They are all available on the UCI website [14] or on the Kaggle website[2]. They were chosen to be representative datasets in terms of variety of number of numerical attributes (#Cont), number of categorical attributes (#Cat), number of instances (#Inst) and imbalance between classes4 (Maj. class.). 
They are widely used in the "machine learning" community as well as in the analysis of recently published Shapley value results. In this table, we give in the last columns the performances, for information purposes, obtained by the naive Bayes used (an averaged naive Bayes, see Section 2.1); i.e the accuracy and the Area Under the ROC curve (AUC), as well as the number of variables retained by this classifier (#Var) since uninformative variables are eliminated from the learning process. As the aim of this article is not to compare classification results, we decide simply to use 100 % of the examples to train the model5 and to compute later the importance indicators (WoE and Shapley). Footnote 4: Here we give the percentage of the majority class. Footnote 5: To facilitate reproducibility. Nevertheless, the test performances of the models (Table 1) are very close with a 10-fold cross-validation process. \begin{table} \begin{tabular}{|l|c|c|c|c|c|c|c|} \hline Name & \#Cont & \#Cat & \#Inst (\(N\)) & Maj. class. & Accuracy & AUC & \#Var \\ \hline Twonorm & 20 & 0 & 7400 & 0.5004 & 0.9766 & 0.9969 & 20 \\ Crx & 6 & 9 & 690 & 0.5550 & 0.8112 & 0.9149 & 7 \\ Ionosphere & 34 & 0 & 351 & 0.6410 & 0.9619 & 0.9621 & 9 \\ Spam & 57 & 0 & 4307 & 0.6473 & 0.9328 & 0.9791 & 29 \\ Tictactoe & 0 & 9 & 958 & 0.6534 & 0.6713 & 0.7383 & 5 \\ German & 24 & 0 & 1000 & 0.7 & 0.7090 & 0.7112 & 9 \\ Telco & 3 & 18 & 7043 & 0.7346 & 0.8047 & 0.8476 & 10 \\ Adult & 7 & 8 & 48842 & 0.7607 & 0.8657 & 0.9216 & 13 \\ KRFCC & 28 & 7 & 858 & 0.9358 & 0.9471 & 0.8702 & 3 \\ Breast & 10 & 0 & 699 & 0.9421 & 0.975 & 0.9915 & 8 \\ \hline \end{tabular} \end{table} Table 1: Description of the datasets used in the experiments (KRFCC = KagRisk-FactorsCervicalCancer dataset) ### Comparison with the WoE In this first part of the experiments, the comparison is made with the Weight of Evidence. and we present the observed correlation between the Shapley values (Eq. 10) and the WoE values (Eq. 15). We compute the Shapley and WoE values per class (\(C\)), per variable (\(J\)) and per instance (\(N\)) then, we compute the Kendall correlation6 line per line; that is, for each example, we compute the \(d\) values of WoE or of our Shapley values and then the Kendall coefficient for that example. Finally we compute the average and the standard deviation of these \(N\) values which are reported in the Table 2. Footnote 6: We used the scipy.stats.kendalltau with the default parameter, i.e \(\tau\)-b. The Kendall correlation is a measure of rank correlation, therefore, it measures whether the two indicators, WoE and our Shapley values, give the same ordering in the importance of the variables. In Table 2, we observe only Kendall values above 0.82. Kendall's coefficient values can range from 0 to 1. The higher the Kendall's coefficient value, the stronger the association. Usually, Kendall's coefficients of 0.9 or more are usually considered very good. Kendall's coefficient means also that the appraisers apply essentially the same standard when assessing the samples. With the values shown in this Table, we observe [10] a minimum of fair agreement to a near perfect agreement between our Shapley proposition and WoE in terms of ranking of the variable importances7. Footnote 7: It would also be interesting to see the correlations of only the most important variables (e.g. the top five), since usually only a few of the most important features are perceptible to humans. However, for lack of space, we do not present this result. 
We do, however, provide the code for doing so. This good agreement can be understood from two non-exclusive perspectives. First, using an averaged naive Bayes model introduces a weight \(w_{m}\) which has a strong influence on the variable importance (the higher the weight, the stronger the influence, for both methods): the variable importance would be mainly influenced by the weights ordering and therefore the same for both methods. Second, it could point to the fact that the variable-dependent reference terms \(-w_{m}\mathbb{E}\left(log\left(\frac{P(X_{m}=x_{m}|Y_{1})}{P(X_{m}=x_{m}|Y_{0})}\right)\right)\) which make the difference between the Shapley value and the WoE are either small or roughly constant in our datasets. How those two perspectives are combined to lead to the good agreement experimentally observed is left for future work. \begin{table} \begin{tabular}{|l|c|} \hline Name & Kendall \\ \hline \hline Twonorm & 0.9919 \(\pm\)8.71e-05 \\ Crx & 0.9919 \(\pm\)4.28e-04 \\ Ionosphere & 0.8213 \(\pm\)1.76e-02 \\ Spam & 0.9011 \(\pm\)2.66e-04 \\ Tictactoe & 1.0000 \(\pm\)2.60e-04 \\ German & 0.9515 \(\pm\)1.01e-03 \\ Telco & 0.9210 \(\pm\)3.70e-03 \\ Adult & 0.8589 \(\pm\)6.57e-03 \\ KRFCC & 0.9931 \(\pm\)1.77e-03 \\ Breast & 0.9222 \(\pm\)2.73e-03 \\ \hline \end{tabular} \end{table} Table 2: Kendall correlation between our Shapley proposal and the WoE on the two-class problems ### Comparison with Kernel Shap Among the libraries able to compute Shapley values, one may find 'model oriented' proposals that can only be used with particular models, for example tree-based algorithms like random forests and XGBoost (TreeShap [15], FastTreeShap [20]), or model-agnostic ones which can be used with any machine learning algorithm, such as KernelShap [16]. Here, since we did not find a library dedicated to naive Bayes, we compare our results to the popular KernelShap. 
For this "CRX" dataset, which contains 15 inputs variables, the time taken to compute the Kernelshap values for a single example and using all the 690 examples as 'knowledge table' is 12.13 seconds8, so 8370 seconds for the entire dataset (around 2.5 hours for a small dataset). To summarize, the algorithmic complexity of KernelShap is \(O(N_{k}2^{d})\) where \(N_{k}\) is the number of examples used in the 'knowledge table'. Footnote 8: The characteristics of the used computer are: Intel(R) Core(TM) i7-10875H (No. of threads. 16; 5.10 GHz) RAM:32.0 Go, Windows 10, Python 3.8.6 As a consequence, we were not able to obtain a complete result on most datasets (even with a half-day credit) when using the entire dataset. As sug gested9 by the KernelShap library, in the results below we limit the computation time to a maximum of 2 hours per dataset: (i) the Shapley values are computed only on 1000 (randomly chosen) examples10 and (ii) the number of examples in the 'knowledge table', \(N_{k}\)11, has been set to the values indicated in the Table 3 (where the number of examples of the entire dataset is given as a reminder in the brackets). Footnote 9: The variance in the results observed in recent publications is due to this constraint. Footnote 10: It is obvious that for large datasets such as the “adult” the chosen sample of 1000 is statistically insignificant and, as a result, the calculated importance values, computed by KernelShap may not be reliable. Footnote 11: We start with 50 examples (as a minimum budget) and we increment this number by step of 50 until the credit is reached. **On the use of our Shapley proposal -** In contrast, for the analytic Shapley proposed in this paper, the time required to compute the Shapley values is very low (see the discussion in Section 3). Indeed, the algorithmic complexity, for an input variable, is linear in number of parts, intervals or groups of values (see Equation 10). On the largest dataset used in this paper, the Adult dataset which contains 48842 examples, the time used to compute all the Shapley values for all the variables, all the classes and all the examples is lower than 10 seconds. This computation time could be further reduced if the \(log(P(X|C)\) per variable and per interval (or group values) are precomputed as well as the expectation term of the equation 10, which is not the case in our experiments. **Results:** The Table 3 gives the correlation between the global Shapley values, defined for each variable as the average on all samples of the absolute values of the local Shapley values. We observe good correlations for both coefficients. We also give an example of comparison on the TwoNorm dataset in Figure 3 (where we have drawn the normalized global Shapley values), for which the correlations are lowest in the Table 3. For this data set, the lower Kendall coefficient value is due to the fact that many variables have close Shapley values, resulting in differences in their value ranks. Based on all the results we may conclude that there is a nice agreement between our Shapley proposal and KernelShap on the ten datasets used in this paper. Figure 2: CRX dataset: size of the “modified table” versus the number of examples in the “knowledge” data table. ## 6 Conclusion In this paper, we have proposed a method for analytically calculating Shapley values in the case of the naive Bayes classifier. 
This method leverages a new definition of the value function and relies on the independence assumption of the variables conditional on the target to obtain the exact value of the Shapley values, with an algorithmic complexity that is linear with respect to the number of variables. Unlike alternative estimation methods, we rely on assumptions that are consistent with the underlying classifier and avoid approximation schemes, which are particularly costly in terms of computation time. We also presented a discussion on the key elements that help to understand the proposal and its behavior. We compared this Shapley formulation, in an analytic analysis, to another frequently used indicator, the Weight of Evidence (WoE). We also carried out experiments on ten datasets to compare this proposal with the Weight of Evidence and KernelShap to highlight similar or dissimilar behaviors. The results show that our Shapley proposal for the naive Bayes classifier is in fair agreement with the WoE and with KernelShap's Shapley values, but with a much lower algorithmic complexity, enabling it to be used for very large datasets with extremely reduced computation times. Figure 3: Two Norm dataset: Comparison of our Shapley proposal and KernelShap. \begin{table} \begin{tabular}{|l|c|c|c|} \hline Name & \(N_{k}\) & Pearson & Kendall \\ \hline Twonorm & 200 (7400) & 0.9027 & 0.7052 \\ Crx & 690 (690) & 0.9953 & 0.9047 \\ Ionosphere & 351 (351) & 0.9974 & 0.8888 \\ Spam & 200 (4307) & 0.8829 & 0.7684 \\ Tictactoe & 958 (958) & 1.0000 & 1.00 \\ German & 1000 (1000) & 0.9974 & 0.9047 \\ Telco & 1000 (7043) & 0.9633 & 0.7333 \\ Adult & 1000 (48842) & 0.8373 & 0.7692 \\ KRFCC & 858 (858) & 0.9993 & 1.00 \\ Breast & 699 (699) & 0.9908 & 0.8571 \\ \hline \end{tabular} \end{table} Table 3: Correlation between our analytic Shapley and KernelShap
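To make the quantities compared in this paper concrete, the following sketch computes, for one toy instance, the analytic Shapley value of Equation 10, the Weight of Evidence of Equation 15, and the Kendall rank correlation between the two importance vectors with scipy.stats.kendalltau, as in Section 5.2. All probability tables, marginals, and weights are invented for illustration; this is not the Khiops-based pipeline used in the experiments.

```python
import numpy as np
from scipy.stats import kendalltau

# Toy discretized naive Bayes: 3 variables, classes {Y0, Y1}; all numbers invented.
w      = np.array([1.0, 0.8, 0.5])                                                 # variable weights
marg   = [np.array([0.3, 0.7]), np.array([0.2, 0.5, 0.3]), np.array([0.6, 0.4])]   # P(X_m)
p_y1   = [np.array([0.1, 0.9]), np.array([0.1, 0.3, 0.6]), np.array([0.5, 0.5])]   # P(X_m | Y1)
p_y0   = [np.array([0.6, 0.4]), np.array([0.4, 0.4, 0.2]), np.array([0.5, 0.5])]   # P(X_m | Y0)
x_star = [1, 2, 0]                                                                  # observed parts

def shapley(m):
    """Eq. 10: w_m * (log-ratio at x*_m minus its expectation over P(X_m))."""
    log_ratio = np.log(p_y1[m] / p_y0[m])
    return w[m] * (log_ratio[x_star[m]] - np.dot(marg[m], log_ratio))

def woe(m):
    """Eq. 15: w_m * log-ratio at x*_m (reference at zero)."""
    return w[m] * np.log(p_y1[m][x_star[m]] / p_y0[m][x_star[m]])

phi  = np.array([shapley(m) for m in range(3)])
woes = np.array([woe(m) for m in range(3)])
tau, _ = kendalltau(phi, woes)  # rank agreement between the two importance vectors
print(phi, woes, tau)
```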
2309.04868
MemSPICE: Automated Simulation and Energy Estimation Framework for MAGIC-Based Logic-in-Memory
Existing logic-in-memory (LiM) research is limited to generating mappings and micro-operations. In this paper, we present~\emph{MemSPICE}, a novel framework that addresses this gap by automatically generating both the netlist and testbench needed to evaluate the LiM on a memristive crossbar. MemSPICE goes beyond conventional approaches by providing energy estimation scripts to calculate the precise energy consumption of the testbench at the SPICE level. We propose an automated framework that utilizes the mapping obtained from the SIMPLER tool to perform accurate energy estimation through SPICE simulations. To the best of our knowledge, no existing framework is capable of generating a SPICE netlist from a hardware description language. By offering a comprehensive solution for SPICE-based netlist generation, testbench creation, and accurate energy estimation, MemSPICE empowers researchers and engineers working on memristor-based LiM to enhance their understanding and optimization of energy usage in these systems. Finally, we tested the circuits from the ISCAS'85 benchmark on MemSPICE and conducted a detailed energy analysis.
Simranjeet Singh, Chandan Kumar Jha, Ankit Bende, Vikas Rana, Sachin Patkar, Rolf Drechsler, Farhad Merchant
2023-09-09T19:37:52Z
http://arxiv.org/abs/2309.04868v1
# MemSPICE: Automated Simulation and Energy Estimation Framework for MAGIC-Based Logic-in-Memory ###### Abstract Existing logic-in-memory (LiM) research is limited to generating mappings and micro-operations. In this paper, we present _MemSPICE_, a novel framework that addresses this gap by automatically generating both the netlist and testbench needed to evaluate the LiM on a memristive crossbar. MemSPICE goes beyond conventional approaches by providing energy estimation scripts to calculate the precise energy consumption of the testbench at the SPICE level. We propose an automated framework that utilizes the mapping obtained from the SIMPLER tool to perform accurate energy estimation through SPICE simulations. To the best of our knowledge, no existing framework is capable of generating a SPICE netlist from a hardware description language. By offering a comprehensive solution for SPICE-based netlist generation, testbench creation, and accurate energy estimation, MemSPICE empowers researchers and engineers working on memristor-based LiM to enhance their understanding and optimization of energy usage in these systems. Finally, we tested the circuits from the ISCAS'85 benchmark on MemSPICE and conducted a detailed energy analysis. Memristors, Digital Logic-in-Memory, MAGIC, Energy-Efficiency, SPICE Simulation ## I Introduction In-memory computing using memristors is gaining popularity as it helps overcome the von Neumann bottleneck in traditional computing. Memristors possess two states: high resistive state (HRS) and low resistive state (LRS), which are mapped to Boolean logic '0' and '1' for logic-in-memory (LiM) implementation. Among various techniques like IMPLY [1], FELIX [2], and majority logic [3], memristor-aided logic (MAGIC) [4] is widely adopted for LiM due to its superior energy and latency performance [5]. The larger design is synthesized to MAGIC NOR and NOT gates for a single-row memristor crossbar using the SIMPLER tool [6]. The tool maps MAGIC operations to three memristors (two inputs, one output) and two memristors (one input, one output) on the crossbar. Additionally, it generates the necessary number of cycles and input/output memristors for a given application or benchmark. As an illustration, consider a netlist for a half adder provided in Fig. 1. Initially, it is synthesized into a NOR and NOT netlist, as depicted in Fig. 1. Subsequently, the NOR and NOT gates are sequentially mapped onto memristors connected in series. The resulting implementation of MAGIC NOR and NOT using memristors is presented in Fig. 1. Given the increasing popularity of the MAGIC design style in mainstream computing [5], evaluating this technique's energy consumption on larger circuit datasets is crucial. However, current methods for energy calculation rely on a coarse-grained approach, multiplying the average energy consumption of an operation by the number of occurrences in an application [7]. This approach is unsuitable for accurately estimating energy consumption in the MAGIC design style. A more detailed analysis of the implemented circuit is necessary to provide fine-grained energy values as presented in [8]. This paper proposes _MemSPICE_, an automated SPICE-level simulation and accurate energy estimation framework for the MAGIC design style. The proposed framework automatically generates a SPICE-level netlist and testbench voltages for a given application/benchmark.
Furthermore, it provides fine-grained energy numbers by calculating the energy consumed by each device in the crossbar, irrespective of its contribution to the operation. The framework empowers researchers to obtain accurate energy estimates for digital designs, offering valuable insights into their methodologies at the circuit level. In addition, circuit designers can utilize this framework to implement additional optimizations at the circuit level, showcasing their benefits on the entire crossbar rather than just individual gates. The following are the contributions of this paper: * Introducing MemSPICE, a framework that takes the SIMPLER tool mapping (_.json_) as input and autonomously generates the SPICE-level netlist. * Detailed energy estimation techniques for MAGIC-based digital LiM at the low level, providing valuable insights to designers. * Finally, the SPICE-level netlist for widely-used benchmarks (ISCAS'85) is automatically generated using MemSPICE, and a comparative analysis of the energy consumption values with current methodologies is conducted. The rest of the paper is organized as follows. Section II discusses the necessary background and related work. In Section III, we discuss the MemSPICE methodology in detail. In Section IV, we discuss the benchmark circuit preparation, the results obtained using the MemSPICE methodology, and compare the results with state-of-the-art methods. We conclude the paper in Section V. ## II Background and Related Work ### _Memristive Devices or Memristors_ Memristive devices or memristors are two-terminal passive devices with variable resistance. When a voltage is applied across the terminals of a memristor, the resistance changes in response to the magnitude and direction of the current flow.
Fig. 1: Mapping process of standard logic gates defined in a hardware description language to MAGIC NOT and NOR gates using memristors. The figure showcases the I-V curve of the memristors, providing insights into their resistive states and electrical characteristics.
Fig. 1 depicts the electrical characterization of a memristor, showcasing its I-V curve with distinct SET and RESET points marked. Importantly, even when the power is turned off, the memristor retains its resistance value until a new voltage is applied, making it a non-volatile memory element [9]. Due to their unique characteristics, memristors have garnered significant interest in various fields, such as in-memory computing, neuromorphic computing, and LiM applications. The maximum and minimum resistance values of memristors are represented by HRS and LRS, which are mapped to logic states '0' and '1', respectively. Considering these states, Boolean logic operations can be performed by arranging the memristive devices into crossbar connections and applying different voltages across them. Numerous models have been proposed in the literature to characterize the memristive devices for SPICE simulation. The VTEAM model [10] is one of the models that can characterize various memristive technologies and is used for this work. ### _MAGIC Design Style_ MAGIC is a stateful logic technique that utilizes memristive devices to implement logic operations, where the resistive states of memristors store the inputs and outputs of these operations. The MAGIC design style incorporates NOR and NOT gate implementations. To perform MAGIC operations, an initialization step is required, initializing the output memristor to logic '1' before executing the operations.
During a NOR operation, an execution voltage (\(V_{in}\)) is applied to the input memristors (\(A\), \(B\)), while the output memristor is connected to the ground, as illustrated in Fig. 1. In contrast, the NOT operation only requires two memristors. Since the NOR gate serves as a universal gate, any logic function can be achieved by combining these gates sequentially. The initial step involves converting the given Verilog design netlist into MAGIC NOR and NOT netlists. Fig. 1 showcases the mapping of this netlist onto memristors, which is done using the SIMPLER mapping tool. ### _SIMPLER Mapping Tool_ SIMPLER MAGIC is a synthesis and mapping tool that is used to generate the MAGIC design style-based mapping of any arbitrary design [6]. The SIMPLER tool generates the mapping of the design to one row of a memristor crossbar. The work shows that these designs can then be replicated across multiple rows of the crossbar, which can operate on different data. Another advantage is that the cost of the controller and additional circuits can be amortized, as they can be shared across the multiple rows of the crossbar performing the same set of operations. Hence, the mapping obtained using SIMPLER is inherently suitable for _single instruction multiple data_ (SIMD) instructions, as shown in Fig. 2. Since it supports SIMD-based operations, it is useful for applications that require higher throughput. It also supports reusing the same memristor cells to map a larger design to a crossbar of limited size. ### _Related Work_ An automated flow for generating analog implementations has been presented in the literature [11]. The automation is achieved through Cadence SKILL programming, making it suitable for specific applications. Additionally, the work in [12] automates the attachment of peripherals to the RRAM memory, though limited to memory functionality. Various simulators have been proposed at different levels of abstraction, including system, architecture, circuit, and device levels [13]. Some approaches have also explored mixed-signal simulation, integrating different levels together [14]. However, these existing approaches are often limited to a single device type and fixed parameters. In contrast, our paper introduces a digital LiM implementation, expanding the usability of RRAM beyond conventional memory and analog applications. The proposed implementation allows for configurable parameters, providing flexibility in simulations and enabling more extensive exploration of LiM designs. Moving toward energy consumption techniques, the current methods for calculating energy consumption involve multiplying the _average energy used during an operation by the number of such operations_ in an application, which is a highly coarse-grained approach to determine the energy consumed by the MAGIC design style [7]. Surprisingly, despite its popularity, this methodology falls short of providing accurate estimates of the energy dissipated by an application since it does not account for the energy consumed during initialization, reading, and loading input patterns. As far as we know, no existing framework has the capability to conduct an in-depth analysis of energy consumption in LiM. This paper addresses this gap by introducing the MemSPICE framework, which enables detailed energy consumption analysis for LiM designs by running a SPICE-level simulation.
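As a rough sketch of the fine-grained, SPICE-level energy accounting motivated above (our illustration under simple assumptions, not the actual MemSPICE energy scripts), the energy of each device can be obtained by integrating its instantaneous power v(t)*i(t) over the transient waveforms exported by the simulator and summing over all devices in the row:

```
import numpy as np

def device_energy(t, v, i):
    """Energy of one device over a transient run: integral of v(t)*i(t) dt.
    t: time points (s), v: device voltage (V), i: device current (A)."""
    return np.trapz(v * i, t)

def crossbar_energy(t, voltages, currents):
    """Total energy of a single-row crossbar, summing every device,
    irrespective of whether it participates in the current operation."""
    return sum(device_energy(t, v, i) for v, i in zip(voltages, currents))
```

Splitting the time axis at cycle boundaries in the same way yields separate initialization, execution, and read energies of the kind reported in Section IV.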
## III MemSPICE Framework This section presents the MemSPICE framework to obtain accurate energy estimates by mapping the Verilog design to the MAGIC design style at the SPICE level. The MemSPICE methodology is shown in Fig. 3. It is an automated framework to generate the SPICE-level netlist for MAGIC NOR and NOT gates. The automated process comprises four steps: (\(A\)) SIMPLER tool mapping, (\(B\)) automated SPICE-level netlist, (\(C\)) MemSPICE output & test bench creation, and (\(D\)) energy estimations. In the following sections, we discuss each step in detail. ### _Verilog Synthesis_ The process starts with the Verilog design synthesis using the ABC synthesis tool [15]. As an illustration, we consider a half-adder example, whose Verilog description is given in Listing 1. This Verilog file (Fig. 3) serves as an input, alongside the MAGIC cell library (NOT and NOR gates) shown in Fig. 3, to the ABC tool. Subsequently, the ABC tool synthesizes (Fig. 3) the Verilog description into NOR and NOT gates. The resulting NOR/NOT netlist becomes the input for the SIMPLER mapping tool (Fig. 3), which then generates an optimal mapping for MAGIC design style gates. Additionally, the SIMPLER mapping tool performs a sequential mapping of the MAGIC NOR and NOT operations, utilizing three and two memristors on the crossbar, respectively. Crucial information such as the required number of cycles, input/output memristors, and other relevant details tailored to the specific application or benchmark are generated by the SIMPLER tool, and this data is stored in a json file (Fig. 3). Listing 2 shows the json file representing the mapping of the half-adder on five memristors connected in a single row. Further, this .json file is used to generate the SPICE-level netlist and test vectors for simulation and detailed energy analysis.
Fig. 2: Demonstrating the simultaneous execution of MAGIC NOR and NOT gates in parallel SIMD fashion. The row-wise parallel approach is depicted in the figure.
### _Automated SPICE Netlist_ MemSPICE utilizes the output from the SIMPLER tool in .json format as input to automatically generate the SPICE-level netlist. From this .json file, MemSPICE extracts critical information such as input and output data devices and the sequence of NOR/NOT operations for execution. Additionally, MemSPICE takes several parameters as input to customize the simulation process, as presented in Fig. 3. These parameters are organized into four categories based on their role during the simulation: 1. Array parameters: These encompass device model and array size arguments essential for the simulation. As the SIMPLER tool maps the design to a single row, the generated SPICE-level netlist also contains a single row with 'n' columns. 2. Voltage-related parameters: MemSPICE supports pulse and piece-wise linear (PWL) voltage sources and allows control over parameters such as pulse amplitude, width, period, and rise/fall time of the voltage source for testbench purposes. 3. Simulation-related parameters: These parameters offer flexibility in choosing the type of simulation (DC simulation, transient) required for testing, along with defining simulation time and step size during the simulation. 4. Process variations: This set of parameters controls the type and range of variation for the devices. When process variation parameters are set, MemSPICE draws variation values from a normal probability distribution around the mean value, with the standard deviation defined in the input parameters (an illustrative configuration sketch is given below).
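To make the four parameter categories above concrete, the following is a hypothetical configuration sketch; every key name and most values here are assumptions chosen for illustration rather than MemSPICE's actual input format (only the VTEAM model, the 512-column row, the 1.3 ns pulse width, the 1 ps rise/fall times, and the normally distributed process variation are taken from the paper).

```
# Hypothetical MemSPICE-style parameter set; names are illustrative only.
memspice_params = {
    "array": {
        "device_model": "vteam.va",   # memristor model used in this work
        "columns": 512,               # single-row crossbar with 'n' columns
    },
    "voltage": {
        "source_type": "pulse",       # pulse or piece-wise linear (PWL)
        "amplitude_v": 2.0,           # e.g. the SET voltage
        "pulse_width_ns": 1.3,
        "rise_fall_ps": 1.0,
    },
    "simulation": {
        "analysis": "transient",      # DC or transient
        "stop_time_ns": 500.0,
        "step_ps": 10.0,
    },
    "process_variation": {
        "enabled": True,
        "distribution": "normal",     # drawn around the mean value
        "relative_std": 0.05,
    },
}
```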
By effectively managing these input parameters, MemSPICE empowers users to tailor the SPICE-level netlist generation and simulation according to their specific requirements, enhancing the flexibility and accuracy of the overall analysis. Considering all the parameters, MemSPICE formulates a Spectre-compatible netlist (.scs) in a crossbar configuration. Subsequently, this crossbar SPICE netlist is assembled into a crossbar symbol, complete with dedicated inputs and outputs optimized for benchmarking purposes. In the output, MemSPICE generates multiple .scs files, encompassing the crossbar sub-circuit, testbench signals, as well as simulation and energy estimation files, all essential for conducting SPICE simulations.
```
module half_adder(A, B, S, Cy);
  // Input Ports Declarations
  input A, B;
  output S, Cy;
  // Logic
  assign S = A ^ B;
  assign Cy = (A & B);
endmodule
```
Listing 1: Verilog logic of half adder as an input to MemSPICE.
Listing 2: SIMPLER mapping (.json) of the half adder onto five memristors connected in a single row (content not reproduced; it specifies the row size of 5, the input memristors A and B, the output memristors S and Cy, one reuse cycle, and the execution sequence of initialization and NOR/NOT micro-operations).
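To illustrate the kind of automation this section describes, the following is a minimal sketch (our illustration, not the MemSPICE source) of reading a SIMPLER-style mapping (.json) and emitting a skeleton of crossbar and source statements; the field names and the instance syntax are assumptions, since the exact .scs format produced by MemSPICE is not reproduced here.

```
import json

def emit_netlist_skeleton(mapping_path, out_path, model="vteam"):
    """Read a SIMPLER-style mapping (.json) and write a skeleton netlist.
    The field names below are hypothetical stand-ins for the SIMPLER output."""
    with open(mapping_path) as f:
        mapping = json.load(f)

    n_cols = mapping["Row size"]               # e.g. 5 for the half adder
    sequence = mapping["Execution sequence"]   # cycle -> micro-operations

    lines = ["// auto-generated single-row crossbar skeleton"]
    # One memristor instance per column of the single row
    for col in range(n_cols):
        lines.append(f"X_mem{col} (row0 col{col}) {model}")
    # One placeholder driver per execution cycle; amplitudes and timing
    # would come from the voltage-related parameters
    for cycle, ops in sequence.items():
        lines.append(f"// cycle {cycle}: {ops}")
        lines.append(f"V_{cycle} (drv_{cycle} 0) vsource type=pulse")

    with open(out_path, "w") as f:
        f.write("\n".join(lines) + "\n")
```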
:"T":"T":"T":"T":"T":"T":"T":"T":"T":"T":"T":"T":"T":":T":"T":"T":"T":":"T":"T":"T":"T":"T":"T":"T":"T":"T":":"T":"T":"T":"T":"T":":T":"T":"T":"T":"T":T":"T":"T":"T":"T":"T":"T":"T":"T":"T":"T":"T":"T":"T":"T":"T":"T":":"T":"T":"T":"T":"T":":"T":"T":"T":":"T":"T":"T":"T":"T":"T":"T":"T":"T":"T":"T":"T":"T":":"T":"T":"T":"T":"T":"T":"T":"T":"T":"T":"T":"T":"T":"T": ### _MemSPICE Output & Test-benching_ MemSPICE generates various files in the output, all compatible with spectre simulation. These files are individually created and called in a single main file to execute the final simulation. #### Iv-C1 Crossbar Subckt MemSPICE first considers a device model (.va) and creates a symbol based on it. Crossbar Subckt block as shown in Fig. 3 *> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > 
Fig. 4 presents the schematic and waveform generated for the half-adder implementation, which is synthesized from Listing 1. Notably, the half adder implementation is mapped to a single row of 5 memristors. As shown in Listing 2, it also requires memristor re-initialization to store intermediate results. These intermediate results are represented by the dotted line with memristors labeled as 'Inter', while the final sum and carry are stored in the memristors labeled as 'Sum' and 'Cy', respectively. The implementation necessitates six voltage sources (\(V_{r0}\) and \(V_{c0}-V_{c4}\)) with values tailored to the logic implementation during each execution sequence. Additionally, there are five other voltage sources responsible for opening switches to execute operations. Furthermore, these operations can be performed in SIMD fashion, but they are beyond the scope of this work. In summary, the framework offers digital and circuit designers the ability to thoroughly test and optimize their designs in a more accurate and automated manner, providing valuable insights into the effectiveness of their methodologies. ## IV Experimental Results This section presents the results achieved through the MemSPICE framework. To validate the methodology, we conducted tests using the ISCAS'85 benchmark with the proposed framework. To ensure comparability with the existing literature [8], we mapped all benchmarks to a single row of 512 devices. For operating, programming, and reading the state of memristors, we employed different voltages with a 1.3 ns pulse width, with 1 ps rise and fall times. Table I presents the ISCAS'85 benchmark results. It includes benchmark names, primary input/output counts, cycles needed for the final result, NOR/NOT gate counts, re-initialization cycles, and energy values from the literature. MemSPICE allows testing with different input patterns for energy consumption; I1, I2, and I3 represent all 0's, all 1's, and alternating 1's and 0's, respectively. We now discuss the results in detail using some examples from the benchmarks. ### _c432 from ISCAS 85_ We mapped the c432 benchmark circuit (27-channel interrupt controller) onto 512 devices, featuring 36 inputs and 7 outputs. Its operation requires 250 cycles, with 101 NOT operations and 148 NOR operations. The c432 is a relatively small benchmark that easily fits within 512 devices without any re-initialization cycles. The execution energy closely matches the literature, but the initialization energy is considerably higher. As a first step, MemSPICE initializes all devices to '1' (LRS) except for the input memristors, as required for the MAGIC implementation. In this case, initializing 476 devices (512-PI) to '1' dominates the initialization energy. The execution energy varies significantly based on the input pattern due to the benchmark's smaller size.
This is true for all the benchmark circuits that can be mapped to far fewer memristors than 512. The unused memristor energy dominates the operation energy in this case. Fig. 5 presents the energy breakdown of the c432 benchmark. Since c432 does not have any re-initialization cycle, all the devices are initialized simultaneously in the first cycle for use in the operation cycle. A closer view of Fig. 5(a) is provided in 5(b) and 5(c) to illustrate the execution and read energy, respectively. The read energy is computed by reading the state of each memristor in a row to verify functionality and switching during operation. ### _c3540 from ISCAS 85_ As shown in Table I, the c3540 benchmark, which is an 8-bit arithmetic and logic unit, requires 1397 cycles with 50 input and 22 output memristors. It comprises 465 NOT gates and 928 NOR gates. With the circuit mapped to 512 devices, three re-initializations of the devices are needed to complete the entire operation, making re-initialization energy a significant factor. The energy consumption using the MemSPICE framework for the I1, I2, and I3 input patterns is found to be 2676 pJ, 2474 pJ, and 2431 pJ, respectively. During re-initialization, we apply a SET voltage (2.0 V) to return the device to the LRS state without reading its current state. If the device is already in the LRS state before re-initialization, it draws a considerable amount of current, significantly increasing the energy consumption. In this specific case, the re-initialization energy is approximately 70\(\times\) higher than the execution energy. Fig. 6 displays the energy breakdown of the c3540 benchmark. As indicated in Table I, c3540 requires three re-initialization cycles. The graph in Fig. 6(a) clearly illustrates the significant increase in energy during the re-initialization cycle. This graph highlights that the re-initialization energy dominates the energy consumption during execution. Fig. 6(b) and Fig. 6(c) provide a closer view of Fig. 6(a). Table I presents the results for all the benchmarks, showcasing how the MemSPICE framework simplifies netlist generation and energy evaluation for MAGIC design style-based circuits. The execution energy values obtained by MemSPICE align closely with state-of-the-art energy calculations, validating the correctness of the methodology. Furthermore, MemSPICE can capture additional energy aspects not addressed in the existing literature. The utilization of SPICE-level simulation [8] in MemSPICE contributes to more precise energy values, emphasizing the framework's significance for accurate energy analysis and automation in digital LiM. The output of the framework includes waveforms illustrating the executed operations in the sequence, with an explicit read pulse incorporated. To validate the functional correctness of the benchmarks, manual testing is performed, ensuring the accuracy of the files generated by the proposed framework.
Fig. 4: Half adder implementation for "10" input (\(in_{0}=1\), \(in_{1}=0\)) using five memristors in a row as per the mapping in Listing 2. This implementation requires a total of 7 execution cycles, including an initialization (\(T_{0}\)) and a reused cycle (\(T_{4}\)). Similar to a SIMD architecture, the same operation can be executed concurrently in different rows (\(r_{n}\)).
## V Conclusions This paper has introduced MemSPICE, a SPICE-level framework that fills the gap in existing research by automating the generation of SPICE netlists for digital LiM using memristors.
MemSPICE offers users the ability to control and fine-tune various parameters, providing remarkable flexibility to accommodate diverse configurations and scenarios. It takes Verilog-defined logic and automatically generates SPICE-level simulations for detailed analysis. Notably, it provides precise and fine-grained energy values, enhancing the accuracy of energy estimation. The energy values generated by MemSPICE were found to be in agreement with the energy calculations reported in the literature. MemSPICE will enable researchers, without any circuit knowledge, to accurately estimate the benefits of their proposed ideas by performing automated SPICE-level simulations, and we believe this will have a significant impact on this domain of research. ## Acknowledgments This work was supported in part by the Federal Ministry of Education and Research (BMBF, Germany) in the project NEUROTEC II under Project 16ME0398K, Project 16ME0399, the German Research Foundation (DFG) within the Project PLiM (DR 287/35-1, DR 287/35-2), and through the Dr. Suhas Pai Donation Fund at IIT Bombay.
2309.08707
Fixed-b Asymptotics for Panel Models with Two-Way Clustering
This paper studies a cluster robust variance estimator proposed by Chiang, Hansen and Sasaki (2024) for linear panels. First, we show algebraically that this variance estimator (CHS estimator, hereafter) is a linear combination of three common variance estimators: the one-way unit cluster estimator, the "HAC of averages" estimator, and the "average of HACs" estimator. Based on this finding, we obtain a fixed-$b$ asymptotic result for the CHS estimator and corresponding test statistics as the cross-section and time sample sizes jointly go to infinity. Furthermore, we propose two simple bias-corrected versions of the variance estimator and derive the fixed-$b$ limits. In a simulation study, we find that the two bias-corrected variance estimators along with fixed-$b$ critical values provide improvements in finite sample coverage probabilities. We illustrate the impact of bias-correction and use of the fixed-$b$ critical values on inference in an empirical example on the relationship between industry profitability and market concentration.
Kaicheng Chen, Timothy J. Vogelsang
2023-09-15T18:58:08Z
http://arxiv.org/abs/2309.08707v4
# Fixed-\(b\) Asymptotics for Panel Models with Two-Way Clustering + ###### Abstract This paper studies a cluster robust variance estimator proposed by Chiang, Hansen and Sasaki (2022) for linear panels. First, we show algebraically that this variance estimator (CHS estimator, hereafter) is a linear combination of three common variance estimators: the one-way individual cluster estimator, the "HAC of averages" estimator, and the "average of HACs" estimator. Based on this finding, we obtain a fixed-\(b\) asymptotic result for the CHS estimator and corresponding test statistics as the cross-section and time sample sizes jointly go to infinity. Furthermore, we propose two simple bias-corrected versions of the variance estimator and derive the fixed-\(b\) limits. In a simulation study, we find that the two bias-corrected variance estimators along with fixed-\(b\) critical values provide improvements in finite sample coverage probabilities. We illustrate the impact of bias-correction and use of the fixed-\(b\) critical values on inference in an empirical example from Thompson (2011) on the relationship between industry profitability and market concentration. **Keywords**: panel data, clustering dependence, standard errors, joint asymptotics. **JEL Classification Code:** C23 ## 1 Introduction When carrying out inference in a linear panel model, it is well known that failing to adjust the variance estimator of estimated parameters to allow for different dependence structures in the data can cause over-rejection/under-rejection problems under null hypotheses, which in turn can give misleading empirical findings (see Bertrand, Duflo and Mullainathan (2004)). To study different dependence structures and robust variance estimators in panel settings, it is now common to use a component structure model \(y_{it}=f(\alpha_{i},\gamma_{t},\varepsilon_{it})\) where the observable data, \(y_{it}\), is a function of an individual component, \(\alpha_{i}\), a time component, \(\gamma_{t}\), and an idiosyncratic component, \(\varepsilon_{it}\). See, for example, Kallenberg (2005), Davezies, D'Haultfoeuille and Guyonvarch (2021), MacKinnon, Nielsen and Webb (2021), Menzel (2021), and Chiang et al. (2022). As a concrete example, suppose \(y_{it}=\alpha_{i}+\varepsilon_{it}\) for \(i=1,...,N\) and \(t=1,...,T\), where \(\alpha_{i}\) and \(\varepsilon_{it}\) are assumed to be i.i.d random variables. The existence of \(\alpha_{i}\) generates serial correlation within group \(i\), which is also known as the individual clustering effect. This dependence structure is well-captured by the cluster variance estimator proposed by Liang and Zeger (1986) and Arellano (1987). One can also use the "average of HACs" variance estimator that uses cross-section averages of the heteroskedasticity and autocorrelation (HAC) robust variance estimator proposed by Newey and West (1987). On the other hand, suppose \(y_{it}=\gamma_{t}+\varepsilon_{it}\) where \(\gamma_{t}\) is assumed to be an i.i.d sequence of random variables. Cross-sectional/spatial dependence in \(y_{it}\) is generated by \(\gamma_{t}\) through the time clustering effect. In this case, one can use a variance estimator that clusters over time or use the spatial dependence robust variance estimator proposed by Driscoll and Kraay (1998). Furthermore, if both \(\alpha_{i}\) and \(\gamma_{t}\) are assumed to be present, e.g.
\(y_{it}=\alpha_{i}+\gamma_{t}+\varepsilon_{it}\), then the dependence of \(\{y_{it}\}\) exists in both the temporal and spatial dimensions, also known as two-way clustering effects. Correspondingly, the two-way/multi-way robust variance estimator proposed by Cameron, Gelbach and Miller (2011) is suitable for this case. In macroeconomics, the time effects, \(\gamma_{t}\), can be regarded as common shocks, which are usually serially correlated. Allowing persistence in \(\gamma_{t}\) up to a known lag structure, Thompson (2011) proposed a truncated variance estimator that is robust to dependence in both the cross-sectional and temporal dimensions. Because of the unsatisfactory finite sample performance of this rectangular-truncated estimator, Chiang et al. (2022) proposed a Bartlett kernel variant (CHS variance estimator, hereafter) and established the validity of tests based on this variance estimator using asymptotics in which the cross-sectional sample size, \(N\), and the time sample size, \(T\), jointly go to infinity. The asymptotic results of the CHS variance estimator rely on the assumption that the bandwidth, \(M\), goes to infinity as \(T\) goes to infinity while the bandwidth-to-sample-size ratio, \(b=\frac{M}{T}\), is of a small order. As pointed out by Neave (1970) and Kiefer and Vogelsang (2005), the value of \(b\) in a given application is a non-zero number that matters for the sampling distribution of the variance estimator. Treating \(b\) as shrinking to zero in the asymptotics may miss some important features of the finite sample behavior of the variance estimator and test statistics. As noted by Andrews (1991), Kiefer and Vogelsang (2005), and many others, HAC robust tests tend to over-reject in finite samples when standard critical values are used. This is especially true when time dependence is persistent and large bandwidths are used. We document similar findings for tests based on the CHS variance estimator in our simulations. To improve the performance of tests based on the CHS variance estimator, we derive fixed-\(b\) asymptotic results (see Kiefer and Vogelsang (2005), Sun, Phillips and Jin (2008), Vogelsang (2012), Zhang and Shao (2013), Sun (2014), Bester, Conley, Hansen and Vogelsang (2016), Lazarus, Lewis and Stock (2021)). Fixed-\(b\) asymptotics captures some important effects of the bandwidth and kernel choices on the finite sample behavior of the variance estimator and tests and provides reference distributions that can be used to obtain critical values that depend on the bandwidth (and kernel). Our fixed-\(b\) asymptotic results are obtained for \(N\) and \(T\) jointly going to infinity and leverage the joint asymptotic framework developed by Phillips and Moon (1999). One key finding is that the CHS variance estimator has a multiplicative bias given by \(1-b+\frac{1}{3}b^{2}\leq 1\), resulting in a downward bias that becomes more pronounced as the bandwidth increases. By simply dividing the CHS variance estimator by \(1-b+\frac{1}{3}b^{2}\) we obtain a simple bias-corrected variance estimator that improves the performance of tests based on the CHS variance estimator. We label this bias-corrected CHS variance estimator as BCCHS. As a purely algebraic result, we show that the CHS variance estimator is the sum of the Arellano cluster and Driscoll-Kraay variance estimators minus the "average of HACs" variance estimator.
We show that dropping the "averages of HAC" component while also bias correcting the Driscoll-Kraay component removes the asymptotic bias in the CHS variance estimator and has the same fixed-\(b\) limit as the BCCHS variance estimator. We label the resulting variance estimator of this second bias correction approach as the DKA (Driscoll-Kraay+Arellano) variance estimator. Similar ideas are also used by Davezies, D'Haultfoeuille and Guyonvarch (2018) and MacKinnon et al. (2021) where they argue the removal of the negative and small order component in the variance estimator brings computational advantage in the sense that the variance estimates are ensured to be positive semi-definite. In our simulations we find that negative CHS variance estimates can occur up to 6.8% of the time. An advantage of the DKA variance estimator is guaranteed positive semi-definiteness. The DKA variance estimator also tends to deliver tests with better finite sample coverage probabilities although there are exceptions. In a finite sample simulation study we compare sample coverage probabilities of confidence intervals based on CHS, BCCHS, and DKA variance estimators using critical values from both the standard normal distribution and the fixed-\(b\) limits. The fixed-\(b\) limits of the test statistics constructed by these three variance estimators are not pivotal, so we use a simulation method to obtain the critical values via a plug-in estimator approach to handle asymptotic nuisance parameters. While the fixed-\(b\) critical values substantially improve coverage rates when using the CHS variance estimator, improvements from simply using the bias corrections are impressive. In fact, once the bias corrections are used, fixed-\(b\) critical values only provide modest improvements in finite sample coverage probabilities. This is a practically useful finding given that simulating asymptotic critical values on a case-by-case basis can be computationally inconvenient. The rest of the paper is organized as follows. In Section 2 we sketch the algebra of the CHS estimator and rewrite it as a linear combination of three well known variance estimators. In Section 3 we derive fixed-\(b\) limiting distributions of CHS based tests for pooled ordinary least squares (POLS) estimators in a simple location panel model and a linear panel regression model. In Section 4 we derive the fixed-\(b\) asymptotic bias of the CHS estimator and propose two bias corrected variance estimators. We also derive fixed-\(b\) limits for tests based on the bias corrected variance estimators. Section 5 presents finite sample simulation results that illustrate the relative performance of \(t\)-tests based on the variance estimators. Although not covered by the theory, we provide some finite sample results using the two-way-fixed-effects (TWFE) estimator generating some conjectures for future work. In Section 6 we illustrate the practical implications of the bias corrections and use of fixed-\(b\) critical values in an empirical example. Section 7 concludes the paper. Proofs and some additional theoretical results are given in two appendices. ## 2 A Variance Estimator Robust to Two-Way Clustering We first motivate the estimator of the asymptotic variance of the POLS estimator under arbitrary dependence in both temporal and cross-sectional dimensions. 
Consider the linear panel model \[y_{it}=x_{it}^{\prime}\beta+u_{it},\ \ i=1,\ldots,N,.\ \ t=1,\ldots,T, \tag{2.1}\] where \(y_{it}\) is the dependent variable, \(x_{it}\) is a \(k\times 1\) vector of covariates, \(u_{it}\) is the error term, and \(\beta\) is the coefficient vector. For illustration purposes, suppose \(\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}x_{it}x_{it}^{\prime}\overset{p}{ \rightarrow}Q\), where \(Q=E(x_{it}x_{it}^{\prime})\) is a full-rank matrix, and suppose \(\frac{1}{\sqrt{NT}}\sum_{i=1}^{N}\sum_{t=1}^{T}x_{it}u_{it}\) satisfies a central limit theorem. Then it follows that \(\sqrt{N}(\widehat{\beta}-\beta)=Q^{-1}\left(\frac{1}{\sqrt{NT}}\sum_{i=1}^{N} \sum_{t=1}^{T}x_{it}u_{it}\right)+o_{p}(1)\). Let \(\Omega=Var\left(\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}v_{it}\right)\) where \(v_{it}=x_{it}u_{it}\). By the asymptotic equivalence lemma, the asymptotic variance of \(\sqrt{N}(\hat{\beta}-\beta)\) is \[AVar\left[\sqrt{N}(\widehat{\beta}-\beta)\right]=\lim_{N,T\rightarrow\infty} Q^{-1}N\Omega Q^{-1}.\] Without imposing assumptions on the dependence structure of \(v_{it}\), it has been shown, algebraically, that \(\Omega\) has the following form (see Thompson (2011) and Chiang et al. (2022)): \[\Omega= \frac{1}{N^{2}T^{2}}\left[\sum_{i=1}^{N}\sum_{t=1}^{T}\sum_{s=1}^ {T}E\left(v_{it}v_{is}^{\prime}\right)+\sum_{t=1}^{T}\sum_{i=1}^{N}\sum_{j=1}^ {N}E(v_{it}v_{jt}^{\prime})-\sum_{t=1}^{T}\sum_{i=1}^{N}E\left(v_{it}v_{it}^{ \prime}\right)\right.\] \[+\sum_{m=1}^{T-1}\sum_{t=1}^{T-m}E\left(\sum_{i=1}^{N}v_{it} \right)\left(\sum_{j=1}^{N}v_{j,t+m}^{\prime}\right)-\sum_{i=1}^{N}\sum_{t=1} ^{T-m}E\left(v_{it}v_{i,t+m}^{\prime}\right)\] \[+\sum_{m=1}^{T-1}\sum_{t=1}^{T-m}E\left(\sum_{i=1}^{N}v_{i,t+m} \right)\left(\sum_{j=1}^{N}v_{j,t}^{\prime}\right)-\sum_{i=1}^{N}\sum_{t=1}^{ T-m}E\left(v_{i,t+m}v_{i,t}^{\prime}\right)\right].\] Based on this decomposition of \(\Omega\), Thompson (2011) and Chiang et al. (2022) each propose a truncation-type variance estimator. In particular, Chiang et al. (2022) replaces the Thompson (2011) truncation scheme with a Bartlett kernel and propose to estimate the asymptotic variance of \(\sqrt{N}(\widehat{\beta}-\beta)\) by \(\widehat{Q}^{-1}N\widehat{\Omega}_{CHS}\widehat{Q}^{-1}\), where \(\widehat{Q}=\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}x_{it}x_{it}^{\prime}\) and \(\widehat{\Omega}_{CHS}\) is defined as follows: \[\widehat{\Omega}_{CHS}= \frac{1}{N^{2}T^{2}}\left[\sum_{i=1}^{N}\sum_{t=1}^{T}\sum_{s=1}^ {T}\left(\widehat{v}_{it}\widehat{v}_{is}^{\prime}\right)+\sum_{t=1}^{T}\sum_{ i=1}^{N}\sum_{j=1}^{N}\left(\widehat{v}_{it}\widehat{v}_{jt}^{\prime}\right)- \sum_{t=1}^{T}\sum_{i=1}^{N}\left(\widehat{v}_{it}\widehat{v}_{it}^{\prime} \right)\right.\] \[+\sum_{m=1}^{M-1}k\left(\frac{m}{M}\right)\sum_{t=1}^{T-m}\left( \sum_{i=1}^{N}\widehat{v}_{it}\right)\left(\sum_{j=1}^{N}\widehat{v}_{j,t+m} ^{\prime}\right)-\sum_{i=1}^{N}\sum_{t=1}^{T-m}\left(\widehat{v}_{it}\widehat{ v}_{i,t+m}^{\prime}\right)\] \[\left.+\sum_{m=1}^{M-1}k\left(\frac{m}{M}\right)\sum_{t=1}^{T-m} \left(\sum_{i=1}^{N}\widehat{v}_{i,t+m}\right)\left(\sum_{j=1}^{N}\widehat{v}_ {j,t}^{\prime}\right)-\sum_{i=1}^{N}\sum_{t=1}^{T-m}\left(\widehat{v}_{i,t+m} \widehat{v}_{i,t}^{\prime}\right)\right],\] where \(k\left(\frac{m}{M}\right)=1-\frac{m}{M}\) is the Bartlett kernel and \(M\) is the truncation parameter (bandwidth). Chiang et al. 
(2022) prove that, with appropriate scaling, \(\widehat{\Omega}_{CHS}\) consistently estimates \(\Omega\) while allowing two-way clustering effects with serially correlated stationary time effects under the assumptions that \(M\rightarrow\infty\) and \(\frac{M}{\min\{T,N\}^{1/2}}=o(1)\) as \(N,T\rightarrow\infty\). As an asymptotic approximation, appealing to consistency of the estimated variance allows the asymptotic variance to be treated as known when generating asymptotic critical values for inference. While convenient, such a consistency result does not capture the impact of the choice of \(M\) and kernel function on the finite sample behavior of the variance estimator and any resulting size distortions of test statistics. To capture some of the finite sample impacts of the choice of \(M\) and kernel, we apply the fixed-\(b\) approach of Kiefer and Vogelsang (2005) to \(\widehat{\Omega}_{CHS}\). To obtain a fixed-\(b\) result for \(\widehat{\Omega}_{CHS}\), it is helpful to rewrite \(\widehat{\Omega}_{CHS}\) in terms of three familiar variance estimators given by \[\widehat{\Omega}_{A}:= \frac{1}{N^{2}T^{2}}\sum_{i=1}^{N}\sum_{t=1}^{T}\sum_{s=1}^{T} \left(\widehat{v}_{it}\widehat{v}_{is}^{\prime}\right), \tag{2.2}\] \[\widehat{\Omega}_{DK}:= \frac{1}{N^{2}T^{2}}\sum_{t=1}^{T}\sum_{i=1}^{N}\sum_{j=1}^{N} \left(\widehat{v}_{it}\widehat{v}_{jt}^{\prime}\right)\] \[+\sum_{m=1}^{M-1}k\left(\frac{m}{M}\right)\left[\sum_{t=1}^{T-m} \left(\sum_{i=1}^{N}\widehat{v}_{it}\right)\left(\sum_{j=1}^{N}\widehat{v}_{j,t+m}^{\prime}\right)+\sum_{t=1}^{T-m}\left(\sum_{i=1}^{N}\widehat{v}_{i,t+m} \right)\left(\sum_{j=1}^{N}\widehat{v}_{j,t}^{\prime}\right)\right]\] \[= \frac{1}{N^{2}T^{2}}\sum_{t=1}^{T}\sum_{s=1}^{T}k\left(\frac{|t-s |}{M}\right)\left(\sum_{i=1}^{N}\widehat{v}_{it}\right)\left(\sum_{j=1}^{N} \widehat{v}_{js}^{\prime}\right), \tag{2.3}\] \[\widehat{\Omega}_{NW}:= \frac{1}{N^{2}T^{2}}\sum_{t=1}^{T}\sum_{i=1}^{N}\left(\widehat{v}_{it }\widehat{v}_{it}^{\prime}\right)+\sum_{m=1}^{M-1}k\left(\frac{m}{M}\right) \left[\sum_{i=1}^{N}\sum_{t=1}^{T-m}\left(\widehat{v}_{it}\widehat{v}_{i,t+m}^{ \prime}\right)+\sum_{i=1}^{N}\sum_{t=1}^{T-m}\left(\widehat{v}_{i,t+m}\widehat {v}_{i,t}^{\prime}\right)\right]\] \[= \frac{1}{N^{2}T^{2}}\sum_{i=1}^{N}\sum_{t=1}^{T}\sum_{s=1}^{T}k \left(\frac{|t-s|}{M}\right)\widehat{v}_{it}\widehat{v}_{is}^{\prime}. \tag{2.4}\] Notice that (2.2) is the "cluster by individuals" estimator proposed by Liang and Zeger (1986) and Arellano (1987), (2.3) is the "HAC of cross-section averages" estimator proposed by Driscoll and Kraay (1998), and (2.4) is the "average of HACs" estimator (see Petersen (2009) and Vogelsang (2012)). Using straightforward algebra, one can show that \(\widehat{\Omega}_{CHS}\) can be written as \[\widehat{\Omega}_{CHS}=\widehat{\Omega}_{A}+\widehat{\Omega}_{DK}-\widehat{ \Omega}_{NW}. \tag{2.5}\] In other words, \(\widehat{\Omega}_{CHS}\) is a linear combination of three well known variance estimators that have been proposed to handle particular forms of dependence structure. While Hansen (2007) and Vogelsang (2012) provide some potentially relevant asymptotic results for the components of (2.5), those results are not sufficiently comprehensive to directly obtain a fixed-\(b\) result for \(\widehat{\Omega}_{CHS}\). Furthermore, the regularity conditions used by Hansen (2007) and Vogelsang (2012) do not include the component structure used by Chiang et al. (2022). Some new theoretical results are needed. 
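To make the decomposition in (2.2)-(2.5) concrete, the following is a small numerical sketch (our illustration, not code from the paper) that computes \(\widehat{\Omega}_{A}\), \(\widehat{\Omega}_{DK}\), \(\widehat{\Omega}_{NW}\), and \(\widehat{\Omega}_{CHS}\) from an array of estimated scores \(\widehat{v}_{it}=x_{it}\widehat{u}_{it}\); the array layout and function names are our own choices, and the last function applies the simple bias correction described in the introduction.

```
import numpy as np

def bartlett(x):
    # Bartlett kernel: 1 - x for 0 <= x < 1 and 0 otherwise
    return np.maximum(1.0 - np.abs(x), 0.0)

def omega_chs(v, M):
    """CHS estimator (2.5) built from (2.2)-(2.4).
    v has shape (N, T, k) with v[i, t, :] = x_it * u_it_hat."""
    N, T, k = v.shape
    scale = 1.0 / (N**2 * T**2)

    # (2.2) Arellano "cluster by individuals"
    S_i = v.sum(axis=1)                                   # (N, k)
    omega_A = scale * S_i.T @ S_i

    # Bartlett weights k(|t-s|/M)
    lags = np.abs(np.arange(T)[:, None] - np.arange(T)[None, :])
    W = bartlett(lags / M)                                # (T, T)

    # (2.3) Driscoll-Kraay "HAC of cross-section averages"
    v_bar = v.sum(axis=0)                                 # (T, k)
    omega_DK = scale * v_bar.T @ W @ v_bar

    # (2.4) "average of HACs" (Newey-West unit by unit)
    omega_NW = scale * np.einsum('itk,ts,isl->kl', v, W, v)

    return omega_A + omega_DK - omega_NW

def omega_bcchs(v, M):
    # BCCHS: divide by 1 - b + b^2/3 with b = M/T, as proposed in Section 4
    b = M / v.shape[1]
    return omega_chs(v, M) / (1.0 - b + b**2 / 3.0)
```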
## 3 Fixed-\(b\) Asymptotic Results ### The Multivariate Mean Case To set ideas and intuition we first focus on a simple panel mean model (panel location model) and then extend the analysis to the linear regression case. Consider a component structure representation (also see Kallenberg (2005) and Chiang et al. (2022)) of a \(k\times 1\) random vector \(y_{it}\) as follows \[y_{it}=\theta+f\left(\alpha_{i},\gamma_{t},\varepsilon_{it}\right),\] where \(\theta=E(y_{it})\) and \(f\) is an unknown Borel-measurable function, the sequences \(\{\alpha_{i}\}\), \(\{\gamma_{t}\}\), and \(\{\varepsilon_{it}\}\) are mutually independent, \(\alpha_{i}\) is i.i.d across \(i\), \(\varepsilon_{it}\) is i.i.d across \(i\) and \(t\), and \(\gamma_{t}\) is a strictly stationary serially correlated process. Defining the quantities \(a_{i}=E\left(y_{it}-\theta|\alpha_{i}\right),\ g_{t}=E\left(y_{it}-\theta| \gamma_{t}\right)\), \(e_{it}=\left(y_{it}-\theta\right)-a_{i}-g_{t}\), we can decompose \(y_{it}\) as \[y_{it}-\theta=a_{i}+g_{t}+e_{it}\equiv v_{it}.\] We can estimate \(\theta\) using the pooled sample mean given by \(\widehat{\theta}=\left(NT\right)^{-1}\sum_{i=1}^{N}\sum_{t=1}^{T}y_{it}\). Rewriting the sample mean using this component structure representation for \(y_{it}\) we have \[\widehat{\theta}-\theta=\frac{1}{N}\sum_{i=1}^{N}a_{i}+\frac{1}{T}\sum_{t=1}^{T }g_{t}+\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}e_{it}\equiv\overline{a}+ \overline{g}+\overline{e}. \tag{3.1}\] The Chiang et al. (2022) variance estimator of \(\widehat{\theta}\) is given by \(V\widehat{a}r_{CHS}(\widehat{\theta})=\widehat{\Omega}_{CHS}\) where \(\widehat{\Omega}_{CHS}\) is as defined in (2.5) with \(\widehat{v}_{it}=y_{it}-\widehat{\theta}\) used in (2.2) - (2.4). To obtain fixed-\(b\) results for \(\widehat{\Omega}_{CHS}\) we rewrite the formula for \(\widehat{\Omega}_{CHS}\) in terms of the following two partial sum processes of \(\widehat{v}_{it}\): \[\widehat{S}_{it}=\sum_{j=1}^{t}\widehat{v}_{ij}=t\left(a_{i}- \overline{a}\right)+\sum_{j=1}^{t}\left(g_{j}-\overline{g}\right)+\sum_{j=1}^ {t}\left(e_{ij}-\overline{e}\right), \tag{3.2}\] \[\widehat{\overline{S}}_{t}=\sum_{i=1}^{N}\widehat{S}_{it}=\sum_ {i=1}^{N}\sum_{j=1}^{t}\widehat{v}_{ij}=N\sum_{j=1}^{t}\left(g_{j}-\overline {g}\right)+\sum_{i=1}^{N}\sum_{j=1}^{t}\left(e_{ij}-\overline{e}\right). \tag{3.3}\] Note that the \(a_{i}\) component drops from (3.3) because \(\sum_{i=1}^{N}\left(a_{i}-\overline{a}\right)=0\). The Arellano component (2.2) of \(\widehat{\Omega}_{CHS}\) is obviously a simple function of (3.2) with \(t=T\). The HAC components (2.3) and (2.4) can be written in terms of (3.3) and (3.2) using fixed-\(b\) algebra (see Vogelsang (2012)). Therefore, the Chiang et al. 
(2022) variance estimator has the following equivalent formula: \[\widehat{\Omega}_{CHS}= \frac{1}{N^{2}T^{2}}\sum_{i=1}^{N}\widehat{S}_{iT}\widehat{S}_{iT }^{\prime} \tag{3.4}\] \[+\frac{1}{N^{2}T^{2}}\left\{\frac{2}{M}\sum_{t=1}^{T-1}\widehat{ \overline{S}}_{t}\widehat{\overline{S}}_{t}^{\prime}-\frac{1}{M}\sum_{t=1}^{T- M-1}\left(\widehat{\overline{S}}_{t}\widehat{\overline{S}}_{t+M}^{\prime}+ \widehat{\overline{S}}_{t+M}\widehat{\overline{S}}_{t}^{\prime}\right)\right\}\] (3.5) \[-\frac{1}{N^{2}T^{2}}\sum_{i=1}^{N}\left\{\frac{2}{M}\sum_{t=1}^{ T-1}\widehat{S}_{it}\widehat{S}_{it}^{\prime}-\frac{1}{M}\sum_{t=1}^{T-M-1}\left( \widehat{S}_{it}\widehat{S}_{i,t+M}^{\prime}+\widehat{S}_{i,t+M}\widehat{S}_{i,t}^{\prime}\right)\right.\] \[\left.-\frac{1}{M}\sum_{t=T-M}^{T-1}\left(\widehat{S}_{it} \widehat{S}_{iT}^{\prime}+\widehat{S}_{iT}\widehat{S}_{it}^{\prime}\right)+ \widehat{S}_{iT}\widehat{S}_{iT}^{\prime}\right\}. \tag{3.6}\] Define three \(k\times k\) matrices \(\Lambda_{a}\), \(\Lambda_{g}\), and \(\Lambda_{e}\) such that: \[\Lambda_{a}\Lambda_{a}^{\prime}=E(a_{i}a_{i}^{\prime}),\quad\Lambda_{g}\Lambda _{g}^{\prime}=\sum_{\ell=-\infty}^{\infty}E[g_{t}g_{t+\ell}^{\prime}],\quad \Lambda_{e}\Lambda_{e}^{\prime}=\sum_{\ell=-\infty}^{\infty}E[e_{it}e_{i,t+ \ell}^{\prime}].\] The following assumption is used to obtain an asymptotic result for (3.1) and a fixed-\(b\) asymptotic result for \(\widehat{\Omega}_{CHS}\). **Assumption 1**.: _For some \(s>1\) and \(\delta>0\), (i) \(y_{it}=\theta+f(\alpha_{i},\gamma_{t},\varepsilon_{it})\) where \(\{\alpha_{i}\}\), \(\{\gamma_{t}\}\), and \(\{\varepsilon_{it}\}\) are mutually independent sequences, \(\alpha_{i}\) is i.i.d across \(i\), \(\varepsilon_{it}\) is i.i.d across \(i\) and \(t\), and \(\gamma_{t}\) is strictly stationary. (ii) \(E[y_{it}]=\theta\) and \(E[\|y_{it}\|^{4(s+\delta)}]<\infty\). (iii) \(\gamma_{t}\) is an \(\alpha\)-mixing sequence with size \(2s/(s-1)\), i.e., \(\alpha_{\gamma}(\ell)=O(\ell^{-\lambda})\) for a \(\lambda>2s/(s-1)\). (iv) \(\Lambda_{a}\Lambda_{a}^{\prime}>0\) and/or \(\Lambda_{g}\Lambda_{g}^{\prime}>0\), and \(N/T\to c\) as \((N,T)\rightarrow\infty\) for some constant \(c\). (v) \(M=[bT]\) where \(b\in(0,1]\)._ Assumption 1(i) is the assumed DGP for the observed random vector. Assumption 1(ii) assumes the mean of \(y_{it}\) exists and \(y_{it}\) has finite fourth moments. Assumption 1(iii) assumes weak dependence of \(\gamma_{t}\) using a mixing condition. Assumption 1 (i) - (iii) follow Chiang et al. (2022). Assumption 1(iv) rules out the case where \(y_{it}\) is i.i.d across \(i\) and \(t\). Because the fixed-\(b\) limit of \(\widehat{\Omega}_{CHS}\) and its associated test statistics turn out to be different in the i.i.d case, we discuss the i.i.d case separately in Appendix B. Assumption (iv) also rules out the pathological case described in Example 1.7 of Menzel (2021): when \(y_{it}=\alpha_{i}\gamma_{t}+\varepsilon_{it}\) with \(E(\alpha_{i})=E(\gamma_{t})=0\), one can easily verify that \(a_{i}=g_{t}=0\), in which case the limiting distribution of appropriately scaled \(\widehat{\theta}\) is non-Gaussian. Assumption 1(v) uses the fixed-\(b\) asymptotic nesting for the bandwidth. The following theorem gives an asymptotic result for appropriately scaled \(\widehat{\theta}\) and a fixed-\(b\) asymptotic result for appropriately scaled \(\widehat{\Omega}_{CHS}\). 
**Theorem 1**.: _Let \(z_{k}\) be a \(k\times 1\) vector of independent standard normal random variables, and let \(W_{k}(r)\), \(r\in(0,1]\), be a \(k\times 1\) vector of independent standard Wiener processes independent of \(z_{k}\). Suppose Assumption 1 holds. Then as \((N,T)\rightarrow\infty\),_ \[\sqrt{N}\left(\widehat{\theta}-\theta\right)\Rightarrow\Lambda_{a}z_{k}+\sqrt {c}\Lambda_{g}W_{k}(1),\] \[N\widehat{\Omega}_{CHS}\Rightarrow\left(1-b+\frac{1}{3}b^{2}\right)\Lambda_ {a}\Lambda_{a}^{\prime}+c\Lambda_{g}P\left(b,\widetilde{W}_{k}\left(r\right) \right)\Lambda_{g}^{\prime}, \tag{3.7}\] _where_ \[\widetilde{W}_{k}(r)=W_{k}(r)-rW_{k}(1),\] \[P\left(b,\widetilde{W}_{k}\left(r\right)\right)=\frac{2}{b}\int_{0}^{1} \widetilde{W}_{k}(r)\,\widetilde{W}_{k}(r)^{\prime}dr-\frac{1}{b}\int_{0}^{1 -b}\Big{[}\widetilde{W}_{k}\left(r\right)\widetilde{W}_{k}(r+b)^{\prime}+ \widetilde{W}_{k}\left(r+b\right)\widetilde{W}_{k}(r)^{\prime}\Big{]}dr.\] The proof of Theorem 1 is given in Appendix A. The limit of \(\sqrt{N}\left(\widehat{\theta}-\theta\right)\) was obtained by Chiang et al. (2022). Because \(z_{k}\) and \(W_{k}(1)\) are vectors of independent standard normals that are independent of each other, \(\Lambda_{a}z_{k}+\sqrt{c}\Lambda_{g}W_{k}(1)\) is a vector of normal random variables with variance-covariance matrix \(\Lambda_{a}\Lambda_{a}^{\prime}+c\Lambda_{g}\Lambda_{g}^{\prime}\). The \(\Lambda_{g}P\left(b,\widetilde{W}\left(r\right)\right)\Lambda_{g}^{\prime}\) component of (3.7) is equivalent to the fixed-\(b\) limit obtained by Kiefer and Vogelsang (2005) in stationary time series settings. Obviously, (3.7) is different than the limit obtained by Kiefer and Vogelsang (2005) because of the \(\left(1-b+\frac{1}{3}b^{2}\right)\Lambda_{a}\Lambda_{a}^{\prime}\) term. As the proof illustrates, this term is the limit of the "cluster by individuals" (2.2) and "average of HACs" (2.4) components whereas the \(c\Lambda_{g}P\left(b,\widetilde{W}_{k}\left(r\right)\right)\Lambda_{g}^{\prime}\) term is the limit of the "HAC of averages" (2.3). Because of the component structure of (3.7), the fixed-\(b\) limits of \(t\) and \(Wald\) statistics based on \(\widehat{\Omega}_{CHS}\) are not pivotal. We provide details on test statistics after extending our results to the case of a linear panel regression. ### The Linear Panel Regression Case It is straightforward to extend our results to the case of a linear panel regression given by (2.1). The POLS estimator of \(\beta\) is \[\widehat{\beta}=\left(\sum_{i=1}^{N}\sum_{t=1}^{T}x_{it}x_{it}^{\prime}\right)^{ -1}\sum_{i=1}^{N}\sum_{t=1}^{T}x_{it}y_{it}. \tag{3.8}\] Defining \[\widehat{Q}\equiv\frac{1}{NT}\sum_{i=1}^{N}\sum_{t=1}^{T}x_{it}x_{it}^{\prime}, \tag{3.9}\] we can write the POLS estimator as \[\widehat{\beta}=\beta+\widehat{Q}^{-1}\left(\frac{1}{NT}\sum_{i=1}^{N}\sum_{t =1}^{T}x_{it}u_{it}\right).\] Following Chiang et al. (2022), we assume the components of the panel regression are generated from the component structure: \[(y_{it},x_{it}^{\prime},u_{it})^{\prime}=f\left(\alpha_{i},\gamma_{t},\varepsilon _{it}\right)\] where \(f\) is an unknown Borel-measurable function, the sequences \(\{\alpha_{i}\}\), \(\{\gamma_{t}\}\), and \(\{\varepsilon_{it}\}\) are mutually independent, \(\alpha_{i}\) is i.i.d across \(i\), \(\varepsilon_{it}\) is i.i.d across \(i\) and \(t\), and \(\gamma_{t}\) is a strictly stationary serially correlated process. Define the vector \(v_{it}=x_{it}u_{it}\). 
Similar to the simple mean model we can write \(a_{i}=E\left(x_{it}u_{it}|a_{i}\right)\), \(g_{t}=E\left(x_{it}u_{it}|\gamma_{t}\right)\), \(e_{it}=x_{it}u_{it}-a_{i}-g_{t}\), giving the decomposition \[v_{it}=x_{it}u_{it}=a_{i}+g_{t}+e_{it}.\] The Chiang et al. (2022) variance estimator of \(\widehat{\beta}\) is given by \[V\widehat{a}r_{CHS}(\widehat{\beta})=\widehat{Q}^{-1}\widehat{\Omega}_{CHS}\widehat{Q}^{-1} \tag{3.10}\] where \(\widehat{Q}\) is given by (3.9) and \(\widehat{\Omega}_{CHS}\) is given by (2.5) with \(\widehat{v}_{it}\) in (2.2) - (2.4) now defined as \(\widehat{v}_{it}=x_{it}\widehat{u}_{it}\) where \(\widehat{u}_{it}=y_{it}-x_{it}^{\prime}\widehat{\beta}\) are the POLS residuals. The following assumption is used to obtain an asymptotic result for (3.8) and a fixed-\(b\) asymptotic result for \(\widehat{\Omega}_{CHS}\) in the linear panel case. **Assumption 2**.: _For some \(s>1\) and \(\delta>0\), (i) \((y_{it},x_{it}^{\prime},u_{it})^{\prime}=f(\alpha_{i},\gamma_{t},\varepsilon_{it})\) where \(\{\alpha_{i}\}\), \(\{\gamma_{t}\}\), and \(\{\varepsilon_{it}\}\) are mutually independent sequences, \(\alpha_{i}\) is i.i.d across \(i\), \(\varepsilon_{it}\) is i.i.d across \(i\) and \(t\), and \(\gamma_{t}\) is strictly stationary. (ii) \(E[x_{it}u_{it}]=0\), \(Q=E[x_{it}x_{it}^{\prime}]>0\), \(E[\|x_{it}\|^{8(s+\delta)}]<\infty\), and \(E[\|u_{it}\|^{8(s+\delta)}]<\infty\). (iii) \(\gamma_{t}\) is an \(\alpha\)-mixing sequence with size \(2s/(s-1)\), i.e., \(\alpha_{\gamma}(\ell)=O(\ell^{-\lambda})\) for a \(\lambda>2s/(s-1)\). (iv) \(\Lambda_{a}\Lambda_{a}^{\prime}>0\) and/or \(\Lambda_{g}\Lambda_{g}^{\prime}>0\), and \(N/T\to c\) as \((N,T)\rightarrow\infty\) for some constant \(c\). (v) \(M=[bT]\) where \(b\in(0,1]\)._ Assumption 2 can be regarded as a counterpart of Assumption 1 with (ii) being strengthened. It is very similar to its counterpart in Chiang et al. (2022), with the main difference being the use of the fixed-\(b\) asymptotic nesting for the bandwidth, \(M\), rather than the rate assumption given by Assumption 3(vi) of Chiang et al. (2022). For the same reason mentioned in the previous section, we discuss the case where \((x_{it},u_{it})\) are i.i.d separately in Appendix B. The next theorem presents the joint limit of the POLS estimator and the fixed-\(b\) joint limit of the CHS variance estimator. **Theorem 2**.: _Let \(z_{k}\), \(W_{k}(r)\), \(\widetilde{W}_{k}\left(r\right)\) and \(P\left(b,\widetilde{W}_{k}\left(r\right)\right)\) be defined as in Theorem 1. Suppose Assumption 2 holds for model (2.1), then as \((N,T)\rightarrow\infty\),_ \[\sqrt{N}\left(\widehat{\beta}-\beta\right)\Rightarrow Q^{-1}B_{k},\] _where \(B_{k}\equiv\Lambda_{a}z_{k}+\sqrt{c}\Lambda_{g}W_{k}(1)\) and_ \[NV\widehat{a}r_{CHS}(\widehat{\beta})\Rightarrow Q^{-1}V_{k}(b)Q^{-1}, \tag{3.11}\] _where \(V_{k}(b)\equiv\left(1-b+\frac{1}{3}b^{2}\right)\Lambda_{a}\Lambda_{a}^{\prime}+c\Lambda_{g}P\left(b,\widetilde{W}_{k}(r)\right)\Lambda_{g}^{\prime}\)._ The proof of Theorem 2 is given in Appendix A. We can see that the limiting random variable, \(V_{k}(b)\), depends on the choice of truncation parameter, \(M\), through \(b\). The use of the Bartlett kernel is reflected in the functional form of \(P\left(b,\widetilde{W}_{k}(r)\right)\) as well as the scaling term \(\left(1-b+\frac{1}{3}b^{2}\right)\) on \(\Lambda_{a}\Lambda_{a}^{\prime}\). Use of a different kernel would result in different functional forms for these limits. Because it is well known for the Bartlett kernel (see e.g.
Kiefer and Vogelsang (2005)) that \[E\left[P\left(b,\widetilde{W}_{k}(r)\right)\right]=\left(1-b+\frac{1}{3}b^{2} \right)I_{k},\] where \(I_{k}\) is a \(k\times k\) identity matrix, it follows that \[E\left(V_{k}(b)\right) =\left(1-b+\frac{1}{3}b^{2}\right)\Lambda_{a}\Lambda_{a}^{\prime }+c\Lambda_{g}E\left[P\left(b,\widetilde{W}_{b}(r)\right)\right]\Lambda_{g}^{\prime}\] \[=\left(1-b+\frac{1}{3}b^{2}\right)\left(\Lambda_{a}\Lambda_{a}^{ \prime}+c\Lambda_{g}\Lambda_{g}^{\prime}\right). \tag{3.12}\] The term \(\left(1-b+\frac{1}{3}b^{2}\right)\) is a multiplicative bias term that depends on the bandwidth sample size ratio, \(b=M/T\). We leverage this fact to implement a simple feasible bias correction for the CHS variance estimator that is explored below. Using the theoretical results developed in this section, we next examine the properties of test statistics based on the POLS estimator and CHS variance estimator. We also analyze tests based on two variants of the CHS variance estimator. One is a bias corrected estimator. The other is a variance estimator guaranteed to be positive semi-definite that is also bias corrected. ## 4 Inference In regression model (2.1) we focus on tests of linear hypothesis of the form: \[H_{0}:R\beta=r,\hskip 14.226378ptH_{1}:R\beta\neq r,\] where \(R\) is a \(q\times k\) matrix (\(q\leq k\)) with full rank equal to \(q\), and \(r\) is a \(q\times 1\) vector. Using \(V\widehat{a}r_{CHS}(\widehat{\beta})\) as given by (3.10), define a Wald statistic as \[Wald_{CHS} =\Big{(}R\widehat{\beta}-r\Big{)}^{\prime}\Big{(}RV\widehat{a}r_{ CHS}(\widehat{\beta})R^{\prime}\Big{)}^{-1}\left(R\widehat{\beta}-r\right)\] \[=\Big{(}R\widehat{\beta}-r\Big{)}^{\prime}\Big{(}R\widehat{Q}^{- 1}\widehat{\Omega}_{CHS}\widehat{Q}^{-1}R^{\prime}\Big{)}^{-1}\left(R\widehat {\beta}-r\right).\] When \(q=1\), we can define a \(t\)-statistic as \[t_{CHS}=\frac{R\widehat{\beta}-r}{\sqrt{RV\widehat{a}r_{CHS}(\widehat{\beta}) R^{\prime}}}=\frac{R\widehat{\beta}-r}{\sqrt{R\widehat{Q}^{-1}\widehat{ \Omega}_{CHS}\widehat{Q}^{-1}R^{\prime}}}.\] Appropriately scaling the numerators and denominators of the test statistics and applying Theorem 2, we obtain under \(H_{0}\): \[Wald_{CHS} =\sqrt{N}\Big{(}R\widehat{\beta}-r\Big{)}^{\prime}\Big{(}RV \widehat{a}r_{CHS}(\widehat{\beta})R^{\prime}\Big{)}^{-1}\sqrt{N}\left(R \widehat{\beta}-r\right)\] \[\Rightarrow\big{(}RQ^{-1}B_{k}\big{)}^{\prime}\big{(}RQ^{-1}V_{k }(b)Q^{-1}R^{\prime}\big{)}^{-1}\left(RQ^{-1}B_{k}\right), \tag{4.1}\] \[t_{CHS}=\frac{\sqrt{N}\Big{(}R\widehat{\beta}-r\Big{)}}{\sqrt{RV\widehat{a}r_ {CHS}(\widehat{\beta})R^{\prime}}}\Rightarrow\frac{RQ^{-1}B_{k}}{\sqrt{RQ^{-1}V _{k}(b)Q^{-1}R^{\prime}}}. \tag{4.2}\] The limits of \(Wald_{CHS}\) and \(t_{CHS}\) are similar to the fixed-\(b\) limits obtained by Kiefer and Vogelsang (2005) but have distinct differences. First, the form of \(V_{k}(b)\) depends on two variance matrices rather than one. Second, the variance matrices do not scale out of the statistics. Therefore, the fixed-\(b\) limits given by (4.1) and (4.2) are not pivotal. We propose a plug-in method for the simulation of critical values from these asymptotic random variables. For the case where \(b\) is small, the fixed-\(b\) critical values are close to \(\chi_{q}^{2}\) and \(N(0,1)\) critical values respectively. This can be seen by computing the probability limits of the asymptotic distributions as \(b\to 0\). 
In particular, using the fact that \(p\lim_{b\to 0}P\left(b,\widetilde{W}_{k}(r)\right)=I_{k}\) (see Kiefer and Vogelsang (2005)), it follows that \[p\lim_{b\to 0}V_{k}(b) =p\lim_{b\to 0}\left[\left(1-b+\frac{1}{3}b^{2}\right)\Lambda_{a}\Lambda_{a}^{\prime}+c\Lambda_{g}P\left(b,\widetilde{W}_{k}(r)\right)\Lambda_{g}^{\prime}\right]\] \[=\Lambda_{a}\Lambda_{a}^{\prime}+c\Lambda_{g}\Lambda_{g}^{\prime}=var(B_{k}).\] Therefore, it follows that \[p\lim_{b\to 0}\left[\left(RQ^{-1}B_{k}\right)^{\prime}\!\left(RQ^{-1}V_{k}(b)Q^{-1}R^{\prime}\right)^{-1}\left(RQ^{-1}B_{k}\right)\right]\] \[=\left(RQ^{-1}B_{k}\right)^{\prime}\!\left(RQ^{-1}var(B_{k})Q^{-1}R^{\prime}\right)^{-1}\left(RQ^{-1}B_{k}\right)\sim\chi_{q}^{2},\] and \[p\lim_{b\to 0}\left[\frac{RQ^{-1}B_{k}}{\sqrt{RQ^{-1}V_{k}(b)Q^{-1}R^{\prime}}}\right]=\frac{RQ^{-1}B_{k}}{\sqrt{RQ^{-1}var(B_{k})Q^{-1}R^{\prime}}}\sim N(0,1).\] In practice there will not be a substantial difference between using \(\chi_{q}^{2}\) and \(N(0,1)\) critical values and fixed-\(b\) critical values for small bandwidths. However, for larger bandwidths more reliable inference can be obtained with fixed-\(b\) critical values. ### Bias-Corrected CHS Variance Estimator We now leverage the form of the mean of the fixed-\(b\) limit of the CHS variance estimator as given by (3.12) to propose a bias-corrected version of the CHS variance estimator. The idea is simple. We can scale out the \(\left(1-b+\frac{1}{3}b^{2}\right)\) multiplicative term evaluated at \(b=M/T\) to make the CHS variance estimator an asymptotically unbiased estimator of \(\Lambda_{a}\Lambda_{a}^{\prime}+c\Lambda_{g}\Lambda_{g}^{\prime}\), the variance of \(B_{k}\equiv\Lambda_{a}z_{k}+\sqrt{c}\Lambda_{g}W_{k}(1)\). Define the bias corrected CHS variance estimators as \[\widehat{\Omega}_{BCCHS} =\left(1-\frac{M}{T}+\frac{1}{3}\left(\frac{M}{T}\right)^{2}\right)^{-1}\widehat{\Omega}_{CHS},\] \[\widehat{Var}_{BCCHS}\left(\widehat{\beta}\right) =\widehat{Q}^{-1}\widehat{\Omega}_{BCCHS}\widehat{Q}^{-1},\] and the corresponding test statistics \[Wald_{BCCHS} =\left(R\widehat{\beta}-r\right)^{\prime}\!\left(R\widehat{Var}_{BCCHS}\left(\widehat{\beta}\right)R^{\prime}\right)^{-1}\left(R\widehat{\beta}-r\right)\] \[=\left(R\widehat{\beta}-r\right)^{\prime}\!\left(R\widehat{Q}^{-1}\widehat{\Omega}_{BCCHS}\widehat{Q}^{-1}R^{\prime}\right)^{-1}\left(R\widehat{\beta}-r\right),\] \[t_{BCCHS}=\frac{R\widehat{\beta}-r}{\sqrt{R\widehat{Var}_{BCCHS}\left(\widehat{\beta}\right)R^{\prime}}}=\frac{R\widehat{\beta}-r}{\sqrt{R\widehat{Q}^{-1}\widehat{\Omega}_{BCCHS}\widehat{Q}^{-1}R^{\prime}}}.\] Using Theorem 2 we easily obtain the fixed-\(b\) limits as \[\widehat{\Omega}_{BCCHS} \Rightarrow\left(1-b+\frac{1}{3}b^{2}\right)^{-1}V_{k}(b),\] \[Wald_{BCCHS} \Rightarrow(RQ^{-1}B_{k})^{\prime}\left(RQ^{-1}\left(1-b+\frac{1}{3}b^{2}\right)^{-1}V_{k}(b)Q^{-1}R^{\prime}\right)^{-1}(RQ^{-1}B_{k}),\] \[t_{BCCHS} \Rightarrow\frac{RQ^{-1}B_{k}}{\sqrt{RQ^{-1}\left(1-b+\frac{1}{3}b^{2}\right)^{-1}V_{k}(b)Q^{-1}R^{\prime}}}.\] Notice that while the fixed-\(b\) limits are different when using the bias-corrected CHS variance estimator, they are scalar multiples of the fixed-\(b\) limits when using the original CHS variance estimator. Therefore, the fixed-\(b\) critical values of \(Wald_{BCCHS}\) and \(t_{BCCHS}\) are proportional to the fixed-\(b\) critical values of \(Wald_{CHS}\) and \(t_{CHS}\).
As long as fixed-\(b\) critical values are used, there is no practical effect on inference from using the bias-corrected CHS variance estimator. Where the bias correction matters is when \(\chi_{q}^{2}\) and \(N(0,1)\) critical values are used. In this case, the bias-corrected CHS variance can provide more accurate finite sample inference. This will be illustrated by our finite sample simulations. ### An Alternative Bias-Corrected Variance Estimator As noted by Chiang et al. (2022), the CHS variance estimator does not ensure positive-definiteness, which is also the case for the clustered estimator proposed by Cameron et al. (2011). Davezies et al. (2018) and MacKinnon et al. (2021) point out that the double-accounting adjustment term in the estimator of Cameron et al. (2011) is of small order, and removing the adjustment term has the computational advantage of guaranteeing positive semi-definiteness. Analogously, we can think of \(\widehat{\Omega}_{NW}\), as given by (2.4), as a double-accounting adjustment term. If we exclude this term, the variance estimator becomes the sum of two positive semi-definite terms and is therefore guaranteed to be positive semi-definite. Another motivation for dropping (2.4) is that, under fixed-\(b\) asymptotics, (2.4) simply contributes downward bias in the estimation of the \(\Lambda_{a}\Lambda_{a}^{\prime}\) term of \(var(B_{k})\) through the \(-b+\frac{1}{3}b^{2}\) part of \(\left(1-b+\frac{1}{3}b^{2}\right)\Lambda_{a}\Lambda_{a}^{\prime}\) in \(V_{k}(b)\). Intuitively, the Arellano cluster estimator takes care of the serial correlation introduced by \(a_{i}\), and the DK estimator takes care of the cross-sectional and cross-time dependence introduced by \(g_{t}\). From this perspective there is no need to include \(\widehat{\Omega}_{NW}\). Accordingly, we propose a variance estimator which is the sum of the Arellano variance estimator and the bias-corrected DK variance estimator (labelled as DKA hereafter) defined as \[\widehat{\Omega}_{DKA}:=\widehat{\Omega}_{A}+\left(1-b+\frac{1}{3}b^{2}\right)^{-1}\widehat{\Omega}_{DK},\] where \(\widehat{\Omega}_{A}\) and \(\widehat{\Omega}_{DK}\) are defined in (2.2) and (2.3). Notice that we bias correct the DK component so that the resulting variance estimator is asymptotically unbiased under fixed-\(b\) asymptotics. This can improve inference should \(\chi_{q}^{2}\) or \(N(0,1)\) critical values be used in practice. The following theorem gives the fixed-\(b\) limit of the scaled DKA variance estimator. **Theorem 3**.: _Suppose Assumption 2 holds for model (2.1), then as \((N,T)\rightarrow\infty\),_ \[N\widehat{\Omega}_{DKA}\Rightarrow\Lambda_{a}\Lambda_{a}^{\prime}+c\Lambda_{g}\left(1-b+\frac{1}{3}b^{2}\right)^{-1}P\left(b,\widetilde{W}_{k}\left(r\right)\right)\Lambda_{g}^{\prime}=\left(1-b+\frac{1}{3}b^{2}\right)^{-1}V_{k}(b). \tag{4.3}\] The proof of Theorem 3 can be found in Appendix A.
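Before turning to test statistics, it may help to see the estimators assembled in code. The following sketch is our own illustrative NumPy implementation, not the authors' code: it builds the Arellano, Driscoll-Kraay, and average-of-HACs pieces from the scores \(\widehat{v}_{it}\) and combines them into \(\widehat{\Omega}_{CHS}\), \(\widehat{\Omega}_{BCCHS}\), and \(\widehat{\Omega}_{DKA}\). It assumes the decomposition \(\widehat{\Omega}_{CHS}=\widehat{\Omega}_{A}+\widehat{\Omega}_{DK}-\widehat{\Omega}_{NW}\) from Section 2, the Bartlett kernel, and the \(1/(N^{2}T^{2})\) normalization used in (3.4)-(3.6); all function names are hypothetical.

```python
import numpy as np

def bartlett_weights(T, M):
    """Bartlett kernel k(|t-s|/M) as a T x T matrix of weights."""
    lags = np.abs(np.arange(T)[:, None] - np.arange(T)[None, :])
    return np.clip(1.0 - lags / M, 0.0, None)

def panel_variance_estimators(vhat, M):
    """
    vhat : array of shape (N, T, k) holding the scores v_it = x_it * u_hat_it.
    M    : Bartlett bandwidth, so b = M / T.
    Returns Omega_A, Omega_DK, Omega_NW and the CHS, BCCHS, DKA combinations,
    all with the 1/(N^2 T^2) normalization of (3.4)-(3.6) (an assumption here).
    """
    N, T, k = vhat.shape
    scale = 1.0 / (N ** 2 * T ** 2)
    K = bartlett_weights(T, M)

    # Arellano / cluster-by-i piece: outer products of within-i sums.
    Si = vhat.sum(axis=1)                      # (N, k)
    Omega_A = scale * (Si.T @ Si)

    # Driscoll-Kraay piece: Bartlett-weighted HAC of the cross-sectional sums.
    St = vhat.sum(axis=0)                      # (T, k)
    Omega_DK = scale * (St.T @ K @ St)

    # "Average of HACs" piece: individual-by-individual Bartlett HACs.
    Omega_NW = scale * sum(vhat[i].T @ K @ vhat[i] for i in range(N))

    Omega_CHS = Omega_A + Omega_DK - Omega_NW

    b = M / T
    adj = 1.0 - b + b ** 2 / 3.0               # fixed-b mean factor for the Bartlett kernel
    Omega_BCCHS = Omega_CHS / adj              # bias-corrected CHS
    Omega_DKA = Omega_A + Omega_DK / adj       # Arellano + bias-corrected DK

    return {"A": Omega_A, "DK": Omega_DK, "NW": Omega_NW,
            "CHS": Omega_CHS, "BCCHS": Omega_BCCHS, "DKA": Omega_DKA}
```

With \(\widehat{Q}=(NT)^{-1}\sum_{i,t}x_{it}x_{it}^{\prime}\), the corresponding variance estimate for \(\widehat{\beta}\) is \(\widehat{Q}^{-1}\widehat{\Omega}\,\widehat{Q}^{-1}\) for whichever \(\widehat{\Omega}\) is chosen.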
Define the statistics \(Wald_{DKA}\) and \(t_{DKA}\) analogous to \(Wald_{BCCHS}\) and \(t_{BCCHS}\) using the variance estimator for \(\widehat{\beta}\) given by \[\widehat{Var}_{DKA}\left(\widehat{\beta}\right)=\widehat{Q}^{-1}\widehat{\Omega}_{DKA}\widehat{Q}^{-1}.\] Using Theorem 3, the fixed-\(b\) limits easily follow as \[Wald_{DKA} \Rightarrow(RQ^{-1}B_{k})^{\prime}\left(RQ^{-1}\left(1-b+\frac{1}{3}b^{2}\right)^{-1}V_{k}(b)Q^{-1}R^{\prime}\right)^{-1}(RQ^{-1}B_{k}),\] \[t_{DKA} \Rightarrow\frac{RQ^{-1}B_{k}}{\sqrt{RQ^{-1}\left(1-b+\frac{1}{3}b^{2}\right)^{-1}V_{k}(b)Q^{-1}R^{\prime}}},\] which are the same as the limits of \(Wald_{BCCHS}\) and \(t_{BCCHS}\). ### Simulated fixed-\(b\) Critical Values As we have noted, the fixed-\(b\) limits of the test statistics given by (4.1) and (4.2) are not pivotal due to the nuisance parameters \(\Lambda_{a}\) and \(\Lambda_{g}\). This is also true for tests based on the BCCHS and DKA variance estimators. To estimate \(\Lambda_{a}\) and \(\Lambda_{g}\) we propose the following estimators: \[\widehat{\Lambda_{a}\Lambda_{a}^{\prime}} \equiv\frac{1}{NT^{2}}\sum_{i=1}^{N}\left(\sum_{t=1}^{T}\widehat{v}_{it}\right)\left(\sum_{s=1}^{T}\widehat{v}_{is}^{\prime}\right)\!,\] \[\widehat{\Lambda_{g}\Lambda_{g}^{\prime}} \equiv\left(1-b_{dk}+\frac{1}{3}b_{dk}^{2}\right)^{-1}\frac{1}{N^{2}T}\sum_{t=1}^{T}\sum_{s=1}^{T}k\left(\frac{|t-s|}{M_{dk}}\right)\left(\sum_{i=1}^{N}\widehat{v}_{it}\right)\left(\sum_{j=1}^{N}\widehat{v}_{js}^{\prime}\right)\!,\] where \(b_{dk}=\frac{M_{dk}}{T}\) and \(M_{dk}\) is the truncation parameter for the Driscoll-Kraay variance estimator.1 By Lemma 2 of Chiang et al. (2022), we have, Footnote 1: Note that, in principle, \(b_{dk}\) can be different from the \(b\) used for the CHS variance estimator. For simulating asymptotic critical values we used the data dependent rule of Andrews (1991) to obtain \(b_{dk}\). \[\widehat{\Lambda_{a}\Lambda_{a}^{\prime}}=\Lambda_{a}\Lambda_{a}^{\prime}+o_{p}(1),\] and by (A.10) in Appendix A, we have, as \((N,T)\rightarrow\infty\), \[\widehat{\Lambda_{g}\Lambda_{g}^{\prime}}\Rightarrow\Lambda_{g}\frac{P\left(b_{dk},\widetilde{W}_{k}\left(r\right)\right)}{1-b_{dk}+\frac{1}{3}b_{dk}^{2}}\Lambda_{g}^{\prime}.\] We can see that \(\widehat{\Lambda_{a}\Lambda_{a}^{\prime}}\) is a consistent estimator for \(\Lambda_{a}\Lambda_{a}^{\prime}\); \(\widehat{\Lambda_{g}\Lambda_{g}^{\prime}}\) is a bias-corrected estimator of \(\Lambda_{g}\Lambda_{g}^{\prime}\) with the mean of the limit equal to \(\Lambda_{g}\Lambda_{g}^{\prime}\), and the limit converges to \(\Lambda_{g}\Lambda_{g}^{\prime}\) as \(b_{dk}\to 0\). The matrices \(\widehat{\Lambda}_{a}\) and \(\widehat{\Lambda}_{g}\) are matrix square roots of \(\widehat{\Lambda_{a}\Lambda_{a}^{\prime}}\) and \(\widehat{\Lambda_{g}\Lambda_{g}^{\prime}}\) respectively, such that \(\widehat{\Lambda}_{a}\widehat{\Lambda}_{a}^{\prime}=\widehat{\Lambda_{a}\Lambda_{a}^{\prime}}\) and \(\widehat{\Lambda}_{g}\widehat{\Lambda}_{g}^{\prime}=\widehat{\Lambda_{g}\Lambda_{g}^{\prime}}\). We propose the following plug-in method for simulating the asymptotic critical values of the fixed-\(b\) limits. Details are given for a \(t\)-test with the modifications needed for Wald tests being obvious. 1. For a given data set with sample sizes \(N\) and \(T\), calculate \(\widehat{Q}\), \(\widehat{\Lambda}_{a}\) and \(\widehat{\Lambda}_{g}\). Let \(b=M/T\) where \(M\) is the bandwidth used for \(\widehat{\Omega}_{CHS}\). Let \(c=N/T\). 2.
Taking \(\widehat{Q}\), \(\widehat{\Lambda}_{a}\), \(\widehat{\Lambda}_{g}\), \(b\), \(c\), and \(R\) as given, use Monte Carlo methods to simulate critical values for the distributions \[\widehat{t}_{CHS}=\frac{R\widehat{Q}^{-1}\left(\widehat{\Lambda}_{a}z_{k}+\sqrt{c}\widehat{\Lambda}_{g}W_{k}(1)\right)}{\sqrt{R\widehat{Q}^{-1}\left(\left(1-b+\frac{1}{3}b^{2}\right)\widehat{\Lambda}_{a}\widehat{\Lambda}_{a}^{\prime}+c\widehat{\Lambda}_{g}P\left(b,\widetilde{W}_{k}(r)\right)\widehat{\Lambda}_{g}^{\prime}\right)\widehat{Q}^{-1}R^{\prime}}}, \tag{4.4}\] \[\widehat{t}_{BCCHS}=\widehat{t}_{DKA}=\left(1-b+\frac{1}{3}b^{2}\right)^{\frac{1}{2}}\widehat{t}_{CHS}. \tag{4.5}\] 3. Typically the process \(W_{k}(r)\) is approximated using scaled partial sums of a large number of i.i.d \(N(0,I_{k})\) realizations (increments) for each replication of the Monte Carlo simulation. ## 5 Monte Carlo Simulations To illustrate the finite sample performance of the various variance estimators and corresponding test statistics, we use Monte Carlo simulations with 1,000 replications to compute coverage probabilities of 95% confidence intervals (equivalently 5% significance level \(t\)-statistics) for the slope parameter in a regression with one regressor. We first focus on the POLS estimator with confidence intervals (C.I.s) computed using the following variance estimators: Eicker-Huber-White (EHW), cluster-by-\(i\) (Ci), cluster-by-\(t\) (Ct), DK, CHS, BCCHS, and DKA. For the variance estimators that require a bandwidth choice (DK, CHS, BCCHS, and DKA) we report results using the Andrews (1991) AR(1) plug-in data-dependent bandwidth (same formula for all four variance estimators) designed to minimize the approximate mean squared error of the variance estimator. We label this bandwidth \(\widehat{M}\) and its corresponding bandwidth sample size ratio \(\widehat{b}=\widehat{M}/T\). We also report results for a grid of bandwidths to show how the choice of bandwidth matters. For tests based on CHS and DKA, we use both the standard normal critical values and the simulated fixed-\(b\) critical values. The simulated critical values use 1,000 replications with scaled partial sums of 500 i.i.d \(N(0,1)\) increments to approximate the Wiener processes in the asymptotic limits. While these are relatively small numbers of replications and increments for an asymptotic critical value simulation, this was necessitated by computational considerations given the need to run an asymptotic critical value simulation for _each_ replication of the finite sample simulation. We generate data according to the simple two variable panel regression given by \[y_{it}=\beta_{0}+\beta_{1}x_{it}+u_{it},\] where the true parameters are \((\beta_{0},\beta_{1})=(1,1)\). We consider three data generating processes (DGP) for \(x_{it}\) and \(u_{it}\). To allow direct comparisons with Chiang et al. (2022) the first DGP is given by \[DGP(1): x_{it}=\omega_{\alpha}\alpha_{i}^{x}+\omega_{\gamma}\gamma_{t}^{x}+\omega_{\varepsilon}\varepsilon_{it}^{x},\] \[u_{it}=\omega_{\alpha}\alpha_{i}^{u}+\omega_{\gamma}\gamma_{t}^{u}+\omega_{\varepsilon}\varepsilon_{it}^{u},\] \[\gamma_{t}^{(j)}=\rho_{\gamma}\gamma_{t-1}^{(j)}+\widetilde{\gamma}_{t}^{(j)}\text{ for }j=x,u,\] where the latent components \(\{\alpha_{i}^{x},\alpha_{i}^{u},\varepsilon_{it}^{x},\varepsilon_{it}^{u}\}\) are each i.i.d \(N(0,1)\), and the error terms \(\widetilde{\gamma}_{t}^{(j)}\) for the AR(1) processes are i.i.d \(N(0,1-\rho_{\gamma}^{2})\) for \(j=x,u\).
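A minimal sketch of the plug-in simulation in steps 1-3 above is given below; this is our own illustrative code (hypothetical function and argument names), not the authors' implementation. It approximates \(W_{k}(r)\) by scaled partial sums of i.i.d normal increments, evaluates \(P(b,\widetilde{W}_{k}(r))\) by Riemann sums, and returns a two-sided fixed-\(b\) critical value for \(\widehat{t}_{CHS}\); by (4.5), the corresponding BCCHS/DKA critical value is obtained by multiplying by \((1-b+\frac{1}{3}b^{2})^{1/2}\).

```python
import numpy as np

def simulate_fixed_b_cv(Qhat, Lam_a, Lam_g, b, c, R, reps=1000, n_inc=500,
                        alpha=0.05, seed=0):
    """
    Simulate a two-sided fixed-b critical value for t_CHS via (4.4).
    Lam_a, Lam_g are the matrix square roots of the plug-in estimates; R is 1 x k.
    """
    rng = np.random.default_rng(seed)
    k = Qhat.shape[0]
    RQ = (R @ np.linalg.inv(Qhat)).reshape(1, k)
    m = max(int(round(b * n_inc)), 1)
    adj = 1.0 - b + b ** 2 / 3.0
    tstats = np.empty(reps)
    for rep in range(reps):
        z = rng.standard_normal(k)
        inc = rng.standard_normal((n_inc, k)) / np.sqrt(n_inc)
        W = np.cumsum(inc, axis=0)                    # W_k(r) on the grid r = j / n_inc
        r_grid = np.arange(1, n_inc + 1) / n_inc
        Wtil = W - r_grid[:, None] * W[-1]            # Brownian bridge W~_k(r)
        # P(b, W~_k) for the Bartlett kernel (Riemann approximation of Theorem 1).
        P = (2.0 / b) * (Wtil.T @ Wtil) / n_inc
        lead, lag = Wtil[:-m], Wtil[m:]
        P -= (1.0 / b) * (lead.T @ lag + lag.T @ lead) / n_inc
        num = RQ @ (Lam_a @ z + np.sqrt(c) * (Lam_g @ W[-1]))
        V = adj * (Lam_a @ Lam_a.T) + c * (Lam_g @ P @ Lam_g.T)
        den = np.sqrt(RQ @ V @ RQ.T)
        tstats[rep] = (num / den).item()
    return np.quantile(np.abs(tstats), 1.0 - alpha)   # two-sided critical value
```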
To explore the role played by the component structure representation, we consider a second DGP where the latent components enter \(x_{it}\) and \(u_{it}\) in a non-linear way: \[DGP(2): x_{it}=log(p_{it}^{(x)}/(1-p_{it}^{(x)})),\] \[u_{it}=log(p_{it}^{(u)}/(1-p_{it}^{(u)})),\] \[p_{it}^{(j)}=\Phi(\omega_{\alpha}\alpha_{i}^{(j)}+\omega_{\gamma} \gamma_{t}^{(j)}+\omega_{\varepsilon}\varepsilon_{it}^{(j)})\text{ for }j=x,u,\] where \(\Phi(\cdot)\) is the cumulative distribution function of a standard normal distribution and the latent components are generated in the same way as DGP(1). ### Simulation Results We first focus on DGP(1) to make direct comparisons to the simulation results of Chiang et al. (2022). Empirical null coverage probabilities of the confidence intervals for \(\widehat{\beta}_{1}\) are presented in Table 1. Both sample sizes are 25 and weights on the latent components are \(\omega_{\alpha}=0.25\), \(\omega_{\gamma}=0.5\), \(\omega_{\varepsilon}=0.25\). Because the time effect, \(\gamma_{t}\), has a relatively large weight, cross-sectional dependence dominates the temporal dependence. We can see that the C.I.s using EHW, Ci, and Ct suffer from a severe under-coverage problem as they fail to capture both cross-sectional and time dependence. With the time effect, \(\gamma_{t}\), being mildly persistent (\(\rho_{\gamma}=0.425\)), the DK and CHS C.I.s undercover with small bandwidths with rejections around 0.85. When using standard normal critical values, the under-coverage problem becomes more severe as \(M\) increases because of the well known downward bias in kernel variance estimators that reflects the need to estimate \(\beta_{0}\) and \(\beta_{1}\). Coverages of DK and CHS using \(\widehat{M}\) are similar to the smaller bandwidth cases which makes sense given that the average \(\widehat{M}\) across replications is 3.2. Because they are bias corrected, the BCCHS and DKA variance estimators provide coverage that is less sensitive to the bandwidth. This is particularly true for DKA. If the simulated fixed-\(b\) critical values are used, coverages are closest to 0.95 and very stable across bandwidths with DKA having the best coverage. Because the CHS variance estimator is not guaranteed to be positive definite, we report the number of times that CHS/BCCHS estimates are negative out of the 1,000 replications. In Table 1 there were no cases where CHS/BCCHS estimates are negative. Tables 2-5 give results for DGP(2) where the latent components enter in a non-linear way. Tables 2-4 have both sample sizes equal to 25 with weights across latent components being the same as DGP(1) (\(\omega_{\alpha}=\omega_{\varepsilon}=0.25\), \(\omega_{\gamma}=0.5\)). Table 2 has mild persistence in \(\gamma_{t}\) (\(\rho_{\gamma}=0.25\)). Table 3 has moderate persistence (\(\rho_{\gamma}=0.5\)) and Table 4 has strong persistence (\(\rho_{\gamma}=0.75\)). The patterns in Tables 2-4 are similar to each other and to the patterns in Table 1. C.I.s with variance estimators non-robust to both the individual and time latent components under-cover with the under-coverage problem increasing with \(\rho_{\gamma}\). 
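For concreteness, the two designs above can be generated as in the following sketch; the function name and interface are ours and are shown only to make DGP(1) and DGP(2) explicit. DGP(2) applies the standard normal CDF and a logit transform exactly as in its definition.

```python
import numpy as np
from scipy.stats import norm

def simulate_dgp(N, T, w_alpha, w_gamma, w_eps, rho_gamma, dgp=1, seed=0):
    """Generate (y, x) of shape (N, T) from DGP(1) or DGP(2) with beta0 = beta1 = 1."""
    rng = np.random.default_rng(seed)

    def latent_index():
        alpha = rng.standard_normal(N)                       # individual effect, N(0,1)
        eps = rng.standard_normal((N, T))                    # idiosyncratic component
        gamma = np.empty(T)
        gamma[0] = rng.standard_normal()                     # stationary N(0,1) start
        innov = rng.normal(scale=np.sqrt(1.0 - rho_gamma ** 2), size=T)
        for t in range(1, T):
            gamma[t] = rho_gamma * gamma[t - 1] + innov[t]
        lin = w_alpha * alpha[:, None] + w_gamma * gamma[None, :] + w_eps * eps
        if dgp == 1:
            return lin
        p = norm.cdf(lin)                                    # DGP(2): Phi(.) then logit
        return np.log(p / (1.0 - p))

    x = latent_index()
    u = latent_index()
    y = 1.0 + 1.0 * x + u
    return y, x
```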
With \(\rho_{\gamma}=0.25\) CHS has reasonable coverage (about \begin{table} \begin{tabular}{c c|c c c c c c c c|c} \multicolumn{10}{c}{} \\ & & & & & & \multicolumn{3}{c}{BC-} & & \multicolumn{3}{c|}{fixed-b c.v.} & \multicolumn{1}{c}{\# of negative} \\ \(M\) & \(b\) & EHW & Ci & Ct & DK & CHS & CHS & DKA & \multicolumn{1}{c}{* CHS} & DKA & CHS estimates \\ \hline \(\widehat{M}\) & \(\widehat{b}\) & 37.5 & 45.5 & 82.7 & 83.0 & 84.9 & 87.8 & 90.0 & 89.3 & 91.6 & 0 \\ 2 & 0.08 & 37.5 & 45.5 & 82.7 & 84.1 & 85.7 & 87.7 & 89.7 & 88.8 & 90.5 & 0 \\ 3 & 0.12 & 37.5 & 45.5 & 82.7 & 82.5 & 85.2 & 87.3 & 90.1 & 89.3 & 91.3 & 0 \\ 4 & 0.16 & 37.5 & 45.5 & 82.7 & 81.9 & 84.4 & 87.1 & 89.6 & 89.4 & 91.8 & 0 \\ 5 & 0.20 & 37.5 & 45.5 & 82.7 & 80.0 & 82.5 & 86.7 & 89.4 & 88.9 & 91.8 & 0 \\ 10 & 0.40 & 37.5 & 45.5 & 82.7 & 73.6 & 75.7 & 84.5 & 87.4 & 88.8 & 91.6 & 0 \\ 20 & 0.80 & 37.5 & 45.5 & 82.7 & 60.9 & 62.3 & 83.9 & 87.1 & 89.3 & 91.7 & 0 \\ 0.88) with small bandwidths but under-covers severely with large bandwidths. BCCHS performs much better because of the bias correction and fixed-\(b\) critical values provide some additional modest improvements. DKA has better coverage especially when fixed-\(b\) critical values are used with large bandwidths. As \(\rho_{\gamma}\) increases, all approaches have increasing under-coverage problems with DKA continuing to perform best. Table 5 has the same configuration as Table 4 but with both sample sizes increased to 50. Both BCCHS and DKA show some improvements in coverage. This illustrates the well known trade-off between the sample size and magnitude of persistence for accuracy of asymptotic approximations with dependent data. Regarding bandwidth choice, the data dependent bandwidth performs reasonably well for CHS, BCCHS, and DKA. Finally, the chances of CHS/BCCHS being negative are very small but not zero. ### Additional Simulation Results not Covered by the Theory In the main theorems, the existence of either the individual component or the common time component is required to obtain the fixed-\(b\) limits as stated. Table 6 gives results for the i.i.d data case where the aforementioned assumption is clearly violated. 
\begin{table} \begin{tabular}{c c|c c c c c c c c|c} \hline \hline & & \multicolumn{8}{c}{BC-} & \multicolumn{2}{c|}{fixed-b c.v.} & \# of negative \\ \(M\) & \(b\) & EHW & Ci & Ct & DK & CHS & CHS & DKA & CHS & DKA & CHS estimates \\ \hline \(\widehat{M}\) & \(\widehat{b}\) & 39.1 & 44.3 & 87.6 & 86.0 & 87.6 & 89.8 & 92.2 & 91.6 & 92.9 & 0 \\ 2 & 0.08 & 39.1 & 44.3 & 87.6 & 87.0 & 88.3 & 89.7 & 92.1 & 91.2 & 93.2 & 0 \\ 3 & 0.12 & 39.1 & 44.3 & 87.6 & 86.0 & 87.6 & 89.8 & 92.2 & 92.0 & 93.7 & 0 \\ 4 & 0.16 & 39.1 & 44.3 & 87.6 & 84.5 & 86.4 & 89.5 & 91.8 & 92.2 & 93.4 & 0 \\ 5 & 0.20 & 39.1 & 44.3 & 87.6 & 83.1 & 84.8 & 89.1 & 91.2 & 91.5 & 93.5 & 0 \\ 10 & 0.40 & 39.1 & 44.3 & 87.6 & 76.4 & 77.8 & 85.9 & 88.8 & 91.2 & 93.5 & 0 \\ 20 & 0.80 & 39.1 & 44.3 & 87.6 & 64.5 & 66.0 & 84.7 & 87.9 & 90.7 & 93.8 & 0 \\ 25 & 1.00 & 39.1 & 44.3 & 87.6 & 59.4 & 60.5 & 84.6 & 88.2 & 90.8 & 93.5 & 0 \\ \hline \multicolumn{10}{l}{Note: \(\widehat{M}\) ranged from 2 to 12, with an average of 3.0.} \\ \end{tabular} \end{table} Table 2: Sample Coverage Probabilities (%), Nominal Coverage 95% \begin{table} \begin{tabular}{c c|c c c c c c c c c|c} \hline \hline & & \multicolumn{8}{c}{BC-} & \multicolumn{2}{c|}{fixed-b c.v.} & \# of negative \\ \(M\) & \(b\) & EHW & Ci & Ct & DK & CHS & CHS & DKA & CHS & DKA & CHS estimates \\ \hline \(\widehat{M}\) & \(\widehat{b}\) & 36.5 & 45.3 & 79.6 & 81.1 & 83.3 & 85.8 & 88.2 & 87.9 & 89.8 & 0 \\ 2 & 0.08 & 36.5 & 45.3 & 79.6 & 81.8 & 83.3 & 85.0 & 87.6 & 86.4 & 88.6 & 0 \\ 3 & 0.12 & 36.5 & 45.3 & 79.6 & 81.2 & 83.4 & 85.9 & 88.4 & 87.7 & 89.7 & 0 \\ 4 & 0.16 & 36.5 & 45.3 & 79.6 & 80.4 & 82.4 & 85.9 & 88.4 & 88.8 & 90.5 & 0 \\ 5 & 0.20 & 36.5 & 45.3 & 79.6 & 78.6 & 81.1 & 85.6 & 88.5 & 87.5 & 90.3 & 0 \\ 10 & 0.40 & 36.5 & 45.3 & 79.6 & 71.5 & 73.9 & 83.4 & 86.3 & 88.2 & 90.4 & 0 \\ 20 & 0.80 & 36.5 & 45.3 & 79.6 & 59.5 & 61.3 & 83.2 & 86.4 & 87.6 & 90.7 & 0 \\ 25 & 1.00 & 36.5 & 45.3 & 79.6 & 54.7 & 56.6 & 82.5 & 86.8 & 88.1 & 90.9 & 0 \\ \hline \multicolumn{10}{l}{Note: \(\widehat{M}\) ranged from 2 to 20, with an average of 3.4.} \\ \end{tabular} \end{table} Table 3: Sample Coverage Probabilities (%), Nominal Coverage 95% By setting \(\omega_{\alpha}=0\), \(\omega_{\gamma}=0\) in DGP(1), we present coverage probabilities for the i.i.d case in Table 6. There are some important differences between the coverage probabilities in Table 6 relative to previous tables. First, notice that the coverages using EHW, Ci, and Ct are close to the nominal level as one would expect. Coverages of CHS are close to 0.89 for small bandwidths although severe under-coverage problems occur with larger bandwidths. BCCHS is less prone to under-coverage as the bandwidth increases and has stable coverage when fixed-\(b\) critical values are used. Interestingly, DKA over-covers regardless of the bandwidth and whether or not fixed-\(b\) critical values are used. It is not surprising that coverages of DKA and CHS C.I.s with fixed-\(b\) critical values are not as close to 0.95 due to the missing components, but it is not clear DKA over-covers whereas CHS under-covers. In Appendix B we provide the fixed-\(b\) limiting behavior of the CHS, BCCHS and DKA statistics for the case where neither the individual nor the time component are in the model and the idiosyncratic error is i.i.d. The fixed-\(b\) limits are different than those given by Theorem 2. 
In particular, DKA associated critical values simulated from the fixed-\(b\) limits under Assumption 2 are too big for i.i.d data, due to the numerator being too large (see Appendix B for details). As for CHS (and BCCHS) associated statistics, both the numerator and the denominator are too large in \begin{table} \begin{tabular}{c c|c c c c c c c c c|c} \hline \hline & & \multicolumn{8}{c}{BC-} & \multicolumn{3}{c|}{fixed-b c.v.} & \# of negative \\ \(M\) & \(b\) & EHW & Ci & Ct & DK & CHS & CHS & DKA & CHS & DKA & CHS estimates \\ \hline \(\widehat{M}\) & \(\widehat{b}\) & 34.7 & 49.5 & 64.5 & 68.3 & 71.8 & 77.3 & 81.7 & 79.5 & 83.7 & 0 \\ 2 & 0.08 & 34.7 & 49.5 & 64.5 & 69.0 & 73.7 & 75.8 & 79.2 & 76.3 & 79.6 & 0 \\ 3 & 0.12 & 34.7 & 49.5 & 64.5 & 70.2 & 74.5 & 77.1 & 81.1 & 78.9 & 82.1 & 0 \\ 4 & 0.16 & 34.7 & 49.5 & 64.5 & 70.0 & 74.2 & 77.0 & 80.9 & 79.4 & 83.5 & 0 \\ 5 & 0.20 & 34.7 & 49.5 & 64.5 & 69.4 & 73.1 & 77.4 & 80.6 & 80.2 & 84.4 & 0 \\ 10 & 0.40 & 34.7 & 49.5 & 64.5 & 62.5 & 65.1 & 75.8 & 80.9 & 80.0 & 85.5 & 1 \\ 20 & 0.80 & 34.7 & 49.5 & 64.5 & 53.9 & 56.1 & 73.8 & 80.9 & 79.1 & 85.4 & 4 \\ 25 & 1.00 & 34.7 & 49.5 & 64.5 & 48.5 & 51.0 & 74.1 & 81.0 & 80.1 & 85.1 & 4 \\ \hline \multicolumn{10}{l}{Note: \(\widehat{M}\) ranged from 2 to 25, with an average of 3.4.} \\ \multicolumn{10}{l}{\(N=T=50\); DGP(2): \(\omega_{\alpha}=\omega_{\varepsilon}=0.25\), \(\omega_{\gamma}=0.5\), \(\rho_{\gamma}=0.75\); POLS.} \\ \hline \hline \end{tabular} \end{table} Table 4: Sample Coverage Probabilities (%), Nominal Coverage 95% \begin{table} \begin{tabular}{c c|c c c c c c c c|c} \hline \hline & & \multicolumn{8}{c}{BC-} & \multicolumn{3}{c|}{fixed-b c.v.} & \# of negative \\ \(M\) & \(b\) & EHW & Ci & Ct & DK & CHS & CHS & DKA & CHS & DKA & CHS estimates \\ \hline \(\widehat{M}\) & \(\widehat{b}\) & 22.6 & 42.2 & 66.5 & 77.2 & 80.4 & 84.2 & 86.6 & 86.7 & 88.1 & 0 \\ 4 & 0.08 & 22.6 & 42.2 & 66.5 & 78.3 & 81.8 & 83.8 & 85.8 & 85.1 & 86.3 & 0 \\ 6 & 0.12 & 22.6 & 42.2 & 66.5 & 78.2 & 81.0 & 84.3 & 86.5 & 86.8 & 88.5 & 0 \\ 8 & 0.16 & 22.6 & 42.2 & 66.5 & 77.5 & 80.3 & 84.5 & 86.7 & 88.1 & 89.7 & 0 \\ 10 & 0.20 & 22.6 & 42.2 & 66.5 & 76.0 & 79.0 & 84.3 & 86.2 & 87.9 & 89.3 & 0 \\ 20 & 0.40 & 22.6 & 42.2 & 66.5 & 68.3 & 71.7 & 82.4 & 84.5 & 87.7 & 89.4 & 0 \\ 40 & 0.80 & 22.6 & 42.2 & 66.5 & 58.9 & 61.1 & 80.0 & 82.9 & 88.7 & 90.5 & 0 \\ 50 & 1.00 & 22.6 & 42.2 & 66.5 & 53.5 & 56.7 & 79.7 & 83.1 & 88.5 & 90.1 & 0 \\ \hline \multicolumn{10}{l}{Note: \(\widehat{M}\) ranged from 2 to 16, with an average of 6.1.} \\ \end{tabular} \end{table} Table 5: Sample Coverage Probabilities (%), Nominal Coverage 95% the asymptotics given in Section 4 when the data is i.i.d, and that is why it does not make a big difference when the critical values simulated from (4.4) are used for i.i.d data. We can think of the i.i.d data case as one extreme where weights are zero on the single indexed components and all weight is on the idiosyncratic component. The other extreme has all weight on the singled indexed component and zero weight on the idiosyncratic component. In the latter case we would expect the sample coverages of CHS and DKA C.I.s to work very well. For other cases with some weight on all components, we would anticipate the sample coverages to be between the two extreme cases. 
For example, given the results in Table 4 and Table 6, we would expect the sample coverage for DKA to be between the undercoverage case in Table 4 and the overcoverage case in Table 6 if we rescale the weights on \(\alpha_{i}\), \(\gamma_{t}\), and \(\varepsilon_{it}\) appropriately. That is exactly what we see in Table 7 where the relative weight on \(\varepsilon_{it}\) is increased compared to Table 4. Relative to Table 4 coverages increase for DKA and all other tests and move towards the coverages in Table 6. Up to this point temporal and cross-sectional dependence in the simulated data are only generated by the individual and common time effects and we see the theory holds up reasonably well. It is also interesting to see if the theory extends to more general dependence structure. Fixed- \begin{table} \begin{tabular}{c c|c c c c c c c c|c} \hline \hline & & & & & & & BC- & & & fixed-b c.v. & \# of negative \\ \(M\) & \(b\) & EHW & Ci & Ct & DK & CHS & CHS & DKA & CHS & DKA & CHS estimates \\ \hline \(\widehat{M}\) & \(\widehat{b}\) & 94.7 & 93.8 & 92.4 & 89.9 & 89.0 & 90.8 & 99.1 & 91.0 & 99.2 & 9 \\ 2 & 0.08 & 94.7 & 93.8 & 92.4 & 91.1 & 89.6 & 90.8 & 99.2 & 90.9 & 99.3 & 5 \\ 3 & 0.12 & 94.7 & 93.8 & 92.4 & 89.7 & 88.7 & 90.3 & 99.1 & 91.0 & 99.4 & 10 \\ 4 & 0.16 & 94.7 & 93.8 & 92.4 & 89.0 & 87.4 & 89.7 & 98.8 & 90.6 & 99.3 & 11 \\ 5 & 0.20 & 94.7 & 93.8 & 92.4 & 87.6 & 85.5 & 89.6 & 98.8 & 89.7 & 99.0 & 19 \\ 10 & 0.40 & 94.7 & 93.8 & 92.4 & 78.5 & 79.7 & 87.2 & 98.5 & 88.0 & 99.2 & 50 \\ 20 & 0.80 & 94.7 & 93.8 & 92.4 & 66.8 & 67.8 & 85.9 & 98.8 & 87.0 & 99.4 & 66 \\ 25 & 1.00 & 94.7 & 93.8 & 92.4 & 62.8 & 63.6 & 85.5 & 98.6 & 86.9 & 99.3 & 68 \\ \hline \multicolumn{10}{l}{Note: \(\widehat{M}\) ranged from 2 to 7, with an average of 3.0.} \\ \end{tabular} \end{table} Table 6: Sample Coverage Probabilities (%), Nominal Coverage 95% \(N=T=25\), i.i.d: DGP (1) with \(\omega_{\alpha}=\omega_{\gamma}=0\) and \(\omega_{\varepsilon}=1\); POLS. \begin{table} \begin{tabular}{c c|c c c c c c c c|c} \hline \hline & & & & & & BC- & & & fixed-b c.v. & \# of negative \\ \(M\) & \(b\) & EHW & Ci & Ct & DK & CHS & CHS & DKA & CHS & DKA & CHS estimates \\ \hline \(\widehat{M}\) & \(\widehat{b}\) & 76.3 & 76.4 & 81.4 & 82.2 & 82.9 & 85.7 & 94.2 & 86.1 & 94.0 & 2 \\ 4 & 0.08 & 76.3 & 76.4 & 81.4 & 82.5 & 83.2 & 84.6 & 93.4 & 84.3 & 92.7 & 0 \\ 6 & 0.12 & 76.3 & 76.4 & 81.4 & 81.9 & 82.4 & 85.2 & 94.0 & 85.3 & 93.3 & 2 \\ 8 & 0.16 & 76.3 & 76.4 & 81.4 & 81.2 & 81.2 & 84.8 & 93.7 & 85.6 & 94.0 & 5 \\ 10 & 0.20 & 76.3 & 76.4 & 81.4 & 79.5 & 79.9 & 83.6 & 93.9 & 85.2 & 94.4 & 6 \\ 20 & 0.40 & 76.3 & 76.4 & 81.4 & 73.1 & 73.2 & 80.9 & 93.3 & 83.1 & 94.0 & 16 \\ 40 & 0.80 & 76.3 & 76.4 & 81.4 & 62.3 & 62.9 & 80.4 & 92.4 & 82.3 & 94.2 & 27 \\ 50 & 1.00 & 76.3 & 76.4 & 81.4 & 57.3 & 58.1 & 80.4 & 92.6 & 83.1 & 94.4 & 27 \\ \hline \multicolumn{10}{l}{Note: \(\widehat{M}\) ranged from 2 to 12, with an average of 3.6.} \\ \end{tabular} \end{table} Table 7: Sample Coverage Probabilities (%), Nominal Coverage 95% \(N=T=25\); DGP(2): \(\omega_{\alpha}=0.167\), \(\omega_{\gamma}=0.333\), \(\omega_{\varepsilon}=0.5,\rho_{\gamma}=0.75\); POLS. asymptotic theory for data without the individual and time components but with general stationary dependence in \(\varepsilon_{it}\) is developed for tests based on the Driscoll-Kraay variance estimator in Vogelsang (2012) including the large-\(N\) and large-\(T\) case. 
In contrast, Vogelsang (2012) only provides fixed-\(b\) results for the Arellano and the "average of HACs" variance estimators for the fixed-\(N\), large-\(T\) case. The extension to the large-\(N\) and large-\(T\) case appears challenging and is beyond the scope of this paper. A reasonable conjecture is that the i.i.d results we give in Appendix B extend to the case where \(\alpha_{i}\) and \(\gamma_{t}\) are not in the data generating process and \(\varepsilon_{it}\) has stationary dependence across both \(i\) and \(t\). To shed some light on this conjecture we extend DGP(2) to allow \(\varepsilon_{it}\) to be dependent in both dimensions as follows: \[\varepsilon_{it}^{x}=\rho_{\varepsilon 1}\varepsilon_{i,t-1}^{x}+\rho_{\varepsilon 2}\varepsilon_{i-1,t}^{x}+\epsilon_{it}^{x},\ \ \varepsilon_{0,0}^{x}=\epsilon_{0,0}^{x}, \tag{5.1}\] \[\varepsilon_{it}^{u}=\rho_{\varepsilon 1}\varepsilon_{i,t-1}^{u}+\rho_{\varepsilon 2}\varepsilon_{i-1,t}^{u}+\epsilon_{it}^{u},\ \ \varepsilon_{0,0}^{u}=\epsilon_{0,0}^{u}, \tag{5.2}\] where \((\epsilon_{it}^{(x)},\epsilon_{it}^{(u)})\) for \(i=0,...,N\) and \(t=0,...,T\) are each i.i.d \(N(0,1)\) random vectors and mutually independent. This DGP has autoregressive dependence in both the cross-section and time dimensions. Notice that we assume the data has an ordering in the cross-section dimension. We realize this structure would not be typical in practice, but the structure nonetheless generates data with sufficient dependence in cross-section and time dimensions to give some finite sample evidence regarding our conjecture. To make good comparisons, we first set the weights on \(\alpha_{i}\) and \(\gamma_{t}\) to be zero and the weight on \(\varepsilon_{it}\) to be 1 in Tables 8 and 9 while varying the AR(1) parameters in (5.1) and (5.2). In Table 8, we see that when \(\rho_{\varepsilon 1}=\rho_{\varepsilon 2}=0.25\), there is not much dependence in either dimension, and the coverages for EHW, \(C_{i}\), and \(C_{t}\) are all above 90% while DKA over-covers. Increasing the autocorrelation parameters from 0.25 to 0.45 in Table 9 shows that with stronger dependence in the data, coverages decrease as expected because the autocorrelation is positive. While not reported, if we increase both \(N\) and \(T\), the coverages in Table 9 increase towards the i.i.d case as conjectured. Table 10 gives results where the weights are the same as in Table 2 but \(\varepsilon_{it}\) has the relatively strong autocorrelation as in Table 9. We would expect the coverages in Table 10 to be between those in Table 2 and Table 9, respectively, and that is exactly what we see. ### Some Simulation Results for TWFE A popular alternative to the pooled OLS estimator is the additive TWFE estimator where individual and time period dummies are included in (2.1). It is well known that individual and time dummies will project out any latent individual or time components that enter linearly (as would be the case in DGP(1)), leaving only variation from the idiosyncratic component \(e_{it}\). Under a general component structure representation \((y_{it},x^{\prime}_{it},u_{it})^{\prime}=f(\alpha_{i},\gamma_{t},\varepsilon_{it})\), the TWFE estimator removes the individual and time components in \(x_{it}\) and \(u_{it}\) with any possible remaining dependence being in the idiosyncratic errors.2
In this case, we would expect the sample coverages of CHS and DKA to \begin{table} \begin{tabular}{c c|c c c c c c c c|c} & & & & & \multicolumn{4}{c}{BC-} & \multicolumn{4}{c}{fixed-b c.v.} & \# of negative \\ \(M\) & \(b\) & EHW & Ci & Ct & DK & CHS & CHS & DKA & CHS & DKA & CHS estimates \\ \hline \(\widehat{M}\) & \(\widehat{b}\) & 44.2 & 51.8 & 84.5 & 84.3 & 85.4 & 87.2 & 89.6 & 87.8 & 90.3 & 0 \\ 2 & 0.08 & 44.2 & 51.8 & 84.5 & 85.4 & 86.4 & 87.6 & 89.7 & 87.9 & 89.8 & 0 \\ 3 & 0.12 & 44.2 & 51.8 & 84.5 & 84.5 & 85.7 & 87.6 & 89.5 & 88.2 & 90.7 & 0 \\ 4 & 0.16 & 44.2 & 51.8 & 84.5 & 82.9 & 84.4 & 86.9 & 89.5 & 87.6 & 90.8 & 0 \\ 5 & 0.20 & 44.2 & 51.8 & 84.5 & 81.5 & 82.9 & 86.6 & 89.2 & 88.3 & 90.5 & 0 \\ 10 & 0.40 & 44.2 & 51.8 & 84.5 & 76.0 & 76.4 & 84.3 & 88.6 & 88.2 & 91.3 & 1 \\ 20 & 0.80 & 44.2 & 51.8 & 84.5 & 64.8 & 66.2 & 82.7 & 87.4 & 88.2 & 91.5 & 2 \\ 25 & 1.00 & 44.2 & 51.8 & 84.5 & 59.4 & 62.0 & 83.0 & 86.9 & 88.0 & 91.8 & 1 \\ \hline \end{tabular} Note: \(\widehat{M}\) ranged from 2 to 9, with an average of 3.0. \end{table} Table 10: Sample Coverage Probabilities (%), Nominal Coverage 95% \(N=T=25\); DGP(2): \(\omega_{\alpha}=\omega_{\varepsilon}=0.25\), \(\omega_{\gamma}=0.5\), \(\rho_{\gamma}=0.25\); \(\rho_{\varepsilon 1}=\rho_{\varepsilon 2}=0.45\); POLS. \begin{table} \begin{tabular}{c c|c c c c c c c c c|c} & & & & & & \multicolumn{4}{c}{BC-} & \multicolumn{4}{c}{fixed-b c.v.} & \# of negative \\ \(M\) & \(b\) & EHW & Ci & Ct & DK & CHS & CHS & DKA & CHS & DKA & CHS estimates \\ \hline \(\widehat{M}\) & \(\widehat{b}\) & 56.3 & 66.3 & 65.5 & 73.4 & 73.0 & 78.1 & 89.4 & 81.0 & 90.1 & 0 \\ 2 & 0.08 & 56.3 & 66.3 & 65.5 & 72.5 & 75.7 & 77.0 & 87.0 & 77.9 & 88.1 & 0 \\ 3 & 0.12 & 56.3 & 66.3 & 65.5 & 74.6 & 75.5 & 78.7 & 88.6 & 79.2 & 89.0 & 0 \\ 4 & 0.16 & 56.3 & 66.3 & 65.5 & 74.5 & 74.8 & 77.9 & 89.4 & 79.0 & 89.8 & 0 \\ 5 & 0.20 & 56.3 & 66.3 & 65.5 & 73.6 & 74.4 & 78.2 & 89.5 & 81.0 & 89.9 & 3 \\ 10 & 0.40 & 56.3 & 66.3 & 65.5 & 68.9 & 68.6 & 77.0 & 89.0 & 80.1 & 89.4 & 12 \\ 20 & 0.80 & 56.3 & 66.3 & 65.5 & 57.5 & 57.9 & 76.9 & 87.3 & 79.5 & 89.6 & 21 \\ 25 & 1.00 & 56.3 & 66.3 & 65.5 & 53.0 & 53.4 & 76.4 & 87.3 & 79.1 & 89.3 & 21 \\ \hline \end{tabular} Note: \(\widehat{M}\) ranged from 2 to 17, with an average of 5.9. \end{table} Table 9: Sample Coverage Probabilities (%), Nominal Coverage 95% \(N=T=25\); DGP(2): \(\omega_{\alpha}=\omega_{\gamma}=0\), \(\omega_{\varepsilon}=1\), \(\rho_{\varepsilon 1}=0.45\), \(\rho_{\varepsilon 2}=0.45\); POLS. be similar to the i.i.d case in Table 6. Footnote 10: The \(\widehat{M}\) ranged from 2 to 16, with an average of 4.6. Table 11 uses the same DGP and parameter values as Table 4 but provides results for C.I.s constructed using the TWFE estimator of \(\beta_{1}\). While the individual and time components enter nonlinearly in DGP(2), the dummies project out a substantial portion of their variability. Indeed, we observe that the sample coverage probabilities in Table 11 are very similar to those in Table 6, while slightly smaller. Because the TWFE estimator only removes the single indexed components, intuitively it would not help remove the dependence in \(x_{it}u_{it}\) when the dependence is generated by the double \(AR(1)\) process as in Tables 8 and 9. Notice that in Table 10, the dependence is introduced by both the single indexed components as in Table 2_and_ the dependent idiosyncratic component as in Table 9. Suppose we replace POLS in Table 10 with TWFE. 
Because TWFE removes most of the variation from the single indexed components, we should expect to see coverages as in Table 9 where POLS is used and the single indexed components are not included. Table 12 gives the results and, as expected, coverages are very similar to those in Table 9. Generally speaking, the TWFE estimator removes much (or all) of the dependence introduced by the individual and time components under the component structure representation and pushes finite sample coverages towards the i.i.d case in which case DKA tend to over-cover unless there is strong positive dependence (relative to the sample sizes) in the idiosyncratic component which generates under-coverage in finite samples. While our simulations results suggest some conjectures about POLS and TWFE test statistics when there is dependence in the idiosyncratic component, the theory appears nontrivial and is left as part of ongoing and future research. ## 6 Empirical Application We illustrate how the choice of variance estimator affects \(t\)-tests and confidence intervals using an empirical example from Thompson (2011). We test the predictive power of market concentration on the profitability of industries where the market concentration is measured by the Herfindahl-Hirschman Index (HHI, hereafter). This example features data where dependence exists in both cross-sectional and temporal dimensions with common shocks being correlated across time. Specifically, consider the following linear regression model of profitability measured by \(ROA_{m,t}\), the ratio of return on total assets for industry \(m\) at time \(t\): \[ROA_{m,t}=\beta_{0}+\beta_{1}ln(HHI_{m,t-1})+\beta_{2}PB_{m,t-1}+\beta_{3}DB_{ m.t-1}+\beta_{4}\overline{ROA}_{t-1}+u_{m,t}\] where \(PB\) is the price-to-book ratio, \(DB\) is the dividend-to-book ratio, and \(\overline{ROA}\) is the market average \(ROA\) ratio. The data set used to estimate the model is composed of 234 industries in the US from 1972 to 2021. We obtain the annual level firm data from Compustat and aggregate it to industry level based on Standard Industry Classification (SIC) codes. The details of data construction can be found in Section 6 and Appendix B of Thompson (2011). In Table 13, we present the POLS estimates for the five parameters and \(t\)-statistics (with the null \(H_{0}:\beta_{j}=0\) for each \(j=1,2,...,5\)) based on the various variance estimators. We use the data \begin{table} \begin{tabular}{c c c c c c c c c} \hline \hline & POLS & \multicolumn{6}{c}{t-statistics} \\ \cline{2-10} Regressors & Estimates & EHW & \(Cluster_{i}\) & \(Cluster_{t}\) & DK & CHS & BCCHS & DKA \\ \hline \(ln(HHI_{m,t-1})\) & 0.0097 & 12.42 & 3.93 & 10.57 & 6.40 & 3.76 & 3.58 & 3.30 \\ \(Price/Book_{m,t-1}\) & -0.0001 & -0.15 & -0.09 & -0.13 & -0.07 & -0.07 & -0.06 & -0.05 \\ \(DIV/Book_{m,t-1}\) & 0.0167 & 6.89 & 3.93 & 3.81 & 2.04 & 1.89 & 1.79 & 1.74 \\ Market \(ROA_{t-1}\) & 0.6129 & 32.31 & 14.47 & 12.05 & 12.06 & 10.27 & 9.76 & 8.99 \\ Intercept & -0.0564 & -8.94 & -2.76 & -7.52 & -4.69 & -2.67 & -2.53 & -2.35 \\ \hline \multicolumn{10}{l}{Notes: \(R^{2}=0.117\), \(\widehat{M}=5\).} \\ \end{tabular} \end{table} Table 13: Industry Profitability, 1972-2021: POLS estimates and t-statistics dependent bandwidth, \(\widehat{M}\), in all relevant cases. We can see the \(t\)-statistics vary non-trivially across different variance estimators. 
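For reference, the \(t\)-statistics and confidence intervals reported in the empirical tables can be assembled from any of the variance estimators with a short helper like the one below; this is our own sketch, reusing the hypothetical \(\widehat{\Omega}\) and critical-value routines sketched earlier, and `cv` may be either a standard normal or a simulated fixed-\(b\) critical value.

```python
import numpy as np

def t_and_ci(beta_hat, Qhat, Omega, R, r0, cv):
    """t-statistic for H0: R beta = r0 and the implied confidence interval,
    given a variance piece Omega (e.g. Omega_CHS) and a critical value cv."""
    Qinv = np.linalg.inv(Qhat)
    se = float(np.sqrt(R @ Qinv @ Omega @ Qinv @ R.T))
    est = float(R @ beta_hat)
    return (est - r0) / se, (est - cv * se, est + cv * se)
```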
Notice that the estimated coefficient of \(DIV/Book\) is significant at the 5% significance level in a two-sided test when EHW, cluster-by-industry, cluster-by-time, and DK variances are used, while it is only marginally significant when CHS is used and marginally insignificant when BCCHS and DKA are used. In Table 14 we present 95% confidence intervals. For CHS/BCCHS and DKA we give confidence intervals using both normal and simulated fixed-\(b\) critical values. For the bias corrected variance estimators (BCCHS and DKA) the differences in confidence intervals between normal and fixed-\(b\) critical values are not large, consistent with our simulation results. In Table 15, we include the results for the TWFE estimator to see how the inclusion of firm level and time period dummies matters in practice. The presence of the dummies results in the intercept and \(\overline{ROA}_{t-1}\) being dropped from the regression. The \(t\)-statistics based on CHS, BCCHS, and DKA are very similar to those based on DK except for the regressor \(ln(HHI_{m,t-1})\), where they are more similar to the \(t\)-statistic using the cluster-by-industry variance estimator. This makes sense since the industry cluster dependence is more relevant to \(ln(HHI_{m,t-1})\), and the cluster-by-industry, CHS, BCCHS, and DKA estimators can well capture the within-industry dependence while the DK estimator is more suitable for across-industry dependence. The 95% confidence intervals for the TWFE case are presented in Table 16. Confidence intervals tend to be wider with fixed-\(b\) critical values. This is expected given that fixed-\(b\) critical values are larger in magnitude than standard normal critical values. ## 7 Summary and Discussions This paper investigates the fixed-\(b\) asymptotic properties of the CHS variance estimator and tests. An important algebraic observation is that the CHS variance estimator can be expressed as a linear combination of the cluster variance estimator, "HAC of averages" estimator, and "average of HACs" estimator. Building upon this observation, we derive fixed-\(b\) asymptotic results for the CHS variance estimator when both the sample sizes \(N\) and \(T\) tend to infinity. Our analysis reveals the presence of an asymptotic bias in the CHS variance estimator which depends on the ratio of the bandwidth parameter, \(M\), to the time sample size, \(T\). To address this bias, we propose two simple bias correction approaches leading to two new variance estimators. Because fixed-\(b\) limits of tests based on the CHS variance estimator and bias-corrected versions are not pivotal, we propose a straightforward plug-in method for simulating the fixed-\(b\) asymptotic critical values. In a Monte Carlo simulation study, we compare the sample coverage probabilities of confidence intervals constructed using the CHS variance estimator and bias-corrected versions, along with other common variance estimators. When normal critical values are used, tests based on the bias corrected variances show significant improvements in the finite sample coverage probabilities. Furthermore, when the simulated fixed-\(b\) critical values are employed, additional improvements in finite sample coverage probabilities are obtained. Finally, it is important to acknowledge some limitations and to highlight areas of future research. We notice that the finite sample coverage probabilities of all confidence intervals exhibit under-coverage problems when the autocorrelation of the time effects becomes strong relative to the time sample size.
In such cases, potential improvements resulting from the fixed-\(b\) adjustment are limited. Part of this limitation arises because the test statistics are not asymptotically pivotal, necessitating plug-in simulation of critical values. The estimation uncertainty in the plug-in estimators can introduce sampling errors to the simulated critical values that can be acute when persistence is strong. Finding a variance estimator that results in a pivotal fixed-\(b\) limit would address this problem, although this appears to be challenging. An empirically relevant question is to what extent our results can be extended to the TWFE estimator. As we discussed in the paper and illustrated in our simulations, when the dependence is introduced only through the individual and time components, the TWFE estimator renders the clustering adjustment in the standard errors unnecessary. Therefore, any useful theoretical results for the TWFE estimator should, inevitably, be studied using a DGP where the dependence is generated by not only clustering effects but also dependence in the idiosyncratic component that is not eliminated by the TWFE estimator. Obtaining fixed-\(b\) results for CHS/BCCHS and DKA in this setting is beyond the scope of this paper and also appears challenging. However, patterns in our simulations point to some theoretical conjectures that are part of ongoing and future research. A second empirically relevant case we do not address in this paper is the unbalanced panel data case. There are several challenges in establishing formal fixed-\(b\) asymptotic results for unbalanced panels. Unbalanced panels have time periods that are potentially different across individuals, and this potentially complicates the choice of bandwidths for the individual-by-individual variance estimators in the average of HACs part of the variance. For the Driscoll-Kraay part of the variance estimators, the averaging by time period will have potentially different cross-section sample sizes for each time period. Theoretically obtaining fixed-\(b\) results for unbalanced panels depends on how the missing data is modeled. For example, one might conjecture that if missing observations in the panel occur randomly (missing at random), then extending fixed-\(b\) theory would be straightforward. While that is true in pure time series settings (see Rho and Vogelsang 2019), the presence of the individual and time random components in the panel setting complicates things due to the fact that the asymptotic behavior of the components in the partial sums is very different from the balanced panel case. Obtaining useful results for the unbalanced panel case is challenging and is a topic of ongoing research.
2309.07434
Exact and local compression of quantum bipartite states
We study an exact local compression of a quantum bipartite state; that is, applying local quantum operations to the state to reduce the dimensions of Hilbert spaces while perfectly maintaining the correlation. We provide a closed formula for calculating the minimal achievable dimensions, provided as a minimization of the Schmidt rank of a particular pure state constructed from that state. Numerically more tractable upper and lower bounds of the rank were also obtained. Subsequently, we consider the exact compression of quantum channels as an application. Using this method, a post-processing step that can reduce the output dimensions while retaining information on the output of the original channel can be analyzed.
Kohtaro Kato
2023-09-14T05:32:36Z
http://arxiv.org/abs/2309.07434v1
# Exact and local compression of quantum bipartite states ###### Abstract We study an exact local compression of a quantum bipartite state; that is, applying local quantum operations to the state to reduce the dimensions of Hilbert spaces while perfectly maintaining the correlation. We provide a closed formula for calculating the minimal achievable dimensions, provided as a minimization of the Schmidt rank of a particular pure state constructed from that state. Numerically more tractable upper and lower bounds of the rank were also obtained. Subsequently, we consider the exact compression of quantum channels as an application. Using this method, a post-processing step that can reduce the output dimensions while retaining information on the output of the original channel can be analyzed. ## I Introduction Quantum data compression is one of the most fundamental quantum information processing. Its concept is analogous to Shannon's classical data compression and aims to reduce the dimensions of the storage of quantum states while minimizing information loss. Various approaches to quantum compression, primarily focusing on achieving asymptotically or approximately accurate representations of quantum states, have been proposed previously. These protocols have yielded valuable insights into the trade-offs between compression efficiency, fidelity, and additional resources. Nevertheless, there remains room to investigate _exact_ compression of quantum states, where the compressed representation fully retains the information content of the original state. This compression type has gained interest in information theory and the condition is known as the sufficiency of statistics [1; 2]. In classical statistics, statistics are _sufficient_ concerning a given statistical model if they are as informative as the original model. It is well known that the minimal random variable for describing a statistical model (a family of probability distributions) is given as the minimal sufficient statistics associated with it [3]. The concept of sufficiency has been extended to quantum systems [4; 5], where the concept of sufficient statistics is replaced by that of sufficient subalgebras. In this work, we comprehensively study the local and exact compressions of bipartite quantum states. We aim to uncover the fundamental limitations and possibilities of compressing quantum states while preserving their exact correlations. In other words, we consider quantum operations in each subsystem with a smaller output Hilbert space that can be reverted by another quantum operation. This type of data compression is an exact and noiseless one-shot quantum data compression of general mixed state sources without side information or entanglement assistance. An asymptotic scenario of local compressions is investigated in Ref. [6], and the optimal rate is given by the entropy of the state restricted on the subalgebra defined via the Koashi-Imoto decomposition. However, the explicit calculation of the Koashi-Imoto decomposition is highly complicated, and thus, no closed formula for the optimal rate has been obtained so far. We obtain a closed formula to calculate the minimal dimension of the output Hilbert spaces. The formula is obtained by minimizing the Schmidt rank (i.e., the rank of the reduced matrix) over unitarily related states. As a corollary, we provide additional tractable lower and upper bounds for the minimal dimensions. 
Our result is based on the recent development of quantum sufficiency [7] and operator algebra quantum error correction [8; 9]. The remainder of this manuscript is structured as follows. Section II provides relevant background concepts and a summary of the main results. Section III provides a new characterization of the minimally sufficient subalgebra in our setting, which plays an important role in validation. Section IV discusses the validation of the main results. Section V presents applications of the main results. Finally, Section VI concludes the paper by summarizing our findings and outlining potential directions for future research. ## II Exact local compression of bipartite states ### Notations Throughout this study, we only consider finite-dimensional Hilbert spaces. For a set of matrices \(S\), \(\mathrm{Alg}\{S\}\) denotes the \(C^{*}\)-algebra generated by \(S\). \(Mat(\mathcal{H},\mathbb{C})\) denotes the matrix algebra of Hilbert space \(\mathcal{H}\). \(I_{A}\) is the identity operator on system \(A\) and \(\tau_{A}=I_{A}/d_{A}\) is the completely mixed state on \(A\), where \(d_{A}=\mathrm{dim}\mathcal{H}_{A}\). We define \(|\mathcal{I}\rangle_{X\bar{X}}:=\sum_{k}|\phi^{k}\rangle_{X}|\phi^{k}\rangle_{ \bar{X}}\) in an orthogonal basis \(\{|\phi^{k}\rangle\}_{k}\) for \(\mathcal{H}_{X}\cong\mathcal{H}_{\bar{X}}\). Similarly, the normalized maximally entangled state is denoted as \(|\Psi\rangle_{X\bar{X}}:=1/\sqrt{d_{X}}|\mathcal{I}\rangle_{X\bar{X}}\). For \(O\in\mathcal{B}(\mathcal{H}_{A})\), \(O^{T_{A}}\) is the transposition, and \(\bar{O}\) is the complex conjugate with a fixed basis of \(A\). We will omit the tensor products with identities. For a completely positive and trace-preserving (CPTP) map (i.e., a quantum channel), \(\mathcal{E}:\mathcal{B}(\mathcal{H})\rightarrow\mathcal{B}(\mathcal{K})\), \(\mathcal{E}^{\dagger}\) de notes its Hilbert-Schmidt dual, and \(\mathcal{F}(\mathcal{E}^{\dagger})\) is the fixed-point \(C^{*}\)-algebra of \(\mathcal{E}^{\dagger}\). \(\mathcal{E}^{c}\) denotes a complementary channel of \(\mathcal{E}\). We also define the _correctable algebra_\(\mathcal{A}_{\mathcal{E}}\) of \(\mathcal{E}\)[8; 9] on \(\mathcal{H}\) as follows: \[\mathcal{A}_{\mathcal{E}} :=(\{E_{a}^{\dagger}E_{b}\}_{a,b})^{\prime}\] \[=\left\{X\in\mathcal{B}(\mathcal{H})\mid[\,E_{a}^{\dagger}E_{b}, X]=0\ \forall a,b\right\}\,.\] This is the unital \(C^{*}\)-subalgebra of \(\mathcal{B}(\mathcal{H})\) that describes perfectly preserved information under the action of \(\mathcal{E}\). For a state \(\rho>0\), we denote the modular transformation as \(\Delta_{\rho}^{t}(\cdot):=\rho^{it}\cdot\rho^{-it}\) and \(\Gamma_{\rho}(\cdot):=\rho^{\frac{1}{2}}\cdot\rho^{\frac{1}{2}}\). For a given CPTP map \(\mathcal{E}\), we denote the Petz recovery map [4; 5] with respect to \(\rho>0\) as follows: \[\mathcal{R}^{\rho,\mathcal{E}}(\cdot) :=\rho^{\frac{1}{2}}\mathcal{E}^{\dagger}\left(\mathcal{E}(\rho)^ {-\frac{1}{2}}\cdot\mathcal{E}(\rho)^{-\frac{1}{2}}\right)\rho^{\frac{1}{2}} \tag{1}\] \[=\Gamma_{\rho}\circ\mathcal{E}^{\dagger}\circ\Gamma_{\mathcal{E}( \rho)}^{-1}\,.\] For a bipartite pure state \(\psi_{AB}=|\psi\rangle\langle\psi|_{AB}\), \(\psi_{A}:=\mathrm{tr}_{B}\psi_{AB}\) is the reduced state on \(A\), \(S(A)_{\psi}=-\mathrm{tr}\psi_{A}\log\psi_{A}\) is the entanglement entropy and \(\mathrm{SchR}(A:B)_{\psi}\) is the Schmidt rank of \(|\psi\rangle_{AB}\). 
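The Petz recovery map in Eq. (1) is straightforward to evaluate numerically when the channel is given in Kraus form. The following NumPy sketch (our own helper names, shown only for illustration) applies \(\mathcal{R}^{\rho,\mathcal{E}}\) to an operator \(Y\):

```python
import numpy as np

def herm_power(M, p, tol=1e-12):
    """M**p for a Hermitian positive semidefinite matrix (zero eigenvalues stay zero)."""
    w, U = np.linalg.eigh(M)
    wp = np.array([x ** p if x > tol else 0.0 for x in w], dtype=complex)
    return (U * wp) @ U.conj().T

def apply_channel(kraus, X):
    """E(X) = sum_a K_a X K_a^dagger."""
    return sum(K @ X @ K.conj().T for K in kraus)

def apply_adjoint(kraus, Y):
    """Hilbert-Schmidt dual: E^dagger(Y) = sum_a K_a^dagger Y K_a."""
    return sum(K.conj().T @ Y @ K for K in kraus)

def petz_recovery(kraus, rho, Y):
    """R^{rho,E}(Y) = rho^(1/2) E^dagger( E(rho)^(-1/2) Y E(rho)^(-1/2) ) rho^(1/2), cf. Eq. (1)."""
    E_rho = apply_channel(kraus, rho)
    inv_sqrt = herm_power(E_rho, -0.5)
    mid = apply_adjoint(kraus, inv_sqrt @ Y @ inv_sqrt)
    return herm_power(rho, 0.5) @ mid @ herm_power(rho, 0.5)
```

A convenient numerical check: whenever \(\mathcal{E}(\rho)\) has full rank, `petz_recovery(kraus, rho, apply_channel(kraus, rho))` returns `rho`, reflecting the defining property \(\mathcal{R}^{\rho,\mathcal{E}}(\mathcal{E}(\rho))=\rho\).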
\(\mathrm{SchR}(A:B)_{\psi}\) is equal to the rank of the reduced state, \(\mathrm{rank}(\psi_{A})=\mathrm{rank}(\psi_{B})\). ### Exact local compression of bipartite states We are interested in the quantum bipartite state \(\rho_{AB}\) in the finite-dimensional Hilbert space \(\mathcal{H}_{AB}=\mathcal{H}_{A}\otimes\mathcal{H}_{B}\). We assume without loss of generality that \(\rho_{A},\rho_{B}>0\). **Definition 1**.: _We say a CPTP-map \(\mathcal{E}_{B\rightarrow\tilde{B}}:\mathcal{B}(\mathcal{H}_{\tilde{B}}) \rightarrow\mathcal{B}(\mathcal{H}_{\tilde{B}})\) is an exact local compression of \(\rho_{AB}\) on \(B\), if there exists a CPTP-map \(R_{\tilde{B}\rightarrow\tilde{B}}\) satisfying_ \[R_{\tilde{B}\rightarrow\tilde{B}}\circ\mathcal{E}_{B\rightarrow\tilde{B}}( \rho_{AB})=\rho_{AB}. \tag{2}\] _We say that compression \(\mathcal{E}_{B\rightarrow\tilde{B}}\) is nontrivial if \(d_{\tilde{B}}<d_{B}\). We define the exact compression \(\mathcal{E}_{A\rightarrow\tilde{A}}\) on \(A\) similarly._ This study aims to calculate the minimal dimensions \(d_{\tilde{A}},d_{\tilde{B}}\) of the exact local compressions. It is sufficient to consider only the compressions on \(B\) (or \(A\)) because of the symmetry of the problem. ### Koashi-Imoto decomposition In Ref. [10], Koashi and Imoto analyzed the structure of quantum operations that remains a set of classically labeled quantum states \(\{\rho_{B}^{x}\}_{x\epsilon_{\chi}}\) unchanged. They found that for any set \(\{\rho_{B}^{x}\}\), there is a unique decomposition of \(\mathcal{H}_{B}\) that completely characterizes the operations preserving \(\{\rho_{B}^{x}\}\). This idea was generalized to a fully quantum setup in Ref. [11]. It has been shown that for any (finite-dimensional) quantum bipartite state \(\rho_{AB}\), there is a decomposition of the Hilbert space such that \[\mathcal{H}_{B} \cong\bigoplus_{i}\mathcal{H}_{B_{i}^{L}}\otimes\mathcal{H}_{B_{i }^{R}} \tag{3}\] \[\rho_{AB} =\bigoplus_{i}p_{i}\rho_{AB_{i}^{L}}\otimes\omega_{B_{i}^{R}}\,, \tag{4}\] where \(\{p_{i}\}\) is a probability distribution and \(\rho_{AB_{i}^{L}}\) and \(\omega_{B_{i}^{R}}\) are the states of \(\mathcal{H}_{A}\otimes\mathcal{H}_{B_{i}^{L}}\) and \(\mathcal{H}_{B_{i}^{R}}\), respectively. **Definition 2**.: _A decomposition in Eqs. (3)-(4) is said to be minimal if for any CPTP-map \(\Lambda_{B}\) satisfying_ \[\Lambda_{B}(\rho_{AB})=\rho_{AB}\,,\] _any \(\Lambda\)'s Stinespring dilation isometry \(V_{B\to BE}\) defined by_ \[\Lambda_{B}(\cdot)=\mathrm{tr}_{E}\left(V_{B\to BE}\cdot(V_{B\to BE })^{\prime}\right)\] _is decomposed into_ \[V_{B\to BE}=\bigoplus_{i}I_{B_{i}^{L}}\otimes V_{B_{i}^{R}\to B_{i}^{R}E} \tag{5}\] _satisfying_ \[\mathrm{tr}_{E}\Big{(}V_{B_{i}^{R}\to B_{i}^{R}E}\,\omega_{B_{i}^{R}} \big{(}V_{B_{i}^{R}\to B_{i}^{R}E}\big{)}^{\prime}\Big{)}=\omega_{B_{i}^{R}} \quad\forall i.\] Minimal decomposition is sometimes referred to as the _Koashi-Imoto decomposition_. The Koashi-Imoto decomposition can be considered as a special case of the factorization theorem in Refs. [12; 13], which is a quantum analogue of the classical Fisher-Neyman factorization theorem [1; 14; 15]. The factorization theorem [12; 13] further demonstrates that, if Eq. (2) holds, then there exists a decomposition \(\mathcal{H}_{\tilde{B}}\cong\bigoplus_{i}\mathcal{H}_{\tilde{B}_{i}^{L}}\otimes \mathcal{H}_{\tilde{B}_{i}^{R}}\) such that \(\mathcal{E}_{B\rightarrow\tilde{B}}\) in Eq. 
(2) must have the form \[\mathcal{E}_{B\rightarrow\tilde{B}}\,\big{|}_{B_{i}^{L}\tilde{B}_{i}^{R}}= \mathcal{U}_{B_{i}^{L}\rightarrow\tilde{B}_{i}^{L}}\otimes\beta_{B_{i}^{R} -\tilde{B}_{i}R}\,,\] where \(\mathcal{U}_{B_{i}^{L}\rightarrow\tilde{B}_{i}^{L}}\) is a unitary map (therefore, \(B_{i}^{L}\cong\tilde{B}_{i}^{L}\)) and \(\beta_{B_{i}^{R}\rightarrow\tilde{B}_{i}R}\) is a CPTP map. This implies that minimal compression is achieved when \(\tilde{B}\cong B^{L}\) and \[\mathcal{H}_{\tilde{B}} \cong\bigoplus_{i}\mathcal{H}_{B_{i}^{L}}\] \[\rho_{A\tilde{B}} =\bigoplus_{i}p_{i}\rho_{AB_{i}^{L}}\,.\] We summarize this subsection as follows. **Fact 1**.: _For a given \(\rho_{AB}\), consider the Koashi-Imoto decomposition (4). The minimal dimension of the exact compression for \(\rho_{AB}\) is then given by_ \[d_{\tilde{B}}:=\sum_{i}d_{B_{i}^{L}}\,.\] ### Main result To state the main theorem, the following notations are introduced: For a given \(\rho_{AB}\), we define a unital CP map as \[\Omega_{A\to B}^{\dagger}(X_{A}):=\mathrm{tr}_{A}\left(J_{AB}(X_{A}^{T_{A}} \otimes I_{B})\right)\,, \tag{6}\] where \(J_{AB}\) is the corresponding Choi-Jamilkowski operator, defined as follows: \[J_{AB}:=\rho_{B}^{-\frac{1}{2}}\rho_{AB}\rho_{B}^{-\frac{1}{2}}\geq 0\,,\qquad \operatorname{tr}_{A}J_{AB}=I_{B}\,. \tag{7}\] We denote the nonzero eigenvalues of \(J_{AB}\) as \(\omega_{i}\). Subsequently, the Kraus operators \(\{K_{i}^{\dagger}\}\) of \(\Omega_{A\to B}^{\dagger}(\cdot)=\sum_{i}K_{i}^{\dagger}\). \(K_{i}\) satisfy \(\operatorname{tr}(K_{i}K_{j})=\omega_{i}\delta_{ij}\): We then define another CP-map \(\tilde{\Omega}_{A\to B}^{\dagger}\) as follows: \[\tilde{\Omega}_{A\to B}^{\dagger}(\cdot)=\sum_{i=1}^{\operatorname{rank}(\rho _{AB})}\omega_{i}^{-\frac{1}{2}}K_{i}^{\dagger}\cdot K_{i}\,. \tag{8}\] \(\tilde{\Omega}_{A\to B}^{\dagger}\) is no longer unital but CP. The Choi operator of \(\tilde{\Omega}_{A\to B}^{\dagger}\) is given by \(\sqrt{J_{AB}}\), Let \(B_{1}\cong B\) and consider the two operators \[E_{\mathcal{T}} :=\sum_{a,b=1}^{d_{A}}\tilde{\Omega}_{A\to B}^{\dagger}(|a \rangle\langle b|)\otimes\overline{\tilde{\Omega}_{A\to B_{1}}^{ \dagger}(|a\rangle\langle b|)} \tag{9}\] \[RL_{B} :=I_{B}\otimes\log\rho_{B_{1}}^{T_{B_{1}}}-\log\rho_{B}\otimes I _{B_{1}}\,, \tag{10}\] where \(\{|a\rangle\}\) is an orthonormal basis of \(\mathcal{H}_{A}\). The spectral decompositions of these operators are as follows. \[E_{\mathcal{T}} =\bigoplus_{\lambda}\lambda P_{\lambda} \tag{11}\] \[RL_{B} =\bigoplus_{\eta}\eta Q_{\eta}\;, \tag{12}\] where \(P_{\lambda}\) and \(Q_{\eta}\) are orthogonal projections to the eigensubspaces of \(E_{\mathcal{T}}\) and \(RL_{B}\), respectively. Define \(P_{V}\) as the projector onto the subspace \[V:=\bigoplus_{\eta\text{spec}(RL)}(\text{supp}(Q_{\eta})\cap\text{supp}(P_{1}) )\;.\] Using the formula given in [16], \[P_{V}=2\bigoplus_{\eta\text{spec}(RL)}Q_{\eta}(Q_{\eta}+P_{1})^{-1}P_{1}\,,\] where \({}^{-1}\) is the Moore-Penrose inverse, \(P_{V}\) is the superoperator of a unital CPTP-map on \(B\) which we denote as \(\mathbb{E}_{B}\). Consider systems \(\bar{B}\simeq\bar{B}_{1}\cong B\) and define \(|\mathcal{I}\rangle\!\!\rangle_{BB_{1}}:=\sum_{i}|ii\rangle_{BB_{1}}\) in the transposition in Eq. (9) and (10). The normalized Choi state \(C_{BB_{1}}\) is defined as follows: \[C_{BB_{1}}:=\frac{1}{d_{B}}(\operatorname{id}_{B}\otimes\mathbb{E}_{B_{1}})\,( |\mathcal{I}\rangle\!\!\rangle(\mathcal{I}|_{BB_{1}})\;. 
\tag{13}\] \(C_{BB_{1}}\) and \(P_{V}\) are related via the reshuffling map: \[C_{BB_{1}}=\frac{1}{d_{B}}\sum_{i,j=1}^{d_{B}}(I_{B}\otimes|i\rangle\langle j|_ {B_{1}})P_{V}(|i\rangle\langle j|_{B}\otimes I_{B_{1}}).\] Consider the canonical purifications of \(C_{BB_{1}}\) denoted as \(|C\rangle_{BB_{1}\bar{B}\bar{B}_{1}}\). We then optimize all possible unitary to minimize the entanglement entropy between \(B\bar{B}\) and \(B_{1}\bar{B}_{1}\): \[\tilde{U}_{\bar{B}\bar{B}_{1}}:=\operatorname{argmin}_{U}S(B\bar{B})_{U_{BB_{1} }|C)}\,. \tag{14}\] The optimization can be performed by using e.g., a gradient algorithm [17]. **Theorem 1**.: _For any \(\rho_{AB}\) such that \(\rho_{A},\rho_{B}>0\), an isomorphism \(\mathcal{H}_{B}\cong\bigoplus_{i}\mathcal{H}_{B_{i}^{L}}\otimes\mathcal{H}_{B _{i}^{R}}\) exists such that_ \[d_{\bar{B}}=\sum_{i}d_{B_{i}^{L}}=\operatorname{SchR}\left(BB_{1}\bar{B}_{1}: \bar{B}\right)_{|\tilde{C})}\,,\] _is the minimal compression dimension, where_ \[|\tilde{C}\rangle_{BB_{1}\bar{B}\bar{B}_{1}}:=\tilde{U}_{\bar{B}\bar{B}_{1}}|C \rangle_{BB_{1}\bar{B}\bar{B}_{1}}\,. \tag{15}\] _It also holds that_ \[d_{B^{R}}:=\sum_{i}d_{B_{i}^{R}}=\operatorname{SchR}\left(BB_{1}:\bar{B}\bar{B} _{1}\right)_{|\tilde{C})}\,. \tag{16}\] From the definition of \(C_{BB_{1}}\) in (13), the following holds: **Corollary 1**.: \[\operatorname{rank}(C_{BB_{1}})=\sum_{i}d_{B_{i}^{L}}^{2}\] (17) _and_ \[\sqrt{\operatorname{rank}(C_{BB_{1}})}\leq d_{\bar{B}}\leq\operatorname{rank}(C_ {BB_{1}})\,. \tag{18}\] The proofs are presented in Sec. IV. ## III Characterizing minimal sufficient subalgebra Consider a set of density matrices on \(B\) \[\mathcal{S}:=\left\{\mu_{B}=\frac{\operatorname{tr}_{A}\left(M_{A}\rho_{AB} \right)}{\operatorname{tr}\left(M_{A}\rho_{A}\right)}\,|\,0\leq M_{A}\leq I_{A} \right\}\,, \tag{19}\] dominated by \(\rho_{B}\) (i.e., \(\text{supp}(\mu_{B})\subset\text{supp}(\rho_{B})\)). Eq. (2) is equivalent to: \[R_{\bar{B}\to B}\circ\mathcal{E}_{B\to\bar{B}}(\mu_{B})=\mu_{B}\,,\forall\mu_{ B}\in\mathcal{S}. \tag{20}\] This kind of condition has been extensively studied in the theory of sufficiency [4; 5], and it is known that the map \(R_{\bar{B}\to B}\) can be selected as the Pez recovery map concerning \(\rho_{B}\): \[\mathcal{R}_{\bar{B}\to B}^{\rho,\mathcal{E}}(\cdot)=\Gamma_{\rho_{B}}\circ \mathcal{E}_{\bar{B}\to B}^{\dagger}\circ\Gamma_{\mathcal{E}(\rho_{B})}^{-1}( \cdot).\] The theory of sufficiency states that the minimal dimension of \(B\) is associated with the _minimal sufficient subalgebra_\(\mathcal{M}_{B}^{S}\subset\mathcal{B}(\mathcal{H}_{B})\) of \(\mathcal{S}\). Because \(\mathcal{M}_{B}^{S}\) is finite-dimensional, a decomposition \[\mathcal{H}_{B}\cong\bigoplus_{i}\mathcal{H}_{B_{i}^{L}}\otimes\mathcal{H}_{B_{i} ^{R}} \tag{21}\] exists such that \[\mathcal{M}_{B}^{S} \cong \bigoplus_{i}Mat(\mathcal{H}_{B_{i}^{L}},\mathbb{C})\otimes I_{B_{i} ^{R}}\,, \tag{22}\] \[(\mathcal{M}_{B}^{S})^{\prime} \cong \bigoplus_{i}I_{B_{i}^{L}}\otimes Mat(\mathcal{H}_{B_{i}^{R}}, \mathbb{C})\,, \tag{23}\] where \((\mathcal{M}_{B}^{S})^{\prime}:=\{X_{B}\in\mathcal{B}(\mathcal{H}_{B})[|X_{B}, Y_{B}]=0\,,\;\forall Y_{B}\in\mathcal{M}_{B}^{S}\}\) is the commutant of \(\mathcal{M}_{B}^{S}\). The sufficiency theorem [5] states that Eq. (20) implies \(\mathcal{H}_{\tilde{B}}\) must support algebra isomorphic to \(\mathcal{M}_{B}^{S}\). Therefore, the minimal Hilbert space \(\mathcal{H}_{\tilde{B}}\) must be: \[\mathcal{H}_{\tilde{B}}\cong\bigoplus_{i}\mathcal{H}_{B_{i}^{L}}\,. 
\tag{24}\] As discussed in Sec. II.3, decomposition (21) must match that of the Koashi-Imoto decomposition (3): An algorithm can be used to calculate the decomposition (22) [10; 18]; however, the algorithm has no explicit formula to calculate the decomposition or the dimension \(d_{\tilde{B}}\). The same problem appears in the asymptotic results in Ref. [6]. We examine the details of the algebraic equations \(\mathcal{M}_{B}^{S}\) and \((\mathcal{M}_{B}^{S})^{\prime}\). Originally, Petz demonstrated that \(\mathcal{M}_{B}^{S}\) has the form [4] \[\mathcal{M}_{B}^{S}=\text{Alg}\left\{\mu_{B}^{it}\rho_{B}^{-it},\mu\in \mathcal{S},t\in\mathbb{R}\right\} \tag{25}\] in a finite-dimensional system. Ref. [7] showed that a minimal sufficient subalgebra can also be written as \[\mathcal{M}_{B}^{S}=\text{Alg}\left\{\rho_{B}^{it}d(\mu,\rho)\rho_{B}^{-it}, \mu\in\mathcal{S},t\in\mathbb{R}\right\}\,, \tag{26}\] where \(d(\mu,\rho):=\rho^{-\frac{1}{2}}\mu\rho^{-\frac{1}{2}}\) is the _commutant Radon-Nikodym derivative_ introduced in Ref. [7]. By inserting the definition of \(\mu_{B}\) (19), \(d(\mu,\rho)\) can be rewritten as \[d(\mu,\rho) = \rho_{B}^{-\frac{1}{2}}\frac{\text{tr}_{A}\left(M_{A}\rho_{AB} \right)}{\text{tr}\left(M_{A}\rho_{A}\right)}\rho_{B}^{-\frac{1}{2}} \tag{27}\] \[= \frac{1}{\text{tr}\left(M_{A}\rho_{A}\right)}\text{tr}_{A}\left( \rho_{B}^{-\frac{1}{2}}\rho_{AB}\rho_{B}^{-\frac{1}{2}}(M_{A}\otimes I_{B})\right)\] \[= \frac{1}{\text{tr}\left(M_{A}\rho_{A}\right)}\text{tr}_{A}\left( J_{AB}(M_{A}\otimes I_{B})\right)\] \[= \frac{\Omega_{A\to B}^{\dagger}(M_{A}^{T_{A}})}{\text{tr}\left(M_{A} \rho_{A}\right)}\,,\] where \(J_{AB}\) is defined by Eq. (7) and \(\Omega_{A\to B}^{\dagger}\) are defined in Eq. (6). It might be easier to characterize \((\mathcal{M}_{B}^{S})^{\prime}\) than \(\mathcal{M}_{B}^{S}\) in our setting. Let us denote the complementary channel of \(\Omega_{B\to A}\) by \(\Omega_{B\to E}\). We then obtain the following characterizations: **Lemma 1**.: \((\mathcal{M}_{B}^{S})^{\prime}\) _is equivalent to_ 1. _The intersection of the correctable algebras_ \[\bigcap_{t\in\mathbb{R}}\mathcal{A}_{\left(\Omega_{B\to E}\circ\Delta_{\rho_{ B}}^{t}\right)}\] (28) 2. _The intersection of the fixed-point algebras_ \[\bigcap_{t\in\mathbb{R}}\mathcal{F}\left(\Delta_{\rho_{B}}^{-t}\circ \mathcal{R}_{E\to B}^{\tau,\Omega}\circ\Omega_{B\to E}\circ\Delta_{\rho_{B}}^ {t}\right)\] (29) 3. _The largest subalgebra of_ \(\mathcal{F}(\mathcal{R}_{E\to B}^{\tau,\Omega}\circ\Omega_{B\to E})\) _that is invariant under_ \(\Delta_{\rho_{B}}^{t}\) _for any_ \(t\in\mathbb{R}\)_._ 4. _The largest subalgebra of_ \(\mathcal{F}(\mathcal{R}_{E\to B}^{\tau,\Omega}\circ\Omega_{B\to E})\) _that is invariant under_ \(ad_{\rho_{B}}(\cdot):=[\cdot,\log\rho_{B}]\)_:_ \[ad_{\rho_{B}}((\mathcal{M}_{B}^{S})^{\prime})=[(\mathcal{M}_{B}^{S})^{ \prime},\log\rho_{B}]\subset(\mathcal{M}_{B}^{S})^{\prime}\,.\] (30) _Note that \(\mathcal{R}_{E\to B}^{\tau,\Omega}\) is the Petz recovery map with respect to \(\tau_{B}\) for \(\Omega_{B\to E}\)._ \[\mathcal{R}_{E\to B}^{\tau,\Omega}(\cdot)=\Omega_{B\to E}^{\dagger}\circ \Gamma_{\Omega_{B\to E}(I_{B})}^{-1}(\cdot)\,. \tag{31}\] Proof.: From Eq. 
(27), we obtain \[X_{B}\in(\mathcal{M}_{B}^{S})^{\prime}\] \[\Leftrightarrow [X_{B},\rho_{B}^{it}d(\mu,\rho)\rho_{B}^{-it}]=0\,,\;\;\forall t \in\mathbb{R},\forall\mu_{B}\in\mathcal{S}\,,\] \[\Leftrightarrow \left[X_{B},\Delta_{\rho_{B}}^{\dagger}\circ\Omega_{A\to B}^{ \dagger}(M_{A}^{T_{A}})\right]=0\,,\;\;\forall t\in\mathbb{R},\,0\leq\forall M _{A}\leq I_{A}\,,\] \[\Leftrightarrow X_{B}\in\text{Alg}\left(\text{Im}\left(\Delta_{\rho_{B}}^{t} \circ\Omega_{A\to B}^{\dagger}\right)\right)^{\prime}\,,\;\;\forall t\in \mathbb{R}\,,\] \[\Leftrightarrow X_{B}\in\mathcal{A}_{\left(\Omega_{B\to A}\circ\Delta_{\rho_{B}}^ {\dagger}\right)}\,\;\forall t\in\mathbb{R}\,,\] where \(\mathcal{A}_{\mathcal{E}^{c}}=(\text{Im}\mathcal{E}^{\dagger})^{\prime}\)[9] and \((\Delta_{\rho_{B}}^{t}\circ\Omega_{A\to B}^{\dagger})^{\dagger}=\Omega_{B \to A}\circ\Delta_{\rho_{B}}^{-t}\) have been used. A simple calculation shows that \[\left(\Omega_{B\to A}\circ\Delta_{\rho_{B}}^{t}\right)^{c}=\Omega_{B\to A}^{c} \circ\Delta_{\rho_{B}}^{t}=\Omega_{B\to E}\circ\Delta_{\rho_{B}}^{t}\,,\] which completes step \((i)\). To show the equivalence of \((ii)\), we utilize the following facts regarding correctable algebra: **Lemma 2**.: _[_19_]_ _For any CPTP-map \(\mathcal{E}\), it holds that_ \[\mathcal{A}_{\mathcal{E}}=\mathcal{F}(\mathcal{R}^{\tau,\mathcal{E}}\circ \mathcal{E})=\mathcal{F}\left(\mathcal{E}^{\dagger}\circ(\mathcal{R}^{\tau, \mathcal{E}})^{\prime}\right)\,. \tag{32}\] The Petz recovery map \(\mathcal{R}_{E\to B}^{\tau,\Omega\circ A^{\dagger}}\) with respect to \(\tau_{B}\) for \(\Omega_{B\to E}\circ\Delta_{\rho_{B}}^{t}\) is simplified as follows: \[\mathcal{R}_{E\to B}^{\tau,\Omega\circ\Delta_{\rho_{B}}^{t}}(\cdot)\] \[:=\left(\Omega_{B\to E}\circ\Delta_{\rho_{B}}^{t}\right)^{ \dagger}\left(\Omega_{B\to E}(I_{B})^{-\frac{1}{2}}\cdot\Omega_{B\to E}(I_{B})^{- \frac{1}{2}}\right)\] \[=\Delta_{\rho_{B}}^{-t}\left(\Omega_{E\to B}^{\dagger}\left( \Omega_{B\to E}(I_{B})^{-\frac{1}{2}}\cdot\Omega_{B\to E}(I_{B})^{-\frac{1}{2}} \right)\right)\] \[=\Delta_{\rho_{B}}^{-t}\circ\mathcal{R}_{E\to B}(\cdot)\,. \tag{33}\] Combining with Lemma 2, this equality completes \((ii)\). To see \((ii)\Rightarrow(iii)\), \(X_{B}\in(ii)\) is equivalent to \[\Delta_{\rho_{B}}^{-t}\circ\Omega_{B\to E}\circ\Delta_{\rho_{B}}^{t}(X_{B})=X_ {B}\,,\;\forall t\in\mathbb{R}\] \[\Leftrightarrow \mathcal{R}_{E\to B}^{\tau,\Omega}\circ\Omega_{B\to E}(X_{B}(t))=X_ {B}(t)\,,\;\forall t\in\mathbb{R}\,, \tag{34}\] where \(X_{B}(t):=\Delta_{\rho_{B}}^{t}(X_{B})\). Therefore, the algebra \((ii)\) is a modular invariant subalgebra in \(\mathcal{F}(\mathcal{R}_{E\to B}^{\tau,\Omega}\circ\Omega_{B\to E})\). It is largest since any element of other invariant subalgebra must satisfies Eq. (34). Conversely, \((iii)\Rightarrow(ii)\) can be verified by \((iii)\Rightarrow\) Eq. (34). Lastly, we show \((ii)\Leftrightarrow(iv)\). For any \(X_{B}\in(\mathcal{M}_{B}^{S})^{\prime}\), Eq. (34) implies \[\mathcal{R}_{E\to B}^{\tau,\Omega}\circ\Omega_{B\to E} \left(X_{B}(t+\delta t)-X_{B}(t)\right)\] \[=X_{B}(t+\delta t)-X_{B}(t)\,,\,\forall t,\delta t\in\mathbb{R}\,.\] Therefore derivatives \[\frac{d^{n}X_{B}(t)}{dt^{n}} =-i\Bigg{[}\frac{d^{n-1}X_{B}(t)}{dt^{n-1}},\log\rho_{B}\Bigg{]}\] \[=(-i)^{n}[...[[X_{B}(t),\log\rho_{B}],\log\rho_{B}],...,\log\rho_ {B}]\] \[=(-i)^{n}ad_{\rho_{B}}^{n}(X_{B}(t))\] are the fixed points of \(\mathcal{R}_{E\to B}^{\tau,\Omega}\circ\Omega_{B\to E}\) for all \(n\in\mathbb{N}\). 
Since \(\Delta_{\rho_{B}}^{t}(ad_{\rho_{B}}^{n}(X_{B}))=ad_{\rho_{B}}^{n}(X_{B}(t))\), \(ad_{\rho_{B}}^{n}(X_{B})\) satisfies Eq. (34), for all \(n\in\mathbb{N}\). Therefore, \(ad_{\rho_{B}}(X_{B})\in(\mathcal{M}_{B}^{S})^{\prime}\) and \([(\mathcal{M}_{B}^{S})^{\prime},\log\rho_{B}]\subset(\mathcal{M}_{B}^{S})^{\prime}\). Suppose that there is a subalgebra of \(\mathcal{N}_{B}\subset\mathcal{F}(\mathcal{R}_{E\to B}^{\tau,\Omega} \circ\Omega_{B\to E})\) satisfying \([\mathcal{N}_{B},\log\rho_{B}]\subset\mathcal{N}_{B}\). Then, for any \(X_{B}\in\mathcal{N}_{B}\), the iterated commutators \(ad_{\rho_{B}}^{n}(X_{B})\) are also in \(\mathcal{F}(\mathcal{R}_{E\to B}^{\tau,\Omega}\circ\Omega_{B\to E})\) for all \(n\in\mathbb{N}\). This implies that \[X_{B}(t)=\sum_{n=0}^{\infty}\frac{d^{n}X_{B}(t)}{dt^{n}}|_{t\prec 0}t^{n}= \sum_{n=0}^{\infty}(-i)^{n}ad_{\rho_{B}}^{n}(X_{B})t^{n}\] is also in \(\mathcal{F}(\mathcal{R}_{E\to B}^{\tau,\Omega}\circ\Omega_{B\to E})\) for all \(t\in\mathbb{R}\); thus, \(X_{B}\in(\mathcal{M}_{B}^{S})^{\prime}\) by Eq. (34). If \((\mathcal{M}_{B}^{S})^{\prime}\) is abelian, \(d_{B_{i}^{R}}=1\) in Eq. (21) for all \(i\). Thus, \[d_{B}=\sum_{i}d_{B_{i}^{L}}d_{B_{i}^{R}}=\sum_{i}d_{B_{i}^{L}}-d_{B^{L}}\,,\] Therefore, nontrivial exact compression is impossible. To achieve nontrivial compression, \((\mathcal{M}_{B}^{S})^{\prime}\) must be nonabelian. Lemma 1(i) for \(t=0\) implies that \((\mathcal{M}_{B}^{S})^{\prime}\) is a subalgebra of \(\mathcal{A}_{\Omega_{B\to E}}\) Thus, we have the following criteria, which we set as an independent theorem owing to its usefulness. **Theorem 2**.: _It holds that_ \[\mathcal{A}_{\Omega_{B\to E}}=\mathcal{F}\left(\mathcal{R}_{E\to B}^{\tau, \Omega}\circ\Omega_{B\to E}\right)=(\mathrm{Im}\Omega_{A\to B}^{\prime})^{\prime} \tag{35}\] _and a necessary condition for \(d_{B}<d_{B}\) is that the algebra (35) is non-abelian._ ## IV Calculation of the minimal dimension We provide a method for calculating the minimal dimensions, as shown in Theorem 1. For simplicity, we define \(\mathcal{T}_{B\to B}:=\mathcal{R}_{E\to B}^{\tau,\Omega}\circ\Omega_{B\to E}\) which is self-dual. ### Vectorization of \((\mathcal{M}_{h}^{S})^{\prime}\) Introducing vectorization of the operators. \[X_{B}\mapsto|X_{B}\rangle\hskip-2.845276pt\rangle:=(X_{B}\otimes I_{B_{1}})| \mathcal{I}\rangle\hskip-2.845276pt\rangle_{BB_{1}}\,,\quad B_{1}\cong B \tag{36}\] and the superoperator form of \(\mathcal{T}\). That is, \[\mathcal{T}_{B\to B}\mapsto E_{\mathcal{T}}=\sum_{i}T_{i}\otimes\bar{T}_{i} \in\mathcal{B}\left(\mathcal{H}_{B}\otimes\mathcal{H}_{B_{1}}\right)\,, \tag{37}\] where \(\mathcal{T}_{B\to B}(\cdot)=\sum_{i}T_{i}\cdot T_{i}^{\dagger}\) denotes the Kraus representation. It will be observed that \(E_{\mathcal{T}}\) in Eq. (37) is the same as the one appearing in Eq. (9): The invariance under \(ad_{\rho_{B}}\) is also rephrased by the superoperator \[RL_{B}=I_{B}\otimes\log\rho_{B}^{T_{B}}-\log\rho_{B}\otimes I_{B}\,. \tag{38}\] Then, we have a vector representation of \((\mathcal{M}_{B}^{S})^{\prime}\) as follows: **Lemma 3**.: _Consider the following spectral decomposition:_ \[E_{\mathcal{T}}=\bigoplus_{\lambda}\lambda P_{\lambda}\,, \tag{39}\] \[RL_{B}=\bigoplus_{\eta}\eta Q_{\eta}\,. 
\tag{40}\] _Then it holds that_ \[V:=\bigoplus_{\eta\in\mathrm{spec}(RL_{B})}\left(Q_{\eta}\cap P_{1}\right) \tag{41}\] _is the vector space spanned by the vectorization of \((\mathcal{M}_{B}^{S})^{\prime}\)._ Proof.: By definition, \(V\) is spanned by the eigenvectors of \(RL_{B}\), stabilized by \(P_{1}\). Moreover, this is the largest subspace of \(\mathrm{supp}(P_{1})\), satisfying \[E_{\mathcal{T}}|X_{B}\rangle\hskip-2.845276pt\rangle =|X_{B}\rangle\hskip-2.845276pt\rangle\,, \tag{42}\] \[RL_{B}|X_{B}\rangle\hskip-2.845276pt\rangle \in V \tag{43}\] for all \(|X_{B}\rangle\hskip-2.845276pt\rangle\in V\). This is because Eq. (43) implies \(RL_{B}=P_{V}RL_{B}P_{V}\oplus(I-P_{V})RL_{B}(I-P_{V})\), which further implies \(V\) is spanned by the eigenvectors of \(RL_{B}\). Eq. (42) implies that the eigenvectors must be in \(\mathrm{supp}(P_{1})\). Lemma 1(iv) clarifies that the vectorization of \((\mathcal{M}_{B}^{S})^{\prime}\) must be a subspace of \(V\). By inverting the vectorization of \(V\), we obtain an operator subspace \(\mathcal{V}\) that is fixed by \(\mathcal{T}\) and is invariant under \(ad_{\rho_{B}}\). Therefore, for any \(X_{B}\in\mathcal{V}\), \(\mathcal{T}(X_{B})=X_{B}\) and \(ad_{\rho_{B}}^{n}(X_{B})\in\mathcal{V}\)\(\forall n\in\mathbb{N}\). The latter condition implies \(X_{B}(t)\in\mathcal{V}\subset\mathcal{F}(\mathcal{T})\)\(\forall t\in\mathbb{R}\). Therefore Lemma 1(iii) yields \(\mathcal{V}\subset(\mathcal{M}_{B}^{S})^{\prime}\). Because we know that \(V\) is a vectorization of \((\mathcal{M}_{B}^{S})^{\prime}\), we also obtain the superoperator of the conditional expectation of \((\mathcal{M}_{B}^{S})^{\prime}\). **Corollary 2**.: _The projection operator, \(P_{V}\) on \(V\), is the superoperator representation of the conditional expectation (concerning the completely mixed state) on \((\mathcal{M}_{B}^{S})^{\prime}\)._ ### Proof of Eq. (9) Let \(\{K_{i}^{\dagger}\}\) be the Kraus operator of \(\Omega_{A\to B}^{\dagger}(\cdot)=\sum_{i}K_{i}^{\dagger}\cdot K_{i}\) satisfying the orthogonality condition \(\operatorname{tr}(K_{i}^{\dagger}K_{j})=\omega_{i}\delta_{ij}\), where \(\omega_{i}\) is a nonzero eigenvalue of \(J_{AB}\). \(T_{i}\) is calculated in more detail. By definition, it holds that: \[\mathcal{T}_{B\to B}=\mathcal{R}_{E\to B}^{\tau,\Omega}\circ\Omega_{B\to E}= \Omega_{E\to B}^{\dagger}\circ\Gamma_{\Omega_{E}}^{-1}\circ\Omega_{B\to E}\,, \tag{44}\] where \(\Omega_{E}\coloneqq\Omega_{B\to E}(I_{B})\) is a positive operator (we set \(d_{E}=\operatorname{rank}(\rho_{AB})\)). For a fixed basis, \(A\) and \(E\) are denoted by \(\{|a\rangle_{A}\}\) and \(\{|\phi_{i}\rangle_{E}\}\) respectively, we define \[F_{a}:=\sum_{i=1}^{\operatorname{rank}(\rho_{AB})}|\phi_{i}\rangle\langle a|K_ {i}\,, \tag{45}\] which provides the Kraus representation of \(\Omega_{B\to E}=\Omega_{B\to A}^{c}\). Owing to the orthogonality condition, we obtain \[\Omega_{E} :=\Omega_{B\to E}(I_{B})\] \[=\sum_{a}F_{a}F_{a}^{\dagger}\] \[=\sum_{i,j}\operatorname{tr}(K_{j}^{\dagger}K_{i})|\phi_{i}\rangle \langle\phi_{j}|\] \[=\sum_{i}\omega_{i}|\phi_{i}\rangle\langle\phi_{i}|, \tag{46}\] such that \(\Omega_{E}\) is diagonal. Combining Eqs. (44) (46), the Kraus operators of \(\mathcal{T}\) can be written as \[T_{(a,b)} =F_{a}^{\dagger}\Omega_{E}^{-\frac{1}{2}}F_{b}\] \[=\sum_{i,j}K_{i}^{\dagger}|a\rangle\langle\phi_{i}|\Omega_{E}^{- \frac{1}{2}}|\phi_{j}\rangle\langle b|K_{j}\] \[=\tilde{\Omega}_{A\to B}^{\dagger}(|a\rangle\langle b|)\,, \tag{47}\] where \(\tilde{\Omega}\) is defined by Eq. 
(8): In summary, the superoperator \(E_{\mathcal{T}}\) can be written as \[E_{\mathcal{T}}=\sum_{a,b}\tilde{\Omega}_{A\to B}(|a\rangle\langle b|) \otimes\overline{\tilde{\Omega}_{A\to B}(|a\rangle\langle b|)}\,. \tag{48}\] The conjugation on \(B\) must be performed on the same basis as the transposition in \(RL_{B}\). Note that Eq. (47) and Theorem 2 imply that: \[(\operatorname{Im}\!\tilde{\Omega}_{A\to B}^{\dagger})^{\prime}=\mathcal{F}( \mathcal{T})=(\operatorname{Im}\!\Omega_{A\to B}^{\dagger})^{\prime}\,. \tag{49}\] ### Proof of Theorem 1 We denote the conditional expectation of \((\mathcal{M}_{B}^{S})^{\prime}\) as \(\mathbb{E}_{B}\), We define the (unnormalized) Choi-Jamilkowski operator of \(\mathbb{E}_{B}\) as follows: \[C_{BB_{1}}:=(\mathbb{E}_{B}\otimes\operatorname{id}_{B_{1}})\,|\mathcal{I} \rangle\!\langle\mathcal{I}|_{BB_{1}}\,.\] Recall that there exists a decomposition \(\mathcal{H}_{B}\cong\bigoplus_{i}\mathcal{H}_{B_{i}^{L}}\otimes\mathcal{H}_{B _{i}^{R}}\) such that \[(\mathcal{M}_{B}^{S})^{\prime}\cong\bigoplus_{i}I_{B_{i}^{L}}\otimes Mat( \mathcal{H}_{B_{i}^{R}},\mathbb{C})\,.\] \(\mathcal{H}_{B_{1}}\) exhibits the same decomposition. The conditional expectation \(\mathbb{E}_{B}\) (with respect to \(\tau_{B}\)) is then written as \[\mathbb{E}_{B}(\cdot)= \bigoplus_{i}\tau_{B_{i}^{L}}\otimes\operatorname{tr}_{\mathcal{ H}_{B_{i}^{L}}}(\Pi_{i}\cdot\Pi_{i})\;,\] where \(\Pi_{i}\) is a projection onto \(\mathcal{H}_{B_{i}^{L}}\otimes\mathcal{H}_{B_{i}^{R}}\). Up to a local unitary, \(|\mathcal{I}\rangle\!\rangle_{BB_{1}}\) also exhibits decomposition \[|\mathcal{I}\rangle\!\rangle_{BB_{1}}=\sum_{i}[\mathcal{I}_{i}\rangle\!\rangle_ {B_{i}^{L}B_{i_{1}}^{L}}\otimes[\mathcal{I}_{i}\rangle\!\rangle_{B_{i}^{R}B_{ _{1}^{R}}^{R}}\;. \tag{50}\] Note that the argument below is independent of the basis choice in \(|\mathcal{I}\rangle\!\rangle_{BB_{1}}\). The normalized Choi state has the following corresponding decomposition: \[C_{BB_{1}} =\frac{1}{d_{B}}\bigoplus_{i}\tau_{B_{i}^{L}}\otimes|\mathcal{I}_ {i}\rangle\!\rangle\!\langle\mathcal{I}_{i}|_{B_{i}^{R}B_{1}^{R}}\otimes I_{B_{ 1}^{L}} \tag{51}\] \[=\bigoplus_{i}p_{i}\tau_{B_{i}^{L}}\otimes|\Psi_{i}\rangle\!\rangle \!\langle\Psi_{i}|_{B_{i}^{L}B_{1}^{R}}\otimes\tau_{B_{1}^{L}},, \tag{52}\] where \(p_{i}:=d_{B_{i}^{L}}d_{B_{i}^{R}}^{\ \ \ \ }/d_{B}\), the square root of which is: \[\sqrt{C_{BB_{1}}}=\bigoplus_{i}\sqrt{p_{i}}\sqrt{\tau}_{B_{i}^{L}}\otimes|\Psi_ {i}\rangle\!\rangle\!\langle\Psi_{i}|_{B_{i}^{R}B_{1}^{R}}\otimes\sqrt{\tau}_{B _{1}^{L}}. \tag{53}\] Next, we consider the canonical purification of \(C_{BB_{1}}\). \[|C\rangle_{BB_{1}\bar{B}\bar{B}_{1}}\] \[\quad=\sqrt{C_{BB_{1}}}|\mathcal{I}\rangle_{B\bar{B}}|\mathcal{I} \rangle_{B_{1}\bar{B}_{1}}\] \[\quad=\sum_{i}\sqrt{p_{i}}|\Psi_{i}\rangle\!\rangle_{B_{i}^{L}\bar {B}_{1}^{L}}|\Psi_{i}\rangle\!\rangle_{B_{i}^{R}B_{1}^{R}}|\Psi_{i}\rangle\! \rangle_{B_{i}^{R}\bar{B}_{1}^{R}}|\Psi_{i}\rangle\!\rangle_{B_{i}^{L}\bar{B} _{1}^{L}}\,,\] where \(\bar{B}\cong\bar{B}_{1}\cong B\). Applying the unitary \(U_{\bar{B}\bar{B}_{1}}\) gives: \[U_{\bar{B}\bar{B}_{1}}|C\rangle_{BB_{1}\bar{B}\bar{B}_{1}}=\sum_{i}\sqrt{p_{i}}| \Psi_{i}\rangle\!\rangle_{B_{i}^{R}B_{1i}^{R}}|\Phi_{i}\rangle\!\rangle_{B_{i}^{L }\bar{B}\bar{B}_{1}B_{1i}^{L}}\] where \(|\Phi_{i}\rangle\!\rangle_{B_{i}^{L}\bar{B}\bar{B}_{1}B_{1i}^{L}}\) denotes some pure state. 
The reduced state of \(\bar{B}\bar{B}\) is then calculated as \[\bigoplus_{i}p_{i}\tau_{B_{i}^{R}}\otimes\Phi_{i,B_{i}^{L}\bar{B}} \tag{54}\] and the entanglement entropy is calculated as \[S(B\bar{B})_{UCU^{\dagger}}=H(\{p_{i}\})+\sum_{i}p_{i}\left(\log d_{B_{i}^{R}}+S( B_{i}^{L}\bar{B})_{\Phi_{i}}\right)\,, \tag{55}\] where \(H(\cdot)\) denotes the Shannon entropy. Only the final term depends on \(U_{\bar{B}\bar{B}_{1}}\). The minimum in Eq. (14) is achieved when every \(\Phi_{i,B_{i}^{L}\bar{B}}\) is in a pure state (the minimum value is the entanglement of the purification [20] of \(C_{BB_{1}}\)). The corresponding state is given as \[|\bar{C}\rangle_{BB_{1}\bar{B}\bar{B}_{1}}=\sum_{i}\sqrt{p_{i}}|\Psi_{i} \rangle\!\rangle_{B_{i}^{R}B_{1i}^{R}}|\Psi_{i}\rangle\!\rangle_{B_{i}^{L}\bar{B} }|\Psi_{i}\rangle\!\rangle_{B_{i}^{L}\bar{B}_{1}}\,, \tag{56}\] where \(|\Psi_{i}\rangle\!\rangle_{B_{i}^{L}\bar{B}}\) (and \(|\Psi_{i}\rangle\!\rangle_{B_{i}^{L}\bar{B}_{1}}\)) are pure maximally entangled states between \(B_{i}^{L}\) and \(\bar{B}\) (\(B_{i}^{R}\) and \(\bar{B}_{1}\))(Fig. 1). It is clear that \[\operatorname{SchR}(BB_{1}\bar{B}_{1}:\bar{B})_{\bar{C}} =\operatorname{rank}(\tilde{C}_{\bar{B}})=\sum_{i}d_{B_{i}^{L}} \tag{57}\] \[\operatorname{SchR}(B\bar{B}:B_{1}\bar{B}_{1})_{\bar{C}} =\operatorname{rank}(\tilde{C}_{\bar{B}\bar{B}_{1}})=\sum_{i}d_{B_ {i}^{R}} \tag{58}\] By comparing \(|\tilde{C}\rangle_{BB_{1}\bar{B}\bar{B}_{1}}\) and decomposition (22), we conclude \(\mathcal{H}_{\bar{B}}=\bigoplus_{i}\mathcal{H}_{B_{i}^{L}}\) and \(\mathcal{H}_{E}=\bigoplus_{i}\mathcal{H}_{B_{i}^{R}}\), from which the theorem holds. Note that it might be true that the minimal achievable dimension can be calculated by minimization: \[\min_{U_{BB_{1}}}\operatorname{rank}\left(\operatorname{tr}_{B_{1}}\left(U_{BB _{1}}C_{BB_{1}}U_{BB_{1}}^{\dagger}\right)\right)\,. \tag{59}\] This expression does not need a purification, however, we do not know an algorithm to perform the minimization in Eq. (59). ## V Application examples ### Classical case The condition for the classical case is well known. However, here we reproduce it using our results to check for consistency. Let us consider the classical bipartite state \[\rho_{AB}=\sum_{a,b}p(a,b)|a,b\rangle\langle a,b|\] such that \(\rho_{A},\rho_{B}>0\) without loss of generality. Then, the Choi matrix \(J_{AB}\) is \[J_{AB}=\sum_{a,b}p(a|b)|a,b\rangle\langle a,b|\] and obtain \(\omega_{a,b}=p(a|b)\) and \(K_{a,b}=\sqrt{p(a|b)}|a\rangle\langle b|\). The unital CP map in Eq. (8) can be written as \[\Omega_{A\to B}^{\dagger}(\cdot)=\sum_{a,b}p(a|b)\langle a|\cdot|a \rangle|b\rangle\langle b|\,.\] Then, we introduce an equivalence relation between labels \(|b\rangle\) as \[b\sim b^{\prime}\Leftrightarrow p(a|b)=p(a|b^{\prime})\,,\quad\forall a\,.\] This equivalence relation induces a disjoint decomposition of the labels \(\mathcal{B}=\{b\}\) as \(\mathcal{B}=\bigcup_{i=1}^{m}\mathcal{B}^{i}\) and the Hilbert space \(\mathcal{H}_{B}=\bigoplus_{i}\mathcal{H}_{B^{i}}\). We define \(I_{B^{i}}:=\sum_{b\in\mathcal{B}^{i}}|b\rangle\langle b|\). 
A simple calculation shows that \[\operatorname{Im}\Omega_{A\to B}^{\dagger}=\left\{\bigoplus_{i=1}^{m}c_{i}I_{ B^{i}}\left|\,c_{i}\in\mathbb{C}\right\}\,,\] and therefore, \[\mathcal{F}(\mathcal{T})=\left(\operatorname{Im}\Omega_{A\to B}^{\dagger} \right)^{\prime}\cong\bigoplus_{i=1}^{m}|i\rangle\langle i|\otimes M_{B^{i}}( \mathbb{C})\] from Theorem 2: As \(\rho_{B}\) is classical and has no off-diagonal term between \(b\neq b^{\prime}\), \(\Delta_{\rho_{B}}^{t}(\mathcal{F}(\mathcal{T}))\subset\mathcal{F}(\mathcal{T})\) holds true for any \(t\in\mathbb{R}\). Hence, \((\mathcal{M}_{B}^{S})^{\prime}=\mathcal{F}(\mathcal{T})\) and the minimal sufficient subalgebra of \(\mathcal{H}_{B}\) become \[\mathcal{M}_{B}^{S}\cong\bigoplus_{i=1}^{m}|i\rangle\langle i|\otimes I_{B^{i }}\,.\] that the verification of \[C_{BB_{1}}=\bigoplus_{i=1}^{m}|i\rangle\langle i|_{B_{i}^{L}}\otimes| \mathcal{I}\rangle\langle\mathcal{I}|_{B_{i}^{R}B_{i_{1}}^{R}}\otimes|i \rangle\langle i|_{B_{i_{1}^{L}}^{L}}\,.\] and \[d_{\bar{B}}=m=\operatorname{rank}(C_{BB_{1}})\,. \tag{60}\] is simple. This is consistent with minimal sufficient statistics[2]. ### Pure bipartite states For a pure state, \(\rho_{AB}=|\psi\rangle\langle\psi|_{AB}\) restricted to \(\rho_{A},\rho_{B}>0\) implies that \(d_{A}=d_{B}=\operatorname{SchR}(A)_{\psi}\). \[J_{AB}=|\mathcal{I}\rangle\langle\mathcal{I}|_{AB} \tag{61}\] and thus \(\Omega_{A\to B}^{\dagger}=\operatorname{id}_{A\to B}\). This implies that \[\mathcal{F}(\mathcal{T})=\mathbb{C}I_{B} \tag{62}\] according to Theorem 2. This is also evident that \(\Omega_{B\to E}\) must be a completely depolarized channel. Therefore, \((\mathcal{M}_{B}^{S})^{\prime}=\mathbb{C}I\) and \(d_{\bar{B}}=\operatorname{SchR}(A:B)_{\psi}=d_{B}\); that is, further compression is impossible. ### Exact compression of quantum channel For a given CPTP-map \(\mathcal{E}_{A\to B}\), \(\mathcal{F}_{B\to\bar{B}}\) is an exact compression if a post-processing CPTP-map \(R_{\bar{B}\to B}\) that satisfies \[\tilde{\mathcal{E}}_{A\to\bar{B}} :=\mathcal{F}_{B\to\bar{B}}\circ\mathcal{E}_{A\to B}\,, \tag{63}\] \[\mathcal{E}_{A\to B} =R_{\bar{B}\to B}\circ\tilde{\mathcal{E}}_{A\to\bar{B}}\,. \tag{64}\] Figure 1: (Left) Canonical purification of \(C_{BB_{1}}\), which is the sum of tensor products of four maximally entangled states. By minimizing the entanglement between \(B\bar{B}\) and \(B_{1}\bar{B}_{1}\) by the unitary \(U_{BB_{1}}\), one obtain the desired state \(|\tilde{C}\rangle\) whose rank on \(\bar{B}\) gives the dimension \(d_{\bar{B}}=d_{B^{L}}\). \(\tilde{C}_{B\bar{B}_{1}}\) is separable and contains only classical correlations (dashed line). It can be confirmed that the (exact) compression is non-trivial if \(d_{\hat{B}}<d_{B}\). Consider the (normalized) Choi state \(\mathcal{E}_{A\to B}\) \[\rho_{AB}=(\mathrm{id}_{A}\otimes\mathcal{E}_{\bar{A}\to B})(|\Psi\rangle\!\rangle \!\langle\Psi|_{A\bar{A}})\,. \tag{65}\] This is similar to the exact compression of \(\rho_{AB}\) on \(B\). In this case, \(J_{AB}\) is \[J_{AB} =\mathcal{E}(\tau_{A})^{-1/2}(\mathrm{id}_{A}\otimes\mathcal{E} )\,(|\Psi\rangle\!\rangle\!\langle\Psi|_{A\bar{A}})\,\mathcal{E}(\tau_{A})^{-1 /2} \tag{66}\] \[=\mathcal{E}(I_{A})^{-1/2}(\mathrm{id}_{A}\otimes\mathcal{E})\,( |\mathcal{I}|)(\bar{\mathcal{I}}|_{A\bar{A}})\,\mathcal{E}(I_{A})^{-1/2}\,, \tag{67}\] which is the Choi operator of \((\mathcal{R}_{B\to A}^{\tau,\mathcal{E}})^{\dagger}\); that means, \(\Omega_{A\to B}^{\dagger}=(\mathcal{R}_{B\to A}^{\tau,\mathcal{E}})^{\dagger}\). 
Therefore, \((\mathcal{M}_{B}^{S})^{\prime}\) is the largest subalgebra of \((\mathrm{Im}(\mathcal{R}_{B\to A}^{\tau,\mathcal{E}})^{\dagger})^{\prime}\) invariant under \(ad_{\mathcal{E}_{A\to B}(\tau_{A})}\). #### v.2.1 Unital channels Consider the case in which \(\mathcal{E}_{A\to B}\) is unital, that is, \(\mathcal{E}(I_{A})=I_{B}\) (\(A\cong B\)). Then, \(\Delta_{\rho_{B}}^{t}=\mathrm{id}_{B}\) and \(\Omega_{B\to A}=\mathcal{R}_{B\to A}^{\tau,\mathcal{E}}=\mathcal{E}_{B\to A}^{\dagger}\) hold true. Because the modular invariance is trivial, \((\mathcal{M}_{B}^{S})^{\prime}=\mathcal{F}(\mathcal{T})\) and \[(\mathcal{M}_{B}^{S})^{\prime}=(\mathrm{Im}\mathcal{E}_{A\to B})^{\prime}\,, \quad\mathcal{M}_{B}^{S}=\mathrm{Alg}\,(\mathrm{Im}\mathcal{E}_{A\to B}). \tag{68}\] from Theorem 2. An example of a unital channel is a \(G\)-twirling operation of a finite group \(G\). Let \(U(g)\) be a unitary representation of \(G\) in \(\mathcal{H}_{B}\). \(G\)-twirling \(\mathcal{G}(\cdot)\) is defined as follows: \[\mathcal{G}(\cdot):=\frac{1}{|G|}\sum_{g\in G}U(g)\cdot U(g^{-1})\,. \tag{69}\] In general, \(U(g)\) is not irreducible and the Hilbert space \(\mathcal{H}_{B}\) can be decomposed as \[\mathcal{H}_{B}=\bigoplus_{\mu irrep}\mathcal{H}_{B_{\mu}^{L}}\otimes \mathcal{H}_{B_{\mu}^{R}}, \tag{70}\] where the irreducible decomposition of \(U(g)\) is given by \[U(g)=\bigoplus_{\mu\in irrep}U_{\mu}(g)\otimes I_{B_{\mu}^{R}}, \tag{71}\] correspondingly. \(U_{\mu}(g)\) acts on \(\mathcal{H}_{B_{\mu}^{L}}\) and \(\mathrm{dim}\mathcal{H}_{B_{\mu}^{R}}\) is the multiplicity of \(\mu\). Using Schur's lemma and Eq. (68), we obtain \[d_{\hat{B}}=\sum_{\mu}d_{B_{\mu}^{L}} \tag{72}\] as desired. ## VI Conclusion In this study, we studied the task of exact local compression of quantum bipartite states, where the goal is to locally reduce the dimensions of each Hilbert space without losing any information. We demonstrated a closed formula that allows us to calculate the minimum achievable dimensions for this compression task. Furthermore, we derived simple upper and lower bounds for the rank of the matrix, which offers a numerically more tractable approach to estimate the minimal dimensions. These bounds are in general not tight; however, it is still useful to check, for example, the scaling of the optimal compressed dimension. Future studies could focus on exploring more tractable (analytical and numerical) expression for the optimal dimension. It would also be interesting to investigate the relationship between obtained dimension and other known one-shot entropic quantities. The ability to compress quantum states and channels while maintaining their informational content could have more implications for zero-error quantum information processing. Theorem 2 suggests that non-trivial compression is impossible for generic bipartite states or quantum channels, but it would be possible for structured states or channels such as tensor network states or covariant channels. Practically, it would also be interesting to investigate _approximate_ one-shot scenario, which has a better potential for applications in quantum information processing (such as quantum information bottleneck problem [21]) but has not been investigated in this paper. The asymptotic result [6] implies that asymptotic compression with vanishing global error also requires non-trivial Koashi-Imoto decomposition, thus the vanishing error condition may still be too restrictive (see e.g., Ref. [22] for a possible modification). 
One possible way to obtain approximate compression is considering "smoothed" variant of the minimal dimension obtained in this paper. We leave studying such modification as a future problem. ## VII Acknowledgement The author would like to thank Anna Jencova and Francesco Buscemi for their helpful comments and fruitful discussions. K. K. acknowledges support from JSPS Grant-in-Aid for Early-Career Scientists, No. 22K13972; from MEXT-JSPS Grant-in-Aid for Transformative Research Areas (A) "Extreme Universe," No. 22H05254.
2309.16539
SUNBIRD: A simulation-based model for full-shape density-split clustering
Combining galaxy clustering information from regions of different environmental densities can help break cosmological parameter degeneracies and access non-Gaussian information from the density field that is not readily captured by the standard two-point correlation function (2PCF) analyses. However, modelling these density-dependent statistics down to the non-linear regime has so far remained challenging. We present a simulation-based model that is able to capture the cosmological dependence of the full shape of the density-split clustering (DSC) statistics down to intra-halo scales. Our models are based on neural-network emulators that are trained on high-fidelity mock galaxy catalogues within an extended-$\Lambda$CDM framework, incorporating the effects of redshift-space, Alcock-Paczynski distortions and models of the halo-galaxy connection. Our models reach sub-percent level accuracy down to $1\,h^{-1}{\rm Mpc}$ and are robust against different choices of galaxy-halo connection modelling. When combined with the galaxy 2PCF, DSC can tighten the constraints on $\omega_{\rm cdm}$, $\sigma_8$, and $n_s$ by factors of 2.9, 1.9, and 2.1, respectively, compared to a 2PCF-only analysis. DSC additionally puts strong constraints on environment-based assembly bias parameters. Our code is made publicly available on Github.
Carolina Cuesta-Lazaro, Enrique Paillas, Sihan Yuan, Yan-Chuan Cai, Seshadri Nadathur, Will J. Percival, Florian Beutler, Arnaud de Mattia, Daniel Eisenstein, Daniel Forero-Sanchez, Nelson Padilla, Mathilde Pinon, Vanina Ruhlmann-Kleider, Ariel G. Sánchez, Georgios Valogiannis, Pauline Zarrouk
2023-09-28T15:53:30Z
http://arxiv.org/abs/2309.16539v2
# Sunbird: A simulation-based model for full-shape density-split clustering ###### Abstract Combining galaxy clustering information from regions of different environmental densities can help break cosmological parameter degeneracies and access non-Gaussian information from the density field that is not readily captured by the standard two-point correlation function (2PCF) analyses. However, modelling these density-dependent statistics down to the non-linear regime has so far remained challenging. We present a simulation-based model that is able to capture the cosmological dependence of the full shape of the density-split clustering (DSC) statistics down to intra-halo scales. Our models are based on neural-network emulators that are trained on high-fidelity mock galaxy catalogues within an extended-\(\Lambda\)CDM framework, incorporating the effects of redshift-space, Alcock-Paczynski distortions and models of the halo-galaxy connection. Our models reach sub-percent level accuracy down to 1 \(h^{-1}\)Mpc and are robust against different choices of galaxy-halo connection modelling. When combined with the galaxy 2PCF, DSC can tighten the constraints on \(\omega_{\rm cdm}\), \(\sigma_{8}\), and \(n_{s}\) by factors of 2.9, 1.9, and 2.1, respectively, compared to a 2PCF-only analysis. DSC additionally puts strong constraints on environment-based assembly bias parameters. Our code is made publicly available on Github. keywords: cosmological parameters, large-scale structure of the Universe ## 1 Introduction The 3D clustering of galaxies contains a wealth of information about the contents and evolution of the Universe: from the properties of the early Universe to the nature of dark energy and dark matter, and to information on how galaxies form and evolve. Galaxy clustering provided some of the first evidence of the accelerated Universe (Maddox et al., 1990), helped establish the standard model of cosmology through the detection of baryon acoustic oscillations (Percival et al., 2001; Cole et al., 2005; Eisenstein et al., 2005), and has yielded accurate cosmological constraints (Anderson et al., 2014). Upcoming surveys such as DESI (DESI Collaboration et al., 2016), Euclid (Laureijs et al., 2011), and Roman (Green et al., 2012) will probe unprecedented volumes, enabling more stringent constraints that may reveal inconsistencies challenging the standard cosmological model or our understanding of how galaxies form and evolve. The spatial distribution of galaxies is commonly summarised by its two-point functions, the so-called two-point correlation function or its Fourier space equivalent, the power spectrum. For a Gaussian random field, this compression would be lossless. As the distribution of density fluctuations evolves through gravitational collapse, it becomes non-Gaussian: although overdensities can grow freely, underdensities are always bounded from below, as the density contrast in regions devoid of matter can never go below \(\delta=-1\). As a consequence, the density field develops significant skewness and kurtosis, departing from Gaussianity (Einasto et al., 2021). The induced non-Gaussianity in galaxy clustering renders the correlation function a lossy summary. For this reason, cosmologists have developed a wealth of summary statistics that may be able to extract more relevant information from the 3D clustering of galaxies.
Examples include the three-point correlation function (Slepian and Eisenstein, 2017) or bispectrum (Gil-Marin et al., 2017; Sugiyama et al., 2019; Philcox and Ivanov, 2022), the four-point correlation function (Philcox et al., 2021) or trispectrum (Gualdi et al., 2021), counts-in-cells statistics (Szapudi and Pan, 2004; Klypin et al., 2018; Jamieson and Loverde, 2020; Uhlemann et al., 2020), non-linear transformations of the density field (Neyrinck et al., 2009; Neyrinck, 2011; Wang et al., 2011, 2022), the separate universe approach (Chiang et al., 2015), the marked power spectrum (Massara and Sheth, 2018; Massara et al., 2022), the wavelet scattering transform (Valogiannis and Dvorkin, 2022; Valogiannis and Dvorkin, 2022), void statistics (Hawken et al., 2020; Nadathur et al., 2020; Correa et al., 2020; Woodfinden et al., 2022), k-nearest neighbours (Banerjee and Abel, 2020; Yuan et al., 2023), and other related statistics. Alternatively, one could avoid the use of summary statistics completely and attempt to perform inference at the field level (Lavaux et al., 2019; Schmidt, 2021; Dai and Seljak, 2023, 2022). However, utilising these summary statistics has been limited by our inability to model them analytically over a wide range of scales, difficulty compressing their high dimensionality, or due to a lack of accurate perturbation theory predictions or the difficulty in modelling the effect that observational systematics have on arbitrary summary statistics (Yuan et al., 2023). This has now drastically changed due to i) _advancements in simulations_: we now can run large suites of high-resolution simulations in cosmological volumes DeRose et al. (2019); Nishimichi et al. (2019); Maksimova et al. (2021), which enable us to forward model the relation between the cosmological parameters and the summary statistics with greater accuracy; and ii) _progress in machine learning techniques_ that allow us to perform inference on any set of parameters, \(\theta\), given any summary statistic, \(s\), provided we can forward model the relation \(s(\theta)\) for a small set of \(\theta\) values (Cranmer et al., 2020). Examples of the latter in cosmology are emulators, that model \(s(\theta)\) mainly through neural networks or Gaussian processes (Heitmann et al., 2009; DeRose et al., 2019; Zhai et al., 2023) and assume a Gaussian likelihood, or density estimators used to model directly the posterior distribution \(p(\theta|s(x))\)(Jeffrey et al., 2020; Hahn et al., 2022) and make no assumptions about the likelihood's distribution. While these advancements allow us to constrain cosmology with remarkable accuracy, our primary focus extends beyond just finding the most informative summary statistics. We are interested in statistics that could lead to surprising results revising our understanding of how the Universe formed and evolved. Notably, models beyond Einstein gravity that add degrees of freedom in the gravitational sector must screen themselves from local tests of gravity, and can therefore only deviate from general relativity in regions of low density or low gravitational potential (Joyce et al., 2015; Hou et al., 2023). Therefore, surprises in this direction could be found in statistics that explore the dependency of galaxy clustering to different density environments. Moreover, previous work (Paillas et al., 2021; Paillas et al., 2023; Bonnaire et al., 2022) has demonstrated that these statistics also have a large constraining power on the cosmological parameters. 
Although we have mentioned above that we can now run large suites of simulations in cosmological volumes, this is only true for N-body, dark matter-only simulations. We still need a flexible and robust galaxy-dark matter connection model that allows us to populate dark matter simulations with realistic galaxy distributions. In this work, we employ halo occupation distribution (HOD) models, which use empirical relations to describe the distribution of galaxies in a halo based on the halo's mass and other secondary halo properties. In particular, recent studies have found the halo local density to be a good tracer of dark matter halo secondary properties, both in hydrodynamical simulations (Hadzhiyska et al., 2020) and semi-analytical models of galaxy formation (Xu et al., 2021). Here we present a full-shape theory model for galaxy clustering in different density environments that can be used to infer the cosmological parameters from observations in a robust manner. In a companion paper (Paillas et al., 2023) we present the first cosmological constraints resulting from density-split clustering using the model presented in this manuscript that we apply to the BOSS DR12 CMASS data (Dawson et al., 2016; Reid et al., 2015). The paper is organised as follows. We define the observables and how we model them in Section 2. In Section 3, we demonstrate that the model can accurately recover the parameters of interest in a range of mock galaxy observations. We discuss our results and compare them to previous findings in the literature in Section 4. ## 2 A simulation-based model for density split statistics We are interested in modelling the connection between the cosmological parameters, \(\mathcal{C}\), the additional parameters describing how galaxies populate the cosmic web of dark matter, \(\mathcal{G}\), and clustering as a function of density environment, \(\mathbf{X}^{\text{obs}}\). To solve the inverse problem and constrain \(\mathcal{C}\) and \(\mathcal{G}\) from data, we could use simulated samples drawn from the joint distribution \(p(\mathcal{C},\mathcal{G},\mathbf{X}^{\text{obs}})\) to either, i) model the likelihood of the observation \(p\left(\mathbf{X}^{\text{obs}}|C,\mathcal{G}\right)\), subsequently sampling its posterior using Monte Carlo methods, or ii) directly model the posterior distribution \(p(C,\mathcal{G}|\mathbf{X}^{\text{obs}})\), as demonstrated in Jeffrey et al. (2020); Hahn et al. (2022), thus circumventing assumptions about the likelihood's functional form. Due to the Central Limit Theorem, we anticipate the likelihood of galaxy pair counts to approximate a Gaussian distribution. In this section, we validate that this holds true specifically for density-split statistics and elucidate how simulations can model its mean and covariance. Additionally, modelling the likelihood implies that we can use it as a measure of goodness of fit, vary the priors of the analysis at will, and combine our constraints with those of other independent observables. In this section, we will proceed as follows: we begin by detailing our method for estimating density-dependent clustering. Subsequently, we discuss our approach for simulating the observable for a CMASS-like mock galaxy sample. We conclude by introducing our neural network model of the observable's likelihood. 
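To make the Gaussian-likelihood route explicit, the sketch below (Python/numpy) evaluates the log-likelihood of an observed data vector given a model prediction and a covariance matrix estimated from mocks. The `emulator_mean` callable is a stand-in for the neural-network model introduced later, and the Hartlap factor is included as a standard, assumed correction for a mock-estimated inverse covariance:

```python
import numpy as np

def gaussian_log_likelihood(theta, x_obs, mocks, emulator_mean):
    """Gaussian log-likelihood of x_obs (theta-independent normalisation dropped).

    theta         : cosmological + galaxy-halo connection parameters (C, G)
    x_obs         : measured summary statistic (e.g. DSC + 2PCF multipoles)
    mocks         : (n_mocks, n_bins) array of mock data vectors
    emulator_mean : callable returning the model prediction at theta
    """
    model = emulator_mean(theta)
    cov = np.cov(mocks, rowvar=False)
    n_mocks, n_bins = mocks.shape
    # Hartlap correction for the bias of the inverse of a mock-estimated covariance
    icov = (n_mocks - n_bins - 2) / (n_mocks - 1) * np.linalg.inv(cov)
    diff = x_obs - model
    return -0.5 * diff @ icov @ diff
```

This log-likelihood can then be sampled with standard Monte Carlo machinery to obtain posteriors on \(\mathcal{C}\) and \(\mathcal{G}\).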
### The observables #### 2.1.1 Two-point clustering The information contained on 3D galaxy maps is commonly summarised in terms of the two-point correlation function (2PCF) \(\xi^{\mathcal{BB}}(r)\) (or the power spectrum in Fourier space), which measures the excess probability \(\mathrm{d}P\) of finding a pair of galaxies separated by a scale \(\mathbf{r}\) within a volume \(\mathrm{d}V\), relative to an unclustered Poisson distribution, \[\mathrm{d}P=\overline{n}\left[1+\xi^{\mathcal{BB}}(\mathbf{r})\right]\mathrm{d}V\,, \tag{1}\] where \(\overline{n}\) denotes the mean galaxy density. While the spatial distribution of galaxies is isotropic in real space, there are two main sources of distortions that induce anisotropies in the clustering measured from galaxy surveys: redshift-space distortions (RSD) and Alcock-Paczynski (AP) distortions, which are dynamical and geometrical in nature, respectively. Redshift-space distortions arise when converting galaxy redshifts to distances ignoring the peculiar motion of the galaxies. A pair of galaxies that is separated by a vector \(\mathbf{r}\) in real space will instead appear separated by a vector \(\mathbf{s}\) in redshift space (to linear order in velocity): \[\mathbf{s}=\mathbf{r}+\frac{\mathbf{v}\cdot\mathbf{\hat{x}}}{a(z)H(z)}\mathbf{\hat{x}}\,, \tag{2}\] where \(\mathbf{\hat{x}}\) is the unit vector associated with the observer's line of sight, \(\mathbf{v}\) is the peculiar velocity of the galaxy, \(a(z)\) is the scale factor and \(H(z)\) is the Hubble parameter. Alcock-Paczynski distortions arise when the cosmology that is adopted to convert angles and redshifts to distances, denoted as the fiducial cosmology, differs from the true cosmology of the Universe. This effect is partially degenerate with RSD. For close pairs, the true pair separation is related to the observed pair separation via the parameters \(q_{\perp}\) and \(q_{\parallel}\), which distort the components of the pair separation across and along the observer's line of sight, \[r_{\perp}=q_{\perp}r_{\perp}^{\mathrm{fid}}\,;\,\,\,r_{\parallel}=q_{\parallel}r_{\parallel}^{\mathrm{fid}}\,, \tag{3}\] where the \({}^{\mathrm{fid}}\) superscript represents the separations measured in the fiducial cosmology. The distortion parameters are given by \[q_{\parallel}=\frac{D_{\mathrm{H}}(z)}{D_{\mathrm{H}}^{\mathrm{fid}}(z)}\,;\,\,\,q_{\perp}=\frac{D_{\mathrm{M}}(z)}{D_{\mathrm{M}}^{\mathrm{fid}}(z)}\,, \tag{4}\] where \(D_{\mathrm{M}}(z)\) and \(D_{\mathrm{H}}(z)\) are the comoving angular diameter and Hubble distances to redshift \(z\), respectively. Due to RSD and AP, the 2PCF is no longer isotropic but depends on \(s\), the pair separation, and \(\mu\), the cosine of the angle between the galaxy pair separation vector and the mid-point line of sight. The two-dimensional correlation function can be decomposed into a series of multipole moments, \[\xi_{\ell}(s)=\frac{2\ell+1}{2}\int_{-1}^{1}\mathrm{d}\mu\,\xi(s,\mu)\mathrm{P}_{\ell}(\mu), \tag{5}\] where \(\mathrm{P}_{\ell}\) is the \(\ell\)-th order Legendre polynomial. #### 2.1.2 Density-split clustering The density-split method (Paillas et al., 2023) characterises galaxy clustering in environments of different local densities. Instead of calculating the two-point clustering of the whole galaxy sample at once, one first splits a collection of randomly placed query points in different bins or 'quantiles', according to the local galaxy overdensity at their locations. The two-point clustering is then calculated for each environment separately, and all this information is then combined in a joint likelihood analysis. The algorithm can be summarised as follows:

1. Redshift-space galaxy positions are assigned to a rectangular grid with a cell size \(R_{\rm cell}\), and the overdensity field is estimated using a cloud-in-cell interpolation scheme. The field is smoothed using a Gaussian filter with radius \(R_{s}\), which is performed in Fourier space for computational efficiency.
2. A set of \(N_{\rm query}\) random points is divided into \(N_{Q}\) density bins, or _quantiles_, according to the overdensity measured at each point.
3. Two summary statistics are calculated for each quantile: the autocorrelation function (DS ACF) of the query points in each quantile, and the cross-correlation function (DS CCF) between the quantiles and the entire redshift-space galaxy field. These correlation functions are then decomposed into multipoles (equation 5).
4. The collection of correlation functions of all quantiles but the middle one is combined in a joint data vector, which is then fitted in a likelihood analysis to extract cosmological information.

Figure 1: A visualisation of the density-split clustering data vectors from the AbacusSummit simulations, along with the emulator prediction at the parameter values of the simulation. The lowest density quintile is shown in blue, Q\({}_{0}\), and the highest one in red, Q\({}_{4}\). Markers and solid lines show the data vectors and the emulator predictions, respectively, whereas the shaded area represents the emulator predicted uncertainty. Left: multipoles of the quintile-galaxy cross-correlation functions. Middle: multipoles of the quintile autocorrelation functions. Right: multipoles of the two-point correlation function. The upper and lower panels show the monopole and quadrupole moments, respectively. We also display the difference between the model and the data, in units of the data error. Each colour corresponds to a different density quintile.

In Fig. 1, we show the different density split summary statistics for five quantiles and \(R_{s}=10\,h^{-1}\,\)Mpc, as measured in the AbacusSummit simulations presented in Section 2.2.1. Note that the smoothing scale can be varied depending on the average density of tracers in a given survey; here we restrict ourselves to a smoothing scale appropriate for a CMASS-like survey. In the first column, we show the CCF of the different density quantiles and the entire galaxy sample. As seen above, the amplitude of the different correlations reflects the non-Gaussian nature of the density PDF: the most underdense regions, \(\rm Q_{0}\), are always constrained from below as voids cannot be emptier than empty (\(\delta=-1\)), whereas the density contrast in dense regions, \(\rm Q_{4}\), can go well beyond 1, breaking the symmetry of the correlations. Around the scale of 100 \(h^{-1}\,\)Mpc we can distinguish the signal coming from the baryon acoustic oscillations for all density quantiles, both for the cross- and auto-correlations. Regarding the quadrupole moments, the anisotropy found is a consequence of the RSD effect on the galaxy positions, which also introduces an additional anisotropy in the distribution of quantiles when these are identified using the galaxy redshift-space distribution, as shown in (Paillas et al., 2023).
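As a concrete illustration of steps 1-2 above, the following sketch (Python/numpy) splits a set of query points into density quantiles. It is simplified relative to the procedure described in the text: it uses a nearest-grid-point assignment and scipy's real-space Gaussian filter instead of cloud-in-cell interpolation and Fourier-space smoothing, and all variable names are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def split_queries_into_quantiles(galaxies, queries, boxsize, r_smooth,
                                 n_grid=256, n_quantiles=5):
    # galaxies, queries: (N, 3) redshift-space positions in a periodic box of side boxsize
    cell = boxsize / n_grid
    # Step 1: overdensity on a grid (nearest-grid-point here; the text uses cloud-in-cell),
    # then Gaussian smoothing with radius R_s (done in real space here for brevity).
    idx = np.floor(galaxies / cell).astype(int) % n_grid
    counts = np.zeros((n_grid, n_grid, n_grid))
    np.add.at(counts, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    delta = gaussian_filter(counts / counts.mean() - 1.0,
                            sigma=r_smooth / cell, mode="wrap")
    # Step 2: read the smoothed overdensity at each query point and split into quantiles.
    qidx = np.floor(queries / cell).astype(int) % n_grid
    delta_q = delta[qidx[:, 0], qidx[:, 1], qidx[:, 2]]
    edges = np.quantile(delta_q, np.linspace(0.0, 1.0, n_quantiles + 1))
    labels = np.digitize(delta_q, edges[1:-1])
    return [queries[labels == i] for i in range(n_quantiles)]
```

The quantile autocorrelations and quantile-galaxy cross-correlations of steps 3-4 can then be measured from these point sets with any standard pair-counting code.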
### Forward modelling the galaxy observables

In this subsection, we will first present the suite of dark-matter-only N-body simulations used in this work to model the cosmological dependence of density-split clustering, and will later present the galaxy-halo connection model we adopt to build CMASS-like mock galaxy catalogues.

#### 2.2.1 The AbacusSummit simulations

AbacusSummit (Maksimova et al., 2021) is a suite of cosmological N-body simulations that were run with the Abacus N-body code (Garrison et al., 2019, 2021), designed to meet and exceed the simulation requirements of DESI (Levi et al., 2019). The base simulations follow the evolution of \(6912^{3}\) dark matter particles in a \((2\,h^{-1}{\rm Gpc})^{3}\) volume, corresponding to a mass resolution of \(2\times 10^{9}\,{\rm M}_{\odot}/h\). In total, the suite spans 97 different cosmologies, with varying

\[\mathcal{C}=\{\omega_{\rm cdm},\omega_{b},\sigma_{8},n_{s},{\rm d}n_{s}/{\rm d}\ln k,N_{\rm eff},w_{0},w_{a}\}, \tag{6}\]

where \(\omega_{\rm cdm}=\Omega_{\rm c}h^{2}\) and \(\omega_{\rm b}=\Omega_{\rm b}h^{2}\) are the physical cold dark matter and baryon densities, \({\rm d}n_{s}/{\rm d}\ln k\) is the running of the spectral tilt, \(N_{\rm eff}\) is the effective number of ultra-relativistic species, \(w_{0}\) is the present-day dark energy equation of state, and \(w_{a}\) captures the time evolution of the dark energy equation of state. The simulations assume a flat spatial curvature, and the Hubble constant \(H_{0}\) is calibrated to match the Cosmic Microwave Background acoustic scale \(\theta_{*}\) to the Planck 2018 measurement. In this study, we focus on the following subsets of the AbacusSummit simulations:

- **c000** Planck 2018 \(\Lambda\)CDM base cosmology (Planck Collaboration et al., 2020), corresponding to the mean of the base_plikHM_TTTEEE_lowl_lowE_lensing likelihood. There are 25 independent realizations of this cosmology.
- **c001-c004** Secondary cosmologies, including a low \(\omega_{\rm cdm}\) choice (WMAP7, Komatsu et al., 2011), a \(w\)CDM choice, a high \(N_{\rm eff}\) choice, and a low \(\sigma_{8}\) choice.
- **c013** Cosmology that matches the Euclid Flagship2 \(\Lambda\)CDM (Castander et al., in preparation).
- **c100-c126** A linear derivative grid that provides pairs of simulations with small negative and positive steps in an 8-dimensional cosmological parameter space.
- **c130-c181** An emulator grid around the base cosmology that provides a wider coverage of the cosmological parameter space.

Note that all the simulations in the emulator grid have the same phase seed. The parameter ranges in the emulator grid are shown in Table 1. Moreover, we use a smaller set of 1643 N-body simulations denoted as AbacusSmall to estimate covariance matrices. These simulations are run with the same mass resolution as that of AbacusSummit in \(500\,h^{-1}\)Mpc boxes, with \(1728^{3}\) particles and varying phase seeds. Group finding is done on the fly, using a hybrid Friends-of-Friends/Spherical Overdensity algorithm, dubbed CompaSO (Hadzhiyska et al., 2021).
We use dark matter halo catalogues from snapshots of the simulations at \(z=0.5\) and populate them with galaxies using the extended Halo Occupation Distribution framework presented in Sect. 2.2.2.

#### 2.2.2 Modelling the galaxy-halo connection

We model how galaxies populate the cosmic web of dark matter using the Halo Occupation Distribution (HOD) framework, which populates dark matter haloes with galaxies in a probabilistic way, assuming that the expected number of galaxies in each halo correlates with some set of halo properties, the main one being halo mass. In the base halo model (Zheng et al., 2007), the average number of central galaxies in a halo of mass \(M\) is given by

\[\langle N_{\rm c}\rangle(M)=\frac{1}{2}\left(1+{\rm erf}\left(\frac{\log M-\log M_{\rm cut}}{\sqrt{2}\sigma}\right)\right)\,, \tag{7}\]

where \({\rm erf}(x)\) denotes the error function, \(M_{\rm cut}\) is the minimum mass required to host a central, and \(\sigma\) is the slope of the transition from having zero to one central galaxy. The average number of satellite galaxies is given by

\[\langle N_{\rm s}\rangle(M)=\langle N_{\rm c}\rangle(M)\left(\frac{M-\kappa M_{\rm cut}}{M_{1}}\right)^{\alpha}\,, \tag{8}\]

where \(\kappa M_{\rm cut}\) gives the minimum mass required to host a satellite, \(M_{1}\) is the typical mass that hosts one satellite, and \(\alpha\) is the power-law index for the number of galaxies. Note that these particular functional forms have been developed for the clustering of luminous red galaxies (LRGs) and should be modified for other tracers such as emission line galaxies (ELGs). Alternatively, one could model the connection between dark matter halos and galaxies through more complex models of galaxy formation such as semi-analytical models or hydrodynamical simulations. In these scenarios, the simplified assumptions of HOD models whose occupation parameters solely depend on halo mass have been found to break down. In particular, recent studies have found the halo local density to be a good tracer of dark matter halo secondary properties that control galaxy occupation, both in hydrodynamical simulations (Hadzhiyska et al., 2020) and semi-analytical models of galaxy formation (Xu et al., 2021). There is however no direct observational evidence of this effect so far, and we are interested in using density split statistics to more accurately constrain the role that environment plays in defining the halo-galaxy connection.

In this work, we implement the HOD modelling using AbacusHOD (Yuan et al., 2021), which is a highly efficient Python package that contains a wide range of HOD variations. In AbacusHOD, the environment-based secondary bias parameters, \(B_{\rm cen}\) & \(B_{\rm sat}\), effectively modulate the mass of a dark matter halo during the HOD assignment, so that it depends on the local matter overdensity \(\delta_{m}\),

\[\log_{10}M_{\rm cut}^{\rm eff}=\log_{10}M_{\rm cut}+B_{\rm cen}(\delta_{m}-0.5)\]
\[\log_{10}M_{1}^{\rm eff}=\log_{10}M_{1}+B_{\rm sat}(\delta_{m}-0.5)\;. \tag{9}\]

Here, \(\delta_{m}\) is defined as the mass density within a \(5\,h^{-1}\,\)Mpc tophat filter from the halo centre, without considering the halo itself. More details about the exact implementation of this extension can be found in Yuan et al. (2021). Moreover, we include velocity bias parameters to increase the flexibility of the model to describe the dynamics of galaxies within dark matter haloes, which ultimately influence galaxy clustering through redshift-space distortions.
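Before turning to the velocity bias parameters, the following sketch spells out equations 7-9 in code. It is a schematic re-implementation for clarity rather than the AbacusHOD package itself: the function names, the unit conventions and the example parameter values are ours, and \(\delta_{m}\) is treated here as already normalised to lie between 0 and 1.

```python
import numpy as np
from scipy.special import erf

def n_central(log_m, log_m_cut, sigma):
    """Mean number of central galaxies, equation 7 (halo masses as log10 M)."""
    return 0.5 * (1.0 + erf((log_m - log_m_cut) / (np.sqrt(2.0) * sigma)))

def n_satellite(log_m, log_m_cut, log_m1, kappa, alpha, sigma):
    """Mean number of satellite galaxies, equation 8."""
    m, m_cut, m1 = 10.0**log_m, 10.0**log_m_cut, 10.0**log_m1
    frac = np.clip(m - kappa * m_cut, 0.0, None) / m1  # no satellites below kappa * M_cut
    return n_central(log_m, log_m_cut, sigma) * frac**alpha

def shifted_mass_scales(log_m_cut, log_m1, b_cen, b_sat, delta_m):
    """Environment-based modulation of the mass scales, equation 9."""
    return log_m_cut + b_cen * (delta_m - 0.5), log_m1 + b_sat * (delta_m - 0.5)

# Example: occupations of a 10^13.5 M_sun/h halo in an overdense environment
log_m_cut_eff, log_m1_eff = shifted_mass_scales(12.8, 14.0, b_cen=-0.3, b_sat=-0.2, delta_m=0.8)
print(n_central(13.5, log_m_cut_eff, sigma=0.5))
print(n_satellite(13.5, log_m_cut_eff, log_m1_eff, kappa=0.5, alpha=1.0, sigma=0.5))
```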
There is in fact observational evidence pointing towards central galaxies having a larger velocity dispersion than their host dark matter halos for CMASS galaxies, which are dominated by LRGs (Guo et al., 2014; Yuan et al., 2021); evidence for other tracers is not established yet. In the AbacusHOD implementation, the positions and velocities of central galaxies are matched to the most-bound particle in the halo, whereas the satellites follow the positions and velocities of randomly selected dark matter particles within the halo. The velocity bias parameters, \(\alpha_{\rm vel,c}\) & \(\alpha_{\rm vel,s}\), allow for offsets in these velocities, such that the centrals do not perfectly track the velocity of the halo centre, and the satellites do not exactly match the dark matter particle velocities. The exact velocity match is recovered when \(\alpha_{\rm vel,c}=0\) and \(\alpha_{\rm vel,s}=1\). The extended-HOD framework used in this study is then comprised of 9 parameters:

\[\mathcal{G}=\{M_{\rm cut},M_{1},\sigma,\alpha,\kappa,\alpha_{\rm vel,c},\alpha_{\rm vel,s},B_{\rm cen},B_{\rm sat}\}\;. \tag{10}\]

Note that we are here not including additional parameters that may help marginalize over the effect that baryons have on halo density profiles. Although this has been shown to be a small effect (Bose et al., 2019), Yuan et al. (2021) presented an extended parametrisation that could be used to marginalize over this effect.

#### 2.2.3 Generating mock galaxy catalogues

We generate a Latin hypercube with 8500 samples from the 9-dimensional HOD parameter space defined in equation 10, with parameter ranges as listed in Table 1. Each of the 85 cosmologies is assigned 100 HOD variations from the Latin hypercube, which are then used to generate mock galaxy catalogues using AbacusHOD. This number of HOD variations was chosen as a compromise between reducing the emulator error and limiting the computational cost of these measurements. In the future, we plan to develop a more efficient HOD sampling strategy to re-sample those HOD parameter values where the emulator error is large.

Our target galaxy sample is the DR12 BOSS CMASS galaxy sample (Reid et al., 2015) at \(0.45<z<0.6\). If the resulting number density of an HOD catalogue is larger than the observed number density from CMASS, \(n_{\rm gal}\approx 3.5\times 10^{-4}\;(h/{\rm Mpc})^{3}\), we invoke an incompleteness parameter \(f_{\rm ic}\) and randomly downsample the catalogue to match the target number density. The resulting HOD catalogues consist of the real-space galaxy positions and velocities. Under the distant-observer approximation, we map the positions of galaxies to redshift space by perturbing their coordinates along the line of sight with their peculiar velocities along the same direction (equation 2). For each mock catalogue, we build three redshift-space counterparts by adopting three different lines of sight, taken to be the \(x\), \(y\) and \(z\) axes of the simulation, which can be averaged over in the clustering analysis to increase the signal-to-noise ratio of the correlation functions (Smith et al., 2020).
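A minimal sketch of the distant-observer mapping described above (equation 2) is given below, with periodic wrapping along the chosen axis; the function name and the placeholder values of \(a(z)\) and \(H(z)\) are ours.

```python
import numpy as np

def to_redshift_space(positions, velocities, boxsize, a, hubble, los=2):
    """Shift comoving positions along one box axis by v_los / (a H(z)).

    positions : (N, 3) comoving coordinates in Mpc/h.
    velocities : (N, 3) peculiar velocities in km/s.
    hubble : H(z) expressed in km/s per Mpc/h, so that the shift is in Mpc/h.
    los : index of the axis taken as the line of sight (0, 1 or 2).
    """
    s = positions.copy()
    s[:, los] += velocities[:, los] / (a * hubble)
    s[:, los] %= boxsize  # periodic boundary conditions
    return s

# Example: the three redshift-space counterparts of one mock at z = 0.5,
# one per box axis (a and H(z) below are placeholder values).
rng = np.random.default_rng(0)
positions = rng.uniform(0.0, 2000.0, size=(1_000, 3))
velocities = rng.normal(0.0, 300.0, size=(1_000, 3))
mocks_z = [to_redshift_space(positions, velocities, 2000.0, a=1.0 / 1.5, hubble=132.0, los=axis)
           for axis in range(3)]
```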
\begin{table}
\begin{tabular}{l l l c}
\hline \hline
 & Parameter & Interpretation & Prior range \\
\hline
Cosmology & \(\omega_{\rm cdm}\) & Physical cold dark matter density & [0.103, 0.140] \\
 & \(\omega_{b}\) & Physical baryon density & [0.0207, 0.024] \\
 & \(\sigma_{8}\) & Amplitude of matter fluctuations in \(8\,h^{-1}\,\)Mpc spheres & [0.687, 0.938] \\
 & \(n_{s}\) & Spectral index of the primordial power spectrum & [0.901, 1.025] \\
 & \({\rm d}n_{s}/{\rm d}\ln k\) & Running of the spectral index & [-0.038, 0.038] \\
 & \(N_{\rm eff}\) & Number of ultra-relativistic species & [2.1902, 3.9022] \\
 & \(w_{0}\) & Present-day dark energy equation of state & [-1.27, -0.70] \\
 & \(w_{a}\) & Time evolution of the dark energy equation of state & [-0.628, 0.621] \\
\hline
HOD & \(M_{\rm cut}\) & Minimum halo mass to host a central & [12.4, 13.3] \\
 & \(M_{1}\) & Typical halo mass to host one satellite & [13.2, 14.4] \\
 & \(\log\sigma\) & Slope of the transition from hosting zero to one central & [-3.0, 0.0] \\
 & \(\alpha\) & Power-law index for the mass dependence of the number of satellites & [0.7, 1.5] \\
 & \(\kappa\) & Parameter that modulates the minimum halo mass to host a satellite & [0.0, 1.5] \\
 & \(\alpha_{\rm vel,c}\) & Velocity bias for centrals & [0.0, 0.5] \\
 & \(\alpha_{\rm vel,s}\) & Velocity bias for satellites & [0.7, 1.3] \\
 & \(B_{\rm cen}\) & Environment-based assembly bias for centrals & [-0.5, 0.5] \\
 & \(B_{\rm sat}\) & Environment-based assembly bias for satellites & [-1.0, 1.0] \\
\hline \hline
\end{tabular}
\end{table}

Table 1: Definitions and ranges of the cosmological and galaxy-halo connection parameters for the simulations used to train our emulator.

Since the end goal of our emulator is to be able to model galaxy clustering from observations, we adopt the same fiducial cosmology as in our CMASS clustering measurements (Paillas et al., 2023),

\[\omega_{\rm cdm}=0.12\quad\omega_{\rm b}=0.02237\quad h=0.6736\]
\[\sigma_{8}=0.807952\quad n_{s}=0.9649\,, \tag{11}\]

and infuse the mocks with the Alcock-Paczynski distortions that would be produced if we were to analyse each mock with this choice of fiducial cosmology. We do so by scaling the galaxy positions1 and the simulation box dimensions with the distortion parameters from equation 4, which depend on the adopted fiducial cosmology and the true cosmology of each simulation. Since, in general, \(q_{\perp}\) and \(q_{\parallel}\) can be different, the box geometry can become non-cubic, but it still maintains the periodicity along the different axes. This is taken into account when calculating the clustering statistics, as explained in the next section.

Footnote 1: These distortions would have been naturally produced if we had started from galaxy catalogues in sky coordinates, and used our fiducial cosmology to convert them to comoving cartesian coordinates. In our case, we have to manually distort the galaxy positions, since we are already starting from the comoving box.

#### 2.2.4 Generating the training sample

We run the density-split clustering pipeline on the HOD mocks using our publicly available code2. Redshift-space galaxy positions are mapped onto a rectangular grid of resolution \(R_{\rm cell}=5\,h^{-1}\)Mpc, smoothed with a Gaussian kernel of width \(R_{s}=10\,h^{-1}\)Mpc. The overdensity field3 is sampled at \(N_{\rm query}\) random locations, where \(N_{\rm query}\) is equal to five times the number of galaxies in the box.
We split the query positions into five quantiles according to the overdensity at each location. We plan to explore the constraining power of the statistic based on different values of the smoothing scale and the number of quantiles in future work. Footnote 2: [https://github.com/epaillas/densitysplit](https://github.com/epaillas/densitysplit). Footnote 3: The galaxy overdensity in each grid cell depends on the number of galaxies in the cell, the average galaxy number density, and the total number of grid cells. As we are working with a rectangular box with periodic boundary conditions, the average galaxy number density can be calculated analytically, which allows us to convert the galaxy number counts in each cell to an overdensity. When working with galaxy surveys, this has to be calculated using random catalogues that match the survey window function. We measure the DS autocorrelation and cross-correlation functions of each DS quintic in bins of \(\mu\) and \(s\) using pycorr, which is a wrapper around a modified version of CornfFunc (Sinha and Garrison, 2020). We use 241 \(\mu\) bins from \(-1\) to \(1\), and radial bins of different widths depending on the scale: 1 Mpc/h bins for \(0<s<4\)\(h^{-1}\)Mpc, 3 Mpc/h bins for \(4<s<30\)\(h^{-1}\)Mpc, and 5 Mpc/h bins for \(30<s<150\)\(h^{-1}\)Mpc. Additionally, we measure the galaxy 2PCF adopting these same settings. All the correlation functions are then decomposed into their multipole moments (equation 5). In this analysis, we decided to omit the hexadecapole due to its low signal-to-noise ratio, restricting the analysis to the monopole and quadrupole. The multipoles are finally averaged over the three lines of sight. Due to the addition of AP distortions, whenever the true cosmology of a mock does not match our fiducial cosmology, the boxes will have non-cubic dimensions while still maintaining the periodicity along the three axes. Both the densitysplit and pycorr codes can handle non-cubic periodic boundary conditions. In the case of densitysplit, we choose to keep the resolution of the rectangular grid fixed, so that \(R_{\rm cell}=5\,h^{-1}\)Mpc remains fixed irrespectively of the box dimensions (which, as a consequence, can change the number of cells that are required to span the different boxes). The smoothing scale \(R_{s}\) is also kept fixed to \(10\,h^{-1}\)Mpc, but since the underlying galaxy positions are AP-distorted, this mimics the scenario we would encounter in observations, where we make a choice of smoothing kernel and apply it to the distorted galaxy overdensity field. An example of the density split summary statistics for c000 and one of the sampled HOD parameters from the latin hypercube is shown in Fig. 1. ### Defining the observable's likelihood The data vector for density-split clustering is the concatenation of the monopole and quadrupole of the auto- and cross-correlation functions of quantiles \(\rm Q_{0},Q_{1},Q_{3},\) and \(\rm Q_{4}\). In the case of the galaxy 2PCF, it is simply the concatenation of the monopole and quadrupole. In Appendix A, we show that the likelihood of these data vectors is well approximated by a Gaussian distribution as also demonstrated in Paillas et al. (2023). 
We therefore define the log-likelihood as \[\log\mathcal{L}(\mathbf{X}^{\rm obs}|C,\mathcal{G}) =\left(\mathbf{X}^{\rm obs}-\mathbf{X}^{\rm theo}\left(C,\mathcal{G }\right)\right)\] \[\mathbf{C}^{-1}\left(\mathbf{X}^{\rm obs}-\mathbf{X}^{\rm theo} \left(C,\mathcal{G}\right)\right)^{\top}\,, \tag{12}\] where \(\mathbf{X}^{\rm obs}\) is the observed data vector, \(\mathbf{X}^{\rm theo}\) is the expected theoretical prediction dependent on \(\mathcal{C}\), the cosmological parameters, and \(\mathcal{G}\), the parameters describing how galaxies populate the cosmic web, referred to as galaxy bias parameters throughout this paper, and \(\mathbf{C}\) the theoretical covariance of the summary statistics. We will here assume that the covariance matrix is independent of \(\mathcal{C}\) and \(\mathcal{G}\), and use simulations with varying random seeds to estimate it. This assumption has been shown to have a negligible impact in parameter estimation for two-point functions (Kodwani et al., 2019), although it will need to be revised as the statistical precision of future surveys increases. In the following section, we demonstrate how we can use neural networks to model the mean relation between cosmological and HOD parameters and the density-split statistics in the generated galaxy mocks. #### 2.3.1 Emulating the mean with neural networks We split the suite of mocks of different cosmologies (and their corresponding HOD variations) into training, validation and test sets. We assign cosmologies c000, c001, c002, c003, c004 and c013 to the test set, while 80 per cent of the remaining cosmologies are randomly assigned to the training and 20 per cent to the validation set. See Section 2.2.1 for the definition of the different cosmologies. We construct separate neural-network emulators for the galaxy 2PCF, the DS ACF, and the DS CCF. The inputs to the neural network are the cosmological and HOD parameters, normalized to lie between 0 and 1, and the outputs are the concatenated monopole and quadrupole of each correlation function, also normalized to be between 0 and 1. We train fully-connected neural networks with Sigmoid Linear Units as activation functions (Elfwing et al., 2018) and a negative Gaussian log-likelihood as the loss function \[\mathcal{L}(\mathbf{X}|\mu_{\rm pred}(\mathcal{C},\mathcal{G}), \sigma_{\rm pred}(\mathcal{C},\mathcal{G}))\] \[\quad=\frac{1}{n}\sum_{i=1}^{n}\left(\frac{(X_{i}-\mu_{\rm pred}( \mathcal{C},\mathcal{G}))^{2}}{2\sigma_{\rm pred}(\mathcal{C},\mathcal{G})^{2} }+\log(\sigma_{\rm pred}(C,\mathcal{G})^{2})+\frac{1}{2}\log(2\pi)\right)\,.\] where \(\mu_{\rm pred}(C,\mathcal{G})\), the mean of the log likelihood, emulates the theory predictions from the N-body simulations, \(\sigma_{\rm pred}(C,\mathcal{G})\) models the network's uncertainty in its prediction, and \(n\) is the batch size. We use the AdamW optimisation algorithm to optimise the weights of the neural network, together with a batch size of 256. In contrast to Adam, AdamW includes L2 regularisation to ensure that large weights are only allowed when they significantly reduce the loss function. To further prevent overfitting, given the limited size of our dataset, we also introduce a dropout factor (Srivastava et al., 2014). Finally, to improve the model's performance and reduce training time, we decrease the learning rate by a factor of 10 every 5 epochs over which the validation loss does not improve, until the minimum learning rate of \(10^{-6}\) is reached. 
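For reference, the per-batch loss of equation 13 can be written as a stand-alone function; this is an illustrative numpy version (during training it acts on the network outputs), and the argument names are ours.

```python
import numpy as np

def gaussian_nll(x, mu_pred, sigma_pred):
    """Negative Gaussian log-likelihood of equation 13, averaged over the batch.

    x, mu_pred, sigma_pred : arrays of shape (batch, n_bins) with the measured
    data vectors, the predicted means and the predicted uncertainties.
    """
    per_element = ((x - mu_pred) ** 2 / (2.0 * sigma_pred**2)
                   + np.log(sigma_pred**2)
                   + 0.5 * np.log(2.0 * np.pi))
    return per_element.mean()

# Example with placeholder predictions for a batch of 256 data vectors of 648 bins
rng = np.random.default_rng(1)
x = rng.normal(size=(256, 648))
loss = gaussian_nll(x, mu_pred=np.zeros_like(x), sigma_pred=np.ones_like(x))
```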
We use optuna4 to find the hyperparameters of the neural network that produce the best validation loss. We optimize the following hyperparameters: the learning rate, the weight decay controlling the strength of the L2 regularization, the number of layers, the number of hidden units in each layer, and the dropout rate, over 200 trials. More details related to the neural network architecture and its optimisation can be found on our GitHub repository.5

Footnote 4: [https://github.com/optuna/optuna](https://github.com/optuna/optuna)

Footnote 5: [https://github.com/florpi/subbird](https://github.com/florpi/subbird)

In Section 3, we present an extensive validation of the emulator's accuracy.

#### 2.3.2 Estimating the covariance matrix

The likelihood function in equation 12 requires defining the data vector, the expected theoretical mean, and the covariance matrix of the summary statistics. The total covariance matrix includes contributions from three sources: i) the intrinsic error of the emulator in reproducing simulations with identical phases to those of the training set (\(\mathbf{C_{\rm emu}}\)); ii) the error related to the difference between the fixed-phase simulations used for training and the true ensemble mean (\(\mathbf{C_{\rm sim}}\)); and iii) the error between the observational data and the mean (\(\mathbf{C_{\rm data}}\)),

\[\mathbf{C}=\mathbf{C_{\rm data}}+\mathbf{C_{\rm emu}}+\mathbf{C_{\rm sim}}\,. \tag{14}\]

Because the test sample is small and covers a range of cosmologies, to estimate the contribution from the emulator's error to the covariance matrix we are limited to either assuming a diagonal covariance matrix whose diagonal elements are the emulator's predicted uncertainties as a function of cosmological and HOD parameters, \(\sigma_{\rm pred}(\mathcal{C},\mathcal{G})\), or estimating the emulator error from the test set simulations and ignoring its parameter dependence. For the latter, we compute the difference between measurements from the test set and the emulator predictions, \(\Delta\mathbf{X}=\mathbf{X}^{\rm emu}-\mathbf{X}^{\rm test}\), and we estimate a covariance matrix as

\[\mathbf{C_{\rm emu}}=\frac{1}{n_{\rm test}-1}\sum_{k=1}^{n_{\rm test}}\left(\Delta\mathbf{X}_{k}-\overline{\Delta\mathbf{X}}\right)\left(\Delta\mathbf{X}_{k}-\overline{\Delta\mathbf{X}}\right)^{\top}\,, \tag{15}\]

where the overline denotes the mean across all 600 test set mocks. To estimate \(\mathbf{C_{\rm sim}}\), we do a \(\chi^{2}\) minimisation to choose an HOD catalogue from the fiducial c000 cosmology that matches the density-split multipoles measured from BOSS CMASS (Paillas et al., 2023). We then use those HOD parameters to populate dark matter haloes and measure the multipoles from multiple independent realizations of the small AbacusSummit boxes run with different phases. The covariance is calculated as

\[\mathbf{C_{\rm sim}}=\frac{1}{n_{\rm sim}-1}\sum_{k=1}^{n_{\rm sim}}\left(\mathbf{X}_{k}^{\rm sim}-\overline{\mathbf{X}^{\rm sim}}\right)\left(\mathbf{X}_{k}^{\rm sim}-\overline{\mathbf{X}^{\rm sim}}\right)^{\top}\,, \tag{16}\]

where \(n_{\rm sim}=1643\). Each of these boxes is \(500\,h^{-1}\)Mpc on a side, so we rescale the covariance by a factor of \(1/64\) to match the \((2\,h^{-1}\)Gpc\()^{3}\) volume covered by the base simulations. See Howlett
Figure 2: Correlation matrices of the data and model vectors in our clustering analysis.
\(\mathbf{C_{\rm data}}\) corresponds to errors associated with the sample variance of the data vector, while \(\mathbf{C_{\rm emu}}\) is associated with the systematic or intrinsic error of the model due to an imperfect emulation. The black horizontal and vertical lines demarcate contributions from the three summary statistics included in the data vector: the density-split cross-correlations and autocorrelation functions, and the galaxy two-point correlation function (listed in the same order as they appear along the diagonal of the correlation matrices). & Percival (2017) for an in depth discussion on rescaling the covariance matrix by volume factors. For a volume such as that of CMASS, the contribution of \(\mathbf{C}_{\rm sim}\) will be almost negligible. However, this will not be true for larger datasets such as those from the upcoming DESI galaxy survey (DESI Collaboration et al., 2016). Alternatively, the phase correction routine introduced in Appendix B of Yuan et al. (2022) could be used to reduce this contribution. The calculation of \(\mathbf{C}_{\rm data}\) depends on the sample that is used to measure the data vector. In this work, we estimate it from multiple realisations of the small AbacusSummit boxes, in the same way as we compute \(\mathbf{C}_{\rm sim}\). Thus, in the current setup, \(\mathbf{C}_{\rm data}=\mathbf{C}_{\rm sim}\). When fitting real observations, however, \(\mathbf{C}_{\rm data}\) would have to be estimated from mocks that match the properties of the specific galaxy sample that is being used, or using other methods such as jackknife resampling. Importantly, the volume of AbacusSummit is much larger than the volume of the CMASS galaxy sample that we are targeting, and therefore we are providing a stringent test of our emulator framework. In Figure 2, we show the correlation matrix for both data and emulator. The full data vector, which combines DSC and the galaxy 2PCF, is comprised by 648 bins. This results in covariance matrices with 6482 elements, showing significant (anti) correlations between the different components of the data vector. The horizontal and vertical black lines demarcate the contributions from different summary statistics. Starting from the bottom left, the first block along the diagonal represents the multipoles of the DS CCF, for all four quintiles. The second block corresponds to the DS ACF, and the last block corresponds to the galaxy 2PCF. The non-diagonal blocks show the cross-covariance between these different summary statistics. ## 3 Validating the neural network emulator In this section, we present an exhaustive evaluation of the emulator's accuracy by, i) assessing the network's accuracy at reproducing the test set multipoles, ii) ensuring that the emulator recovers unbiased cosmological constraints when the test set is sampled from the same distribution as the training set, iii) testing the ability of the emulator to recover unbiased cosmological constraints when applied to out-of-distribution data. ### Testing the accuracy of the emulated multipoles We first compare the multipoles measured from the test simulations against the emulator predictions. Figure 1 shows the density-split and the 2PCF multipoles as measured from one of the HOD catalogues corresponding to the **c090** cosmology. The HOD catalogue is chosen among the prior samples to maximise the likelihood of the CMASS dataset presented in Paillas et al. (2023). 
The model predictions, which are overplotted as solid lines, show excellent agreement with the data on a wide range of scales. These theory lines are the emulator prediction for the true cosmology and HOD parameters from the mock catalogue. In the lower sub-panels, we compare the emulator accuracy to the data errors. In this paper, we want to present a stringent test of the emulator and therefore compare its accuracy to that of the AbacusSummit simulations with a volume of (2 \(h^{-1}\)Gpc)\({}^{3}\), which is about 8 times larger than that of the CMASS galaxy sample we are targeting (Paillas et al., 2023). The data errors are estimated from the Figure 3: Median absolute emulator errors in units of the data errors, which are estimated for a volume of 2 \(h^{-1}\)Gpc. We show the monopole ACFs, quadrupole ACFs, monopole CCFs, and quadrupole CCFs in each row. The different density quintiles are shown in different colours. In Appendix B, we show that even though the emulator can be as far as 2\(\sigma\) away from the data for the monopole of quintile-galaxy cross-correlations, these are subpercent errors. covariance boxes of the AbacusSmall simulations and are rescaled to represent the expected errors for a volume of \((2\,h^{-1}{\rm Gpc})^{3}\) as explained in Section 2.3.2. In Fig. 1, we show that the model prediction is mostly within 1-\(\sigma\) of the data vector for this particular example, for both multipoles, and cross-correlations and auto-correlations. For a quantitative assessment of the emulator accuracy in predicting multipoles over a range of cosmological parameters, we show in Fig. 3 the median absolute emulator error (taken to be the difference between the prediction and the test data), calculated across the entire test sample, in units of the data errors. The errors always lie within 2-\(\sigma\) of the errors of the data for all scales and summary statistics, and peak at around the smoothing scale. In Appendix B, we show a similar version of this plot where instead of rescaling the vertical axis by the errors of the data, we express everything in terms of the fractional emulator error. While the monopoles of all different density-split summary statistics are accurate within 5%, and mostly well within 1% on small scales, the quadrupoles tend to zero on very small scales, blowing up the fractional error. Among all the multipoles, the error is generally larger for the monopole of the DS cross-correlation functions. This is in part due to the sub-percent errors on the data vector below scales of \(\sim 40\,h^{-1}{\rm Mpc}\), but also due to the fact that the sharp transition of the cross-correlation functions below the smoothing scale is overall harder to emulate. The DS autocorrelation emulator errors are almost always within 1-\(\sigma\) of the data errors, with the exception of the quadrupole of \({\rm Q}_{5}\). In Fig. 2 we see that the emulator accuracy is at subpercent level for the majority of the summary statistics in the analysis. #### 3.1.1 Sensitivity to the different cosmological parameters After corroborating that the emulator is sufficiently accurate, we explore the dependency of the different summary statistics with respect to the input parameters through the use of derivatives around the fiducial Planck 18 cosmology (Planck Collaboration et al., 2020). In Fig. 4, we show the derivatives of the quantile-galaxy cross-correlations for the different density environments with respect to the cosmological parameters. 
In Appendix C, we show the corresponding derivatives with respect to the HOD parameters, together with those of the quintile autocorrelations. These are estimated by computing the gradient of the emulator's output with respect to its input through jax's autograd functionality6, which reduces the errors that numerical derivative estimators can introduce.

Footnote 6: [https://github.com/google/jax](https://github.com/google/jax)

In the first column of Fig. 4, we show that increasing \(\omega_{\rm cdm}\) reduces the amplitude of the cross-correlations for all quantiles, possibly due to lowering the average halo bias. Increasing \(\omega_{\rm cdm}\) also produces shifts in the acoustic peak on large scales for all quantiles. Moreover, the effect on the quadrupole is to reduce its signal for the most extreme quintiles (note that the quadrupole of \({\rm Q}_{0}\) is positive, whereas that of \({\rm Q}_{4}\) is negative). Note that there are two different RSD effects influencing the quadrupole: on one hand, identifying the density quantiles in redshift space introduces an anisotropy in the quantile distribution, as was shown in Paillas et al. (2023), and on the other hand, there will be an additional increase in anisotropy in the cross-correlations due to the RSD of the galaxies themselves. Regarding \(\sigma_{8}\), shown in the second column of Fig. 4, the effect on the monopoles is much smaller than that on the quadrupoles, since larger \(\sigma_{8}\) enhances peculiar velocities and therefore increases the anisotropy caused by RSD. Finally, the effect of \(n_{s}\) on the monopole is similar to that of \(\omega_{\rm cdm}\), albeit without the shift at the acoustic scale. Interestingly, the derivative of the quadrupole may change sign near the smoothing scale.

Figure 4: We show the sensitivity of the density-split statistics to each cosmological parameter by computing the derivatives of the different quantile-galaxy cross-correlations with respect to the cosmological parameters. From left to right, we show the derivatives with respect to \(\omega_{\rm cdm}\), \(\sigma_{8}\) and \(n_{s}\), respectively. The upper panel shows the monopole derivatives, whereas the lower panel shows the derivatives of the quadrupole.

#### 3.1.2 Evaluating the uncertainty estimates

While the emulator offers precise mean predictions, its uncertainty estimations present challenges. Specifically, the uncertainty estimates, \(\sigma_{\rm pred}(\mathcal{C},\mathcal{G})\), derived from training the emulator to optimize the Gaussian log-likelihood as per Equation 13, tend to underestimate the true uncertainties. This underestimation is problematic as it might introduce biases in our derived cosmological parameter constraints. To illustrate this, we present the z-scores of the emulator's predictions in Figure 5 for the monopole and quadrupole of the DS CCFs, defined for each bin \(i\) of the data vector as \(z_{i}=\left(X_{i}^{\rm emu}-X_{i}^{\rm test}\right)/\sigma_{i}^{\rm pred}\). Given that the emulator errors are modeled as Gaussian, the emulator uncertainties would be well calibrated if the distribution of \(z_{i}\) followed a standard normal distribution. Figure 5 shows that this is not the case, since the z-scores show a variance larger than 1 by about 15 per cent. One possible reason for this discrepancy could be the limited size of our dataset. In the remainder of the paper, we will ignore the emulator's predicted uncertainties and quantify its errors by directly estimating them from the test set instead, as described in Equation 15.
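The calibration check just described amounts to standardising the test-set residuals and inspecting their spread; a short helper of our own is sketched below.

```python
import numpy as np

def z_scores(x_emu, x_test, sigma_pred):
    """Standardised emulator residuals per test mock and data-vector bin."""
    return (x_emu - x_test) / sigma_pred

def calibration_summary(x_emu, x_test, sigma_pred):
    """Mean and variance of the z-scores; a variance well above 1 signals
    over-confident uncertainty predictions."""
    z = z_scores(x_emu, x_test, sigma_pred).ravel()
    return {"mean": float(z.mean()), "variance": float(z.var())}
```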
In the future, we aim to refine the calibration of uncertainty predictions for simulation-based models. ### Solving the inverse problem: Recovering the cosmological parameters In this section, we focus on the inverse problem, i.e., recovering the mocks' true cosmological parameters from their summary statistics. We will show that the emulator can recover unbiased parameter constraints on the test AbacusSummit HOD catalogues, as well as on a different N-body simulation with a galaxy-halo connection model that is based on another prescription than HOD. We also demonstrate where the density split information comes from by varying various choices of settings in the inference analysis pipeline. #### 3.2.1 Recovery tests on AbacusSummit In this section, we show the results of using the emulator to infer the combined set of cosmological and HOD parameters, a total of 17 parameters, on the test set we reserved from the AbacusSummit simulations, namely those mocks that were not used during the training of the emulator. Firstly, for each cosmology from the test set we select the mock catalogue with HOD parameters that maximise the likelihood with respect to a realistic data vector, taken to be the observed density-split multipoles from the BOSS CMASS galaxy sample (Paillas et al., 2023), and infer the posterior of the cosmological and HOD parameters for that particular sample. Since our model for the mock observables is differentiable, we can take advantage of the estimated derivatives to efficiently sample the posterior distributions through Hamiltonian Monte Carlo (HMC). HMC utilizes the gradient information from differentiable models to guide the sampling process through Hamiltonian dynamics, enabling more efficient exploration of the posterior landscape. It introduces momentum variables and a Hamiltonian function to represent the total energy, then follows the gradients to deterministically evolve the parameters over time while conserving the Hamiltonian. Here, we employ the NUTS sampler implementation from numpyro. We use flat prior ranges for the parameters that match those listed in Table 2.2.1. Fitting one mock takes about one minute on 1 CPU. We first fit \(\mathtt{c000}\), the baseline cosmology of AbacusSummit. Figure 6 shows the posterior distribution of the cosmological parameters, marginalised over the HOD parameters. Density split clustering, the galaxy 2PCF, and their combination recover unbiased constraints with the true cosmology of the simulation lying within the 68 per cent confidence region of the marginalised posterior of every parameter. Note that in particular density split statistics contribute to breaking the strong degeneracy between \(n_{s}\) and \(\omega_{\text{cdm}}\) observed in the 2PCF. In Table 3.2.1, we show the resulting constraints for each of the three cases tested. For the \((2\,h^{-1}\text{Gpc})^{3}\) volume that is considered here, the baseline analysis recovers a 2.6%, 1.2%, and 1.2% constraint for \(\omega_{\text{cdm}}\), \(\sigma_{8}\) and \(n_{s}\), respectively. These constraints are a factor of about 2.9, 1.9, and 2.1 tighter than for the 2PCF, respectively. Moreover, the parameters \(N_{\text{eff}}\), and \(w_{0}\) are recovered with a precision of 8%, and 4.9% in the baseline analysis. These are in turn a factor of about 2.5 and 1.9 times tighter than for the 2PCF. 
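As an illustration of the sampling setup described above, a schematic numpyro model is shown below. It is a toy sketch rather than our analysis pipeline: the emulator is replaced by a trivial stand-in, only two of the 17 parameters are written out explicitly, and the prior bounds are taken from Table 1.

```python
import jax.numpy as jnp
from jax import random
import numpyro
import numpyro.distributions as dist
from numpyro.infer import MCMC, NUTS

def emulator(theta):
    """Stand-in for the trained network: maps parameters to a 648-bin data vector."""
    return jnp.tile(theta, 324)

x_obs = emulator(jnp.array([0.12, 0.81]))  # toy "observed" data vector
precision = jnp.eye(x_obs.size)            # toy inverse covariance matrix

def model(x_obs, precision):
    # Flat priors matching the training ranges (only two of 17 parameters shown).
    omega_cdm = numpyro.sample("omega_cdm", dist.Uniform(0.103, 0.140))
    sigma8 = numpyro.sample("sigma8", dist.Uniform(0.687, 0.938))
    x_theo = emulator(jnp.stack([omega_cdm, sigma8]))
    numpyro.sample("obs",
                   dist.MultivariateNormal(loc=x_theo, precision_matrix=precision),
                   obs=x_obs)

# Hamiltonian Monte Carlo with the NUTS sampler
mcmc = MCMC(NUTS(model), num_warmup=500, num_samples=2000)
mcmc.run(random.PRNGKey(0), x_obs=x_obs, precision=precision)
samples = mcmc.get_samples()
```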
In an idealised Fisher analysis using simulated dark matter halos (Paillas et al., 2023), we found similar expected improvements for all parameters but \(\sigma_{8}\), for which the Fisher analysis predicted a much larger improvement. The posterior distribution of the HOD parameters, marginalised over cosmology, is shown in Figure D1. In particular, density split statistics can contribute to significantly tightening the constraints on the environment-based assembly bias parameters, \(B_{\text{cen}}\) and \(B_{\text{sat}}\). We expect that reducing the smoothing scale used to estimate densities with future denser datasets would help us attain even tighter constraints on these parameters that may lead to significant detections of the effect in such galaxy samples. Note that for this particular sample some of the true HOD parameters are close to the prior boundary. Moreover, in Figure 7 we show the marginalised constraints on \(\omega_{\text{cdm}}\) and \(\sigma_{8}\) for four particular cosmologies in the test set that vary these two parameters. As before, the HOD parameters are chosen from the prior for each cosmology to maximise the likelihood of CMASS data. These cosmologies are of particular interest since they show that the model can recover lower and higher \(\sigma_{8}\) values than that of the fiducial Planck cosmology. The additional AbacusSummit cosmologies that we are analysing are, \(\mathtt{c001}\), based on WMAP9+ACT+SPT LCDM constraints (Calabrese et al., 2017), \(\mathtt{c003}\), a model with extra relativistic density (\(N_{\text{eff}}\)) taken from the base_mu_plikHM_TT_low1_lowE_Riess18_post_BAO chain of (Planck Collaboration et al., 2020) which also has both high \(\sigma_{8}\) and \(\omega_{\text{cdm}}\), and \(\mathtt{c004}\), a model with lower amplitude clustering \(\sigma_{8}\). Figure 5: **Z-scores of the emulator uncertainty predictions, compared to a standard normal distribution, \(\mathcal{N}(0,1)\), for the test set of the density split cross correlation functions. The emulator predicted uncertainty is over-confident, meaning that its predicting smaller uncertainties than those observed empirically on the test set. \(\mathsf{O}\)** #### 3.2.2 Exploring the information content In this section, we will delve deep into the effects that removing subsets of the data when analysing the fiducial cosmology c000 have on the resulting parameter constraint to analyse what information is being used to constrain each of the parameters. The results are summarised in Figure 8. Let us first examine how the constraints vary as a function of the scales included in the analysis. Bear in mind however that we are not truly removing the small scales since the smoothing introduced to estimate densities leaks information from small scales into all the scales. In Figure 8, we show first the effect of analysing only from the BAO scale, \(s_{\rm min}=80\,h^{-1}\)Mpc. In that case, we still see significant gains over the full-shape two-point correlation function. For most parameters, however, apart from \(n_{S}\), we find there is more information contained in the smaller scales. Regarding the different quantiles, most of the information comes from the combination of void-like, Q\({}_{0}\), and cluster-like, Q\({}_{4}\), regions, whereas the intermediate quantiles barely contribute. Moreover, we have examined the effect of removing the different error contributions on the covariance matrix. 
Firstly we show that removing the emulator error produces statistically consistent constraints, but about a factor of 2 tighter for most parameters compared to the baseline. As we will show in the next subsection, our estimated uncertainties are designed to be conservative and therefore removing the emulator error does not lead in this case to extremely biased constraints. In the future, we will work on developing training sets and models that can overcome this limitation and produce more accurate Figure 6: Recovery of AbacusSummit fiducial cosmology (c000) for the set of HOD parameters that minimize the data \(\chi^{2}\) error, after marginalizing over the HOD parameters. We show constraints from the two-point correlation function (2PCF) in green, Density Split statistics (Density-Split) in pink, and a combination of the two (2PCF + Density-Split) in blue. predictions on small scales. This could lead to major improvements on the \(\sigma_{8}\) constraints. Finally, we demonstrate that cross-correlations between quantiles and galaxies (DS CCF) are on their own the most constraining statistic but there is a significant increase in constraining power obtained when combining them with auto correlations for the parameters \(\omega_{\rm cdm}\), \(\sigma_{8}\), and \(n_{s}\). #### 3.2.3 Coverage probability test We can test the covariance matrix and likelihood using a coverage probability test. Using repeated experiments with true values drawn from the Bayesian prior, we can test that the recovered values have the correct distribution within the likelihood using the chains sampling the posterior (Hermans et al., 2021). In simple terms, if you have a 95% confidence interval derived from the likelihood, the expected coverage is 95%. That means that, theoretically, we expect that for 100 repeated trials, the true value should fall within that interval 95 times. The empirical coverage is what you actually observe when you compare the rank of the true value within the likelihood. Using the same 95% confidence interval, if you applied this method to many samples and found that the true value was within the interval only 90 times out of 100, then the empirical coverage for that interval would be 90%. We can use coverage to verify that our covariance estimates are indeed conservative and that we are not subsequently underestimating the uncertainties on the parameters of interest. Note that coverage is simply a measure of the accuracy of the uncertainties, and not of its information content. We estimate the empirical coverage of each parameter on the 600 test set samples of \(p(\theta,X)\), extracted from six different values of the cosmological parameters and 100 different HOD values for each of them. In Figure 9, we compare the empirical coverage to the expected one. For a perfectly well-calibrated covariance, all should match up on the diagonal line. A conservative estimator of the covariance and of the likelihood would produce curves above the diagonal, whereas overconfident error estimation would generate curves underneath the diagonal line. 
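In code, the comparison between expected and empirical coverage can be written as follows for a single parameter; this is a simplified helper of our own, using equal-tailed credible intervals estimated directly from the chains.

```python
import numpy as np

def empirical_coverage(chains, truths, levels):
    """Fraction of test cases whose true value lies in the central credible interval.

    chains : (n_cases, n_samples) posterior samples of one parameter per test case.
    truths : (n_cases,) true values used to generate each test case.
    levels : iterable of nominal confidence levels in (0, 1).
    """
    coverage = []
    for level in levels:
        lo = np.quantile(chains, 0.5 - level / 2.0, axis=1)
        hi = np.quantile(chains, 0.5 + level / 2.0, axis=1)
        coverage.append(np.mean((truths >= lo) & (truths <= hi)))
    return np.array(coverage)

# Toy example of a well-calibrated posterior: the empirical coverage should
# track the nominal levels up to noise from the finite number of test cases.
rng = np.random.default_rng(3)
truths = rng.normal(size=600)
centres = truths + rng.normal(size=600)                   # unit-error point estimates
chains = centres[:, None] + rng.normal(size=(600, 4000))  # posterior width matches the error
levels = np.linspace(0.05, 0.95, 19)
print(empirical_coverage(chains, truths, levels))
```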
Figure 9 shows that we mostly produce conservative confidence intervals from the likelihood, in particular for \(\omega_{\rm cdm}\), whereas confidence intervals can be slightly overconfident for \(\sigma_{8}\) although the deviation from the diagonal line is close to the error expected from estimating coverage \begin{table} \begin{tabular}{|c c c c|c|} \hline \hline & Parameter & 2PCF (68\% C.1) & DSC (68\% C.1) & 2PCF + DSC (68\% C.1) \\ \hline Cosmology & \(\omega_{\rm b}\) & \(--\) & \(0.02257\pm 0.00054\) & \(0.02242\pm 0.00050\) \\ & \(\omega_{\rm cdm}\) & \(0.1187^{+0.007}_{-0.010}\) & \(0.1220\pm 0.0039\) & \(0.1225\pm 0.0032\) \\ \hline & \(\sigma_{8}\) & \(0.815\pm 0.018\) & \(0.801\pm 0.011\) & \(0.8056\pm 0.0094\) \\ \hline & \(n_{s}\) & \(0.976^{+0.032}_{-0.023}\) & \(0.954^{+0.014}_{-0.016}\) & \(0.957\pm 0.012\) \\ & \(\mathrm{d}n_{s}/\mathrm{d}\ln k\) & \(-0.0033^{+0.018}_{-0.0024}\) & \(0.0048^{+0.015}_{-0.014}\) & \(0.0074\pm 0.0000\) \\ \hline & \(N_{\rm eff}\) & \(3.04\pm 0.40\) & \(3.06^{+0.22}_{-0.2}\) & \(3.13\pm 0.17\) \\ \hline & \(w_{0}\) & \(-0.9599^{+0.10}_{-0.081}\) & \(-0.974\pm 0.053\) & \(-0.992\pm 0.049\) \\ \hline HOD & \(\log M_{1}\) & \(14.03\pm 0.15\) & \(13.94^{+0.17}_{-0.11}\) & \(14.01^{+0.12}_{-0.09}\) \\ & \(\log M_{\rm cut}\) & \(12.588^{+0.066}_{-0.11}\) & \(12.621^{+0.097}_{-0.12}\) & \(12.581^{+0.047}_{-0.060}\) \\ \hline & \(\alpha\) & \(1.134^{+0.25}_{-0.19}\) & \(1.194^{+0.27}_{-0.11}\) & \(1.25^{+0.16}_{-0.11}\) \\ & \(\alpha_{\rm vel,c}\) & \(0.375^{+0.069}_{-0.054}\) & \(0.286^{+0.17}_{-0.089}\) & \(0.390^{+0.09}_{-0.033}\) \\ \hline & \(\alpha_{\rm vel,s}\) & \(>1.05\) & \(1.08^{+0.18}_{-0.10}\) & \(1.09^{+0.11}_{-0.09}\) \\ & \(\log\sigma\) & \(-1.54^{+0.98}_{-0.56}\) & \(-1.61^{+0.64}_{-0.48}\) & \(-1.58^{+0.57}_{-0.50}\) \\ \hline & \(\kappa\) & \(--\) & \(<0.830\) & \(0.65^{+0.22}_{-0.63}\) \\ & \(B_{\rm cen}\) & \(<-0.404\) & \(-0.336^{+0.059}_{-0.14}\) & \(-0.410^{+0.043}_{-0.060}\) \\ \hline & \(B_{\rm sat}\) & \(<-0.0339\) & \(-0.11\pm 0.36\) & \(-0.37\pm 0.28\) \\ \hline \hline \end{tabular} \end{table} Table 2: Parameter constraints from the galaxy two-point correlation (2PCF), density-split clustering (DSC) and the baseline combination (2PCF + DSC) analyses. Each row shows the parameter name and the corresponding mean and 68 per cent confidence intervals. Figure 7: Marginalized constraints from density-split clustering on \(\omega_{\rm cdm}\), \(\sigma_{8}\) and \(n_{s}\), derived from fits to mock galaxy catalogues at 4 different cosmologies from our test sample. The true cosmology of each mock is shown by the horizontal and vertical dotted coloured lines. 2D contours show the 68 and 95 per cent confidence regions around the best-fit values. on a small dataset of only 600 examples. The HOD parameters are all very well-calibrated. #### 3.2.4 Recovery tests on Uchuu One of the fundamental validation tests for our emulator is to ensure that we can recover unbiased cosmological constraints when applied to mock catalogues based on a different N-body simulation, and using a different galaxy-halo connection model. The latter is particularly important since the HOD model used to train the emulator makes strong assumptions about how galaxies populate dark matter halos and its flexibility to model the data needs to be demonstrated. 
To this end, we test our model on the Uchuu simulations (Ishiyama et al., 2021; Dong-Plez et al., 2022; Oogi et al., 2023; Aung et al., 2023; Prada et al., 2023) and use mock galaxies that were created by Zhai et al. (2023b) using subhalo abundance matching (SHAM, e.g., Vale and Ostriker, 2006; Kravtsov et al., 2004) to populate dark matter haloes with galaxies. This model assigns galaxies to dark matter halos based on the assumption that the stellar mass or luminosity of a galaxy is correlated with the properties of dark matter halo or subhalo hosting this galaxy. Specifically, we use the method of Lehmann et al. (2017) to assign galaxies to dark matter halos and subhalos. In this method, the property used to rank halos is a combination of the maximum circular velocity within the halo, \(v_{\rm max}\), and the virial velocity, \(v_{\rm vir}\). This model also includes a certain amount of galaxy assembly bias, further testing the flexibility of our HOD modeling. Uchuu is a suite of cosmological N-body simulations that were generated with the GreeM code (Ishiyama et al., 2009) at the ATERUI II supercomputer in Japan. The main simulation has a volume of \((2\,h^{-1}{\rm Gpc})^{3}\), following the evolution of \(2.1\) trillion dark matter particles with a mass resolution of \(3.27\times 10^{8}\)\(h^{-1}{\rm M_{\odot}}\). It is characterized by a fiducial cosmology \(\Omega_{\rm m}=0.3089\), \(\Omega_{\rm b}=0.0486\), \(h=0.6774\), \(\sigma_{\rm B}=0.8159\), and \(n_{s}=0.9667\). Dark matter halos are identified using the Rockstar halo finder (Behroozi et al., 2010), which is also different from the one implemented in AbacusSummit. Figure 10 shows the resulting marginalised inference using our emulator for both the 2PCF, and the combination of density split with the 2PCF. Note that the constraints on \(n_{s}\) from the 2PCF are in this case completely prior dominated. We can however recover unbiased constraints, even for the stringent test case of a \((2\,h^{-1}{\rm Gpc})^{3}\) volume. ## 4 Discussion and conclusions ### Comparison with previous work #### 4.1.1 Analytical models of density dependent statistics Similar definitions of density-split statistics have been presented in Neyrinck et al. (2018); Repp and Szapudi (2021). In Neyrinck et al. (2018), the authors defined sliced correlation functions, by slicing the correlation function on local density. They have also presented a model with the Gaussian assumption. In Repp and Szapudi (2021), Figure 8: Marginalised constraints on \(\omega_{\rm cdm}\), \(\sigma_{\rm B}\), \(n_{s}\), \(N_{\rm eff}\) and \(B_{\rm sat}\) for different configurations in the inference analysis. Dots and error bars show the mean and the 68 per cent confidence interval of the parameters, respectively. The uppermost points show the baseline configuration, which consists of the combination of the monopole and quadrupole of the DS cross-correlation and auto-correlation functions for quintiles Q\({}_{\rm b}\), Q\({}_{\rm i}\), Q\({}_{\rm b}\), and Q\({}_{\rm d}\). Figure 9: Comparison of the empirical coverage for a given confidence level, shown in different colours for the different cosmological parameters, to the expected coverage, shown in grey. A perfectly calibrated model would follow the one-to-one diagonal. This diagonal has some associated errorbars however, given that we are only using 600 samples to estimate the coverage, we quantify this by sampling 600 points from a uniform distribution and estimating its coverage 30 times. 
These are the different grey lines plotted in the figure. the authors introduce indicator functions by identifying regions of a given density and computing the power spectrum in density bins. This is essentially the Fourier version of the DS ACF. Our analyses have included both the DS CCF and ACF, but finding that the CCF carries most of the cosmological information. These statistics are all similar in spirit. #### 4.1.2 Fisher In previous work, Paillas et al. (2023b) showed with a Fisher analysis the potential of density split statistics to constrain cosmological parameters from dark matter halo statistics. Here, we have confirmed their findings by modelling the density split statistics explicitly as a function of the cosmological parameters, and including the halo-galaxy connection to model the density split statistics of galaxies. The improved constraints over two-point correlation functions found here are of a similar magnitude to those in Paillas et al. (2023b) for all cosmological parameters, but \(\sigma_{\rm B}\), for which we find weaker constraints. Moreover, we also find that the most extreme quantiles have a similar constraining power and that it is their combination that explains most of the information content of density split statistics. Finally, Paillas et al. (2023b) found that density split statistics could break important degeneracies between cosmological parameters that would lead to much tighter constraints on the sum of neutrino masses. This is not something we could corroborate in this paper since variations in neutrino mass are not included in the suite of simulations used in this work, but we plan to work on this in the future by utilising N-body simulations that can accurately simulate the effects of massive neutrinos in the large scale structure (Elbers et al., 2021). #### 4.1.3 Cosmic Voids Over the past decade, there has been renewed interest in using cosmic voids to constrain cosmology (Pisani et al., 2019). They have been found to be amongst the most pristine probes of cosmology in terms of how much information is preserved in linear theory at late times (Cai et al., 2016; Hamaus et al., 2017; Nadathur and Percival, 2019). However, in practice, extracting cosmological information from voids has proven to be difficult from a purely perturbation theory perspective due to mainly: i) void definitions being difficult to treat analytically and producing different results (Cautun et al., 2018), ii) identifying voids in redshift space adds additional anisotropy to the observed void-galaxy cross-correlation (Nadathur et al., 2019; Correa et al., 2020b), a similar effect to that found here when estimating densities directly in redshift space, which is difficult to model analytically, and iii) linear theory can only accurately model the mapping from real to redshift space, which means we still require some way of estimating the real space void profiles. In this work, we have shown how emulators can fix all of the above mentioned issues by forward modelling each of these effects. Moreover, we have shown here how although void-galaxy cross-correlations contain a wealth of information to constrain the cosmological parameters, it is their combination with overdense environments that would give us the tightest constraints. #### 4.1.4 The Aerulus emulator Related to this work, Storey-Fisher et al. 
(2022) presented an emulation framework based on the Aemulus suite of cosmological N-body simulations to accurately reproduce two-point galaxy clustering, the underdensity probability function and the density-marked correlation function on small scales (\(0.1-50\)\(h^{-1}\)Mpc). We confirm that including summary statistics beyond two-point functions can improve the cosmological constraints significantly, even after marginalising over the HOD parameters. Moreover, environment-based statistics could lead to a significant detection of assembly bias. As opposed to the marked correlations shown in Storey-Fisher et al. (2022), we estimate densities around random points spread over the survey volume which better samples underdensities in the cosmic web. In the future, it would be interesting to compare the density split constraints to those of density-marked correlation functions, and perhaps the findings in this paper on what environments are most constraining can inform the shape of the mark function used. ### Conclusions We have presented a new simulation-based model for density-split clustering and the galaxy two-point correlation function, based on mock galaxy catalogues from the AbacusSummit suite of simulations. These models allow us to extract information from the full-shape of the correlation functions on a very broad scale range, \(1\)\(h^{-1}\)Mpc \(<s<150\)\(h^{-1}\)Mpc, including redshift-space and Alcock-Paczynski distortions to constrain cosmology and deepen our understanding of how galaxies connect to their host dark matter halos. We have trained neural network surrogate models, or _emulators_, which can generate accurate theory predictions for density-split clustering and the galaxy 2PCF in a fraction of a second within an extended \(w\Lambda\)CDM parameter space. The galaxy-halo connection is modelled through an extended halo occupation distribution (HOD) framework, including a parametrisation for velocity bias and environment-based assembly bias, but Figure 10: Marginalised posterior on the cosmological parameters when analysing the SHAM mocks based on the Uchnu simulations. We show the contours obtained when analysing only the 2PCF, compared to those found when analysing the combination of the 2PCF and density split statistics. The true parameters that generated the mock are shown in gray. the emulator is also validated against simulations that use a sub-halo abundance matching framework and a different N-body code to demonstrate the robustness of the method. We have shown that density split statistics can extract information from the non-Gaussian density field that is averaged out in the galaxy two-point correlation function. Our emulators, which reach a sub-percent level accuracy down to \(1\,h^{-1}\)Mpc, are able to recover unbiased cosmological constraints when fitted against measurements from simulations of a \((2\,h^{-1}\)Gpc\()^{3}\) volume. The recovered parameter constraints are robust against choices in the HOD parametrisation and scale cuts, and show consistency between the different clustering summary statistics. We find that density-split statistics can increase the constraining power of galaxy 2PCFs by factors of 2.9, 1.9, and 2.1 on the cosmological parameters \(\omega_{\rm cdm}\), \(\sigma_{8}\) and \(n_{s}\), respectively. Moreover, the precision on parameters \(N_{\rm ur}\), and \(w_{0}\) can be improved by factors of 2.5 and 1.9 with respect to the galaxy 2PCF. 
Finally, we find density-split statistics to be particularly effective at constraining the environment-based assembly bias parameters. In a companion paper, we show how all these findings result in parameter constraints from the CMASS sample of SDSS (Paillas et al., 2023). As we transition to the era of DESI, with its high-density galaxy samples, particularly BGS, alternative summary statistics such as density split have a huge potential to not only increase the precision on cosmological parameter constraints, but to deepen our understanding of how galaxies connect to dark matter haloes. However, this opportunity comes with challenges. The precision that DESI promises requires that our theoretical frameworks are refined to an unprecedented degree of accuracy. It is essential to address these theoretical challenges to fully harness the potential of upcoming observational datasets in cosmological studies. ## 5 Acknowledgements The authors thank Etienne Burtin for helpful discussions throughout the development of this project. YC acknowledges the support of the Royal Society through a University Research Fellowship. SN acknowledges support from an STFC Ernest Rutherford Fellowship, grant reference ST/T005009/2. FB is a University Research Fellow and has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (grant agreement 853291). WP acknowledges the support of the Natural Sciences and Engineering Research Council of Canada (NSERC), [funding reference number RGPIN-2019-03908] and from the Canadian Space Agency. This work is supported by the National Science Foundation under Cooperative Agreement PHY-2019786 (The NSF AI Institute for Artificial Intelligence and Fundamental Interactions, [http://iaifi.org/](http://iaifi.org/)). This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of High Energy Physics of U.S. Department of Energy under grant Contract Number DE-SC0012567, grant DE-SC0013718, and under DE-AC02-76SF00515 to SLAC National Accelerator Laboratory, and by the Kavli Institute for Particle Astrophysics and Cosmology. The computations in this paper were run on the FASRC Cannon cluster supported by the FAS Division of Science Research Computing Group at Harvard University, and on the Narval cluster provided by Compute Ontario (computeontario.ca) and the Digital Research Alliance of Canada (alliancecan.ca). In addition, this work used resources of the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy Office of Science User Facility located at Lawrence Berkeley National Laboratory, operated under Contract No. DE-AC02-05CH11231. The AbacusSummit simulations were run at the Oak Ridge Leadership Computing Facility, which is a DOE Office of Science User Facility supported under Contract DE-AC05-00OR22725. We thank Instituto de Astrofisica de Andalucia (IAA-CSIC), Centro de Supercomputacion de Galicia (CESGA) and the Spanish academic and research network (RedIRIS) in Spain for hosting Uchuu DR1, DR2 and DR3 in the Skies & Universes site for cosmological simulations. The Uchuu simulations were carried out on the ATERUI II supercomputer at the Center for Computational Astrophysics, CfCA, of the National Astronomical Observatory of Japan, and the K computer at the RIKEN Advanced Institute for Computational Science.
The Uchuu Data Release efforts have made use of the skun@IAA_RedIRIS and skun@IAA computer facilities managed by the IAA-CSIC in Spain (MICIN EU-feder grant EQC2018-004366-P). This research used the following software packages: Corrfunc (Sinha & Garrison, 2020), Flax (Heek et al., 2023), GetDist (Lewis, 2019), JAX (Bradbury et al., 2018), Jupyter (Kluyver et al., 2016), Matplotlib (Hunter, 2007), NumPy (Harris et al., 2020), numpyro (Phan et al., 2019), and Optuna. For the purpose of open access, the authors have applied a CC BY public copyright licence to any Author Accepted Manuscript version arising. ## Data Availability Statement The data underlying this article are available at [https://abacusnbody.org](https://abacusnbody.org).
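To make the density-split construction discussed in the conclusions more concrete, the following is a minimal illustrative sketch, not the pipeline used in this paper: random query points are assigned to density quantiles with NumPy and SciPy. The function name, the top-hat count used as the density estimate, the smoothing radius, and the number of quantiles are all illustrative choices and need not match those adopted in the paper.

```python
import numpy as np
from scipy.spatial import cKDTree


def density_split_quantiles(galaxies, boxsize, n_query=10_000,
                            smoothing_radius=10.0, n_quantiles=5, seed=42):
    """Assign random query points in a periodic box to density quantiles.

    galaxies : (N, 3) array of comoving positions.
    Returns the query points and an integer quantile label (0 = most
    underdense, n_quantiles - 1 = most overdense) per point.
    """
    rng = np.random.default_rng(seed)
    query = rng.uniform(0.0, boxsize, size=(n_query, 3))

    # Count galaxies within the smoothing radius of each query point
    # (a simple top-hat estimate of the local density).
    tree = cKDTree(galaxies, boxsize=boxsize)
    counts = tree.query_ball_point(query, r=smoothing_radius,
                                   return_length=True)

    # Convert counts to overdensity delta = n / nbar - 1.
    nbar = len(galaxies) / boxsize**3
    volume = 4.0 / 3.0 * np.pi * smoothing_radius**3
    delta = counts / (nbar * volume) - 1.0

    # Rank-order the query points and split them into equal-size quantiles.
    order = np.argsort(delta)
    labels = np.empty(n_query, dtype=int)
    labels[order] = np.arange(n_query) * n_quantiles // n_query
    return query, labels
```

Each quantile's query points would then be cross-correlated with the galaxy field (for example with Corrfunc) to obtain the quantile-galaxy cross-correlation functions analysed above.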
2309.11256
Tropical cryptography III: digital signatures
We use tropical algebras as platforms for a very efficient digital signature protocol. Security relies on computational hardness of factoring one-variable tropical polynomials; this problem is known to be NP-hard.
Jiale Chen, Dima Grigoriev, Vladimir Shpilrain
2023-09-20T12:28:40Z
http://arxiv.org/abs/2309.11256v2
# Tropical cryptography III: digital signatures ###### Abstract. We use tropical algebras as platforms for a very efficient digital signature protocol. Security relies on computational hardness of factoring one-variable tropical polynomials; this problem is known to be NP-hard. _Keywords:_ tropical algebra, digital signature, factoring polynomials ## 1. Introduction In our earlier papers [2], [3], we employed _tropical algebras_ as platforms for cryptographic schemes by mimicking some well-known classical schemes, as well as newer schemes like [4], [5], in the "tropical" setting. What it means is that we replaced the usual operations of addition and multiplication by the operations \(\min(x,y)\) and \(x+y\), respectively. An obvious advantage of using tropical algebras as platforms is unparalleled efficiency because in tropical schemes, one does not have to perform any multiplications of numbers since tropical multiplication is the usual addition, see Section 2. On the other hand, "tropical powers" of an element exhibit some patterns, even if such an element is a matrix over a tropical algebra. This weakness was exploited in [7] to arrange a fairly successful attack on one of the schemes in [2]. In this paper, we offer a digital signature scheme that uses tropical algebra of one-variable polynomials. Security of this scheme is based on computational hardness of factoring one-variable tropical polynomials. This problem is known to be NP-hard, see [6]. ## 2. Preliminaries We start by giving some necessary information on tropical algebras here; for more details, we refer the reader to the monograph [1]. Consider a tropical semiring \(\mathbf{A}\), also known as the min-plus algebra due to the following definition. This semiring is defined as a linearly ordered set (e.g., a subset of reals) that contains \(0\) and is closed under addition, with two operations as follows: \(x\oplus y=\min(x,y)\) \(x\otimes y=x+y\). It is straightforward to see that these operations satisfy the following properties: _associativity_: \(x\oplus(y\oplus z)=(x\oplus y)\oplus z\) \(x\otimes(y\otimes z)=(x\otimes y)\otimes z\). _commutativity_: \(x\oplus y=y\oplus x\) \(x\otimes y=y\otimes x\). _distributivity_: \((x\oplus y)\otimes z=(x\otimes z)\oplus(y\otimes z)\). There are some "counterintuitive" properties as well: \(x\oplus x=x\) \(x\otimes 0=x\) \(x\oplus 0\) could be either 0 or \(x\). There is also a special "\(\epsilon\)-element" \(\epsilon=\infty\) such that, for any \(x\in S\), \(\epsilon\oplus x=x\) \(\epsilon\otimes x=\epsilon\). ### Tropical polynomials A (tropical) monomial in \(S\) looks like a usual linear function, and a tropical polynomial is the minimum of a finite number of such functions, and therefore a concave, piecewise linear function. The rules for the order in which tropical operations are performed are the same as in the classical case, see the example below. Still, we often use parenthesis to make a tropical polynomial easier to read. **Example 1**.: _Here is an example of a tropical monomial: \(x\otimes x\otimes y\otimes z\otimes z\). The (tropical) degree of this monomial is 5. We note that sometimes, people use the alternative notation \(x^{\otimes 2}\) for \(x\otimes x\), etc._ _An example of a tropical polynomial is: \(p(x,y,z)=5\otimes x\otimes y\otimes z\oplus x\otimes x\oplus 2\otimes z \oplus 17=(5\otimes x\otimes y\otimes z)\oplus(x\otimes x)\oplus(2\otimes z) \oplus 17\). 
This polynomial has (tropical) degree 3, which is the highest degree of its monomials._ We note that, just as in the classical case, a tropical polynomial is canonically represented by an ordered set of tropical monomials (together with finite coefficients), where the order that we use here is deglex. We also note that some tropical polynomials may look "weird": **Example 2**.: _Consider the polynomial \(p(x)=(0\otimes x)\oplus(0\otimes x\otimes x)\). All coefficients in this polynomial are 0, and yet it is not the same as the polynomial \(q(x)=0\)._ _Indeed, \(q(x)\otimes r(x)=r(x)\) for any polynomial \(r(x)\). On the other hand, if, say, \(r(x)=2\otimes x\), then \(p(x)\otimes r(x)=(2\otimes x\otimes x)\oplus(2\otimes x\otimes x\otimes x)\neq r(x)\)._ In the following example, we show in detail how two tropical polynomials are multiplied and how similar terms are collected. **Example 3**.: _Let \(p(x)=(2\otimes x)\oplus(3\otimes x\otimes x)\) and \(q(x)=5\oplus(1\otimes x)\). Then \(p(x)\otimes q(x)=[(2\otimes x)\otimes 5]\oplus[(2\otimes x)\otimes(1\otimes x)]\oplus[(3\otimes x\otimes x)\otimes 5]\oplus[(3\otimes x\otimes x)\otimes(1\otimes x)]=(7\otimes x)\oplus(3\otimes x\otimes x)\oplus(8\otimes x\otimes x)\oplus(4\otimes x\otimes x\otimes x)=(7\otimes x)\oplus(3\otimes x\otimes x)\oplus(4\otimes x\otimes x\otimes x).\)_ In this paper, our focus is on one-variable tropical polynomials, although one can use multivariate tropical polynomials instead. ## 3. Digital signature scheme description Let \(\mathbf{T}\) be the tropical algebra of one-variable polynomials over \(\mathbf{Z}\), the ring of integers. The signature scheme is as follows. **Private:** two polynomials \(X,Y\in\mathbf{T}\) of the same degree \(d\), with all coefficients in the range \([0,r]\), where \(r\) is one of the parameters of the scheme. **Public:** - polynomial \(M=X\otimes Y\) - a hash function \(H\) (e.g., SHA-512) and a (deterministic) procedure for converting values of \(H\) to one-variable polynomials from the algebra \(\mathbf{T}\) (see Section 4.1). **Signing** a message \(m\): 1. Apply the hash function \(H\) to \(m\). Convert \(H(m)\) to a polynomial \(P\) of degree \(d\) from the algebra \(\mathbf{T}\) using a deterministic public procedure. 2. Select two random private polynomials \(U,V\in\mathbf{T}\) of degree \(d\), with all coefficients in the range \([0,r]\). Denote \(N=U\otimes V\). 3. The signature is the triple of polynomials \((P\otimes X\otimes U,\ P\otimes Y\otimes V,\ N)\). **Verification:** **1.** The verifier computes the hash \(H(m)\) and converts \(H(m)\) to a polynomial \(P\) of degree \(d\) from the algebra \(\mathbf{T}\) using a deterministic public procedure. **2. (a)** The verifier checks that the degrees of the polynomials \(P\otimes X\otimes U\) and \(P\otimes Y\otimes V\) (the first two polynomials in the signature) are both equal to \(3d\), and the degree of the polynomial \(N\) is equal to \(2d\). If not, then the signature is not accepted. **(b)** The verifier checks that neither \(P\otimes X\otimes U\) nor \(P\otimes Y\otimes V\) is a constant multiple (in the tropical sense) of \(P\otimes M\) or \(P\otimes N\). If it is, then the signature is not accepted. **(c)** The verifier checks that all coefficients in the polynomials \(P\otimes X\otimes U\) and \(P\otimes Y\otimes V\) are in the range \([0,3r]\), and all coefficients in the polynomial \(N\) are in the range \([0,2r]\). If not, then the signature is not accepted.
**3.** The verifier computes \(W=(P\otimes X\otimes U)\otimes(P\otimes Y\otimes V)\). The signature is accepted if and only if \(W=P\otimes P\otimes M\otimes N\). **Correctness** is obvious since \(W=(P\otimes X\otimes U)\otimes(P\otimes Y\otimes V)=P\otimes P\otimes(X \otimes Y)\otimes(U\otimes V)=P\otimes P\otimes M\otimes N\). **Remark 1**.: _Step 2 in the verification algorithm is needed to prevent trivial forgery, e.g. signing by the triple of polynomials \((P\otimes M,\ P\otimes N,\ N)\), in which case \((P\otimes M)\otimes(P\otimes N)=P\otimes P\otimes M\otimes N\)._ **Remark 2**.: _Here is how one can check whether or not one given tropical polynomial, call it \(R(x)\), is a constant multiple (in the tropical sense) of another given tropical polynomial (of the same degree), call it \(S(x)\)._ _Let \(r_{i}\in\mathbf{Z}\) denote the coefficient at the monomial \(x^{\otimes i}\) in \(R(x)\), and \(s_{i}\in\mathbf{Z}\) denote the coefficient at the monomial \(x^{\otimes i}\) in \(S(x)\). If \(R(x)=c\otimes S(x)\) for some \(c\in\mathbf{Z}\), then \(r_{i}=s_{i}+c\) for every \(i\). Here "+" means the "classical" addition in \(\mathbf{Z}\)._ _Therefore, to check if \(R(x)\) is a constant multiple of \(S(x)\), one checks if \((r_{i}-s_{i})\) is the same integer for every \(i\)._ ## 4. Key generation and suggested parameters The suggested degree \(d\) of each polynomial \(X,Y,U,V\) is \(150\). All coefficients of monomials in the polynomials \(X,Y,U,V\) are selected uniformly at random from integers in the range \([0,r]\), where \(r=127\). We emphasize that, in contrast with the "classical" case, if the coefficient at a monomial is \(0\), this does not mean that this monomial is "absent" from the polynomial. ### Converting \(H(m)\) to a tropical polynomial over \(\mathbf{Z}\) We suggest using a hash functions from the SHA-2 family, specifically SHA-512. We assume the security properties of SHA-512, including collision resistance and preimage resistance. We also assume that there is a standard way to convert \(H(m)\) to a bit string of length 512. Then a bit string can be converted to a tropical polynomial \(P=P(x)\) over \(\mathbf{Z}\) using the following _ad hoc_ procedure. Let \(B=H(m)\) be a bit string of length 512. We will convert \(B\) to a one-variable tropical polynomial \(P\) of degree \(d=150\) over \(\mathbf{Z}\). We therefore have to select 151 coefficients for monomials in \(P\), and we want to have these coefficients in the range \([0,\,127]\). With 7 bits for each coefficient, we need \(151\cdot 7=1057\) bits in total. **(1)** Concatenate 3 copies of the bit string \(B\) to get a bit string of length 1536. **(2)** Going left to right, convert 7-bit block \(\#j\) to an integer and use it as the coefficient at the monomial \(x^{\otimes j}\). **(3)** After we use \(7\cdot 151=1057\) bits, all monomials in the polynomial \(P=P(x)\) will get a coefficient. ### Multiplying two tropical polynomials Let \(R(x)\) and \(S(x)\) be two one-variable tropical polynomials of degree \(d\) and \(g\), respectively. We want to compute \(R(x)\otimes S(x)\). Note that a one-variable tropical monomial, together with a coefficient, can be represented by a pair of integers \((k,l)\), where \(k\) is the coefficient and \(l\) is the degree. Our goal is therefore to compute the coefficient at every monomial of degree from \(0\) to \(d+g\) in the product \(R(x)\otimes S(x)\). Suppose we want to compute the coefficient at the monomial of degree \(m,\ 0\leq m\leq d+g\). 
Then we go over all coefficients \(r_{i}\) at the monomials of degrees \(i\leq m\) in the polynomial \(R(x)\) and add (in the "classical" sense) \(r_{i}\) to \(s_{j}\), where \(s_{j}\) is the coefficient at the monomial of degree \(j=m-i\) in the polynomial \(S(x)\). Having computed all such sums \(r_{i}+s_{j}\), we find the minimum among them, and this is the coefficient at the monomial of degree \(m\) in the polynomial \(R(x)\otimes S(x)\). ## 5. What is the hard problem here? The (computationally) hard problem that we employ in our construction is factoring one-variable tropical polynomials. This problem is known to be NP-hard, see [6]. _Since recovering the private tropical polynomials \(X\) and \(Y\) from the public polynomial \(M=X\otimes Y\) is exactly the factoring problem, we see that inverting our candidate one-way function \(f(X,Y)=X\otimes Y\) is NP-hard._ However, the private tropical polynomials \(X\) and \(Y\) are involved also in the signature. For example, from the polynomial \(W=P\otimes X\otimes U\) the adversary can recover \(X\otimes U\) because the polynomial \(P\) is public. The polynomial \(U\) is private, so it looks like the adversary is still facing the factoring problem. However, the adversary now knows two products involving the polynomial \(X\), namely \(X\otimes Y\) and \(X\otimes U\). Therefore, we have a somewhat different problem here: finding a common divisor of two given polynomials. This problem is easy for "classical" one-variable polynomials over \(\mathbf{Z}\). In particular, any classical one-variable polynomial over \(\mathbf{Z}\) has a unique factorization (up to constant multiples) as a product of irreducible polynomials. In contrast, a one-variable tropical polynomial can have an exponential number of incomparable factorizations [6]. Furthermore, it was shown in [6] that two one-variable tropical polynomials may not have a unique g.c.d. All this makes it appear likely that the problem of finding the g.c.d. of two (or more) given one-variable tropical polynomials is computationally hard. No polynomial-time algorithm for solving this problem is known. More about this in Section 6. ## 6. Possible attacks The most straightforward attack is trying to factor the tropical polynomial \(M=X\otimes Y\) as a product of two tropical polynomials \(X\) and \(Y\). As we have pointed out before, this problem is known to be NP-hard [6]. In our situation, there is an additional restriction on \(X\) and \(Y\) to be of the same degree \(d\), to pass Step 2 of the verification procedure. If one reduces the equation \(M=X\otimes Y\) to a system of equations in the coefficients of \(X\) and \(Y\), then one gets a system of \(2d+1\) quadratic (tropical) equations in \(2(d+1)\) unknowns. With \(d\) large enough, such a system is unapproachable; in fact, solving a system of quadratic tropical equations is known to be NP-hard [9]. The size of the key space for \(X\) (or \(Y\)) with suggested parameters is \(128^{151}=2^{1057}\), so the brute force search is infeasible. It is unclear whether accumulating (from different signatures) many tropical polynomials of the form \(M_{i}=X\otimes U_{i}\), with different (still unknown) polynomials \(U_{i}\) can help recover \(X\). With each new \(M_{i}\), the attacker gets \(d+1\) new unknowns (these are coefficients of \(U_{i}\)) and \(2d+1\) new equations. There is a well-known trick of reducing a system of quadratic equations to a system of linear equations by replacing each product of two unknowns by a new unknown. 
However, the number of pairs of unknowns increases by \(d^{2}\) with each new \(U_{i}\). Therefore, a system of linear equations like that will be grossly underdetermined, resulting in a huge number of solutions for the new unknowns, thus making solving the original system (in the old unknowns) hard, especially given the restrictions on the old unknowns tacitly imposed by Step 2(c) of the verification procedure. ## 7. Performance and signature size For our computer simulations, we used an Apple MacBook Pro with an M1 CPU (8 cores) and 16 GB of RAM. Python code is available, see [8]. We note that a one-variable tropical monomial, together with a coefficient, can be represented by a pair of integers \((k,l)\), where \(k\) is the coefficient and \(l\) is the degree of the monomial. Then a one-variable tropical polynomial of degree 150 is represented by 151 such pairs of integers, by the number of monomials. If \(k\) is selected uniformly at random from integers in the range \([0,127]\), then the size of such a representation is about 2000 bits on average. Indeed, 151 coefficients of average size 6 bits give about 900 bits. Then, the degrees of the monomials are integers from 0 to 150. These take up \((\sum_{k=1}^{7}k\cdot(2^{k}-2^{k-1}+1))+8\cdot(150-127)\approx 1000\) bits. Thus, it takes about 2000 bits to represent a single tropical polynomial with the suggested parameters. Since a private key is comprised of two such polynomials, this means that the size of the private key in our scheme is about 4000 bits (or 500 bytes) on average. The public key is a product of two polynomials of degree 150 and is therefore a polynomial of degree 300. Coefficients in this polynomial are in the range \([0,254]\). Using the same argument as in the previous paragraph, we estimate the size of such a polynomial to be about 4500 bits (or 562 bytes) on average. The signature is a triple of polynomials: two of them are products of 3 tropical polynomials of degree 150, and one of them is a product of 2 tropical polynomials of degree 150. Therefore, the signature size is about 16,000 bits (or 2000 bytes) on average. In the table below, we have summarized performance data for several parameter sets. Most columns are self-explanatory; the last two columns show memory usage during verification and during the whole process of signing and verification.
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline
\multicolumn{8}{|c|}{Performance metrics for various parameter values} \\
\hline
degree of private polynomials & range for coefficients in private polynomials & verification time (sec) & signature size (Kbytes) & public key size (Kbytes) & private key size (Kbytes) & memory usage, verification (Mbytes) & memory usage, whole process (Mbytes) \\
\hline
100 & [0,127] & \(<\)0.1 & 1.3 & 0.37 & 0.33 & 0.4 & 0.4 \\
\hline
150 & [0,127] & 0.15 & 2 & 0.56 & 0.5 & 0.37 & 0.5 \\
\hline
200 & [0,127] & 0.25 & 2.6 & 0.74 & 0.67 & 0.47 & 0.6 \\
\hline
\end{tabular}
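To illustrate the arithmetic and the signing/verification flow described above, here is a short Python sketch. It is not the authors' reference implementation [8]: polynomials are stored simply as lists of coefficients indexed by degree, the helper names are ours, and the Step 2 degree/range/constant-multiple checks of the verification procedure are omitted for brevity.

```python
import hashlib
import secrets

D, R = 150, 127  # suggested parameters: degree and coefficient range


def trop_mul(a, b):
    """Tropical product of two polynomials stored as coefficient lists
    indexed by degree: the degree-m coefficient is the minimum of a[i] + b[j]
    over all i + j = m (min-plus convolution), as in Section 4.2."""
    out = []
    for m in range(len(a) + len(b) - 1):
        lo, hi = max(0, m - len(b) + 1), min(m, len(a) - 1)
        out.append(min(a[i] + b[m - i] for i in range(lo, hi + 1)))
    return out


def random_poly(d=D, r=R):
    """Random tropical polynomial of degree d with coefficients in [0, r]."""
    return [secrets.randbelow(r + 1) for _ in range(d + 1)]


def hash_to_poly(message, d=D):
    """Convert a message to a degree-d polynomial with coefficients in
    [0, 127]: SHA-512, concatenate three copies of the 512-bit digest,
    then read 7-bit blocks left to right (Section 4.1)."""
    digest = hashlib.sha512(message).digest()
    bits = ''.join(f'{byte:08b}' for byte in digest) * 3  # 1536 bits >= 1057
    return [int(bits[7 * j:7 * j + 7], 2) for j in range(d + 1)]


def keygen():
    X, Y = random_poly(), random_poly()
    return (X, Y), trop_mul(X, Y)  # private key (X, Y), public key M


def sign(message, X, Y):
    P = hash_to_poly(message)
    U, V = random_poly(), random_poly()
    N = trop_mul(U, V)
    return trop_mul(trop_mul(P, X), U), trop_mul(trop_mul(P, Y), V), N


def verify(message, signature, M):
    S1, S2, N = signature
    P = hash_to_poly(message)
    # Only the Step 3 identity is checked in this sketch.
    return trop_mul(S1, S2) == trop_mul(trop_mul(trop_mul(P, P), M), N)


(X, Y), M = keygen()
sig = sign(b"a message", X, Y)
assert verify(b"a message", sig, M)
```

The final assertion holds because tropical multiplication (min-plus convolution) is associative and commutative, which is exactly the correctness argument given in Section 3.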
2302.00029
Determining Which Sine Wave Frequencies Correspond to Signal and Which Correspond to Noise in Eye-Tracking Time-Series
The Fourier theorem states that any time-series can be decomposed into a set of sinusoidal frequencies, each with its own phase and amplitude. The literature suggests that some frequencies are important to reproduce key qualities of eye-movements ("signal") and some of frequencies are not important ("noise"). To investigate what is signal and what is noise, we analyzed our dataset in three ways: (1) visual inspection of plots of saccade, microsaccade and smooth pursuit exemplars; (2) analysis of the percentage of variance accounted for (PVAF) in 1,033 unfiltered saccade trajectories by each frequency band; (3) analyzing the main sequence relationship between saccade peak velocity and amplitude, based on a power law fit. Visual inspection suggested that frequencies up to 75 Hz are required to represent microsaccades. Our PVAF analysis indicated that signals in the 0-25 Hz band account for nearly 100% of the variance in saccade trajectories. Power law coefficients (a, b) return to unfiltered levels for signals low-pass filtered at 75 Hz or higher. We conclude that to maintain eye movement signal and reduce noise, a cutoff frequency of 75 Hz is appropriate. We explain why, given this finding, a minimum sampling rate of 750 Hz is suggested.
Mehedi H. Raju, Lee Friedman, Troy M. Bouman, Oleg V. Komogortsev
2023-01-31T19:02:17Z
http://arxiv.org/abs/2302.00029v2
Determining Which Sine Wave Frequencies Correspond to Signal and Which Correspond to Noise in Eye-Tracking Time-Series ###### Abstract The Fourier theorem proposes that any time-series can be decomposed into a set of sinusoidal frequencies, each with its own phase and amplitude. The literature suggests that some of these frequencies are important to reproduce key qualities of eye-movements ("signal") and some of these frequencies are not important ("noise"). We looked at three types of analysis: (1) visual inspection of plots of saccade, microsaccade and smooth pursuit exemplars; (2) an analysis of the percentage of variance accounted for (PVAF) in each of 1,033 unfiltered saccade trajectories by each frequency cutoff; (3) an analysis of saccade peak velocity in the unfiltered and various filtered conditions. Visual inspection suggested that frequencies up to 75 Hz are required to represent microsaccades. Our PVAF analysis indicated that data in the 0-25 Hz band are sufficient to account for nearly 100% of the variance in unfiltered saccade trajectories. Our analysis indicated that frequencies below 100 Hz are sufficient to maintain peak velocities. Therefore, our overall conclusion is that to maintain eye-movement signal and reduce noise, a cutoff frequency of 100 Hz is appropriate. Our results have implications for the proposed sampling rate of eye-tracking recordings. If one is working in the frequency domain and 100 Hz needs to be preserved, the minimum required sampling rate would be 200 Hz. However, in a time domain analysis, a minimum 1000 Hz sampling rate is required. Eye Tracking Signal Noise Fourier ## 1 Introduction Fourier analysis models a time-series as the sum of a set of sine-waves with variable frequencies, phases, and amplitudes. In many cases (but not all, e.g., nystagmus (Rosengren et al., 2020)), the lower frequencies are required to preserve a time-series feature of interest (e.g., saccade peak velocity), and higher frequencies may not be needed and thus represent noise. In this common case, a low-pass filter can be used to keep the signal part of the time-series and remove the noise part. The best practice would be for researchers to evaluate what frequencies are needed, and which are not, for any particular goal, prior to data collection. This preliminary analysis would allow the researcher to design an appropriate data collection scheme. In this part of the study, signals should be collected at the highest possible frequency so that an analysis of which frequencies are needed can be performed. Once this information is known, filter settings and sampling rate could be optimized. For example, if the signal is in analog form, and if it is determined in this preliminary analysis that frequencies above 25 were not needed, then for the actual data collection one could place an anti-aliasing filter (low pass, 25 Hz cutoff) before the A/D conversion with a sampling rate of 250 Hz. Figure 1: Visual representation of the 10x rule One's study goals are very important when trying to determine a required sampling rate. If we were interested in the frequency domain, then a minimum of 2 samples per wave is required [Shannon, 1949]. If the fastest frequency we need was \(x\ Hz\), then the minimum sampling frequency needs to be \(2\times\ x\ Hz\). However, if we were interested in the time domain, as we believe that most eye-movement researchers are, then the minimum sampling frequency (Fs) needed to be \(Fs\ >=x\times\ 10\ Hz^{2}\). We refer to this as the "10x rule". 
The basis for the rule is obvious if one considers this question: How many samples are needed per sine-wave to resolve and see that sine wave? If one only samples a sine wave twice, then the sine-wave will not look like a sine wave. It is a rule of thumb that to accurately visualize a sine-wave, the sine wave needs to be sampled at least 10 times, and preferably 20 times. See Fig. 1 illustrating the need for this rule. We are not aware of any paper in the eye-movement field that used this 10x rule, including all the papers cited in this study. Here, a leading research group makes this statement: For oscillating eye-movements, such as tremors, we can argue based on the Nyquist-Shannon sampling theorem (Shannon, 1949) that the sampling frequency should be at least twice the speed of the particular eye movement (e.g., behavior at 150 Hz requires \(>300\) Hz sampling frequency) [Andersson et al., 2010]. Of course, this rule is only appropriate if one's focus is in the frequency domain (e.g., Fourier amplitude or power spectra). But eye-movement researchers are interested in the time domain, i.e., the trajectories of saccades, or PSOs, or the length and stability of fixation etc. Therefore, the correct rule of thumb is the 10x rule described above. Below, we review the prior research on required frequencies for saccades. For our research (and we suspected many others) faithful preservation of saccade trajectories and main-sequence-related saccade metrics would probably be sufficient. We didn't review signal-to-noise determinations for other eye movements with potentially higher-frequency components. We presented our analysis of the literature in Table 1. One potentially relevant paper was not included in our table [Juhola et al., 1985]. The signals (electrooculography EOG and photoelectric) were analog signals. These analog signals were filtered first with the low-pass analog filter at 30 Hz. Subsequently, the signals were digitally filtered with a low-pass filter with a cutoff of 70 Hz. This creates a very complex situation, and we didn't think that statements about frequencies required to preserve saccade peak velocity were useful given the insertion of this analog filter. Therefore, this paper was not included in Table 1. We also excluded [Inchingolo and Spanio, 1985]. Their paper was based on EOG signal which was analog-filtered with a cutoff at 100 Hz. Any further statements about the effects of other digital low-pass filtering was confounded by the presence of the analog filter. From Table 1, despite the difference in recording and other methods, the literature supports the notion that 0-125 Hz frequency components are sufficient to preserve saccade characteristics. To summarize, our goal in this study was to determine which frequencies are needed to preserve signal and which frequencies correspond to noise in eye-tracking studies. We evaluate this issue for of saccades, microsaccades and smooth pursuit. \begin{table} \begin{tabular}{p{85.4pt}|p{85.4pt}|p{85.4pt}} \hline Article & Methods & Findings \\ \hline [Bahill et al., 1981] & Photoelectric techniques & For noisy data, a bandwidth of 0-125 Hz was required to record saccades. Also, a sampling rate of 1000 Hz was suggested. \\ \hline [Schmitt et al., 2007] & Video-based infrared eye-tracker & Sampling rate should be 250 Hz. \\ \hline [Wierts et al., 2008] & VOG and Search coil & Saccadic eye movements of \(>=5^{o}\) amplitude were bandwidth limited up to a frequency of 25 to 30 Hz. 
A sampling frequency of about 50 Hz was sufficiently high to prevent aliasing. \\ \hline [Mack et al., 2017] & Synthetic saccades & Signals sampled as low as 240 Hz allow for the good reconstruction of peak velocity. With 240 Hz, the frequencies that can be evaluated were 0-24 Hz (in the time domain). \\ \hline \end{tabular} \end{table} Table 1: Frequency Content of Saccades ## 2 Methods ### Subjects We recorded a total of 23 unique subjects (M=17/ F=6, median age = 28, range = 20 to 69 years). From the total number of unique participants, 14 had normal (not-corrected) vision, and 9 had corrected vision (7 glasses, 2 contact lenses). Nine of the unique participants were left-eye dominant and 14 were right-eye dominant. Subjects were recruited from laboratory personnel, undergraduates taking a class on computer programming, and friends of the experimenters. The Texas State University institutional review board approved the study, and participants provided informed consent. We report on two datasets, the first dataset ("Fixation"), initially contained data from 15 subjects, but because of blinks and other artifacts, we only analyzed 9 subjects. The second dataset ("RS-SP"), contained data when subjects viewed both a random saccade task and a smooth pursuit task. The RS-SP dataset consisted of 9 subjects. ### Eye Movement Data Collection Eye movements were collected with a tower-mounted EyeLink 1000 eye tracker (SR Research, Ottawa, Ontario, Canada). The eye tracker operated in monocular mode capturing the participant's dominant eye. Eye dominance was determined using the Miles method [30]. During the collection of eye movements data, each participant's head was positioned at a distance of 550 millimeters from a 19" (48.26 cm) computer screen (474\(\times\)297 millimeters, resolution 1680\(\times\)1050 pixels), where the visual stimulus was presented. The sampling rate was 1000 Hz. All data sets were collected with all heuristic filters off, i.e., unfiltered. For the fixation task, subjects were presented with a single fixation point (white circle, \(0.93^{o}\)) as the visual stimulus. The point was positioned in the horizontal middle of the screen and at a vertical angle of \(3.5^{o}\) above the primary position. Participants were instructed to fixate on the stationary point stimulus for a period of 30 seconds. During the random saccade task, subjects were instructed to follow the same target on a dark screen as the target was displaced at random locations across the display monitor, ranging from \(\pm\) 15\({}^{o}\) and \(\pm\) 9\({}^{o}\) of visual angle in the horizontal and vertical directions respectively. The random saccade task was 30 seconds long. The target positions were randomized for each recording. The minimum amplitude between adjacent target displacements was 2\({}^{o}\) of visual angle. The distribution of target locations was chosen to ensure uniform coverage across the display. The delay between target jumps varied between 1 sec and 1.5 sec (chosen randomly from a uniform distribution). During the smooth pursuit task, subjects were instructed to follow a target on the dark screen as the target moved horizontally from center to right. This ramp was followed by a fixation (length between 1 and 1.5 sec). This was followed by another ramp from the right to the left of the screen, then another fixation, etc. The rest of the task was a series of left-to-right and right-to-left ramps with fixations interposed. 
The target was moving at velocities of either \(5^{o}\)/sec, \(10^{o}\)/sec, or \(20^{o}\)/sec. For each speed, there were 5 continuous leftward and 5 rightward ramps per set. The order of the velocity sets was random for each participant. There was a 15 sec fixation period at the beginning of the task and between each set. The whole recording was 120 seconds long. ### Signal Processing of Fixation Data All fixation recordings lasted 30 seconds (30,000 samples). Non-overlapping segments of fixation of 2048 continuous samples were selected. To be included in our analysis, we wanted only segments without saccades and other artifacts. If a segment contained any velocity above 25 deg/sec the segment was rejected3. For six of the 15 subjects, we could not find a single segment of 2048 samples that met our criteria. Footnote 3: Velocity was calculated with a six-point difference approach using \(t_{+3}\) and \(t_{-3}\) as recommended by Bahill et al. [1982] ### Selection of Saccade, Catch-up Saccade and Microsaccade Exemplars We wanted to have multiple exemplars of saccades, catch-up saccades (CUS), and microsaccades. For the saccade examples, we used the random saccade task. For the CUS, we used the smooth pursuit dataset. Microsaccades were selected from the fixation dataset. Two exemplars were chosen for each eye movement type, a low-noise example and a high-noise example ("clean" and "noisy"). The selection was subjective but incorporated measures of precision to guide this choice. More examples are available as part of our supplementary material. For those unfamiliar with catch-up saccades, they occur when tracking a smoothly moving target. When gain was less than 1.0, subjects consistently lag behind the smoothly moving signal. In this case, they generate relatively small saccades to "catch-up" to the target. For a detailed analysis of the relationship between smooth pursuit gain, CUS amplitude, and CUS rate see Friedman et al. (1991). ### Signal Frequency Content Analysis We wanted to evaluate eye movements after one of seven filtering regimes (unfiltered, low-pass filtered at 25 Hz, band pass filtered from 26-50 Hz, 51-75 Hz, 76-100 Hz, 101-200 Hz, and high-pass filtered at 201 Hz). See Fig. 2 for an illustration of the frequency-response of the various filters. These were created using very sharp high-pass, low-pass, and band-pass Butterworth-style filters (order = 7). To prevent phase effects, all of these filters were zero-phase, which means that after the data were filtered in the forward direction, the signal was flipped and passed through the filter again. This procedure effectively doubled the filters' orders and squares the magnitudes of their transfer functions. The filtering operation was performed in post processing. ### Calculation of percentage of variance accounted for (PVAF) The first step for this analysis was to identify saccades in all of our Random Saccade task data. The identification was initially performed by an updated version of our previously published event detection method (Friedman et al., 2018). All potential saccades were screened by the authors so that only well-marked saccades were included. There were 1,033 well-marked saccades. A PVAF analysis was performed on each of these saccades. For each of these saccades, data from the unfiltered condition was treated as a dependent variable, and all of the filtered signals were treated as independent variables. We regressed the first filtered signal (0-25 Hz) onto the unfiltered signal and noted the \(r^{2}\). 
We then added the data filtered from 26-50 Hz and noted the change in \(r^{2}\). We kept doing this until all of the filtered bands had been entered into the multiple linear regression model. We multiplied each \(r^{2}\) by 100 to obtain the percent of variance accounted for (PVAF). ### Study of the Effects of Filtering on Saccade Peak Velocity We started with the 1,033 saccades discussed above. We created a histogram of saccade duration. We noticed that there were two groups of saccades: short saccades and long saccades (Fig. 3). In this histogram, two groups were obvious: those with durations less than (or equal to) 25 ms (N=415) and those with a duration greater than 25 ms (N=618). The analysis of the effect of filtering on peak velocity was performed on the short saccade group and the long saccade group separately. Snippets of the horizontal position channel were cut from 200 ms prior to each saccade to 200 ms after each saccade. For each snippet, a velocity calculation was performed using the 1\({}^{\text{st}}\) derivative from a Savitzky-Golay filter function with order = 2 and window = 7. The peak (absolute) velocity of the saccade was determined. In the next steps, each snippet was filtered with a 7\({}^{\text{th}}\)-order low-pass Butterworth filter with cutoffs of 25, 50, 75, 100, and 200 Hz. Peak velocities were determined for every saccade in the unfiltered state and in each of the filter conditions4. A Friedman test was conducted to test median differences between the filter conditions. With N = 415 or 618, and 6 levels, we would have a statistical power of 1.00 to detect a medium effect size5. Typically, in planning a study a power of 0.8 is a common goal. In our case, with a power of 1.0, all studies with moderate effect sizes will be statistically significant. Therefore, these statistical tests should be considered very powerful (i.e., effectively certain to find a medium effect size if there was one). Post-hoc multiple comparisons were controlled with a Tukey HSD test, at alpha = 0.05.

Figure 2: Frequency response of different frequency bandwidths using 7\({}^{\text{th}}\) order Butterworth filters

## 3 Results ### Analysis of Exemplars #### 3.1.1 Saccades In Fig. 4, we present the signal frequency content analysis for a "clean" saccade. This saccade has an approximate amplitude of \(2.94^{o}\). In plot (A1) we present the unfiltered signal trace for the saccade. In plots (B1 to G1) we present the signal containing frequencies from different bands. All of the plots in the left column were scaled to match the unfiltered saccade in (A1). All of the plots on the right column were scaled individually based on their range of data. The signal in plot (B1) appears very similar to the saccade in plot (A1). However, the post-saccadic activity in (A1) was missing, and there was less noise. The saccade amplitude has not been altered. In plot (C1) we present the signal containing frequencies from 26-50 Hz. There appears to be a minor contribution to signal amplitude from this band. For the remaining plots in the left column (D1 to G1), it appears that no signal remains that was relevant to the trajectory of the unfiltered saccade in (A1). In the right column, note the range of the data in (D2) to (G2). All of these bands contribute less than 3.0% of the amplitude of the unfiltered saccade. The waveforms of these plots do not appear to be relevant to the unfiltered saccade. So, for this saccade, we would consider that the data below 50 Hz were signal and the data above 50 Hz were noise.
In Fig. 5, we present the signal frequency content analysis for a "noisy" saccade. This saccade has an approximate amplitude of \(2.79^{o}\). In plot (A1) we present the unfiltered signal trace for the saccade. In plots (B1 to G1) we present the signal containing frequencies from different bands. The signal in plot (B1) appears very similar to the saccade in plot (A1). However, the post-saccadic activity in (A1) was missing, and there was less noise. The saccade amplitude has not been altered. In plot (C1) we present the signal containing frequencies from 26-50 Hz. There appears to be a minor contribution to signal amplitude from this band. Some of the signals in this band may contribute to the post-saccadic activity in the unfiltered saccade. For the remaining plots in the left column (D1 to G1), it appears that no signal remains that was relevant to the trajectory of the unfiltered saccade in (A1). In the right column, note the range of the data in (D2) to (G2). All of these bands contribute less than 4.0% of the amplitude of the unfiltered saccade. The waveforms of these plots do not appear to be relevant to the unfiltered saccade. So, for this saccade also, we would consider that the data below 50 Hz were signal and the data above 50 Hz were noise. #### 3.1.2 Microsaccade In Fig. 6, we present the signal frequency content analysis for a "clean" microsaccade. This microsaccade has an approximate amplitude of \(0.63^{o}\). The saccade detection algorithm determined the end of this saccade later than one would choose manually, but we don't think this difference affects the present analysis. In plot (A1) we present the unfiltered signal trace for the microsaccade. In plots (B1) to (G1) we present the signal containing frequencies from different bands. The signal in plot (B1) appears to be a very smooth version of the waveform in (A1). The microsaccade amplitude may be very slightly less than the amplitude of the unfiltered microsaccade. In plot (C1) we present the Figure 3: Frequency histogram of saccade duration (N=1,033 saccades) Figure 4: Signal frequency content analysis of a clean saccade. (A1) Exemplar of a clean unfiltered saccade. (B1) The signal in (A1) with only frequencies from 0 to 25 Hz. (C1) The signal in (A1) with only frequencies from 26 to 50 Hz. (D1) The signal in (A1) with only frequencies from 51 to 75 Hz. (E1) The signal in (A1) with only frequencies from 76 to 100 Hz. (F1) The signal in (A1) with only frequencies from 101 to 200 Hz. (G1) The signal in (A1) with only frequencies from 201 to 500 Hz. Note that all plots on the left panel have the same amplitude range as the original saccade. Since we cannot see some of the signals on this scale very well, each plot (A2-G2) on the right panel was y-scaled individually according to the range of the data. Yellow highlighting indicates the saccade. Figure 5: Signal frequency content analysis of a noisy saccade. See caption for Fig. 4 for more details. signal containing frequencies from 26-50 Hz. The waveform for the data filtered at 51-75 Hz (D1) appears to contain some relevant signal. For the remaining plots in the left column (E1 to G1), it appears that no signal remains that was relevant to the trajectory of the unfiltered microsaccade in (A1). Note the range of the data in (C2 to G2). Their amplitude range was a much higher ratio to the unfiltered microsaccade than similar bands were to the saccade exemplars. For this exemplar, we consider that the data below 75 Hz were signal and the data above 75 Hz were noise. 
Similarly, in Fig. 7, we present the signal frequency content analysis for a "noisy" microsaccade. This microsaccade has an approximate amplitude of \(0.651^{o}\). The signal in plot (B1) looks like a very smooth version of the unfiltered saccade. The amplitude of this very smooth waveform is, at most, very slightly less than the unfiltered saccade. In plot (C1), we present the signal containing frequencies from 26-50 Hz. These higher frequencies contribute to the sharpness of unfiltered signal. In plot (D1), we present the signal containing frequencies from 51-75 Hz. As was the case for the waveform in (C1), these higher frequencies contribute to the sharpness of the unfiltered signal. For the remaining plots in the left column (E1 to G1), it appears that no signal remains that was relevant to the trajectory of the unfiltered microsaccade in (A1). This part of the signal was what makes this a "noisy" saccade. For this microsaccade also, we would consider that the data below 75 Hz were signal and the data above 75 Hz were noise. #### 3.1.3 Smooth Pursuit and Catch-up saccades (CUS) In Fig. 8 we present a "clean" segment of smooth pursuit and in Fig. 9 we present a "noisy" segment. Both segments have five or more CUS. The analysis of these figures was identical. In the (A1) plots we present the unfiltered smooth pursuit signal, including catch-up saccades. The (B1) plots appear very similar to that of the unfiltered segment. In the band from 26-50 Hz, There are some very small high-frequency bursts coincident with each saccade. Plot (C2) makes this point more clearly. For the remaining plots (D1 to G1), it appears that no signal remains that was relevant to the pattern of smooth pursuit of the unfiltered catch-up saccade in (A1). In both plots (D2) and (E2), there were bursts of high-frequency noise signal coincident with each CUS. However, the amplitude of these bursts in (E2) was so small that we think data from this frequency band can be ignored. For these smooth pursuit segments, we would consider that the data below 50 or 75 Hz was signal and the data above 75 Hz was noise. ### Percentage of variance accounted for (PVAF) Our results for the PVAF analysis of all saccade trajectories are presented in Fig. 10. It was clear from this figure that nearly all of the variance in the trajectory of the unfiltered saccade was accounted for by data in the range of 0-25 Hz. None of the high-frequency data contributed to the variance in the original unfiltered saccade in any substantial way. See Table 2 for exact numbers. ### Effects of Filters on Saccade Peak Velocity The saccade duration histogram clearly indicated two distinct saccade groups, Group 1 had a duration of 25 ms or less, and group 2 had a duration greater than 25 ms (Fig 3). Error bar plots of the median peak velocities for both groups are presented in Fig. 11. The medians (and mean absolute difference) values are also presented in Table 3. The Chi-square for the Friedman test applied to the short saccades was 1,655.64, (\(df=5,p<0.0001\)). Post-hoc tests indicated that the 100 Hz condition and the 200 Hz condition were not statistically significantly different, but all other possible comparisons were statistically significant (\(p<0.05\), Tukey-HSD for multiple comparisons). The Chi-square for the Friedman test applied to the long saccades was 1,176.91, (\(df=5,p<0.0001\)). 
Post-hoc tests indicated that the 75 Hz condition was not statistically significantly different from the 100 Hz condition, but that all other potential comparisons were statistically significantly different from all others, \(p<0.05\). On the basis of these statistical tests, peak velocity for both short and long saccades never return to the unfiltered level even in data filtered at 200 Hz. It is, however, important to recall the very great power of these tests. With these sample sizes, almost any small difference would be considered statistically significant. On the basis of the median values, it appears that by 100 Hz, the differences in median peak velocity were trivial (Short saccades: 109.9 \({}^{o}/s\) vs 107.6 \({}^{o}/s\); Long saccades: 328.5 \({}^{o}/s\) vs 323.14 \({}^{o}/s\)). \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Measure & 0-25 Hz & 16-50 Hz & 51-75 Hz & 76-100 Hz & 101-200 Hz & 201-500 Hz \\ \hline Median & 99.1795 & 0.5210 & 0.0302 & 0.0034 & 0.0076 & 0.0247 \\ \hline MAD & 1.5289 & 1.1703 & 0.2250 & 0.0499 & 0.0632 & 0.2094 \\ \hline \hline \end{tabular} \end{table} Table 2: PVAF Figure 6: Signal frequency content analysis of a clean microsaccade. See caption for Fig. 4 for more details. Figure 7: Signal frequency content analysis of a noisy microsaccade. See caption for Fig. 4 for more details. Figure 8: Signal frequency content analysis of a relatively clean smooth pursuit segment with catch-up saccades. See caption for Fig. 4 for more details. Figure 9: Signal frequency content analysis of a relatively noisy smooth pursuit segment with catch-up saccades. See caption for Fig. 4 for more details. ### Summary of Results In Table 4 we summarize our conclusions about which frequencies correspond to signal and which correspond to noise. ## 4 Discussion We provided a set of analyses that each provide estimates of which sine-wave frequencies are signal and which are noise in eye-tracking data. The different analyses provide different answers but can be summarized in a final single rule. The visual analysis of our microsaccade and smooth pursuit exemplars suggested that frequencies up to 75 Hz were required to retain signal whereas waves above 75 Hz represent noise. Our analysis of the percent of variance accounted for in unfiltered saccade trajectories by different filter bands indicated that essentially all of the variance in saccade \begin{table} \begin{tabular}{l c c c c c c} \hline Measure & UnFiltered & 0-25 Hz & 0-50 Hz & 0-75 Hz & 0-100 Hz & 0-200 Hz \\ \hline Short Medians & 109.9 & 49.5 & 90.9 & 105.1 & 107.6 & 109.1 \\ \hline Short MAD & 39.4 & 22.8 & 37.4 & 39.7 & 39.5 & 39.5 \\ \hline Long Median & 328.5 & 296.1 & 312.3 & 320.6 & 323.1 & 326.8 \\ \hline Long MAD & 106.3 & 113.5 & 104.1 & 103.0 & 104.0 & 105.9 \\ \hline \end{tabular} \end{table} Table 3: Saccade Peak Velocities (degrees per second) Figure 11: Peak-velocity Error Bar plots. The plot on left is for saccades \(\leq\) 25ms. The plot on the right is for saccades \(>\) 25 ms. Each circle is the median for a particular filter level. The error bars are \(\pm\) the mean absolute deviation. Figure 10: PVAF Error Bar Chart. Each circle was the median PVAF for a particular filter level. There are error bars in this plot based on the median absolute deviation, but they are so small as to be invisible in this range. shape was accounted for with data in the 0-25 Hz band. Saccade peak velocity was reduced when data were low-pass filtered at 25, 50, or 75 Hz. 
Data filtered at 100 Hz had peak velocities only trivially lower than the peak velocities of unfiltered saccades. Taken together, we conclude, that, if the goal is to preserve saccade (including microsaccade) and smooth pursuit characteristics, that frequencies up to 100 Hz are required but that frequencies above this frequency are noise. Juhola [Juhola, 1986] discuss the importance of correctly designing digital low pass files to preserve saccade peak velocity, However, they do not offer specific recommendations of cutoff frequencies and filter orders. Mack et al [Mack et al., 2017] concluded that a sampling rate of 240Hz was the minimum required to preserve estimates of saccade peak velocity. In our proposed follow-up to this article, we plan to compare various filter schemes in terms of their frequency response. We will make a recommendation of the best filter to retain signal and remove noise in eye-movement time-series. Since such filters can introduce or increase temporal autocorrelation, we will also evaluate and compare filters in terms of their autocorrelation effects. These results have implications for proposed sampling frequencies for future data collection. If our studies only involved the frequency domain, we would only need two samples at 100 Hz, so a sampling rate of 200 Hz would suffice. However, because we were interested in evaluating eye movements in the time domain, the 10x rule discussed in the introduction applies. Therefore, the minimum acceptable sampling rate for eye-tracking studies is 1000 Hz. ## Acknowledgments This work was funded by grant from the NSF (1714623) (PI: Oleg Komogortsev). ## Open Practices Statement As stated in the manuscript, the signals for all saccades analyzed are at [https://digital.library.txstate.edu/handle/10877/16437](https://digital.library.txstate.edu/handle/10877/16437). Also, images of all saccades as well as the basis for the PVAF analysis for each saccade are also available on this website. ## Conflict of interest The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.
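As a sketch of the band decomposition, PVAF regression, and peak-velocity computations described in Sections 2.5-2.7, the following illustrative SciPy/NumPy snippet may be useful. It is not the authors' released code: the function names, the use of second-order-section filters, and the (low, high) band-edge representation are our own choices, while the 1000 Hz sampling rate, the 7th-order zero-phase Butterworth filters, and the order-2, 7-sample Savitzky-Golay derivative follow the text above.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, savgol_filter

FS = 1000  # Hz, sampling rate of the EyeLink recordings


def band_signal(x, low, high, order=7):
    """Zero-phase Butterworth filtering into one frequency band.
    low=None -> low-pass at `high`; high=None -> high-pass at `low`."""
    if low is None:
        sos = butter(order, high, btype='low', fs=FS, output='sos')
    elif high is None:
        sos = butter(order, low, btype='high', fs=FS, output='sos')
    else:
        sos = butter(order, [low, high], btype='band', fs=FS, output='sos')
    return sosfiltfilt(sos, x)  # forward-backward pass doubles the order


BANDS = [(None, 25), (26, 50), (51, 75), (76, 100), (101, 200), (201, None)]


def pvaf(unfiltered):
    """Percentage of variance in the unfiltered saccade accounted for as
    each band-limited component is added to a linear regression."""
    y = np.asarray(unfiltered, dtype=float)
    components = [band_signal(y, lo, hi) for lo, hi in BANDS]
    pvafs, prev_r2 = [], 0.0
    for k in range(1, len(components) + 1):
        X = np.column_stack([np.ones_like(y)] + components[:k])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r2 = 1.0 - (y - X @ beta).var() / y.var()
        pvafs.append(100.0 * (r2 - prev_r2))
        prev_r2 = r2
    return pvafs  # incremental PVAF per band, summing to ~100


def peak_velocity(position_deg, cutoff=None):
    """Peak absolute velocity (deg/s) of a saccade snippet, optionally
    low-pass filtered first, using a Savitzky-Golay first derivative
    (order 2, 7-sample window)."""
    x = np.asarray(position_deg, dtype=float)
    if cutoff is not None:
        x = band_signal(x, None, cutoff)
    velocity = savgol_filter(x, window_length=7, polyorder=2,
                             deriv=1, delta=1.0 / FS)
    return np.max(np.abs(velocity))
```

Applying `peak_velocity` to a saccade snippet with `cutoff` values of 25, 50, 75, 100, and 200 Hz reproduces the style of comparison summarized in Table 3.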
2309.12987
Relating Wigner's Friend Scenarios to Nonclassical Causal Compatibility, Monogamy Relations, and Fine Tuning
Nonclassical causal modeling was developed in order to explain violations of Bell inequalities while adhering to relativistic causal structure and faithfulness -- that is, avoiding fine-tuned causal explanations. Recently, a no-go theorem that can be viewed as being stronger than Bell's theorem has been derived, based on extensions of the Wigner's friend thought experiment: the Local Friendliness (LF) no-go theorem. Here we show that the LF no-go theorem poses formidable challenges for the field of causal modeling, even when nonclassical and/or cyclic causal explanations are considered. We first recast the LF inequalities, one of the key elements of the LF no-go theorem, as special cases of monogamy relations stemming from a statistical marginal problem. We then further recast LF inequalities as causal compatibility inequalities stemming from a nonclassical causal marginal problem, for a causal structure implied by well-motivated causal-metaphysical assumptions. We find that the LF inequalities emerge from this causal structure even when one allows the latent causes of observed events to admit post-quantum descriptions, such as in a generalized probabilistic theory or in an even more exotic theory. We further prove that no nonclassical causal model can explain violations of LF inequalities without violating the No Fine-Tuning principle. Finally, we note that these obstacles cannot be overcome even if one appeals to cyclic causal models, and we discuss potential directions for further extensions of the causal modeling framework.
Yìlè Yīng, Marina Maciel Ansanelli, Andrea Di Biagio, Elie Wolfe, David Schmid, Eric Gama Cavalcanti
2023-09-22T16:32:39Z
http://arxiv.org/abs/2309.12987v4
Relating Wigner's Friend scenarios to Nonclassical Causal Compatibility, Monogamy Relations, and Fine Tuning ###### Abstract Nonclassical causal modeling was developed in order to explain violations of Bell inequalities while adhering to relativistic causal structure and _faithfulness_--that is, avoiding fine-tuned causal explanations. Recently, a no-go theorem stronger than Bell's theorem has been derived, based on extensions of Wigner's friend thought experiment: the Local Friendliness (LF) no-go theorem. We herein contend that LF no-go theorem poses formidable challenges for the field of causal modeling, even when nonclassical and/or cyclic causal explanations are considered. We first recast the LF inequalities, one of the key elements of the LF no-go theorem, as special cases of monogamy relations stemming from a statistical marginal problem; we then further recast LF inequalities as causal compatibility inequalities emerging from a _nonclassical_ causal marginal problem. We find that the LF inequalities emerge from the causal modeling perspective even when allowing the latent causes of observed events to admit post-quantum descriptions, such as Generalised Probabilistic Theories (GPT) or even more exotic causal compatibility prescriptions. We prove that _no_ nonclassical causal model can explain violations of LF inequalities without both rejecting various well-motivated causal-metaphysical assumptions and violating the No Fine-Tuning principle. Finally, we note that these obstacles cannot be overcome even if one were to appeal to _cyclic_ causal models. ###### Contents * I Introduction * I.1 The minimal LF scenario * I.2 Overview of the results * II LF inequalities are monogamy relations * III LF no-go theorem as a nonclassical causal marginal problem * III.1 Nonclassical causal marginal problem * III.2 LF inequalities are nonclassical causal compatibility inequalities * IV The conundrum for nonclassical causal inference * IV.1 In tension with relativity * IV.2 At odds with No Fine-Tuning * IV.2.1 Applicability to cyclic causal models * IV.2.2 Verifying the statistical independence of an inaccessible variable * V Conclusions * A Additional causal inference background * A.1 GPT compatibility * A.2 LF DAG as a GPT diagram * A.3 \(d\)-separation and its compositionality * B.1.1 The \(d\)-separation rule * B.1.2 The \(d\)-separation rule * B.1.3 The \(d\)-separation rule * B.1.4 The \(d\)-separation rule * B.1.5 The \(d\)-separation rule * B.1.6 The \(d\)-separation rule * B.1.7 The \(d\)-separation rule * B.1.8 The \(d\)-separation rule * B.1.9 The \(d\)-separation rule * B.20 The \(d\)-separation rule * B.21 The \(d\)-separation rule * B.22 The \(d\)-separation rule * B.23 The \(d\)-separation rule * B.24 The \(d\)-separation rule * B.25 The \(d\)-separation rule * B.26 The \(d\)-separation rule * B.27 The \(d\)-separation rule * B.28 The \(d\)-separation rule * B.29 The \(d\)-separation rule * B.30 The \(d\)-separation rule * B.31 The \(d\)-separation rule * B.32 The \(d\)-separation rule * B.33 The \(d\)-separation rule * B.34 The \(d\)-separation rule * B.35 The \(d\)-separation rule * B.36 The \(d\)-separation rule * B.37 The \(d\)-separation rule * B.38 The \(d\)-separation rule * B.39 The \(d\)-separation rule * B.40 The \(d\)-separation rule * B.41 The \(d\)-separation rule * B.42 The \(d\)-separation rule * B.43 The \(d\)-separation rule * B.44 The \(d\)-separation rule * B.45 The \(d\)-separation rule * B.46 The \(d\)-separation rule * B.47 The \(d\)-separation rule * B.48 The 
## I Introduction

The Bell DAG of Fig. 1 can come from one's causal intuitions regarding the Bell experiment and can be rigorously derived in two independent avenues: causal-metaphysical assumptions such as Local Causality [5] and causal discovery principles such as No Fine-Tuning (or faithfulness) [2; 6]. A causal structure imposes constraints on its compatible probability distributions, known as _causal compatibility inequalities_ or _equalities_. Bell inequalities are classical causal compatibility inequalities of the Bell DAG. The central message of Bell's theorem can then be understood as the inability of the Bell DAG to explain violations of Bell inequalities under classical causal modeling. This new perspective on Bell's theorem provided by Ref. [2] also established a bridge between quantum physicists and the classical causal inference community, paving the way for fresh insights on both fronts. For instance, it underscored the significance of causal compatibility inequalities (as opposed to equalities) for statisticians, and it also encouraged physicists to explore novel scenarios that show quantum advantages by investigating various causal structures [7; 8; 9; 10; 11; 12] and to contribute to classical causal inference [13; 14; 15; 16; 17; 18; 19]. Moreover, the idea of causally explaining violations of Bell's inequalities while keeping the Bell causal structure intact (i.e., without adding any extra arrows) spurred the development of nonclassical generalizations of Pearl's classical causal model framework, which permit correlations to be explained by causes whose properties are described by quantum theory or other generalized probabilistic theories (GPTs) [20; 21; 22; 23]. Substantial progress in nonclassical causal inference has unfolded since then [24; 25; 26; 27]. Independent of the motivation for explaining Bell's inequalities without changing the Bell DAG, another direction of generalizing Pearl's classical causal models is to allow cyclic instead of only acyclic causal structures. Cyclic causal structures have been used in classical causal inference for situations with feedback loops [28; 29] and have been studied in the quantum context concerning indefinite causal order [30; 31; 32]. In this paper, we use a framework for nonclassical causal models that is built upon the work of [20], which is the most widely adopted framework in nonclassical causal inference. We also add an extension to this framework to include cyclic causal models for relevant results. The present work aims to construct a similar bridge to the one built by Ref. [2], but where we connect extended Wigner's Friend no-go theorems with _nonclassical_ causal inference.
We achieve this by demonstrating that the Local Friendliness (LF) inequalities [33], featured in one of the most significant extended Wigner's Friend no-go theorems [34], can be viewed as causal compatibility inequalities. Consequently, these inequalities need to be respected even when allowing quantum, GPT, or other more unconventional nonclassical causal explanations. Moreover, we demonstrate that these causal compatibility inequalities can be derived from the requirement of No Fine-Tuning. Importantly, the No Fine-Tuning argument can also be applied to _cyclic_ causal models.

### The minimal LF scenario

The specific scenario we consider in this work is the minimal1 LF scenario [35, Sec. 2.1], featuring three observers: Alice, Bob, and Charlie2. Bob and Charlie each measure their share of a bipartite system, obtaining outcomes labeled3 \(B\) and \(C\), respectively. Charlie, who is in an isolated environment that we refer to as "Charlie's laboratory", performs a fixed measurement. Alice and Bob have choices of measurement settings labeled by \(X\) and \(Y\), respectively.

Figure 1: **The Bell DAG.** Triangles represent observed nodes while circles represent latent nodes.

Figure 2: The minimal Local Friendliness (LF) scenario with additional spatio-temporal requirements. The solid lines are the worldlines of the three observers. The dotted lines indicate light cones. Note that such spatio-temporal requirements are not needed for all of our results except the ones in Sec. IV.1.

Alice, who is stationed outside Charlie's laboratory, has the following choices: for \(x=1\) she asks Charlie for his outcome and sets her own outcome to the value she heard from Charlie; for \(x\neq 1\) she performs a different measurement on Charlie's laboratory, possibly disturbing or erasing the records of Charlie's result in the process. Note that we have not demanded Alice's and Bob's measurements to be space-like separated here since this is not needed for most of our results. If the spatio-temporal relationships between the events \(A\), \(B\), \(C\), \(X\), and \(Y\) are as given in Fig. 2, we will call it the _spacelike-separated_ minimal LF scenario. Here, just as in Bell's theorem, we have the background metaphysical assumption [36] that such events are well-localized in space-time. In particular, \(B\), \(C\), and \(Y\) must be outside the future light cone of \(X\), while \(A\), \(C\), and \(X\) must be outside the future light cone of \(Y\). This is analogous to the locality-loophole-free Bell experiment. In [33; 35; 37], it was shown that a conjunction of metaphysical assumptions (the conjunction was called "Local Friendliness") on the spacelike-separated minimal LF scenario implies constraints on the operational correlations on \(A\), \(B\), \(X\), and \(Y\), known as the Local Friendliness (LF) inequalities. These assumptions are Absoluteness of Observed Events and Local Agency. **Definition 1** (Absoluteness of Observed Events).: _Any observed event is an absolute single event, and not relative to anything or anyone._ The crucial implication of Absoluteness of Observed Events is that it enables a well-defined joint distribution over all variables representing observed events (referred to as _observed variables_). In the minimal LF scenario, this means, in particular, that Charlie's outcome \(C\) is always absolute, even if Alice erases it; as such, there is always a well-defined joint probability distribution over \(\{A,B,C,X,Y\}\).
**Definition 2** (Local Agency).: _A freely chosen measurement setting is uncorrelated with any set of relevant events not in its future-light-cone._ Local Agency is motivated by relativity theory. In the spacelike-separated minimal LF scenario, it demands that \[P(ac|xy)=P(ac|x),\quad P(bc|xy)=P(bc|y). \tag{1}\] Together with the assumption that \(P(a|c,x{=}1)=\delta_{a,c}\) (since Alice asks Charlie for his outcome when \(x=1\)), one arrives at the LF inequalities. In this way, just like Bell's inequalities, LF inequalities are derived in a theory-independent manner. In [33; 35], quantum realizations of the minimal LF scenario are proposed where Alice is a superobserver that can perform arbitrary quantum operations that undo4 Charlie's measurement when \(x\neq 1\). Quantum theory predicts such a realization to violate LF inequalities.5 Thus the no-go result: if this quantum prediction holds, at least one of the two metaphysical assumptions has to be false. Footnote 4: For such quantum realizations involving memory reversing, Charlie is proposed to be an advanced enough quantum AI instead of a real human [35]. Footnote 5: A prototype of such a quantum realization was done in Refs. [33; 38] where (in hindsight for the case of [38]) LF inequalities were violated. For discussion on future experimental realizations involving increasingly complicated and large systems, see [35].

### Overview of the results

In this work, we first (in Sec. II) show that the LF inequalities are essentially special cases of monogamy relations, which are consequences of statistical marginal problems. That is, given that the joint distributions \(P(abc|xy)\) (stipulated to exist by Absoluteness of Observed Events) need to satisfy the statistical constraints of Eqs. (1) (stipulated by Local Agency), there must be a trade-off between the strength of the marginal correlation \(P(ab|xy)\) and that of \(P(ac|x{=}1)\) (the latter is assumed to be a perfect correlation in deriving LF inequalities). We then proceed to our main results concerning a framework for nonclassical causal inference that we term _\(d\)-sep_ causal modeling, which enables GPTs and certain post-GPTs to provide causal explanations. First, we define such a framework in Sec. III.1. Then, in Sec. III.2, we show that the statistical marginal problem that led to the LF inequalities can also be _exactly_ cast as a nonclassical causal marginal problem [39; 40]. In this causal marginal problem, the LF inequalities are causal compatibility inequalities of the causal structure of Fig. 3, which we call the _LF DAG_, within the framework of \(d\)-sep causal modeling. As such, the LF DAG with \(d\)-sep causal modeling captures the essence of the LF theorem. Then, Sec. IV shows that just as Bell's theorem constitutes a serious challenge for classical causal inference [2], the LF no-go theorem further contests _nonclassical_ causal inference: there is no way to explain the violation of LF inequalities with \(d\)-sep causal modeling if one wishes to either (a) retain certain causal-metaphysical assumptions, such as the ones motivated by relativity, or (b) adhere to the No Fine-Tuning rule to avoid adopting conspiratorial causal explanations.

Figure 3: **The LF DAG.** The latent node \(\mathcal{L}\) is highlighted in yellow as a reminder that it can be associated with nonclassical systems while our results still hold.

Importantly, in Sec.
IV.2.1, we show that the No Fine-Tuning no-go theorem can be generalized even when our \(d\)-sep causal modeling framework is extended to include cyclic causal structures. It is worth mentioning that, in the No Fine-Tuning argument of Sec. IV.2, one of the conditional independence relations assumed in the No Fine-Tuning no-go theorem involves Charlie's measurement outcomes even if it is reversed by Alice. However, we construct an explicit protocol that can operationally verify this statistical independence. The design of this verification protocol is tailored to the previously mentioned quantum proposal for realizing violations of LF inequalities, providing a means of gathering operational evidence of the needed independence relations. ## II LF inequalities are monogamy relations A _statistical marginal problem_[41] concerns a set \(\mathbf{P}\) of probability distributions over overlapping sets of variables. It studies whether there is a joint probability distribution \(P\) over the union of these sets that has the probabilities in \(\mathbf{P}\) as its marginals, possibly under certain _statistical_ constraints. As we will see shortly, the original formulation [33] of the LF no-go theorem can be cast as a statistical marginal problem, in particular, one that yields monogamy relations. _Monogamy relations_ are consequences of a special type of statistical marginal problem. In such problems, the joint distribution is the correlation among multiple parties in a given scenario, and the marginal probability distributions are the correlations shared between different subsets of parties. The monogamy relations are constraints one can derive on these marginal correlations under certain statistical conditions. They give a tradeoff relation between these marginals, where the correlation between one subset of the parties, say Alice and Bob, is in some way bounded by the correlation between another subset of the parties, say Alice and Charlie. For example, in a tripartite Bell experiment, with the implicit assumption of Absoluteness of Observed Events [36], there is a joint distribution over all setting and outcome variables. Then, one can ask how strong the correlation between Alice and Bob can be, given certain degrees of correlation between Alice and Charlie, under the operational no-signaling condition. Here, the operational no-signaling condition is a statistical condition demanding that any of the marginal correlations between two parties must be independent of the measurement setting of the third party. The consequences of such a statistical marginal problem can be expressed as inequalities on the marginal correlation between Alice and Bob and between Alice and Charlie; such inequalities are monogamy relations for the tripartite Bell scenario. In Local Friendliness, as mentioned in Sec. I, the Absoluteness of Observed Events assumption allows there to be a well-defined joint distribution between the variables \(A\), \(B\), \(C\), \(X\), and \(Y\). Then, one asks how strong the correlation between Alice and Bob (namely, \(P(ab|xy)\)) can be, given the correlation between Alice and Charlie when \(x=1\) (namely, \(P(ac|x{=}1)\)), under the statistical conditions of Eqs. (1).6 Footnote 6: Despite sharing the same mathematical form of the operational no-signaling conditions in a tripartite Bell scenario, Eqs. (1) are not always operationally testable, as records of the value of \(C\) might get deleted. 
For example, in the case where all outcomes and settings are binary in the minimal LF scenario, the statistical constraints of Eqs. (1) give rise to the following monogamy relation [42]: \[\text{CHSH}+2\sum_{a=c}P(ac|x{=}1)\leq 5, \tag{2}\] where CHSH refers to the sum \[\sum_{a=b}P(ab|00)+\sum_{a=b}P(ab|01)+\sum_{a=b}P(ab|10)+\sum_{a\neq b}P(ab|11) \tag{3}\] Eq. (2) illustrates a tradeoff relation between \(P(ab|xy)\) and \(P(ac|x{=}1)\): the larger the correlation between Alice's and Charlie's outcomes when \(x=1\), the smaller the violation of the CHSH inequality that Alice and Bob can produce. In the special case where \(a=c\) whenever \(x=1\), Alice and Bob simply cannot violate the CHSH inequality. Although Eq. (2) was originally obtained for the tripartite Bell scenario [42], it applies to the minimal LF scenario because Eqs. (1) (that comes from Local Agency) are mathematically the same as the operational no-signal conditions in the tripartite Bell scenario where Charlie does not have choices for his measurement setting. This example with binary variables makes explicit what we mean when we say that the constraints that arise in the LF scenario are monogamy relations.7 In more generality, we can define the _LF monogamy relations_: Footnote 7: Another example is given by the so-called “relaxed” LF inequalities derived in Ref. [43, Eq. (13)]. They are mathematically special cases of the monogamy relations for the Bell scenario derived in [42, Eq. (3)]. **Definition 3** (LF monogamy relations).: LF monogamy relations _are inequality constraints on the set of probability distributions \(\{P(ab|xy),P(ac|x{=}1)\}\) that arise from the requirement that these distributions are marginals of \(P(abc|xy)\) satisfying the Local Agency constraints of Eqs. (1)._ As mentioned, the Local Friendliness inequalities are derived with the assumption that \(P(a|c,x{=}1)=\delta_{a,c}\) since Alice asks Charlie for his outcome when \(x=1\). With this assumption, Eq. (2) reduces to the CHSH inequality. Indeed, in the binary case, it was noted in Ref. [33] that the LF inequality is exactly the CHSH inequality. Thus, the LF inequalities are special cases of LF monogamy relations; thus, they are, at their core, consequences of a statistical marginal problem that involves the conditions of Eqs. (1). **Definition 4** (LF inequalities).: _An LF inequality is a special case of an LF monogamy relation, where \(P(ac|x{=}1)\) is set to be \(P(ac|x{=}1)=\delta_{a,c}P(c|x{=}1)\)._ In this work, we derive results for all LF monogamy relations instead of only the LF inequalities. This is a strengthening of the original no-go theorem, because it allows for the possibility of discrepancy between Charlie's actual observation and Alice's copy of his outcome (be it because of noise or other reasons [44]). More explicitly, we can have \[\sum_{a\neq c}P(ac|x{=}1)\leq\gamma \tag{4}\] for some \(\gamma\in[0,1]\).8 It may be hard to directly observe such a discrepancy, but we can instead qualify our conclusion for the statistics we obtained in an experimental realization of the minimal LF scenario. That is, instead of immediately concluding that the LF assumptions are violated whenever the LF inequalities are violated, we appeal to the LF monogamy relations to derive a lower bound on the discrepancy \(\gamma\) which would be required to salvage the LF assumptions in light of observing specific correlations of \(P(ab|xy)\). 
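For concreteness, the following is a short numerical sketch (not part of the derivation above; the solver choice and all names are ours) that maximizes the left-hand side of Eq. (2) by linear programming over all joint distributions \(P(abc|xy)\) constrained only by Eqs. (1); the optimum recovers the monogamy bound of 5.

```python
# Sketch: linear program maximizing CHSH + 2*sum_{a=c} P(ac|x=1) over joint
# distributions P(a,b,c|x,y) subject only to Eqs. (1); the optimum is the
# monogamy bound of 5 from Eq. (2).  Requires numpy and scipy.
import numpy as np
from scipy.optimize import linprog

vals = (0, 1)  # binary settings and outcomes; x = 1 is Alice's "ask Charlie" setting

def idx(x, y, a, b, c):
    """Position of P(a,b,c|x,y) in the flat variable vector."""
    return (((x * 2 + y) * 2 + a) * 2 + b) * 2 + c

n_vars = 32
A_eq, b_eq = [], []

# Normalization of each conditional distribution P(.|x,y).
for x in vals:
    for y in vals:
        row = np.zeros(n_vars)
        for a in vals:
            for b in vals:
                for c in vals:
                    row[idx(x, y, a, b, c)] = 1.0
        A_eq.append(row)
        b_eq.append(1.0)

# Eqs. (1): the A-C marginal is independent of y; the B-C marginal of x.
for x in vals:
    for a in vals:
        for c in vals:
            row = np.zeros(n_vars)
            for b in vals:
                row[idx(x, 0, a, b, c)] += 1.0
                row[idx(x, 1, a, b, c)] -= 1.0
            A_eq.append(row)
            b_eq.append(0.0)
for y in vals:
    for b in vals:
        for c in vals:
            row = np.zeros(n_vars)
            for a in vals:
                row[idx(0, y, a, b, c)] += 1.0
                row[idx(1, y, a, b, c)] -= 1.0
            A_eq.append(row)
            b_eq.append(0.0)

# Objective (negated, since linprog minimizes): CHSH of Eq. (3) plus
# 2*sum_{a=c} P(a,c|x=1), evaluated at y=0 (irrelevant by Eqs. (1)).
obj = np.zeros(n_vars)
for x in vals:
    for y in vals:
        for a in vals:
            for b in vals:
                wanted = (a != b) if (x, y) == (1, 1) else (a == b)
                if wanted:
                    for c in vals:
                        obj[idx(x, y, a, b, c)] -= 1.0
for a in vals:
    for b in vals:
        obj[idx(1, 0, a, b, a)] -= 2.0  # the c = a terms of 2*P(ac|x=1)

res = linprog(obj, A_eq=np.array(A_eq), b_eq=np.array(b_eq), bounds=(0, 1))
print("max of CHSH + 2*sum_{a=c} P(ac|x=1):", round(-res.fun, 6))  # expect 5.0
```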
The option of holding onto the LF assumptions, then, gets weighted against the (im)plausibility of the lower bound on \(\gamma\) computed from the monogamy relations. Footnote 8: Eq. (4) is analogous to [43, Eq. (8)] but with an entirely different physical interpretation (we do not endorse their interpretation). Finally, bear in mind that in the definition of LF monogamy relations (Def. 3) there is no assumption made regarding the type of system the three agents can share. This hints at the challenges our findings will present in terms of the incapability of nonclassical causal inference for explaining violations of LF inequalities. ## III LF no-go theorem as a nonclassical causal marginal problem ### Nonclassical causal marginal problem The main goal of causal inference is to find underlying causal mechanisms for explaining the statistics of experiments or observations. The connection between the causal mechanisms and the observed statistics is given by a _causal model_. Any causal model has three parts. The first part is a causal structure, represented by a graph \(\mathcal{G}\). For simplicity, we assume for now that we are dealing with directed acyclic graphs (DAGs), and later in Sec. IV.2.1, we will extend our framework to include cyclic causal models. The second part is a _causal prescription_, which is a specification of the type of theory under which probability distributions can be produced by the causal structure. Such a theory determines the types of system a latent node in \(\mathcal{G}\) can be associated with (note that _observed_ nodes are necessarily classical) and determines the compatibility criterion between \(\mathcal{G}\) and a probability distribution. The third part is a probability distribution \(P\) over the observed variables that is compatible with \(\mathcal{G}\) given the causal prescription. A causal model with a causal structure \(\mathcal{G}\) is called a _classical causal model_ when its causal prescription is classical theory. Under such a rule, all nodes in \(\mathcal{G}\) must be associated with classical random variables. A probability distribution \(P\) that can be produced by \(\mathcal{G}\) under classical theory is said to be _classical-compatible_ with \(\mathcal{G}\). Likewise, a causal model is called a _quantum causal model_ when its causal prescription is quantum theory. Then, any _latent_ node in \(\mathcal{G}\) may be associated with a quantum system. A probability distribution \(P\) that can be produced by \(\mathcal{G}\) under quantum theory is said to be _quantum-compatible_ with \(\mathcal{G}\). Similarly, we use the term _GPT causal model_ to refer to any causal model whose causal prescription is some GPT and consequently, whose latent nodes can be associated with those GPT systems. As long as there is _some_ GPT that allows a probability distribution \(P\) to be produced by \(\mathcal{G}\), we say that \(P\) is _GPT-compatible_ with \(\mathcal{G}\). Appendix A.1 provides details on how a probability distribution can be produced by a causal structure given the GPT. Clearly, classical causal models are special cases of quantum causal models and quantum causal models are special cases of GPT causal models. It is also possible to have compatibility rules with even more general nonclassical probabilistic theories and to define a wider class of causal models that we call the _\(d\)-sep causal models_. 
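To make the classical case concrete before unpacking \(d\)-sep models, here is a toy sketch (ours, with purely illustrative response functions) of a classical causal model on the Bell DAG of Fig. 1: a latent variable is sampled, \(A\) and \(B\) are deterministic functions of \((X,\Lambda)\) and \((Y,\Lambda)\), and the sampled statistics respect no-signaling and the classical CHSH bound of 3 in the normalization of Eq. (3).

```python
# Toy classical causal model on the Bell DAG (Fig. 1): Lambda -> A, B; X -> A;
# Y -> B.  The response functions are illustrative choices, not taken from the
# paper.  Requires numpy.
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def respond_A(x, lam):
    # Alice's deterministic response to her setting and the shared latent cause.
    return (lam >> x) & 1

def respond_B(y, lam):
    # Bob's deterministic response.
    return (lam >> (2 + y)) & 1

lam = rng.integers(0, 16, size=N)   # latent common cause (4 independent bits)
x = rng.integers(0, 2, size=N)      # Alice's freely chosen setting
y = rng.integers(0, 2, size=N)      # Bob's freely chosen setting
a, b = respond_A(x, lam), respond_B(y, lam)

# No-signaling check: P(a=0 | x, y) should not depend on y.
for xv in (0, 1):
    probs = [np.mean(a[(x == xv) & (y == yv)] == 0) for yv in (0, 1)]
    print(f"P(a=0 | x={xv}, y=0/1) =", [round(p, 3) for p in probs])

# CHSH in the normalization of Eq. (3); classically bounded by 3.
chsh = 0.0
for xv in (0, 1):
    for yv in (0, 1):
        sel = (x == xv) & (y == yv)
        if (xv, yv) == (1, 1):
            chsh += np.mean(a[sel] != b[sel])
        else:
            chsh += np.mean(a[sel] == b[sel])
print("CHSH (Eq. (3) normalization):", round(chsh, 3), "<= 3")
```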
However, to spell out what we mean by \(d\)-sep causal models, it is instructive to first see some common features of the compatibility criteria specified by all GPTs. In general, the constraints that a causal structure imposes on its compatible probability distributions depend on the causal prescription. For example, the Bell inequality is a constraint of the Bell DAG under classical causal modeling but not a constraint if the causal prescription is quantum or an arbitrary GPT. However, some of the constraints can hold across various causal prescriptions, of which the most well-known are conditional independence relations between observed variables implied by \(d\)-separation relations [45, 46], which are graph-theoretic relations of the causal structure. (See Appendix A.3 or [3, Chapter 1] for an introduction.) A causal structure \(\mathcal{G}\) together with a causal prescription is said to obey the _\(d\)-separation rule9_ when any \(d\)-separation relation among observed nodes of the causal structure implies a conditional independence relation on its compatible probability distributions. That is, for any three sets of observed nodes, \(\mathbf{U}\), \(\mathbf{V}\), and \(\mathbf{W}\), in \(\mathcal{G}\), whenever \(\mathbf{U}\) and \(\mathbf{V}\) are \(d\)-separated by \(\mathbf{W}\) (denoted as \(\mathbf{U}\perp_{\mathrm{d}}\mathbf{V}|\mathbf{W}\)), then the conditional independence relation \(P(\mathbf{u}|\mathbf{v},\mathbf{w})\!=\!P(\mathbf{u}|\mathbf{w})\) must hold for all distributions compatible with \(\mathcal{G}\). When the graph is acyclic, i.e, a DAG, it was proven in Ref. [20] that the \(d\)-separation rule is satisfied regardless of the GPT specified by the causal prescription.10 Footnote 9: For reference, it is also called the “directed global Markov property” in, for example, Ref. [28]. Footnote 10: See [20] for a proof in Operational Probabilistic Theories (OPT)s, which can be straightforwardly applied to the GPT case. While we focus on GPTs, all our results for GPT translate to the OPT case. In light of this theory-independent constraint, we say that a probability distribution \(P\) is _\(d\)-sep-compatible_ with a DAG \(\mathcal{G}\) if \(P\) exhibits all of the conditional independence relations coming from the \(d\)-separation relations among observed nodes of \(\mathcal{G}\), and we call any causal model whose probability distribution and causal structure are \(d\)-sep-compatible a _\(d\)-sep causal model_. Note that GPT causal models cannot explain every distribution which is merely compatible in this more relaxed sense. On the other hand, we have been made aware that quasi-probabilistic [47] causal models--wherein the real number assigning the (quasi)probability of a particular latent valuation need not lie between zero and one--on a DAG \(\mathcal{G}\) can give rise to distributions that are not GPT-compatible with \(\mathcal{G}\).11 Therefore, one can think of causal prescriptions that go beyond GPTs, and are nevertheless included in our \(d\)-sep causal modeling framework. For reference, \(d\)-sep causal models coincide with the so-called Ordinary Markov models [48], and the set of probability distributions \(d\)-sep-compatible with an acyclic causal structure is the set \(\mathcal{I}\) defined in Ref. [20]. Footnote 11: The example provided to us is that, with quasi-probabilistic theories, it is possible to achieve perfect correlation between the three variables of the so-called triangle scenario [47]. 
Such perfect correlation cannot be realized by GPTs in that scenario. We postpone to Sec. IV.2.1 the introduction of an extension of \(d\)-sep causal models, encompassing a wide class of cyclic causal models, namely, those adhering to a generalisation of the \(d\)-separation rule. Importantly, all the findings, including the ones prior to Sec. IV.2, remain valid in that extended framework. The study of _causal marginal problems_ has been around in the classical causal inference community for more than two decades [49; 40]; it is a special case of the causal compatibility problem. Here, we define it in such a way that it also applies to nonclassical causal inference. Similar to a statistical marginal problem, a causal marginal problem concerns a set \(\mathbf{P}\) of probability distributions over non-identical but overlapping sets of _observed variables_. It concerns whether there is a probability distribution \(P\) over the union of these sets of variables that has the probabilities in \(\mathbf{P}\) as its marginals under certain _causal_ constraints. Such causal constraints can be, for example, that \(P\) must be compatible with a causal structure \(\mathcal{G}\) under a certain causal prescription. When \(P\) is compatible with \(\mathcal{G}\) under that causal prescription, we say that the _set \(\mathbf{P}\)_ is compatible with \(\mathcal{G}\), meaning that all probability distributions in the set are _simultaneously_ compatible with \(\mathcal{G}\).

### LF inequalities are nonclassical causal compatibility inequalities

Now, we demonstrate how and why the LF no-go theorem can be _exactly_ cast as a causal marginal problem relative to the LF DAG, meaning that the formulation of the no-go theorem using the LF DAG does not lose or gain any constraints on operational data relative to what was already in the original formulation of the LF no-go theorem [33]. First, we prove that LF inequalities are causal compatibility inequalities under \(d\)-sep causal modeling. **Theorem 1** (No-go--LF DAG).: _No \(d\)-sep causal model with the LF DAG (Fig. 3) can explain any violation of LF monogamy relations, including LF inequalities._ Proof.: The LF DAG has the following \(d\)-separation relations: \[AC\perp_{\mathrm{d}}Y|X,\quad BC\perp_{\mathrm{d}}X|Y. \tag{5}\] By the \(d\)-separation rule, Eq. (5) demands the conditional independence relations of Eqs. (1) to hold for any distribution \(d\)-sep-compatible with the LF DAG; such a distribution must therefore satisfy the LF monogamy relations by Def. 3. Theorem 1 is a formulation of the LF no-go theorem from the perspective of a causal marginal problem, which is a special case of the causal compatibility problem. Furthermore, the following theorem implies that such a perspective with the LF DAG does not impose any extra constraints on \(\{P(ab|xy),P(ac|x\!\!=\!\!1)\}\) beyond those from the statistical marginal problem in Def. 3. **Theorem 2**.: _The only constraints on the set of probability distributions \(\{P(ab|xy),P(ac|x\!\!=\!\!1)\}\) that are GPT-compatible with the LF DAG are the LF monogamy relations and the no-signaling relations between Alice and Bob, i.e.,_ \[P(a|xy)=P(a|x),\quad P(b|xy)=P(b|y). \tag{6}\] Proof.: Consider the DAG in Fig. 4, which represents a tripartite Bell experiment where one of the three observers, Charlie, does not have choices for his measurement setting. Ref. [50] has an explicit construction of a GPT, called the Generalized Non-signaling Theory, such that when the latent node \(\mathcal{L}\) is associated with it, the DAG in Fig.
4 is compatible with _any_ \(P(abc|xy)\) satisfying Eqs. (1). The LF DAG, compared to the DAG in Fig. 4, has an additional arrow from \(C\) to \(A\). Thus, the LF DAG must have no less power than the DAG in Fig. 4 in explaining the correlations of \(P(abc|xy)\), under any \(d\)-sep causal prescription.12 Thus, it must be GPT-compatible with all the probability distributions that are GPT-compatible with the DAG of Fig. 4. Thus, the LF DAG must be GPT-compatible with _any_ \(P(abc|xy)\) satisfying Eqs. (1). Consequently, by Def. 3, the LF DAG must be GPT-compatible with _any_ set of \(\{P(ab|xy),P(ac|x{=}1)\}\) satisfying the LF monogamy relations, and Eqs. (6) merely come from Eqs. (1). Footnote 12: In fact, the LF DAG and the DAG in Fig. 4 are equivalent under any \(d\)-sep causal prescription in explaining \(P(abc|xy)\). This fact is hinted at by the observation in footnote 7. Note that by opting to present Theorem 1 in terms of \(d\)-sep causal modeling and Theorem 2 in terms of GPT causal modeling, we are presenting the most general version of both theorems: Theorem 1 says that even the most general \(d\)-sep causal model one could think of would not help to explain the violation of the LF inequalities; meanwhile, Theorem 2 says that even by restricting the \(d\)-sep causal models to GPT ones only, there are still no extra constraints on \(\{P(ab|xy),P(ac|x{=}1)\}\) other than LF monogamy relations and no-signaling. Together, these theorems imply that there is no difference in the set of probability distributions that are GPT-compatible and \(d\)-sep-compatible with the LF DAG. The no-signaling relations of Eqs. (6) mentioned in Theorem 2 are _not_ extra constraints imposed by the LF DAG compared to the constraints stipulated in the statistical marginal problem (i.e., Def. 3) for the minimal LF scenario, since they are implied by Eqs. (1). Therefore, we do not gain or lose any constraint on the operational probability \(P(ab|xy)\) in the minimal LF scenario by casting the LF no-go theorem in terms of a causal marginal problem with the LF DAG. Such a causal marginal problem can be looked at under the framework of either GPT causal modeling or \(d\)-sep causal modeling.

## IV The conundrum for nonclassical causal inference

Even though the LF DAG cannot explain any violation of LF monogamy relations under \(d\)-sep causal modeling, one could still hope to change the causal structure to do so. Fig. 5 shows some examples of such alternative causal structures. For the causal structures in Figs. 5(a) and 5(b), there are even _classical_ causal models that violate LF monogamy relations; for Fig. 5(c), it is necessary to use nonclassical causal prescriptions. In the case of Bell's theorem, Ref. [2] showed that, besides being in tension with relativity, all _classical_ causal models that can explain the violation of Bell's inequalities are fine-tuned. In this section, we prove an analogous result for \(d\)_-sep_ causal models that can explain the violation of LF monogamy relations: all of them need to simultaneously violate causal-metaphysical assumptions motivated by, for example, relativity, _and_ be fine-tuned. It is then clear why the problem for causality posed by the LF theorem is much stronger than that by Bell's theorem: while the one by Bell's theorem is only an issue for classical causal inference, the LF theorem puts the much more general nonclassical causal inference in question.
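As a quick machine check of the \(d\)-separation relations that drive Theorem 1 (Eq. (5)), the sketch below builds the LF DAG of Fig. 3 in networkx and verifies them, and then shows that a hypothetical alternative structure (our illustrative stand-in for the kind of modification shown in Fig. 5: an extra latent common cause \(M\) of \(X\) and \(C\)) breaks one of them; the relevant networkx function name is version-dependent, hence the small wrapper.

```python
# Verify the d-separation relations of Eq. (5) on the LF DAG (Fig. 3), and show
# that an illustrative modified structure loses one of them.  Requires networkx.
import networkx as nx

def d_separated(g, u, v, z):
    # networkx >= 3.3 renamed d_separated to is_d_separator; support both.
    fn = getattr(nx, "is_d_separator", None) or nx.d_separated
    return fn(g, u, v, z)

# LF DAG of Fig. 3: X -> A, C -> A, Y -> B, and latent L -> A, B, C.
lf_dag = nx.DiGraph([("X", "A"), ("C", "A"), ("Y", "B"),
                     ("L", "A"), ("L", "B"), ("L", "C")])

def report(g, name):
    ac_y = d_separated(g, {"A", "C"}, {"Y"}, {"X"})  # AC  _|_d  Y | X
    bc_x = d_separated(g, {"B", "C"}, {"X"}, {"Y"})  # BC  _|_d  X | Y
    print(f"{name}: AC _|_ Y | X -> {ac_y},  BC _|_ X | Y -> {bc_x}")

report(lf_dag, "LF DAG")

# Hypothetical alternative (not one of the paper's figures): a latent common
# cause M of Alice's setting X and Charlie's outcome C.  The second relation
# of Eq. (5) then fails, so Eqs. (1) could only be recovered by fine-tuning.
alt = lf_dag.copy()
alt.add_edges_from([("M", "X"), ("M", "C")])
report(alt, "variant with M -> X and M -> C")
```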
### In tension with relativity

The theorem that will be presented in this section concerns the _spacelike-separated_ minimal LF scenario, as it uses causal-metaphysical assumptions motivated by relativity. These causal-metaphysical assumptions are causal motivations for Local Agency (see Def. 2). They were previously defined in Ref. [37] as follows. **Definition 5** (Relativistic Causal Arrow).: _Any cause of an event is in its past light cone._ **Definition 6** (Independent Settings).: _A setting has no relevant causes, i.e., it can always be chosen via suitable variables that do not have causes among, nor share a common cause with, any of the other experimental variables._

Figure 4: The DAG for a tripartite Bell experiment where one of the three observers, Charlie, does not have choices for his measurement setting.

Figure 5: **Examples of causal structures allowing for violations of LF inequalities**.

Relativistic Causal Arrow is motivated by the theory of relativity, while Independent Settings13 is motivated by the common assumption that Alice and Bob can freely choose their measurement settings. Footnote 13: This assumption was called _Independent Interventions_ in [37]; however, since "intervention" has a specific technical meaning in the causal-inference literature, we will use the term Independent Settings for the scope of this work. **Theorem 3** (No-go--relativistic).: _No \(d\)-sep causal model satisfying Relativistic Causal Arrow and Independent Settings can explain any violation of the LF monogamy relations (including the LF inequalities) in the spacelike-separated minimal LF scenario._ Proof.: As introduced in Sec. I, in the spacelike-separated minimal LF scenario, \(B\), \(C\), and \(Y\) must be outside the future light cone of setting \(X\), and \(A\), \(C\), and \(X\) must be outside the future light cone of setting \(Y\). Independent Settings demands that the settings \(X\) and \(Y\) have no relevant causes. Specifically, in a DAG that obeys Independent Settings, \(X\) and \(Y\) cannot be caused by nor share common causes with any other node. Thus, they can fail to be \(d\)-separated from a node only if they are causes of that node. Relativistic Causal Arrow demands that \(X\) cannot be a cause of \(B\), \(C\), and \(Y\), while \(Y\) cannot be a cause of \(A\), \(C\), and \(X\). This then implies that any causal structure that obeys both Relativistic Causal Arrow and Independent Settings needs to exhibit the following \(d\)-separation relations: \[B\perp_{\mathrm{d}}X,\quad C\perp_{\mathrm{d}}X,\quad A\perp_{\mathrm{d}}Y,\quad C\perp_{\mathrm{d}}Y,\quad X\perp_{\mathrm{d}}Y. \tag{7}\] By the compositionality property of \(d\)-separation (see Def. 8 for the definition of compositionality, and Appendix A.3 for the fact that \(d\)-separation is compositional), these imply that \[ACX\perp_{\mathrm{d}}Y,\quad BCY\perp_{\mathrm{d}}X. \tag{8}\] The \(d\)-separation rule demands \[P(acx|y)=P(acx),\quad P(bcy|x)=P(bcy). \tag{9}\] These equations imply Eqs. (1), which yield the LF monogamy relations by Def. 3. Theorem 3 reformulates the challenge raised in Ref. [37] to quantum causal models, strengthening it to all \(d\)-sep causal models.
Furthermore, Theorem 3 together with Theorem 2 implies the following. **Corollary 1**.: _The set of \(P(abc|xy)\) GPT-compatible with the LF DAG coincides with that of any \(d\)-sep causal model satisfying Relativistic Causal Arrow and Independent Settings._ That is, the LF DAG under GPT causal modeling provides the most14 powerful \(d\)-sep causal explanation for the correlations of \(P(abc|xy)\) while adhering to Relativistic Causal Arrow and Independent Settings. Footnote 14: Strictly speaking, the LF DAG is a representative of the equivalence class of DAGs under \(d\)-sep causal modeling that are the most powerful, which includes, for example, the GPT causal model with the DAG of Fig. 4. It is an equivalence class since all DAGs in it are \(d\)-sep-compatible with the same set of probability distributions.

### At odds with No Fine-Tuning

As mentioned, Theorem 3 applies to observations made in the spacelike-separated minimal LF scenario. Therefore, when considering _non_-spacelike-separated scenarios, Theorem 3 has no teeth. Nevertheless, there is in fact a completely independent way of arguing against models that explain the violation of LF monogamy via an alternative causal structure: they violate the No Fine-Tuning condition (also known as the faithfulness condition), which is an assumption often used in causal discovery. A causal model is _fine-tuned_ if it produces a conditional independence relation that _does not_ come from a \(d\)-separation relation of the causal structure. Such a model is often undesirable, since its causal structure is not sufficient to explain why that conditional independence is observed in the data. In order to have a satisfactory causal explanation for operational observations, a fine-tuned model needs to be supplemented by further explanations for the appearance of that conditional independence. Without supplementary explanations, a fine-tuned causal model suggests that there is a conspiracy of nature that hides certain causal dependencies from our observations. Thus, in such cases, a fine-tuned model is undesirable and hence the _No Fine-Tuning_ causal discovery rule will be applied. **Definition 7** (No Fine-Tuning).: _All conditional independence relations in the probability distribution of a causal model must correspond to \(d\)-separation relations in its causal structure._ When a supplementary explanation exists, however, a fine-tuned model is not of concern. Cryptographic protocols are examples of such processes.15 Footnote 15: A causal model for a cryptography protocol will be fine-tuned. This is because the plaintext appears to be uncorrelated with the ciphertext, even though they are not \(d\)-separated: the plaintext causally influences the ciphertext. Nevertheless, the fine-tuning is not of concern since it can be further explained by how the cryptographer carefully designed the encryption algorithm to wash out any correlation between the plaintext and the ciphertext. In Refs. [2; 6] it was proven that, although there are classical causal models explaining violations of Bell inequalities, all such models must be fine-tuned to reproduce the no-signaling condition in a Bell scenario. Without a proper supplementary explanation for the appearance of the no-signaling relations, the fine-tuned models are unsatisfactory.
Here, we construct a No Fine-Tuning no-go theorem for the minimal LF scenario, where not only classical causal models are considered, but also any nonclassical ones under \(d\)-sep causal modeling framework, with extensions to cyclic causal structures in Sec. IV.2.1. Besides the no-signaling conditions, namely, \(P(a|xy)=P(a|x)\) and \(P(b|xy)=P(b|y)\) (Eqs. (6)), in our No Fine-Tuning argument for the minimal LF scenario we will also make use of additional conditional independence relations that can be expected from the experiment, namely, the statistical independence of Alice's or Bob's measurement setting from Charlie's outcome: \[P(c|xy)=P(c). \tag{10}\] Both Eqs. (6) and (10) are operationally verifiable. The data to verify Eqs. (6) are always available at the end of an experimental realization of the minimal LF scenario. The operational verification of Eq. (10), on the other hand, may be less straightforward, since as mentioned in Sec. II, the data for \(C\) may not always be available in some realizations of the minimal LF scenario such as the quantum proposal mentioned in Sec. I. However, operational evidence of Eq. (10) can, in fact, still be gathered in these realizations. A protocol for operationally verifying Eq. (10) in such a quantum proposal will be presented in Sec. IV.2.2, and some alternative protocols are presented in Appendix C. Crucially for the result, the conditional independence relations of Eqs. (6) and Eq. (10) do not come with any clear supplementary explanation: in the realizations of the minimal LF scenario, the experimenter does not need to design or engineer16 Eqs. (6) and (10), and there is no well-established17 mechanism that can wash out the corresponding correlations allowed by a causal structure permitting them. Footnote 16: The effort in experimental design to close various loopholes is not about enforcing statistical independence but about ruling out specific underlying mechanics (such as causal influences) relevant to certain assumptions used in the corresponding no-go theorems. For example, the locality-loophole free Bell experiments aim at ruling out the causal influence from Alice’s measurement setting \(X\) to Bob’s measurement outcome \(B\) according the Relativistic Causal Arrow, by making sure the \(X\) and \(B\) are space-like separated. Footnote 17: We remark that there is a proposed mechanism in Bohmian mechanics that explains why, in most conditions, signaling is not detected even if the causal structure permits it in principle, namely, the statistical mechanism that gives rise to the quantum equilibrium condition [51]. Therefore, it is reasonable to adopt the No Fine-Tuning causal discovery rule for this scenario. However, as it turns out, this assumption rules out any causal structure that is \(d\)-sep-compatible with violations of LF inequalities. **Theorem 4** (no-go--No Fine-Tuning).: _No \(d\)-sep causal model satisfying \(P(a|xy)=P(a|x)\), \(P(b|xy)=P(b|y)\), and \(P(c|xy)=P(c)\) with No Fine-Tuning can explain any violation of the LF monogamy relations (including the LF inequalities)._ Proof.: \(P(c|xy)=P(c)\) implies that \(P(c|xy)=P(c|x)\) and \(P(c|xy)=P(c|y)\). No Fine-Tuning requires the DAG reproducing \(P(a|xy)=P(a|x)\), \(P(b|xy)=P(b|y)\). \(P(c|xy)=P(c|x)\) and \(P(c|xy)=P(c|y)\) to satisfy the \(d\)-separation relations of \[A\perp_{\mathrm{d}}Y|X,\,B\perp_{\mathrm{d}}X|Y,\,C\perp_{\mathrm{d}}X|Y,\,C \perp_{\mathrm{d}}Y|X. 
\tag{11}\] Because of the compositionality property of the \(d\)-separation (see Def 8 for the definition of compositionality, and Appendix A.3 that \(d\)-separation is compositional), the above \(d\)-separation relations imply that \[AC\perp_{\mathrm{d}}Y|X,\quad BC\perp_{\mathrm{d}}X|Y. \tag{12}\] By the \(d\)-separation rule, these imply Eqs. (1), which yields the LF monogamy relations by Def. 3. That is, as long as one would like their causal model for the minimal LF scenario to reproduce the conditional independence relations of Eqs. (6) and (10), for whatever motivations one may have, including operational evidence or metaphysical considerations, Theorem 4 stipulates that all \(d\)-sep models that reproduce violations of LF monogamy relations must be fine-tuned. Thus, Theorem 4 provides an entirely novel way of understanding violations of LF inequalities--not only do they contradict relativistic causal-metaphysical assumptions in spacelike-separated minimal LF scenarios, but also challenge _any_\(d\)-sep causal model that respects the causal discovery assumption of No Fine-Tuning. Just as in the case in Sec. IV.1, we have the following corollary from Theorem 2 and Theorem 4. **Corollary 2**.: _The set of \(P(abc|xy)\) GPT-compatible with the LF DAG coincides with that of any \(d\)-sep causal model satisfying Eqs. (6) and (10) with No Fine-Tuning._ That is, under the No Fine-Tuning causal discovery rule, we again arrive at the LF DAG--with GPT causal modeling, it already provides the most powerful \(d\)-sep causal explanation one can obtain for the correlation of \(P(abc|xy)\) while reproducing the desired conditional independences of Eqs. (6) and (10). #### iv.2.1 Applicability to cyclic causal models Until this point, we only worked with acyclic causal structures and therefore one might suggest going around the no-go results in Theorem 3 and Theorem 4 by employing cyclic causal structures. For Theorem 3, cyclic causal structures are nonstarters as Relativistic Causal Arrow immediately rules them out. For Theorem 4, cyclic causal structures are not immediately ruled out, but we will prove in this subsection that they may still not be of any help. To do so, we extend the validity of Theorem 4 to a wide class (potentially all) of cyclic causal structures by generalizing the framework of \(d\)-sep causal modeling and by accordingly updating the definition of No Fine-Tuning. This is necessary because both the \(d\)-sep causal modeling framework and the current definition of No Fine-Tuning rely on \(d\)-separation; as we will see now, there are cyclic causal structures with, e.g., some GPT causal prescriptions, for which the \(d\)_-separation rule_ does not apply. Recall from Sec. III.1 that the \(d\)-separation rule means that all \(d\)-separation relations among observed nodes of the causal structure will give rise to conditional independence relations in the compatible probability distributions. Ref. [20] proved that this rule holds for all DAGs under any GPT causal prescription, and thus, we _defined_\(d\)-sep causal modeling as a framework with DAGs. The \(d\)-separation rule is not necessarily valid for a cyclic causal structure under GPT causal prescriptions. Indeed, some GPT causal prescriptions applied to a cyclic graph \(\mathcal{G}\) may give rise to compatible probability distributions that violate the conditional independence relations from \(d\)-separation relations among observed nodes of \(\mathcal{G}\).18 Appendix A.4 shows two examples where this happens. 
Because of that, generalizations of \(d\)-separation rule have been developed to encompass a larger number of cyclic causal structures. Footnote 18: We note that there are cyclic causal structures that obey the \(d\)-separation rule with _all_ causal prescriptions. This class is non-trivial since it contains, for example, causal structures that can violate causal inequalities that detect indefinite causal order [32], such as the Lugano causal structure [52]. We use the term _graphical-separation relations_ to denote the graph-theoretic relations employed in those generalizations. Each kind of graphical-separation relation will be determined by a corresponding graph-theoretic criterion; for reference, the graph-theoretic criterion for \(d\)-separation is reproduced in Appendix A.3. A reasonable generalization of the \(d\)-separation rule should have its graphical-separation relations reduce to \(d\)-separation relations on acyclic graphs. A causal structure together with a causal prescription is said to obey a _graphical-separation rule_ when any of the corresponding graphical-separation relations among observed nodes implies a conditional independence relation on its compatible probability distributions. Examples of graphical-separation rules that generalize the \(d\)-separation rule includes the \(\sigma\)-separation rule [28] and the \(p\)-separation rule [53]. While the \(\sigma\)-separation rule can still be violated by some cyclic causal structures with classical causal prescription (as shown in the second example of Appendix A.4), \(p\)-separation is a further generalization proved to be valid at least for _all_ causal structures under _quantum_ causal prescription [53]. Both the \(\sigma\)-separation rule and the \(p\)-separation rule reduce to \(d\)-separation rule for acyclic causal structures. A common feature of \(d\)-separation, \(\sigma\)-separation and \(p\)-separation rules is their compositionality, which is also a key ingredient needed for the proof of the No Fine-Tuning theorem (Theorem 4). **Definition 8** (Compositional graphical-separation relations).: _A graphical-separation relation denoted by \(\bot\) is compositional if \(\mathbf{U}\perp\mathbf{W}|\mathbf{Z}\) and \(\mathbf{V}\perp\mathbf{W}|\mathbf{Z}\) implies \(\mathbf{U}\mathbf{V}\perp\mathbf{W}|\mathbf{Z}\), where \(\mathbf{U}\), \(\mathbf{V}\), \(\mathbf{W}\), and \(\mathbf{Z}\) are four sets of observed nodes in a graph._ The fact that \(d\)-, \(\sigma\)-, and \(p\)-separation relations are compositional follows immediately from their definitions: two sets are conditionally separated if any element of one set is separated from any element of the other; see Def. 11 in Appendix A.3 for example. This means that compositionality is a natural property of graphical-separation relations. While it is not clear at this time if the \(p\)-separation rule is valid for all cyclic causal structures _under any GPT causal prescription_, the breadth of existing examples suggests a wide--perhaps even universal--applicability of the property of compositionality for cyclic causal structures. Thus, compositionality will be the key property of our extension of the framework of \(d\)-sep causal modeling to cyclic causal structures. Such extension is called _compositional causal modeling_. A compositional causal model is any causal model whose causal structure (which can be cyclic), together with its causal prescription, adheres to a graphical-separation rule associated with compositional graphical-separation relations. 
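The compositionality property of Def. 8 can also be sanity-checked mechanically in the \(d\)-separation case. The sketch below (ours; graph sizes and sampling parameters are arbitrary) draws random DAGs with networkx and confirms that whenever \(\mathbf{U}\perp_{\mathrm{d}}\mathbf{W}|\mathbf{Z}\) and \(\mathbf{V}\perp_{\mathrm{d}}\mathbf{W}|\mathbf{Z}\) hold, so does \(\mathbf{U}\mathbf{V}\perp_{\mathrm{d}}\mathbf{W}|\mathbf{Z}\).

```python
# Property check for Def. 8 in the d-separation case: on random DAGs, if
# U _|_d W | Z and V _|_d W | Z then (U u V) _|_d W | Z.  Requires networkx, numpy.
import itertools
import networkx as nx
import numpy as np

def d_separated(g, u, v, z):
    fn = getattr(nx, "is_d_separator", None) or nx.d_separated  # version shim
    return fn(g, u, v, z)

rng = np.random.default_rng(1)
nodes = list(range(7))

def random_dag(p=0.3):
    g = nx.DiGraph()
    g.add_nodes_from(nodes)
    for i, j in itertools.combinations(nodes, 2):  # orient edges i -> j with i < j (acyclic)
        if rng.random() < p:
            g.add_edge(i, j)
    return g

violations = 0
for _ in range(500):
    g = random_dag()
    perm = [int(v) for v in rng.permutation(nodes)]
    U, V, W, Z = {perm[0]}, {perm[1]}, {perm[2]}, set(perm[3:5])
    if d_separated(g, U, W, Z) and d_separated(g, V, W, Z):
        if not d_separated(g, U | V, W, Z):
            violations += 1
print("compositionality violations found:", violations)  # expect 0
```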
The "minimum definition of a causal model" in Ref. [54] is a special case of a compositional causal model, wherein the graphical-separation rule is simply \(d\)-separation and cyclic causal models satisfying it are permitted. Now, we can extend the notion of No Fine-Tuning to the compositional causal modeling framework. **Definition 9** (No Fine-Tuning (compositional causal modeling)).: _Suppose that a compositional causal model has a causal structure \(\mathcal{G}\) and a causal prescription that, together, satisfy a graphical-separation rule whose relation \(\bot\) is compositional. The model satisfies No Fine-Tuning if every conditional independence relation in its probability distribution corresponds to a \(\bot\) relation in \(\mathcal{G}\)._ Clearly, this notion reduces to the usual definition (Def. 7) if \(\bot\) is in fact \(\bot_{\mathrm{d}}\). Using this updated definition of No Fine-Tuning, the proof for Theorem 4 still goes through when we replace \(\bot_{\mathrm{d}}\) by any compositional graphical-separation relation \(\bot\). Thus, _no_ compositional causal model can explain any violation of LF monogamy relations while reproducing \(P(a|xy)=P(a|x),\ P(b|xy)=P(b|y)\) and \(P(c|xy)=P(c)\) without fine-tuning, _even those with cyclic causal structures and nonclassical causal prescriptions_. The generalization of \(d\)-sep causal modeling to compositional causal modeling brings more prominence to the No Fine-Tuning no-go theorem (Theorem 4) compared to the relativistic no-go theorem (Theorem 3). As mentioned, Relativistic Causal Arrow demands the causal structure to be acyclic and thus, Theorem 3 cannot say anything about cyclic causal structures. On the other hand, we expanded Theorem 4 to apply to cyclic causal structures: no model allows for LF inequality violations and satisfies Eqs. (6) and Eq. (10) while also obeying No Fine-tuning corresponding to any graphical-separation rule associated with compositional graphical-separation relations. #### iv.1.2 Verifying the statistical independence of an inaccessible variable Now, we will describe a verification protocol for checking the validity of Eq. (10) in the proposed quantum realization of violations of LF inequalities mentioned in Sec. I.1, where Alice sometimes reverses Charlie's measurement. This verification protocol requires the runs of quantum LF experiments to be executed simultaneously, i.e., in a parallel instead of sequential fashion; this is done with multiple Charlies. The protocol also involves another agent, Veronika19 the verifier. She receives the necessary data to verify that Eq. (10) is satisfied and communicates the verification result to the rest of the world, _without_ communicating any of the individual values of Charlie's outcome. In general, Veronika will get entangled with the Charlies in this process; the verification result received by the world, on the other hand, can be separable from Veronika and the Charlies. Alice, who is assumed to have quantum control over all the Charlies and Veronika, subsequently unwinds the entanglement between them so that afterwards, she can either directly access the outcome of a Charlie's measurement (if \(x=1\)) or further unwind the entanglement between a Charlie and his system (if \(x\neq 1\)). Then, Alice and Bob can proceed with the quantum protocol for violating LF inequalities, while the verification result sent out by Veronika is still publicly accessible. 
Footnote 19: Besides the fact that ‘vero’ means ‘true’, the name of the verifier is a tribute to Veronika Baumann who pointed out that the verifier in our verification protocol can send out a verification result that needs not be erased afterwards, which is inspired by Deutsch’s protocol for Wigner’s friend [55] and her own work [56]. The details of the verification protocol are as follows. Assume that we execute \(N\) parallel runs of the proposed quantum LF experiment to achieve violations of LF inequalities. At the beginning of the experiment, \(N\) identical entangled pairs are prepared. Instead of one Charlie, we have \(N\) Charlies, each of whom receives half of an entangled pair. The remaining halves of the entangled pairs are all sent to Bob. Each Charlie measures his half in a fixed basis, and does not share his outcome with the others. Note that we need several Charlies here because Alice will need to reverse some of their measurements while simply copying the outcomes of the others, depending on her measurement setting \(X\); if there was a single Charlie who saw all \(N\) outcomes, it is unclear whether Alice could coherently operate on some parts of his memory independently from the rest. After all Charlies finished their measurement, the joint state of all Charlies will in general be in a huge superposition state, which we denote as \[\sum_{C_{1},C_{2},\ldots,C_{N}\in\{1,2,\ldots,M\}}\left(\prod_{i=1,2,\ldots,N} \hskip-10.0pt\psi_{C_{i}}\right)\left|C_{1},C_{2},\ldots,C_{N}\right\rangle_{ \mathrm{C}}, \tag{13}\] where \(M\) is the number of possible outcomes, \(\psi_{C_{i}}\) is the amplitude for the \(i\)th Charlie to obtain the outcome \(C_{i}\), and \(\left|\cdot\right\rangle_{\mathrm{C}}\) is the state of the joint system consisting of all the Charlies. In general, there can be \(M^{N}\) terms in the superposition. To simplify the notation, we denote the string representing the values of all Charlies' outcomes in each term as \(\mathbf{C}_{k}\) with \(k=1,2,\ldots,M^{N}\) and rewrite the big superposition state as \[\sum_{k=1,2,\ldots,M^{N}}\hskip-10.0pt\psi_{\mathbf{C}_{k}}\left|\mathbf{C}_{k}\right\rangle _{\mathrm{C}}, \tag{14}\] where \(\psi_{\mathbf{C}_{k}}:=\prod_{i=1,2,\ldots,N}\psi_{C_{i}}\). Now, Alice chooses \(N\) values of \(X\), each of which corresponds to her measurement setting in each of the \(N\) runs, and similarly, Bob chooses \(N\) values of \(Y\), each of which corresponds to his measurement setting in each run. Before Alice and Bob implement any measurements Veronika checks if \(C\) is independent of \(X\) and \(Y\). She has two ways to do so. We will first present the way in which Veronika can be a quantum observer (such as a quantum AI similar to Charlie) without advanced technological abilities. At the end of this section, we will present a variant of the verification protocol where Veronika needs to be a superobserver with quantum control over all Charlies' labs. In the first approach, Veronika collects all values of \(X\) and \(Y\) from Alice and Bob and talks to all the Charlies to collects their values of \(C\). Then, she computes the value of \(f(c|xy)-f(c)\), where \(f(\cdot)\) indicates relative frequencies. If Veronika sees \(|f(c|xy)-f(c)|<\epsilon\), where \(\epsilon\) is a pre-agreed-upon small positive number indicating the allowed statistical fluctuations, she records her verification result as "pass", and otherwise, she records her verification result as "fail". 
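As a rough, purely classical sanity check of this frequency test (not part of the protocol itself; all numbers below are illustrative), one can sample outcome and setting strings with \(C\) independent of \((X,Y)\) and observe that the fraction of runs in which \(\max_{x,y}|f(c|xy)-f(c)|<\epsilon\), and hence in which the record would read "pass", approaches one as \(N\) grows; this anticipates the combinatorial claim made below.

```python
# Classical sampling proxy for Veronika's check: how often does
# max_{x,y} |f(c=0|x,y) - f(c=0)| stay below eps when C is independent of (X, Y)?
# All parameter values are illustrative.  Requires numpy.
import numpy as np

rng = np.random.default_rng(7)
eps, trials = 0.05, 500

def pass_fraction(N):
    passes = 0
    for _ in range(trials):
        c = rng.integers(0, 2, size=N)  # Charlies' outcomes, independent of settings
        x = rng.integers(0, 2, size=N)  # Alice's settings
        y = rng.integers(0, 2, size=N)  # Bob's settings
        f_c = np.mean(c == 0)
        dev = 0.0
        for xv in (0, 1):
            for yv in (0, 1):
                sel = (x == xv) & (y == yv)
                if sel.any():
                    dev = max(dev, abs(np.mean(c[sel] == 0) - f_c))
        passes += dev < eps
    return passes / trials

for N in (50, 200, 1000, 5000):
    print(f"N = {N:5d}: fraction of 'pass' runs ~ {pass_fraction(N):.3f}")
```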
It is important that neither of the records contains any information about any individual values of \(C\). Before she sends her verification result out to the rest of the world, we assume that the joint system of all Charlies, Veronika (denoted by V) and her verification record (denoted by R) is a closed system. As such, the state of this joint system will be \[\Big(\sum_{k=1,2,\ldots,J}\psi_{\mathbf{C}_{k}}\left|\mathbf{C}_{k}\right\rangle_{\mathrm{C}}\left|\text{saw ``}\mathbf{C}_{k}\text{''}\right\rangle_{\mathrm{V}}\Big)\left|\text{pass}\right\rangle_{\mathrm{R}}+\Big(\sum_{k=J+1,\ldots,M^{N}}\psi_{\mathbf{C}_{k}}\left|\mathbf{C}_{k}\right\rangle_{\mathrm{C}}\left|\text{saw ``}\mathbf{C}_{k}\text{''}\right\rangle_{\mathrm{V}}\Big)\left|\text{fail}\right\rangle_{\mathrm{R}}. \tag{15}\] Here, we assume without loss of generality that sequences are labelled such that the first \(J\) sequences of \(\mathbf{C}_{k}\) yield \(|f(c|xy)-f(c)|<\epsilon\) in Veronika's calculation, while for the rest \((M^{N}-J)\) sequences of \(\mathbf{C}_{k}\), Veronika's calculation will yield \(|f(c|xy)-f(c)|\geq\epsilon\). For combinatorial reasons, as the number of parallel runs \(N\) gets larger, the proportion of the \(M^{N}\) sequences that present correlations with \(X\) and \(Y\) decreases and \(J/M^{N}\approx 1\). Thus, when \(N\) is large enough, the term in the superposition where the verification record reads "pass" will dominate almost all of the amplitude. Correspondingly, when the verification record is sent out to the rest of the world, there will be a high chance that the message reads "pass". In this case, the state effectively becomes \[\Big(\sum_{k=1,2,\ldots,J}\psi_{\mathbf{C}_{k}}\left|\mathbf{C}_{k}\right\rangle_{\mathrm{C}}\left|\text{saw ``}\mathbf{C}_{k}\text{''}\right\rangle_{\mathrm{V}}\Big)\left|\text{pass}\right\rangle_{\mathrm{R}}, \tag{16}\] up to normalisation. Since the message is now disentangled from Veronika and the Charlies, Alice can rewind the interaction that caused the Charlies and Veronika to become entangled, erasing all of Veronika's memory about any individual outcome of any Charlie without modifying the verification record. The values of \(X\) and \(Y\) are stored in classical systems and thus will not be disturbed or erased by this process. The Charlies will now be in the state \[\sum_{k=1,2,\ldots,J}\psi_{\mathbf{C}_{k}}\left|\mathbf{C}_{k}\right\rangle_{\mathrm{C}}, \tag{17}\] up to normalisation. In general, Eq. (17) will be different from Eq. (14). As such, when Alice's measurement choice is \(x=1\), Alice's copy of Charlie's outcome may have a certain degree of inaccuracy. However, this is not of concern because the No Fine-Tuning no-go theorem as in Theorem 4 applies to LF monogamy relations, which jointly constrain both \(P(ab|xy)\) and \(P(ac|x{=}1)\), instead of only LF inequalities, which assume \(P(a|c,x{=}1)=\delta_{a,c}\). Note that in the case where the message reads "fail," the experiment has to be discarded because the state of the Charlies after the rewinding will be very different from the pre-verification state. An alternative verification scheme can be implemented in the case that Veronika has enough quantum control over the Charlies. In this case she can directly implement a projection-valued measurement (PVM) with two projectors on the joint system (labelled by \(\mathcal{S}\)) consisting of all Charlies and the classical systems storing all values of \(X\) and \(Y\). 
In such a PVM, one of the projectors corresponds to \(|f(c|xy)-f(c)|<\epsilon\) and the other to \(|f(c|xy)-f(c)|\geq\epsilon\). Then, if the measurement outcome corresponds to the first projector, Veronika records the verification result as "pass", and otherwise, as "fail", without ever knowing any individual values of Charlies' outcomes. After the measurement, the joint state of all Charlies, Veronika and her record, in general, is \[\Big(\sum_{k=1,2,\ldots,J}\psi_{\mathbf{C}_{k}}\left|\mathbf{C}_{k}\right\rangle_{\mathrm{C}}\Big)\left|\text{saw ``pass''}\right\rangle_{\mathrm{V}}\left|\text{pass}\right\rangle_{\mathrm{R}}+\Big(\sum_{k=J+1,\ldots,M^{N}}\psi_{\mathbf{C}_{k}}\left|\mathbf{C}_{k}\right\rangle_{\mathrm{C}}\Big)\left|\text{saw ``fail''}\right\rangle_{\mathrm{V}}\left|\text{fail}\right\rangle_{\mathrm{R}}. \tag{18}\] In this protocol, Veronika is no longer entangled with each individual sequence \(\mathbf{C}_{k}\); instead, she is as entangled with all Charlies as her verification record is. As such, after the verification result is sent out to the world, Veronika will automatically be separable from all Charlies, and there is no need for Alice to rewind. Nevertheless, the records of Charlies' outcomes will still get some noise due to the effective collapse of the state mentioned before, and the discussions thereof regarding the noise also apply here. Finally, we acknowledge that one may have some worries or concerns about using the Veronika verification protocol as a justification of Eq. (10) in Theorem 4. In Appendix C, we list some of these concerns and provide our responses. The important thing to note is that Theorem 4 and its proof are correct regardless of the feasibility or validity of the Veronika verification protocol. The Veronika protocol merely serves as one of the possible motivations one may have for adhering to Eq. (10) in the minimal LF scenario, especially when considering its quantum realizations. ## V Conclusions In this work, we introduced two new perspectives on the LF no-go theorem [33]: the first recasts LF inequalities as special cases of monogamy relations; the second recasts LF inequalities as causal compatibility inequalities associated with a _nonclassical_ causal marginal problem. Both perspectives bring new tools and insights to further elucidate the underlying landmark result, and illuminate new potential avenues for future research. By casting the LF inequalities as monogamy relations, we see that these inequalities arise precisely because of the correlation that Alice must occasionally share with Charlie, provided that Eqs. (1)--a statistical constraint analogous to the no-signalling condition in a tripartite Bell scenario--is satisfied. This point of view links the study of extended Wigner's Friend scenarios with ideas previously leveraged in quantum information technologies such as quantum key distribution and randomness amplification. This connection now enables researchers to share technical toolboxes originally developed for distinct motivations. More speculatively, this connection may also give hints on new no-go theorems in extended Wigner's friend scenarios, for example, by looking at monogamy relations derived from constraints other than no-signalling. Note that the LF monogamy relations are derived in a manner agnostic of whatever particular physical theory may be governing the systems shared by the agents. 
The theory-agnostic nature of their derivation means there is no escaping the LF monogamy relations by modifying the causal explanation to employ quantum, GPT, or other types of nonclassical systems as common causes. This is formalized by our results from the causal compatibility perspective, using the framework of \(d\)-sep causal modeling we defined in Sec. III.1. Indeed, while one _can_ explain Bell inequality violations by invoking a _quantum_ common cause in the Bell DAG, we showed by contrast that associating GPT (or more exotic) systems with the latent node of the LF DAG is _useless_ for explaining LF inequality violations. This follows from the fact that the LF inequalities are causal compatibility inequalities stemming from a _nonclassical_ causal marginal problem; see Theorem 1. This is an explicit manifestation of the fact that the LF no-go theorem is much stronger than Bell's theorem, which was pointed out in [33, 37]. We refer the reader to Appendix B for a comparison of the probability distributions compatible with the Bell DAG versus the LF DAG under different causal prescriptions and constraints. Besides showing that the LF inequalities are nonclassical causal compatibility inequalities of the LF DAG--i.e., that the LF no-go theorem _follows_ from the LF DAG with any causal prescription under \(d\)-sep causal modeling--we also showed that the causal marginal problem of Theorem 1 can be considered equivalent to previous formulations of LF no-go theorem [33, 37]. That is, Theorem 2 states that the LF DAG, under GPT causal modeling, does not impose any extra constraints on compatible probability distributions beyond those imposed by the metaphysical assumptions in previous formulations of LF no-go theorems. As such, the LF no-go theorem can be exactly cast as a causal marginal problem using the LF DAG. Since the LF DAG with GPT causal modeling constitutes _the most powerful_ causal explanation satisfying the fundamental causal principles associated with the LF causal-metaphysical assumptions (see Theorem 3 and Corollary 1), one may interpret the LF DAG as a visual and memorable representation of the LF theorem. Mnemonically, the LF DAG with GPT or \(d\)-sep causal modeling stands in place of the LF causal-metaphysical assumptions, with neither loss nor gain of generality. Our results also underscore how Extended Wigner's Friend scenarios pose formidable challenges for the field of nonclassical causal inference. First, we strengthened the challenge previously presented in Ref. [37] (Theorem 3); that is, we showed that not _only_ quantum causal models, but rather _all_\(d\)-sep causal models are in tension with Relativistic Causal Arrow and Independent Settings. Second, our Theorem 4 presents a new conundrum for the field: _any_ causal structure under _non_classical causal modeling must violate the principle of No Fine-Tuning in order to explain LF inequality violations. Notably, Theorem 4 further extends beyond _acyclic_ causal structures to most (and potentially all) _cyclic_ ones. To show this, we developed a generalization of \(d\)-sep causal modeling--_compositional causal modeling_--adapting the No Fine-Tuning principle to cyclic causal structures, as explained in Sec. IV.2.1. Our results mean that the most general available framework for causal reasoning simply cannot account for the violations of the LF inequalities without compromising _both_ basic causal-metaphysical assumptions _and_ the No Fine-Tuning principle of causal discovery. 
This is a rather radical finding, as this framework is already extremely permissive: it does not constrain the specific theory that can be used in the causal explanation, and can thus accommodate even certain exotic theories beyond GPTs; it also allows cyclic causal models, as long as the separation rule is compositional, such as the \(\sigma\)-separation rule [28] or the \(p\)-separation rule [53]. Speculatively, how might the existing causal modeling framework be generalized even further, so as to be able to sensibly account for LF inequality violations? We see two divergent possibilities: one is to develop a framework for _non_compositional causal modeling; the other is to generalize the definition of a causal structure and how its nodes are associated with correlations among observed events. We are skeptical that the first option could be fruitful. Firstly, while a noncompositional graphical-separation rule for cyclic causal models might salvage the No Fine-Tuning principle, it cannot help with the fact that Relativistic Causal Arrow and Independent Settings would imply an _acyclic_ causal structure, thus rendering cyclic causal models irrelevant. Additionally, noncompositionality would imply that properties of a set of variables fail to reduce to the individual _causal_ dependence of each element on its causal parents. These _holistic_ properties would nevertheless be causally related to other variables. As these holistic properties are functions of the joint distribution of a set of variables, these properties would presumably not appear as nodes themselves in the graphical model, and yet, the model would posit a lack of causal dependence of these properties relative to other variables, where this causal dependence would not be captured in the graphical structure. But the goal of adopting noncompositionality should be exactly to capture such causal dependencies! Finally, note that we cannot entertain any noncompositional separation rule that would imply conditional independence relations _beyond_ those implied by the \(d\)-separation rule for DAGs. After all, even classical causal models can violate every conditional independence relation _not_ implied by a \(d\)-separation relation in a DAG. Perhaps a more fruitful way is to generalize the definition of causal structures and how they are associated with correlations among observed events. So far, the framework uses a single, static causal structure, and all observed events are associated with a single node that carries a classical random variable; Absoluteness of Observed Events, in other words, is embedded in the compositional causal modeling frameworks. If one denies the absolute nature of Charlie's outcome, the premise of the marginal problem would no longer hold: there would no longer be a well-defined joint probability distribution \(P(abc|xy)\) simultaneously having \(\{P(ab|xy),P(ac|x{=}1)\}\) as its marginals. Some interpretations of quantum theory deny Absoluteness of Observed Events. For example, in Relational Quantum Mechanics [57, 58], QBism [59], and Everettian quantum mechanics [60], observed events are relative to specific contexts, agents, and branches, respectively. However, it is unclear how to construct causal models in such interpretations. The realization that Bell inequalities are classical causal compatibility inequalities has inspired the exploration of novel scenarios exhibiting quantum advantages and the construction of frameworks for modeling nonclassical causality. 
Analogously, we hope that our work will foster collaborations between two previously disconnected communities, namely, the community studying extended Wigner's friend theorems and the one focusing on nonclassical causal inference. For example, just as causal compatibility _inequalities_ (instead of equalities) had been much overlooked before the work of Ref. [2], so too _causal marginal problems_ have thus far received relatively sparse consideration by the classical causal inference community, let alone in nonclassical contexts. We anticipate that the techniques and findings in _nonclassical causal inference for marginal problems_ will ultimately expose novel insight-rich scenarios related to (or generalizations of) the extended Wigner's friend scenario. Finally, as mentioned, we hope that our work will inspire both communities to find a meaningful further generalization of the compositional causal modeling framework. ###### Acknowledgements. This project was originated in the discussions YY, MMA and ADB had at the 2022 Kefalonia Foundations workshop. Huge thanks to David Schmid for encouraging us to write this paper, providing important suggestions and feedback on the draft and for all the stimulating discussions. Also many thanks to V. Vilasini for sharing with us many valuable insights on cyclic causal models. We further thank Robert W. Spekkens, Howard M. Wiseman, John Selby, Veronika Baumann, Marcin Pawlowski, Marc-Olivier Renou, Victor Gitton, Ilya Shpitser, Isaac Smith, Markus Muller, Caroline L. Jones, Eleftherios Tselentis and Roberto D. Baldijao for insightful discussions. YY, MMA and EW were supported by Perimeter Institute for Theoretical Physics. Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Colleges and Universities. YY and MMA were also supported by the Natural Sciences and Engineering Research Council of Canada (Grant No. RGPIN-2017-04383). ADB was supported by the John Templeton Foundation (Grant ID#61466) as part of the "Quantum Information Structure of Spacetime (QISS)" project (qiss.fr). EGC was supported by grant number FQXi-RFP-CPW-2019 from the Foundational Questions Institute and Fetzer Franklin Fund, a donor advised fund of Silicon Valley Community Foundation, and by the Australian Research Council (ARC) Future Fellowship FT180100317. ## Appendix A Additional causal inference background ### GPT compatibility The following is adapted from [20, Section 3], which states the case for OPTs. We assume that the reader has basic familiarity with the GPT framework. A probability distribution over observed nodes, denoted as \(P(\mathsf{obs}(\mathcal{G}))\), is GPT-compatible20 with a given causal structure \(\mathcal{G}\) if and only if it can be generated by a GPT in the way prescribed by \(\mathcal{G}\), namely, if and only if there exists a GPT \(\mathsf{T}\) such that: Footnote 20: Sometimes it is also called to satisfy the “generalized Markov condition”. 
[20] * a system in \(\mathsf{T}\) is associated to each edge that starts from a latent node in \(\mathcal{G}\), * for each latent node \(Y\) and for any value \(\mathsf{opa}(Y)\) of its observed parents \(\mathsf{Opa}(Y)\), there is a channel \(\mathcal{C}_{Y}(\mathsf{opa}(Y))\) in \(\mathsf{T}\) from the systems associated with latent-originating edges incoming to \(Y\) to the composite system associated with all edges outgoing from \(Y\), * for every value \(x\) of an observed node \(X\), and for any value \(\mathsf{opa}(X)\) of its observed parents \(\mathsf{Opa}(X)\), there is an effect \(\mathcal{E}_{X}(x|\mathsf{opa}(X))\) in \(\mathsf{T}\) on the system composed of all systems associated with edges incoming in \(X\), such that \(\sum_{x}\mathcal{E}_{X}(x|\mathsf{opa}(X))\) is the unique deterministic effect on the systems associated to edges coming from latent nodes to \(X\), * \(P(\texttt{obs}(\mathcal{G}))\) is the probability obtained by wiring the various tests \(\mathcal{E}_{X}(x|\texttt{opa}(X))\) and channels \(\mathcal{C}_{Y}(\texttt{opa}(Y))\). Note that the \(\mathsf{T}\) may contain several system types such as classical ones. According to these definitions above, an observed node with no latent parents is simply associated with a (conditional) probability distribution (a set of effects on the trivial system that sum to the unique deterministic effect on the trivial system), while a latent node with no latent parents is associated to a classically-controlled state over the relevant systems (i.e., a controlled deterministic channel from the trivial system). GPT causal modeling reduces to classical causal modeling when all the nodes in the causal structure are observed nodes. In that case, the probability distribution over all the variables of \(\mathcal{G}\) must be in the form of \[P\left(\texttt{obs}(\mathcal{G})\right)=\prod_{N}P\left(n|\texttt{pa}(N) \right). \tag{10}\] Another common special case of GPT causal modeling is quantum causal modeling, where systems associated to edges outgoing from latent nodes may be either classical or quantum. For example, when the systems outgoing from the latent common cause of the Bell DAG are all quantum, the probability distribution over all observed variables of \(\mathcal{G}\) must be in the form \[P(abxy)=\operatorname{Tr}\bigl{[}\rho_{AB}(E_{a|x}\otimes F_{b|y})\bigr{]}P(x )P(y), \tag{11}\] where \(\rho_{AB}\) is the state of some quantum state associated with the latent node, and \(E_{a|x}\) and \(F_{b|y}\) are POVMs associated with nodes \(A\) and \(B\), respectively. ### LF DAG as a GPT diagram Let us apply the definition of GPT compatibility to the LF DAG. First, we note that there is only one latent note \(\mathcal{L}\), with three outgoing edges, and no parents. Therefore, \(\mathcal{L}\) will be associated with a tripartite state preparation \(\rho\). \(\mathcal{L}\) has three observed children, \(A,B,\) and \(C\), and each of them has \(\mathcal{L}\) as their only latent parent. This means that each of them will be associated with a family of effects on one of the three systems coming out of \(\rho\). \(C\) has no other parents, so it is associated with a family \(\{\mathcal{E}_{C}(c)\}\), \(B\) has only one other parent, so it is associated with \(\{\mathcal{E}_{B}(b|y)\}\), and \(A\) has two observed parents, so it corresponds to \(\{\mathcal{E}_{A}(a|cx)\}\). Finally, the two observed nodes \(X\) and \(Y\) are associated with probability distributions \(P(x)\) and \(P(y)\), as they have no parents. 
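To make the quantum special case concrete, the following minimal sketch evaluates Eq. (11) numerically for a two-qubit singlet state \(\rho_{AB}\) and the standard CHSH-optimal measurement settings (the specific state and observables are illustrative choices of ours, not taken from the text above); the resulting \(P(ab|xy)\) violates the CHSH inequality, reaching the Tsirelson value \(|S|=2\sqrt{2}>2\).

```python
import numpy as np

# Pauli matrices and the two-qubit singlet state rho_AB = |psi-><psi-|
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
psi = np.array([0, 1, -1, 0], dtype=complex) / np.sqrt(2)      # (|01> - |10>)/sqrt(2)
rho = np.outer(psi, psi.conj())

def povm(obs):
    """Projectors onto the +1 / -1 eigenspaces of a qubit observable obs."""
    return {+1: (I2 + obs) / 2, -1: (I2 - obs) / 2}

A = {1: povm(Z), 2: povm(X)}                                        # Alice's settings x = 1, 2
B = {1: povm((Z + X) / np.sqrt(2)), 2: povm((Z - X) / np.sqrt(2))}  # Bob's settings y = 1, 2

def P(a, b, x, y):
    """P(ab|xy) = Tr[rho_AB (E_{a|x} tensor F_{b|y})], the quantum rule in Eq. (11),
    with the free-choice factor P(x)P(y) omitted."""
    return np.real(np.trace(rho @ np.kron(A[x][a], B[y][b])))

def corr(x, y):
    return sum(a * b * P(a, b, x, y) for a in (+1, -1) for b in (+1, -1))

S = corr(1, 1) + corr(1, 2) + corr(2, 1) - corr(2, 2)
print(abs(S))   # ~2.828 = 2*sqrt(2) > 2, so these correlations violate CHSH
```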
Note that the three systems coming out of \(\rho_{\mathcal{L}}\) do not have to be of the same kind. Thus, a probability distribution \(P(abcxy)\) is GPT-compatible with the LF DAG if it corresponds to a diagram with the following shape (12) One can write this equivalently as the formula \[P(abcxy)\!=\!P\bigl{[}\mathcal{E}_{A}(a|cx)\mathcal{E}_{C}(c)\mathcal{E}_{B }(b|y)\rho_{\mathcal{L}}\bigr{]}P(x)P(y), \tag{13}\] where the values of \(P\bigl{[}\mathcal{E}_{A}(a|cx)\mathcal{E}_{C}(c)\mathcal{E}_{B}(b|y)\rho_{ \mathcal{L}}\bigr{]}\) will be specified by the GPT. To derive the LF inequalities, there is an extra condition on \(\mathcal{E}_{A}(a|cx)\), namely that \(\mathcal{E}_{A}(a|c,x\!\!=\!\!1)\) is \(\delta_{a,c}\) times the unique deterministic effect. ### \(d\)-separation and its compositionality In this appendix, we present the \(d\)-separation rule developed in Ref. [61; 45] and the compositionality of \(d\)-separation relations. In what follows, a _path_ of nodes in a DAG is a sequence of nodes connected by arrows, independently of the way these arrows are oriented. **Definition 10** (Blocked path).: _Let \(\mathcal{G}\) be a DAG, \(p\) be a path of nodes in \(\mathcal{G}\) and \(\mathbf{Z}\) be a set of nodes of \(\mathcal{G}\). We say that the path \(p\) is blocked by the set \(\mathbf{Z}\) if at least one of the following hold:_ 1. \(p\) _includes a sequence_ \(A\to C\to B\)_, called a_ chain_, where_ \(C\in\mathbf{Z}\)_._ 2. \(p\) _includes a sequence_ \(A\gets C\to B\)_, called a_ fork_, where_ \(C\in\mathbf{Z}\)_._ 3. \(p\) _includes a sequence_ \(A\to C\gets B\)_, called a_ collider_, where_ \(C\not\in\mathbf{Z}\) _and_ \(D\not\in\mathbf{Z}\) _for all the descendants_ \(D\) _of the node_ \(C\)_._ When a path \(p\) has a collider \(A\to C\gets B\), we will also say that \(p\) includes a _collider on_\(C\). **Definition 11** (\(d\)-separation relation).: **[Singleton-pair separation criterion]** _Let \(\mathcal{G}\) be a directed graph, where \(X\) and \(Y\) are single nodes in \(\mathcal{G}\), and \(\mathbf{Z}\) indicates a set of nodes of \(\mathcal{G}\). We say that \(X\) and \(Y\) are \(d\)-separated by the set \(\mathbf{Z}\), denoted \(X\perp_{d}Y|\mathbf{Z}\), if all of the paths connecting \(X\) to \(Y\) are blocked by \(\mathbf{Z}\)._ **[Setwise separation criterion]** _Two sets of nodes \(X\) and \(Y\) are said to be \(d\)-separated by \(\mathbf{Z}\), denoted \(\mathbf{X}\perp_{d}\mathbf{Y}|\mathbf{Z}\), whenever every node in \(\mathbf{X}\) is \(d\)-separated from every node in \(\mathbf{Y}\) given \(\mathbf{Z}\)._ Note that the relationship between setwise \(d\)-separation and singleton-pair \(d\)-separation implies that \(d\)-separation is compositional in the sense of Definition 8. The compositionality of \(d\)-separation was previously pointed out in Ref. [62] and is also emphasized in Ref. [63]. As we saw in Section III.1, the d-separation _rule_ states that, whenever a DAG \(\mathcal{G}\) presents a \(d\)-separation relation \(\mathbf{X}\perp_{\mathrm{d}}\mathbf{Y}|\mathbf{Z}\), a probability distribution \(P\) needs to satisfy the conditional independence \(P(\mathbf{X}\mathbf{Y}|\mathbf{Z})=P(\mathbf{X}|\mathbf{Z})P(\mathbf{Y}|\mathbf{Z})\) in order to be compatible with \(\mathcal{G}\), under all nonclassical causal prescriptions. By way of contrast, note that conditional independence relations are _not_ compositional. 
For instance, suppose that in a certain probability distribution \(P\), the variable \(U\) is independent of the variable \(W\) and the variable \(V\) is also independent of the variable \(W\). This _does not imply_ that the joint variable \(UV\) is independent of \(W\): for example, it is possible that \(W\) determines the parity of \(U\) and \(V\), even if it is independent of each one individually. ### Cyclic models violating the \(d\)-separation rule In this appendix, we will use two examples taken from [28] to explain what it means to say that the \(d\)-separation rule is not always valid for cyclic causal structures. In both of these examples there are no latent nodes; since observed nodes are always classical, the causal prescription does not make a difference here. The first example, Fig. 6, violates the \(d\)-separation rule but still obeys the \(\sigma\)-separation rule. The second example, Fig. 7, violates both the \(d\)-separation rule and the \(\sigma\)-separation rule, but obeys the \(p\)-separation rule. A counter-example to the \(p\)-separation rule is not known yet and it must involve post-quantum probabilistic theories. The first example concerns the causal structure of Fig. 6. There, both paths between \(B\) and \(D\) are blocked by \(AC\), so this graph presents the \(d\)-separation relation \(B\perp_{\mathrm{d}}D|AC\). However, it _is_ possible to construct a functional model on this graph where variables \(B\) and \(D\) are correlated when conditioning on \(AC\). Such a model is obtained from the functions \[A =D\cdot E_{A} \tag{10}\] \[B =A+E_{B}\] (11) \[C =B\cdot E_{C}\] (12) \[D =C+E_{D} \tag{13}\] Where \(E_{A}\), \(E_{B}\), \(E_{C}\) and \(E_{D}\), called the _error terms_[3], are uniformly distributed random variables that allow stochastic dependence between the cause and effect. We will assume that the error terms are binary; therefore, there are 16 possible evaluations that the set of error terms could take. We are interested in checking if \(B\) and \(D\) are independent when conditioned on \(A\) and \(C\). Suppose we condition on the case when \(a=c=0\). We can then use this to solve the system of equations above and try to find values of \(B\) and \(D\) for each of the 16 evaluations of the error terms. Some of these evaluations will not give valid solutions; this happens because we learn something about the error terms by postselecting on \(A\) and \(C\). By doing this procedure, we see that seven out of the sixteen evaluations of error terms do not give valid solutions to the system of equations. Out of the nine remaining evaluations, four give rise to \(B=D=0\), two give rise to \(B=1\) and \(D=0\), another two give rise to \(B=0\) and \(D=1\), and the remaining one gives rise to \(B=D=1\). Since we assume that the error terms were uniformly distributed (and thus all of their evaluations have the same probability), then this result says that there _is_ some correlation between \(B\) and \(D\) when conditioned on \(AC\). The second example concerns the causal structure of Fig. 7. There, there is no open path between \(C\) and \(D\), so it presents the \(d\)-separation relation \(C\perp_{\mathrm{d}}D\). In fact, it also presents the \(\sigma\)-separation relation \(C\perp_{\sigma}D\). However, it is possible to construct a functional model on this graph where \(C\) and \(D\) are perfectly correlated. Such a model is obtained from the functions \(a=c\oplus b\) and \(b=a\oplus d\), where all the variables are binary and \(\oplus\) is sum modulus two. 
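The error-term enumeration just described for the Fig. 6 model is straightforward to check by brute force. The sketch below is an illustrative aside (it reads '+' as addition mod 2, which makes no difference once we condition on \(a=c=0\)); it recovers the counts quoted above: seven of the sixteen error-term assignments admit no solution, and the remaining nine split 4/2/2/1 over the values of \((B,D)\), so \(B\) and \(D\) are indeed correlated given \(A=C=0\). The case analysis of the Fig. 7 model resumes right after the sketch.

```python
from itertools import product
from collections import Counter

counts, invalid = Counter(), 0
a = c = 0                                      # condition on A = C = 0
for eA, eB, eC, eD in product((0, 1), repeat=4):
    solutions = set()
    for b, d in product((0, 1), repeat=2):     # search for self-consistent (B, D)
        ok = (a == d * eA and                  # A = D * E_A
              b == (a + eB) % 2 and            # B = A + E_B   (mod 2)
              c == b * eC and                  # C = B * E_C
              d == (c + eD) % 2)               # D = C + E_D   (mod 2)
        if ok:
            solutions.add((b, d))
    if solutions:
        for bd in solutions:
            counts[bd] += 1
    else:
        invalid += 1

print(invalid)   # 7 error-term assignments give no valid solution
print(counts)    # Counter({(0, 0): 4, (1, 0): 2, (0, 1): 2, (1, 1): 1})
```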
Let us consider all possible cases: * When \(c=d=0\), we have \(a=b\); * When \(c=d=1\), we have \(a=b\oplus 1\); * When \(c\neq d\), there is no solution for \(A\) and \(B\). That is, the functional model only has solutions when \(c=d\). Thus, this is an example where both the \(d\)-separation rule and the \(\sigma\)-separation rule are not valid. ## Appendix B Comparing the LF DAG and the Bell DAG In this section, we use causal modeling reasoning to derive and provide intuitions for various relationships between probability polytopes. Figure 6: Cyclic directed graph that presents the \(d\)-separation relation \(B\perp_{\mathrm{d}}D|AC\) but is classical-compatible with distributions where \(B\) and \(D\) are correlated after conditioning on \(AC\). Figure 7: Cyclic directed graph that presents the \(d\)-separation relation \(C\perp_{\mathrm{d}}D\) but is classical-compatible with distributions where \(C\) and \(D\) are correlated. Fig. 8 shows the relationship between the LF DAG and the Bell DAG. Although we have been talking about the LF _monogamy relations_ throughout most of this text, this figure refers to the special case of the LF _inequalities_; that is, whenever a DAG has green arrows in Fig. 8, we assume \(P(a|c,x{=}1)=\delta_{a,c}\). The inclusion signs between the DAGs indicate the relationships between the sets of distributions \(P(ab|xy)\) that can be explained by them. Fig. 8 also labels each of these sets with the name of the corresponding polytope depicted in Fig. 9. Let us explain each set relation in Fig. 8. _The left equal sign:_ For the LF DAG with a classical latent node, since we are only interested in \(P(ab|xy)\), we can merge \(C\) and the latent node \(\Lambda\) into a new latent node \(\Lambda^{\prime}=\{\Lambda,C\}\). The constraint that \(P(a|c,x{=}1)=\delta_{a,c}\) is translated into a condition on the possible dependence of \(A\) on \(\Lambda^{\prime}\): \(A\) has to be a copy of a part of \(\Lambda^{\prime}\) (namely, \(C\)) when \(x=1\). However, this condition _does not_ impose any restriction on the dependence of \(A\) on \(\Lambda^{\prime}\): being a copy of an arbitrary part of a latent node is equivalent to having an arbitrary dependence on the latent node as a whole. Therefore, if treated as a classical causal structure, the LF DAG is compatible with the same sets of \(P(ab|xy)\) as the classical Bell DAG, even when demanding \(P(a|c,x{=}1)=\delta_{a,c}\). _The right equal sign:_ Since we are only concerned with \(P(ab|xy)\) and we do not demand \(P(a|c,x{=}1)=\delta_{a,c}\) for the upper DAG (there is no green arrow), the node \(C\) can simply be incorporated into the GPT latent node \(\mathcal{L}\). _The top left strict inclusion sign:_ The constraint \(P(a|c,x{=}1)=\delta_{a,c}\) only applies to the case when \(x=1\). Thus, for example, where \(x\) can take at least three different values, namely, \(x=1\), \(2\), or \(3\), the distributions \(P(ab|x=2,y)\) and \(P(ab|x=3,y)\) compatible with the upper middle diagram can violate the CHSH inequalities when the common cause is quantum. On the other hand, the distributions compatible with the upper left diagram satisfy Bell's inequalities, as made explicit by the equal sign between the leftmost DAGs. _The top right strict inclusion sign:_ From the right most equal sign, we know that any \(P(ab|xy)\) satisfying no-signaling is compatible with the LF DAG when \(\mathcal{L}\) is described by an arbitrary GPT. Thus, there exists \(P(ab|xy)\) compatible with the LF DAG violating LF inequalities. 
The problem is that none of them can do so while satisfying the constraint \(P(a|c,x{=}1)=\delta_{a,c}\). Thus, it is possible for \(P(ab|xy)\) GPT-compatible with the LF DAG to violate Bell's inequalities while satisfying \(P(a|c,x{=}1)=\delta_{a,c}\) (implied by the left equal sign and the top left inclusion sign), but it cannot violate the LF inequalities while satisfying \(P(a|c,x{=}1)=\delta_{a,c}\) (implied by the top right inclusion sign). Furthermore, it must satisfy the no-signaling condition (implied by the right equal sign). This emphasizes that the assumptions that go into deriving LF inequalities are weaker than the assumptions that go into deriving Bell's inequalities: the set of correlations \(P(ab|xy)\) obeying the LF inequalities is strictly larger than the set of correlations \(P(ab|xy)\) obeying Bell's inequalities. In the second row, we see how quantum theory is more powerful than classical theory but not the most powerful GPT in explaining Bell inequality violations. One important thing to note is that the quantum polytope (\(\mathcal{Q}\)) in Fig. 9 is essentially the set of correlations \(P(ab|xy)\) compatible with the Bell DAG under quantum causal prescription, i.e., the set represented by the bottom-middle diagram. This is because, in the quantum protocol, we assume that Alice can rewind Charlie's measurement so that when \(x\neq 1\), Alice ends up measuring Charlie's share of the entangled system directly. That is, regardless of whether \(x=1\) or \(x\neq 1\), the outcome obtained by Alice can be viewed as an outcome of a measurement on Charlie's share of the entangled system. The fact that the set corresponding to the top-middle diagram is neither a subset nor a superset of the set for the bottom-middle diagram reflects the feature in Fig. 9 that the quantum (\(\mathcal{Q}\)) polytope neither contains nor is contained in the LF polytope. 
Figure 8: The set symbols indicate the relations between the sets of compatible correlations \(P(ab|xy)\) that are also marginals of \(P(abc|xy)\), which needs to satisfy \(P(a|c,x{=}1)=\delta_{a,c}\) when the green arrows are present. 
Figure 9: The two-dimensional slice of the Local Hidden Variable (LHV) polytope, the LF polytope, the No-Signalling (NS) polytope, and the boundary of quantum (\(\mathcal{Q}\)) correlations. See Ref. [33] (from which the figure is reproduced) for further details. 
However, the fact that the quantum polytope (\(\mathcal{Q}\)) is essentially the set of correlations \(P(ab|xy)\) compatible with the Bell DAG under quantum causal prescription does _not_ mean that the bottom-middle diagram represents a good causal explanation for the minimal LF scenario. This is so because it would not explain Charlie's experience as an observer. The comparison between the derivation of Bell's inequalities and LF inequalities using their respective DAGs is shown in Table 1. ## Appendix C Discussions of the verification protocol in Sec. IV.2.2 Assuming one believes that Eqs. (6) and Eq. (10) hold in the minimal LF scenario, _be it due to operational evidence or other motivations_, Theorem 4 rules out any nonclassical causal model that reproduces violations of LF inequalities on the basis of No Fine-Tuning. Conversely, this theorem cannot be used to argue against such causal models if one cannot be persuaded that both Eqs. (6) and Eq. (10) hold. For instance, it is possible that one might reject even the operational no-signaling condition of Eqs. 
(6) if they believe that signaling will become observable once our technology advances sufficiently. For such an individual, the premise of Theorem 4 no longer holds, and thus it does not constrain their choice when constructing the causal model. Besides the strong empirical and metaphysical grounds to believe in no-signalling, the validity of (6) can be directly verified in any specific run of the minimal LF scenario. Verification of (10), on the other hand is less straightforward. Here, we list a few possible concerns that one may have regarding the use of the verification protocol presented in Sec. IV.2.2 as justification for employing Eq. (10) in the No Fine-Tuning no-go theorem (i.e., Theorem 4). For each of them, we provide arguments that either reject such objections or may help alleviate the concern. **The verification protocol assumes quantum theory.** Since the effectiveness of the verification protocol depends on the correctness of the quantum theory, one may worry that the validity of the verification protocol is questionable. However, it is not a flaw of the protocol's design since we are indeed trying to verify the independence between \(C\) and \(\{X,Y\}\) in a proposed _quantum_ protocol for violating LF inequalities. Having said that, if one is really uncomfortable with the quantum approach, there are alternative classical verification protocols for gathering evidence of Eq. (10), which are common approaches used by experimentalists or statisticians. In one of these classical protocols, the verifier randomly halts the experiment after Alice selects her measurement choice but before she executes the measurement. Subsequently, the data of \(C\), \(X\), and \(Y\) in these runs can be employed to examine the interdependence among these variables. This is similar to the cross-validation approach often used in classical causal inference. Nevertheless, we favor the quantum protocol detailed in Sec. IV.2.2. This preference arises from the fact that in the classical strategy, all values of \(C\), \(X\), and \(Y\) used for verification are no longer causally linked to Alice's measurement. Moreover, in those runs, it is impossible to continue the quantum experiment because Alice cannot reverse Charlie's measurement when \(C\) is known to a classical verifier. Essentially, the classical verification protocol cannot establish independence relations in runs involving Alice's reversal of Charlie's measurement. In other words, the classical verification protocol hinges on the runs terminated by the verifier for verification as being a truly representative sample of all runs of the experiment. In contrast, the quantum protocol gathers data for establishing independence from _all_ experiment runs, even when Alice reverses Charlie's measurement subsequently. **The Veronika protocol cannot be done in the spacelike-separated LF scenario.** Since Veronika needs to collect data for both \(X\) and \(Y\) in conjunction with \(C\), and given that Alice must reverse the potential entanglement between Veronika and \(C\) before proceeding with her measurement, \(A\) can no longer be spacelike-separated from \(Y\). This is of no concern. Note that this spacetime constraint is irrelevant for establishing the applicability of the No Fine-Tuning no-go theorem. This is because it no longer relies on assumptions such as Relativistic Causal Arrow, which requires spacelike separation between \(A\) and \(Y\). 
Besides, it is still possible to use the No Fine-Tuning argument even when one insists on conducting the experiment with spacelike separation, by adding an extra assumption, namely, that our candidate causal explanation must have an arrow from \(C\) to \(A\). This assumption is justified by the fact that Charlie informs Alice of his measurement outcome \(C\) when \(x=1\). Under this assumption, No Fine-Tuning implies that \(C\) and \(Y\) need to be \(d\)-separated: if they were not, then the arrow from \(C\) to \(A\) would imply \(A\not\perp_{\mathrm{d}}Y|X\) (when there is a path from \(C\) to \(Y\) unblocked by \(X\)) and/or \(C\not\perp_{\mathrm{d}}X\) (when there is a path from \(C\) to \(Y\) blocked by \(X\)). However, No Fine-Tuning with Eq. (6) and \(P(c|x)=P(c)\) demands that \(A\perp_{\mathrm{d}}Y|X\) and \(C\perp_{\mathrm{d}}X\). Therefore, Veronika only needs to verify the independence of \(C\) from \(X\) instead of also from \(Y\); in this case, Alice can still make her measurement in a spacelike-separated manner from Bob's measurement. **The Veronika protocol may be susceptible to the "simultaneous memory loophole".** Readers who are familiar with loophole-free Bell tests may worry that performing the runs of an LF experiment in parallel may lead to the so-called "memory loophole" and in particular the "simultaneous memory loophole" [64]: carrying runs in parallel may increase the likelihood of correlations between runs and hence lead to a higher chance of violating the inequalities. This worry is essentially about the assumption that all the data in the parallel runs are independent and identically distributed. Such an assumption is also needed when the runs of the experiments are carried out sequentially instead of in parallel (and hence the "memory loophole" still exists in sequential experiments). It has been argued in the context of Bell experiments that it is possible to close the simultaneous memory loophole by careful experimental designs and sophisticated statistical analysis [64; 65; 66]. This suggests that there may also be a way to close the simultaneous memory loophole in the Veronika protocol. However, our case is not completely analogous, as there is an additional potential source of correlations arising from Veronika's interactions with all the Charlies, which may introduce correlations between the runs. We leave it to future research to understand what kind of additional simultaneous memory loophole it will bring and how to resolve it. **The DAGs under consideration are not proper representations of the verification protocol.** In the No Fine-Tuning argument, we implicitly assumed that the only nodes a DAG _must_ have for the minimal LF scenario are the observed nodes representing \(A\), \(B\), \(C\), \(X\), and \(Y\), since no other variables appear in the conditional independence relations used in Theorem 4. Some might question whether the requirement of the existence of only these five nodes is sufficient for the verification protocol since there are 1) many Charlies and 2) an extra agent, Veronika, both of which differ from a usual minimal LF scenario. In addition, one may propose that there should be 3) three separate nodes for events related to Charlie's outcome: one representing Charlie's actual outcomes (i.e., the usual \(C\)), one representing the ones reported to Veronika (denoted as \(C_{\rm V}\)), and the other representing the ones reported to Alice (denoted as \(C_{\rm A}\)). 
Then, the independence relation used in the No Fine-Tuning theorem should be \(P(c_{\rm V}|xy)=P(c_{\rm V})\) instead of \(P(c|xy)=P(c)\) (since only \(C_{\rm V}\) is directly checked by Veronika to be independent of \(X\) and \(Y\)). In other words, the set of DAGs under consideration consists only of _summary_ DAGs of what is actually going on in the minimal LF scenario with Veronika as the verifier. The concern is that such a summary DAG may overlook important details relevant to establishing the No Fine-Tuning no-go theorem. Let us examine the three different kinds of extra nodes one may wish to have in the DAG one by one to see if their inclusion is necessary and if it would render the No Fine-Tuning argument invalid: 1) Split \(C\) into nodes \(C_{1},C_{2},\ldots,C_{N}\) representing the outcomes of individual Charlies. This is only of concern if one is worried about the simultaneous memory loophole mentioned before. If one grants that all data is independent and identically distributed in the parallel runs, then using a single node \(C\) for all Charlies' outcomes in parallel runs is the same as using a single node \(C\) for the outcomes of a single Charlie in sequential runs. 2) Extra node \(V\) representing Veronika's observation. It is entirely irrelevant to have such an extra node or any other extra node for the No Fine-Tuning argument. The No Fine-Tuning assumption and the proof for Theorem 4 do not include any assumption on the nonexistence of any extra nodes, rendering the existence of such nodes inconsequential. Note that this is not to say that Veronika is entirely irrelevant for causally modeling the LF experiment with the verification protocol. As acknowledged in the discussion for "simultaneous memory loopholes", the existence of Veronika and her verification process might affect the validity of the assumption of independent and identically distributed data. 
\begin{table} \begin{tabular}{l|l|l} \hline \hline & Bell inequalities & LF inequalities \\ \hline DAG & (Bell DAG) & (LF DAG) \\ \hline Constraints implied by the DAG & \(P(ab|xy)=\sum_{\lambda}P(\lambda|xy)P(ab|xy\lambda)\) & \(P(ab|xy)=\sum_{c}P(c|xy)P(ab|cxy)\) \\ \cline{2-3} & \(\Lambda\perp_{\rm d}XY\) & \(C\perp_{\rm d}XY\) \\ \cline{2-3} & \(\Rightarrow P(\lambda|xy)=P(\lambda)\) & \(\Rightarrow P(c|xy)=P(c)\) \\ \cline{2-3} & \(AX\perp_{\rm d}BY|\lambda\) & \(AC\perp_{\rm d}Y|X\), \(BC\perp_{\rm d}X|Y\) \\ & \(\Rightarrow P(ab|xy\lambda)=P(a|x\lambda)P(b|y\lambda)\) & \(\Rightarrow P(ac|xy)=P(ac|x),P(bc|xy)=P(bc|y)\) \\ \hline Additional constraint & No & \(P(ac|x{=}1)=\delta_{a,c}P(c)\) or \(P(a\neq c|x{=}1)=0\) \\ \hline \hline \end{tabular} \end{table} Table 1: **Comparison of the derivation of Bell’s inequalities and LF inequalities using their respective DAGs.** 
3) Add two extra nodes related to Charlie's outcomes, namely, \(C_{\rm V}\) for the Charlies' outcomes reported to Veronika and \(C_{\rm A}\) for the ones reported to Alice. In addition, instead of using \(P(c|xy)=P(c)\) in the No Fine-Tuning theorem (i.e., Theorem 4), one should use \(P(c_{\rm V}|xy)=P(c_{\rm V})\). At first glance, this may seem to be a special case of adding extra nodes as mentioned in the last point. However, it is slightly different from that because here the conditional independence relations also change. Nevertheless, this is still not a problem, as we can have an alternative No Fine-Tuning no-go theorem to account for this detail as follows. 
**Theorem 5** (Alternative No Fine-Tuning no-go).: _No nonclassical causal model satisfying \(P(a|xy)=P(a|x),P(b|xy)=P(b|y)\) and \(P(c_{\rm V}|xy)=P(c_{\rm V})\) with No Fine-Tuning can explain any violation of the LF monogamy relations (including the LF inequalities)._ Then, both \(C\) and \(C_{\rm A}\) can be viewed as extra nodes compared to the ones appearing in the theorem and hence become irrelevant for proving the theorem. Note that here, when one applies the LF monogamy relations, one of the marginals they concern should be changed correspondingly, namely, \(P(ac|x{=}1)\) should be replaced by \(P(ac_{\rm V}|x{=}1)\). That is, here, the LF monogamy relations constrain \(\{P(ab|xy),P(ac_{\rm V}|x{=}1)\}\), since the conditional independence is no longer imposed on Charlie's actual observation \(C\) but rather on his outcome reported to Veronika, \(C_{\rm V}\)--it is \(C_{\rm V}\) that is the relevant variable for the alternative No Fine-Tuning theorem. In summary, all these potential extra nodes do not raise new concerns beyond those already addressed.
2309.13441
Anytime valid and asymptotically optimal statistical inference driven by predictive recursion
Distinguishing two classes of candidate models is a fundamental and practically important problem in statistical inference. Error rate control is crucial to the logic but, in complex nonparametric settings, such guarantees can be difficult to achieve, especially when the stopping rule that determines the data collection process is not available. In this paper we develop a novel e-process construction that leverages the so-called predictive recursion (PR) algorithm designed to rapidly and recursively fit nonparametric mixture models. The resulting PRe-process affords anytime valid inference uniformly over stopping rules and is shown to be efficient in the sense that it achieves the maximal growth rate under the alternative relative to the mixture model being fit by PR. In the special case of testing for a log-concave density, the PRe-process test is computationally simpler and faster, more stable, and no less efficient compared to a recently proposed anytime valid test.
Vaidehi Dixit, Ryan Martin
2023-09-23T17:58:52Z
http://arxiv.org/abs/2309.13441v3
# Anytime valid and asymptotically optimal statistical inference driven by predictive recursion ###### Abstract Distinguishing two classes of candidate models is a fundamental and practically important problem in statistical inference. Error rate control is crucial to the logic but, in complex nonparametric settings, such guarantees can be difficult to achieve, especially when the stopping rule that determines the data collection process is not available. In this paper we develop a novel e-process construction that leverages the so-called predictive recursion (PR) algorithm designed to rapidly and recursively fit nonparametric mixture models. The resulting PRe-process affords anytime valid inference uniformly over stopping rules and is shown to be efficient in the sense that it achieves the maximal growth rate under the alternative relative to the mixture model being fit by PR. In the special case of testing for a log-concave density, the PRe-process test is computationally simpler and faster, more stable, and no less efficient compared to a recently proposed anytime valid test. _Keywords and phrases:_ e-process; mixture model; nonparametric; test martingale; universal inference. ## 1 Introduction Arguably the most fundamental problem in mathematical statistics is that of distinguishing between two classes of candidate models based on observed data. When the two classes of models are _simple_, i.e., there are just two distinct probability distributions being compared, then the seminal work of Neyman and Pearson (1933) settles the question on how to optimally distinguish the two. More specifically, Neyman and Pearson proved that the most powerful test to distinguish between the two candidate distributions is based on the magnitude of the likelihood ratio. Beyond the test's statistical properties, the _law of likelihood_(e.g., Edwards 1992; Hacking 1976; Royall 1997) offers principled justification for carrying out the comparison between the two models in this fashion. In real applications, however, it is rare for the relevant comparison to be between two simple hypotheses, so the Neyman-Pearson lemma is generally insufficient to settle these practical questions. That is, when either of the two classes are _composite_, i.e., consisting of more than one probability distribution, then it is no longer clear how to define the likelihood ratio and to operationalize the law of likelihood. To overcome this difficulty, there are two common strategies to quantify the "likelihood" of a composite hypothesis: the classical likelihood ratio testing, like that covered by Wilks's theorem (Wilks, 1938), maximizes the likelihood over the respective models, whereas the Bayesian approach averages the likelihood with respect to suitable priors. In either case, judgments are made as above by considering the magnitude of the corresponding (marginal) likelihood ratio. For various reasons, however, simply thresholding these likelihood ratios is not fully satisfactory: classical strategies to calibration are based on sampling distribution calculations that assume a specific data-generating process, and so the reliability statements lack robustness to the kind of departures from these assumptions common in practice, e.g., data-peeking. And since neither of these likelihood ratios generally meet the conditions necessary (see below) to achieve a sort of universal calibration independent of the data-generating process, there is a need for new approaches. 
To meet this need, there has been a recent surge of interest in testing procedures that are both valid and efficient under optional stopping; see Ramdas et al. (2022) for a survey of these rapidly-moving developments. The basic building blocks are _e-values_, which in some cases lead to _test martingales_(Shafer et al., 2011) and other kinds of _e-processes_. These also have close connections to betting interpretations of probability (Shafer, 2021) and, more generally, to _game-theoretic probability_(Shafer and Vovk, 2019). A practical challenge, however, is that the mathematical definition of an e-process is not enough to determine what specifically to do in applications, i.e., there are so many e-processes for the data analyst to choose from. Two general strategies for the construction of e-values (which are sometimes e-processes) are _universal inference_(Dunn et al., 2023; Gangrade et al., 2023; Wasserman et al., 2020) and _reverse information projection_(Grunwald, 2022, 2023; Grunwald et al., 2019). The former approach has different variants, but these can be computationally demanding in sequential settings and/or lacking in statistical efficiency. The latter strategy is efficiency-motivated, so much so that it can be difficult to apply in complex, nonparametric problems. So there is a need for an e-process construction that is simple--both conceptually and computationally--and makes efficient use of the available data. The present paper aims to fill this gap. As indicated above, the familiar forms of likelihood ratios fail to be _anytime valid_ in the sense of controlling errors independent of a data-generating process, stopping rule, etc. It turns out, however, as explained in Section 2.2 below, e-processes themselves are effectively likelihood ratios. Since the null model is determined by the problem itself, the e-process construction boils down to specification of the non-null model likelihood. To understand the considerations that affect this choice of non-null model likelihood, return again to the classical case covered by Neyman-Pearson. The most powerful test to distinguish the true distribution from a (false) simple null makes use of the true likelihood. Beyond this classical case, it is only natural for the likelihood in the e-process construction to match that true likelihood as closely as possible. Of course, the data analyst does not know the true likelihood, so his/her choice needs to be sufficiently flexible to adapt to what the data says about the underlying distribution. Mixture models are known to be quite flexible, so our proposal here is to build this non-null model likelihood using a nonparametric mixture model. Fitting such a mixture model can be computationally demanding using classical methods, but here we employ a fast, recursive algorithm called _predictive recursion_, or PR for short (e.g., Newton 2002; Newton et al. 1998; Newton and Zhang 1999); see Section 2.3 for a brief review of PR. With this flexible and computationally efficient strategy to construct a non-null model likelihood, we define a suitable e-process, which we call the _PRe-process_, that can be used for anytime valid and efficient statistical inference. Thanks to the recursive form of PR's mixture model fit, it immediately follows that the proposed PRe-process is upper-bounded by a genuine likelihood ratio, which is a test martingale under the null and hence has expected value upper-bounded by 1; see Theorem 1. 
Then the anytime validity, i.e., valid hypothesis tests and confidence regions uniformly over stopping rules that drive the data collection process, is an immediate consequence of Ville's inequality; see Corollaries 1-2. With finite-sample anytime validity guaranteed, we then turn our attention to the question of efficiency in Section 3.3. There, in Theorem 2, we establish that, under certain regularity conditions, the PRe-process achieves the optimal growth rate for true distributions in the alternative relative to the posited mixture model. And since our PRe-process proposal can itself be viewed as a variant of universal inference, our growth rate result also sheds light on the efficiency of universal inference more generally. Two quick illustrations are presented in Section 4--one testing for a monotone density and the other testing a parametric null--where it is shown that the empirical growth rate of the PRe-process closely matches the asymptotically optimal growth rate established in Theorem 2. Then, in Section 5, in the context of testing for a log-concave density, we show that the PRe-process is no less efficient than the e-process-based test in Gangrade et al. (2023), more efficient than the e-value-based test proposed in Dunn et al. (2021), and computationally much more efficient than both the methods. Finally, some concluding remarks are made in Section 6 and technical details can be found in Appendix A. ## 2 Background ### Notation Suppose the data consists of a sequence \(X_{1},X_{2},\ldots\) of random variables taking values in \(\mathbb{X}\); write \(X^{n}=(X_{1},\ldots,X_{n})\in\mathbb{X}^{n}\) for the first \(n\) entries in this sequence, with \(n\geq 1\). Let \(\mathscr{A}\) denote the \(\sigma\)-algebra of measurable subsets of \(\mathbb{X}^{\infty}\) and write \(\mathscr{A}_{k}=\sigma(X^{k})\), for the filtration, the sequence of (sub-)\(\sigma\)-algebras generated by \(X^{k}\), \(k\geq 1\). An integer-valued random variable \(N\) will be called a _stopping time_ if \(\{N\leq n\}\in\mathscr{A}_{n}\). Consider a collection \(\mathscr{P}\) of candidate joint distributions for \(X^{\infty}=(X_{1},X_{2},\ldots)\) on \(\mathscr{A}\) and let \(P=P^{\infty}\) denote a generic distribution in \(\mathscr{P}\). Write \(\mathscr{P}_{0}\) and \(\mathscr{P}_{1}\), both subsets of \(\mathscr{P}\), for the null and alternative hypotheses, respectively. Throughout, we use upper-case letters for distributions/probability measures and the corresponding lower-case letters for the associated density (with respect to some common dominating measure on \(\mathbb{X}\)). For example, if \(P_{0}\) is a member of \(\mathscr{P}_{0}\), then \(p_{0}\) is its corresponding density. ### E-processes An _e-value_, \(E\), for a distribution \(P_{0}\), is a non-negative function of the observable data such that \(\mathsf{E}_{P_{0}}(E)\leq 1\). When the expected value is equal to 1, then we can interpret \(E\) as the realized payoff for a bet of $1 against \(P_{0}\); see, e.g., Shafer (2021). With batch data, say, \(X\), and models that have densities, the non-negativity and expected value constraint implies that \(x\mapsto q(x):=E(x)\,p_{0}(x)\) is a density too; therefore, \(E(x)=q(x)/p_{0}(x)\) must be a likelihood ratio. With this connection to likelihood ratios, \(E\) can be understood as a measure of the evidence--"e" stands for evidence--in the data against \(P_{0}\), relative to the alternative \(Q\). 
The connection between the e-value and testing is now clear: if the observed value of \(E\) is large, then one would be inclined to reject the hypothesis \(P_{0}\). A Bayes factor is an example of an e-value in the case of a simple hypothesis involving a single \(P_{0}\). On the other hand, likelihood ratios of the form "\(\sup_{p}p(x)/p_{0}(x)\)" commonly used in mathematical statistics are not e-values. The situation described above is impractically simple. Indeed, it is rare that \(\mathscr{P}_{0}\) is a singleton and often data does not come in a batch. _Test martingales_(Shafer et al., 2011) offer a solution to the non-batch data problem. A test martingale is just a martingale \((M_{n})\), with respect to the distribution \(P_{0}\), adapted to the filtration \((\mathscr{A}_{n})\) with \(M_{0}\equiv 1\); consequently, \(\mathsf{E}_{P}(M_{n})=1\) for all \(n\) and, by Doob's optional stopping theorem, \(\mathsf{E}_{P}(M_{N})=1\) for all stopping times \(N\). Test martingales can also be expressed as likelihood ratios, i.e., if \(Q\) is absolutely continuous with respect to \(P_{0}\), then \[M_{n}=\frac{q(X^{n})}{p_{0}(X^{n})}=\prod_{i=1}^{n}\frac{q(X_{i}\mid X^{i-1})} {p_{0}(X_{i}\mid X^{i-1})},\quad n\geq 1. \tag{1}\] So then the construction of a test martingale boils down to the choice of \(Q\) for the numerator. To handle composite \(\mathscr{P}_{0}\) cases, one can construct a collection of test martingales \((M_{n}^{P_{0}})\) indexed by \(n\) and by \(P_{0}\in\mathscr{P}_{0}\). This has the same likelihood ratio representation as above but, mathematically, the choice of \(Q\) could vary with \(P_{0}\). From here, one can define an _e-process_\((E_{n})\) to be a sequence of non-negative random variables with \(E_{n}\leq M_{n}^{P_{0}}\) for all \(n\) and all \(P_{0}\in\mathscr{P}_{0}\). An example of this would be \(E_{n}=\inf_{P_{0}\in\mathscr{P}_{0}}M_{n}^{P_{0}}\) and, if it happened that the \(Q\) in (1) does not depend on \(P_{0}\), then this simplifies to \[E_{n}=\frac{q(X^{n})}{\sup_{P_{0}\in\mathscr{P}_{0}}p_{0}(X^{n})},\quad n\geq 1.\] This is a common construction of an e-process in applications, e.g., this is how the variant of universal inference in Section 8 of Wasserman et al. (2020) works, but it is not the only way; see Ramdas et al. (2022) for more discussion on the various alternatives, in particular, the proposals in Grunwald et al. (2019) and in Waudby-Smith and Ramdas (2022). We focus in this paper on e-processes that have the form above, i.e., only depend on a choice of \(Q\) representing the alternative. Why all the fuss about e-processes? There are some very powerful results that tightly control the probabilistic behavior of these processes. We are referring to the classical result known as _Ville's inequality_(e.g., Shafer and Vovk, 2019, Sec. 9.1). In our present case, this says that if \(M_{n}\) is a test martingale for \(\mathscr{P}_{0}\), then \[\sup_{P_{0}\in\mathscr{P}_{0}}P_{0}\big{(}\text{there exists $n\geq 1$ such that $M_{n}\geq\alpha^{-1}$}\big{)}\leq\alpha,\quad\text{all $\alpha\in(0,1)$},\] or, equivalently (Howard et al., 2021, Lemma 3), \[\sup_{P_{0}\in\mathscr{P}_{0}}P_{0}(M_{N}\geq\alpha^{-1})\leq\alpha,\quad\text {for all $\alpha\in(0,1)$ and all $N$},\] where \(N\) is a stopping time. Of course, the same holds true for any e-process \(E_{n}\) bounded by \(M_{n}\). 
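A quick empirical check of Ville's inequality is easy to set up: simulate likelihood-ratio test martingale paths under a hypothetical simple null and count how often the running maximum ever reaches \(\alpha^{-1}\) within the simulated horizon. The sketch below does this with \(P_{0}=\mathsf{N}(0,1)\) and numerator \(Q=\mathsf{N}(0.5,1)\), again illustrative choices of ours; the empirical crossing frequency should not exceed \(\alpha\).

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
alpha, n_paths, horizon = 0.05, 2000, 500

# Hypothetical simple null P0 = N(0,1); numerator Q = N(0.5,1).
x = rng.normal(loc=0.0, scale=1.0, size=(n_paths, horizon))    # data under P0
log_incr = norm.logpdf(x, loc=0.5) - norm.logpdf(x, loc=0.0)   # log q(X_i) - log p0(X_i)
log_M = np.cumsum(log_incr, axis=1)                            # log test martingale, as in (1)

crossed = np.any(log_M >= np.log(1.0 / alpha), axis=1)         # did M_n ever reach 1/alpha?
print(crossed.mean())   # empirical crossing probability; Ville's inequality bounds it by alpha
```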
The implications of this are far-reaching: it allows for the construction of statistical procedures--hypothesis tests and confidence sets--that are _anytime valid_ in the sense that the reliability claims hold (basically) no matter when or how the investigator decides to conclude their study and perform the statistical analysis, thereby offering some additional control or _safety_ compared to procedures that only control the Type I error rate for a particular sampling scheme. For example, a test that rejects \(\mathscr{P}_{0}\) based on data \(X^{n}\) when \(E_{n}\geq\alpha^{-1}\) will control the Type I error rate _even if the investigator peeked at the data_ when deciding whether to conclude the study at time \(n\). More details on this are given in Ramdas et al. (2022a) and Section 3 below. ### Predictive recursion Here and in what follows, we will focus on the case where \(X_{1},X_{2},\ldots\) are iid \(\mathbb{X}\)-valued random variables; possible extensions beyond the iid case will be discussed in Section 6. Following the notation above, a general mixture model for the (common) marginal distribution defines the density function as \[q^{\Psi}(x)=\int_{\mathbb{U}}p_{u}(x)\,\Psi(du),\quad x\in\mathbb{X}, \tag{2}\] where \(p_{u}\) is the density corresponding to a distribution \(P_{u}\), with \(\{P_{u}:u\in\mathbb{U}\}\subseteq\mathscr{P}\), and \(\Psi\) is a mixing distribution defined on (a suitable \(\sigma\)-algebra of subsets of) the indexing space \(\mathbb{U}\). Such models are incredibly flexible, so they are often used in contexts where the mixture structure itself is not directly relevant, e.g., in density estimation applications. It is common to think of \(x\mapsto p_{u}(x)\) as a kernel in some restricted parametric family, like Gaussian, but that is not necessary here; indeed, \(\mathbb{U}\) could just be a generic indexing of the entire model \(\mathscr{P}\). But even if \(\{P_{u}:u\in\mathbb{U}\}\) is a relatively narrow parametric family, it is well known that mixtures thereof, as in (2), are still incredibly flexible. The goal here is to fit the mixture model (2) to the observed data \(X^{n}=(X_{1},\ldots,X_{n})\). One common strategy is nonparametric maximum likelihood, as discussed in Laird (1978), Lindsay (1995), and others. Another common strategy is to introduce a prior distribution for \(\Psi\), e.g., a Dirichlet process prior (e.g., Ferguson 1973; Ghosal 2010; Lo 1984), and carry out a corresponding nonparametric Bayesian analysis. While there are advantages to sticking with the classical approaches, the complexity of these models creates serious computational challenges. In particular, since tailored Monte Carlo methods are required, one cannot capitalize on the Bayesian coherent updating property--"today's posterior is tomorrow's prior"--when data are processed sequentially. Maximum likelihood estimation faces similar challenges in the sequential data context. A novel alternative was developed by Michael Newton and collaborators in the late 1990s, namely, a fast, recursive algorithm called _predictive recursion_, or PR for short (e.g., Newton 2002), successfully applied in various large-scale inference settings, e.g., Tao et al. (1999), Newton et al. (2001), Martin and Tokdar (2012), Tansey et al. (2018), and Woody et al. (2022). Here we provide a quick review of the PR algorithm and its properties; for more details, see Martin (2021). 
Start with two user-specified inputs: a weight sequence \((w_{i}:i\geq 1)\subset(0,1)\) that satisfies \[\sum_{i=1}^{\infty}w_{i}=\infty\quad\text{and}\quad\sum_{i=1}^{\infty}w_{i}^{2}<\infty, \tag{3}\] and an initial guess \(\Psi_{0}\) of \(\Psi\) supported on the index space \(\mathbb{U}\). Then the PR algorithm updates the initial guess along the data sequence according to the rule \[\hat{\Psi}_{i}(du)=(1-w_{i})\,\hat{\Psi}_{i-1}(du)+w_{i}\,\frac{p_{u}(X_{i})\,\hat{\Psi}_{i-1}(du)}{\int_{\mathbb{U}}p_{v}(X_{i})\,\hat{\Psi}_{i-1}(dv)},\quad u\in\mathbb{U},\quad i\geq 1. \tag{4}\] When the data is \(X^{n}\), the PR estimator \(\hat{\Psi}_{n}\) is returned. Note that PR is recursive in the sense that one only needs \(\hat{\Psi}_{n}\) and the new data point \(X_{n+1}\) to get the new \(\hat{\Psi}_{n+1}\). This implies that the update \(\hat{\Psi}_{n}\to\hat{\Psi}_{n+1}\) and, therefore, the update \(\hat{q}_{x^{n}}\to\hat{q}_{x^{n+1}}\) is an \(O(1)\) operation. Moreover, this leads naturally to a plug-in estimator of the mixture density \[\hat{q}_{x^{n}}(x):=q^{\hat{\Psi}_{n}}(x)=\int_{\mathbb{U}}p_{u}(x)\,\hat{\Psi}_{n}(du),\quad x\in\mathbb{X}.\] Large-sample consistency properties of the PR estimators have been explored in Ghosh and Tokdar (2006), Martin and Ghosh (2008), Tokdar et al. (2009), Martin and Tokdar (2009, 2011), and Dixit and Martin (2023b). The property most relevant to our developments here will be explained below in Section 3. The above display can be interpreted as a data-driven predictive density for \(X_{n+1}\), given \(X^{n}\), not unlike the more familiar Bayesian predictive distribution sequence. The combination of being recursive and producing a predictive density explains the name _predictive recursion_. A point that will be especially relevant below is that PR also produces a joint marginal density for \(X^{n}\)--"marginal" in the sense that the mixing distribution \(\Psi\) has been integrated out--and it has a multiplicative form, à la Wald (1947, Eq. 10.10). That is, denoting this joint marginal density by \(\hat{q}^{\mbox{\tiny PR}}(\cdot)\), it satisfies \[\hat{q}^{\mbox{\tiny PR}}(x^{n})=\hat{q}_{x^{n-1}}(x_{n})\,\hat{q}^{\mbox{\tiny PR}}(x^{n-1})=\prod_{i=1}^{n}\hat{q}_{x^{i-1}}(x_{i}),\quad n\geq 1,\quad x^{n}\in\mathbb{X}^{n}. \tag{5}\] It is PR's flexibility and the multiplicative form of its joint marginal density that make it especially suitable for anytime valid nonparametric inference.

## 3 PRe-processes

### Construction

As before, we have null and alternative hypotheses, \(\mathscr{P}_{0}\) and \(\mathscr{P}_{1}\), respectively. The proposal here is to construct a suitable marginal likelihood under the alternative by mixing over a specified class \(\{P_{u}:u\in\mathbb{U}\}\) of distributions in \(\mathscr{P}_{1}\)--or perhaps in \(\mathscr{P}_{0}\). One strategy would be to introduce a prior distribution supported on \(\mathbb{U}\) and then find the corresponding Bayesian marginal likelihood. The challenge is that fitting a sufficiently flexible Bayesian mixture model to accommodate the "nonparametric" aspects of the applications we have in mind would be computationally demanding. Here, instead, we make use of the PR algorithm reviewed in Section 2.3 above.
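To make the construction tangible, here is a minimal numpy sketch of PR on a fixed grid approximating \(\mathbb{U}\); the discretization itself, the grid resolution, and the Gaussian-kernel demo at the bottom are our own illustrative choices rather than anything prescribed above. The function returns the grid weights approximating \(\hat{\Psi}_{n}\) together with \(\log\hat{q}^{\mbox{\tiny PR}}(x^{n})=\sum_{i=1}^{n}\log\hat{q}_{x^{i-1}}(x_{i})\) from (5).

```python
import numpy as np
from scipy.stats import norm

def predictive_recursion(x, u_grid, kernel_pdf, gamma=0.67):
    """PR on a discrete grid approximating the mixing distribution Psi.

    x          : 1-d array of observations, processed in the given order
    u_grid     : array whose rows are grid points representing U
    kernel_pdf : kernel_pdf(x_i, u_grid) -> array of kernel values p_u(x_i) over the grid
    Returns (psi, log_q_pr): grid weights approximating Psi_hat_n, and the
    log joint marginal log q^PR(x^n) from (5).
    """
    psi = np.full(len(u_grid), 1.0 / len(u_grid))          # Psi_0: uniform initial guess
    log_q_pr = 0.0
    for i, xi in enumerate(x, start=1):
        w_i = (i + 1.0) ** (-gamma)                        # weights satisfying (3)
        like = kernel_pdf(xi, u_grid)                      # p_u(x_i) on the grid
        q_pred = np.dot(like, psi)                         # predictive density qhat_{i-1}(x_i)
        log_q_pr += np.log(q_pred)
        psi = (1.0 - w_i) * psi + w_i * like * psi / q_pred   # PR update (4)
    return psi, log_q_pr

# Demo with a location-scale Gaussian kernel, u = (mean, sd):
means = np.linspace(-5.0, 5.0, 41)
sds = np.linspace(0.5, 3.0, 11)
u_grid = np.array([(m, s) for m in means for s in sds])
gauss_kernel = lambda xi, u: norm.pdf(xi, loc=u[:, 0], scale=u[:, 1])

rng = np.random.default_rng(3)
x = rng.normal(size=500)
psi_hat, log_q = predictive_recursion(x, u_grid, gauss_kernel)
print(log_q)
```

The per-observation update touches only the current vector of grid weights, so the \(O(1)\) updating emphasized above is visible directly in the loop.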
Let \(\Psi\) be a mixing probability distribution supported on the specified index set \(\mathbb{U}\) and consider a basic mixture model with density \[q(x)=\int_{\mathbb{U}}p_{u}(x)\,\Psi(du),\quad x\in\mathbb{X}.\] This model can be fit to data \(X^{n}\) using the PR algorithm in (4) above, depending on a user-specified initial guess \(\Psi_{0}\) and weights (\(w_{i}:i\geq 1\)). We are not directly interested in estimation of the mixing distribution \(\Psi\); our focus is on the joint marginal density for \(X^{n}\) that is produced as a by-product, namely, \(\hat{q}^{\mbox{\tiny PR}}(X^{n})\) as given in (5). \[\hat{q}^{\mbox{\tiny PR}}(X^{n})=\hat{q}_{X^{n-1}}(X_{n})\,\hat{q}^{\mbox{ \tiny PR}}(X^{n-1})=\prod_{i=1}^{n}\hat{q}_{X^{i-1}}(X_{i}).\] Note, again, the same key multiplicative form in Wald (1947, Eq. 10.10) that was the driving force behind the formulation in Section 8 of Wasserman et al. (2020). Moreover, the PR update from \(\hat{q}^{\mbox{\tiny PR}}(X^{n-1})\) to \(\hat{q}^{\mbox{\tiny PR}}(X^{n})\) is an \(O(1)\) computation, compared to the \(O(n)\) computation one can expect with the "non-anticipatory" maximum likelihood estimation strategies employed in variants of universal inference (e.g., Gangrade et al. 2023). With this, we define the following PR-driven e-process, i.e., _PRe-process_, \[E_{n}^{\mbox{\tiny PR}}=E^{\mbox{\tiny PR}}(X^{n};\mathscr{P}_{0})=\frac{\hat {q}^{\mbox{\tiny PR}}(X^{n})}{\sup_{p\in\mathscr{P}_{0}}p(X^{n})},\quad n\geq 1. \tag{6}\] Intuitively, since likelihood tends to favor the true hypothesis, if \(H_{0}\) is true (resp. false), then \(E_{n}^{\mbox{\tiny PR}}\) ought to be small (resp. large). This intuition suggests a test, i.e., \[\mbox{reject $H_{0}$ if and only if $E_{n}^{\mbox{\tiny PR}}$ is large.}\] We will be more specific about what it means to be "large" and the corresponding properties of the proposed test in Section 3.2 below. Our proposal can be compared to two of the now-standard procedures for constructing e-processes reviewed in Ramdas et al. (2022a). Specifically, our proposal is, on the one hand, like the basic mixture method in that it forms a marginal likelihood under the alternative via a suitable mixture model, which we think is intuitively appealing. To achieve both the flexibility and the appealing intuition, the mixture model needs to be non-parametric, which would pose computational challenges for traditional likelihood-based methods. But the PR algorithm can easily and efficiently handle this major computational challenge since, again, the PRe-process updates when a new data point arrives are \(O(1)\)! So the PRe-process is like the (no data-splitting) variants of universal inference (Wasserman et al. 2020, Sec. 8), just with a specific focus on flexible mixture model alternatives with a fast, recursive updating scheme for fast, anytime valid inference. It is also statistically efficient, as we will demonstrate in Section 3.3. Our proposal can also be compared to that in Grunwald et al. (2019). While our choice to use "\(\sup_{p\in\mathscr{P}_{0}}p(X^{n})\)" in the denominator of (6) is a natural one, it is not the only option. Grunwald et al. 
propose a strategy based on the _reverse information projection_ (RIP), an idea which can be traced back at least to Li and Barron (2000), which amounts to replacing "\(\sup_{p\in\mathscr{P}_{0}}p(X^{n})\)" in the denominator with the likelihood at a fixed--but strategically chosen--density, say, \(p_{0}^{\dagger}\) in the convex hull, \(\operatorname{co}(\mathscr{P}_{0})\), of \(\mathscr{P}_{0}\). In our context, they might propose to make inference based on the ratio \[E_{n}^{\textsc{pr}+\textsc{RIP}}:=\hat{q}^{\textsc{pr}}(X^{n})\,/\,p_{0}^{ \dagger}(X^{n}),\] where \(P_{0}^{\dagger}\) is the reverse information projection of \(\hat{Q}^{\textsc{pr}}\) onto \(\mathscr{P}_{0}\), i.e., \[\inf_{P_{0}\in\operatorname{co}(\mathscr{P}_{0})}K(\hat{Q}^{\textsc{pr}},P_{0 })=K(\hat{Q}^{\textsc{pr}},P_{0}^{\dagger}).\] This strategy has powerful motivation and a number of nice properties concerning maximal growth rate, etc. However, the complexity of the applications we have in mind, along with the complexity in PR's learning process raise some important and non-trivial questions: first, how to compute the required \(P_{0}^{\dagger}\) and, second, whether \(E_{n}^{\textsc{pr}+\textsc{RIP}}\) does actually define a proper e-process. We leave these questions open for future investigation. ### Validity Of course, we cannot refer to the quantity defined in (6) as an "e-process" without showing that it satisfies the required properties. Theorem 1 establishes the basic e-process property from which all the relevant statistical properties (Corollaries 1-2) follow. **Theorem 1**.: _Consider a model \(\mathscr{P}\) for the iid data sequence \(X_{1},X_{2},\ldots\). For a model \(\mathscr{P}_{0}\subset\mathscr{P}\) to be tested, the PRe-process defined in (6) is an e-process._ Proof.: For any fixed \(n\), since the supremum over \(\mathscr{P}_{0}\) appears in the denominator of (6), the following inequality is immediate: \[E_{n}^{\textsc{pr}}=E^{\textsc{pr}}(X^{n};\mathscr{P}_{0})\leq E^{\textsc{pr}} (X^{n};\{p_{0}\})=\frac{\hat{q}^{\textsc{pr}}(X^{n})}{p_{0}(X^{n})},\quad\text {for all }P_{0}\in\mathscr{P}_{0}.\] The upper bound is a collection of test martingales indexed by \(P_{0}\in\mathscr{P}_{0}\) and, therefore, \((E_{n}^{\textsc{pr}})\) is an e-process under the null \(\mathscr{P}_{0}\). From this basic validity theorem, we can deduce a number of more directly interpretable statistical results. In particular, suitably-defined testing and confidence procedures control frequentist error rates under optional stopping. **Corollary 1**.: _For a null hypothesis \(H_{0}:P\in\mathscr{P}_{0}\) and alternative hypothesis \(H_{1}:P\in\mathscr{P}_{1}\), define the following testing procedures indexed by \(\alpha\in[0,1]\):_ \[T_{\alpha}(X^{n})=\begin{cases}1&\text{if }E^{\textsc{pr}}(X^{n};\mathscr{P}_{0}) \geq\alpha^{-1}\\ 0&\text{otherwise}.\end{cases}\] _That is, the above test rejects \(H_{0}\) if and only if the PRe-process exceeds \(\alpha^{-1}\). Then, under the setup of Theorem 1, the aforementioned test controls the frequentist Type I error at the designated level, i.e., for any stopping rule \(N\),_ \[\sup_{P_{0}\in\mathscr{P}_{0}}P_{0}\{T_{\alpha}(X^{N})=1\}\leq\alpha.\] Proof.: By definition, \(T_{\alpha}(X^{N})=1\) if and only if \(E^{\mbox{\tiny pr}}(X^{N};\mathscr{P}_{0})\geq\alpha^{-1}\). Theorem 1 together with Ville's inequality implies that the latter event has \(P_{0}\)-probability no more than \(\alpha\) for all \(P_{0}\in\mathscr{P}_{0}\), which proves the claim. 
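To connect (6) and Corollary 1 in code, the sketch below instantiates the PRe-process for one illustrative choice of null, a Gaussian family with unknown mean and variance, with a Gaussian-mixture numerator; it reuses the predictive_recursion helper sketched after the PR review above, and the grid ranges and simulated data are placeholders of ours.

```python
import numpy as np
from scipy.stats import norm

# Reuses predictive_recursion() from the earlier sketch; grid ranges are illustrative.
means = np.linspace(-10.0, 20.0, 61)
sds = np.linspace(0.1, 3.0, 15)
u_grid = np.array([(m, s) for m in means for s in sds])
gauss_kernel = lambda xi, u: norm.pdf(xi, loc=u[:, 0], scale=u[:, 1])

def log_pre_process(x):
    _, log_q_pr = predictive_recursion(x, u_grid, gauss_kernel)    # numerator of (6)
    mu_hat, sd_hat = np.mean(x), np.std(x)                         # MLEs under the Gaussian null
    log_p0_hat = np.sum(norm.logpdf(x, loc=mu_hat, scale=sd_hat))  # sup over the null
    return log_q_pr - log_p0_hat                                   # log E_n^PR

alpha = 0.05
rng = np.random.default_rng(4)
# Data from a two-component normal mixture, so the Gaussian null is false here.
n = 2000
comp = rng.random(n) < 0.75
x = np.where(comp, rng.normal(0.0, 1.0, n), rng.normal(6.0, 1.0, n))

log_E = log_pre_process(x)
print("T_alpha rejects H0" if log_E >= np.log(1.0 / alpha) else "T_alpha does not reject")
```

Since \(\log E_{n}^{\textsc{pr}}\) can be updated recursively as each observation arrives, the same comparison with \(\log(1/\alpha)\) can be made at any stopping time without affecting the Type I error guarantee of Corollary 1.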
Given null and alternative models, \(\mathscr{P}_{0}\) and \(\mathscr{P}_{1}\), respectively, one can define a corresponding "anytime p-value" \[\pi^{\mbox{\tiny pr}}(X^{n};\mathscr{P}_{0})=E^{\mbox{\tiny pr}}(X^{n}; \mathscr{P}_{0})^{-1}.\] It follows that, for all stopping times \(N\), \[\sup_{P\in\mathscr{P}_{0}}P\{\pi^{\mbox{\tiny pr}}(X^{N};\mathscr{P}_{0})\leq \alpha\}\leq\alpha,\quad\alpha\in[0,1].\] So, as expected, the anytime-valid test proposed in Corollary 1 is equivalent to a test that rejects when the above p-value is no more than \(\alpha\). By considering singleton null hypotheses \(\mathscr{P}_{0}=\{P_{0}\}\) and varying \(P_{0}\), the above p-value defines a function \(P_{0}\mapsto\pi^{\mbox{\tiny pr}}(X^{n};\{P_{0}\})\) which, as explained in Martin (2023), determines a full data-dependent imprecise probability distribution supported on \(\mathscr{P}\). This, in turn, determines a valid _inferential model_(e.g., Martin 2022; Martin and Liu 2013, 2015) that provides reliable imprecise-probabilistic uncertainty quantification about the unknown \(P\), but we will not comment further on this here. One familiar and particularly relevant consequence of the aforementioned reliability is the construction of an anytime-valid confidence set. Let \(\phi:\mathscr{P}\to\Phi\) be a map that extracts a relevant feature from \(P\in\mathscr{P}\). For example, \(\phi(P)=\int_{\mathbb{X}}x\,P(dx)\) is the mean of \(P\), \(\phi(P)=\inf\{x:P((-\infty,x])\geq\tau\}\) is the \(\tau^{\mbox{\scriptsize th}}\) quantile of \(P\), and \(\phi(P)=P\) is the distribution \(P\) itself. A valid confidence set for \(\phi(P)\) is given below. **Corollary 2**.: _Given a relevant feature map \(\phi:\mathscr{P}\to\Phi\), define the following data-dependent subset of \(\Phi\):_ \[C_{\alpha}(X^{n})=\big{\{}\phi(P):\pi^{\mbox{\tiny pr}}(X^{n};\{P\})>\alpha \big{\}},\quad\alpha\in[0,1].\] _Under the setup of Theorem 1, for any stopping time \(N\), the set \(C_{\alpha}(X^{N})\) is an anytime-valid \(100(1-\alpha)\)% confidence set in the sense that_ \[\sup_{P\in\mathscr{P}}P\{C_{\alpha}(X^{N})\not\ni\phi(P)\}\leq\alpha.\] Proof.: By definition, \(C_{\alpha}(X^{N})\not\ni\phi(P)\) if and only if \(E^{\mbox{\tiny pr}}(X^{N};\{P\})\geq\alpha^{-1}\). Theorem 1 together with Ville's inequality implies that the latter event has \(P\)-probability no more than \(\alpha\), which proves the claim. ### Asymptotic growth rate optimality As discussed above, the fact that \(E^{\mbox{\tiny pr}}_{n}\) is an e-process ensures its _validity_, i.e., that it provides frequentist error rate control. Validity, however, is a property relative to the distribution under the null \(H_{0}\). For the e-process procedure to be _efficient_, one needs to consider its properties under the alternative. In particular, when the alternative is true, we want the e-value to be large, at least asymptotically, so that we will be inclined to correctly reject the false null hypothesis. Hence, we aim to show that the PRe-process grows at the fastest possible rate, at least asymptotically. Towards this, consider the case where \(P^{\star}\not\in\mathscr{P}_{0}\) determines the true distribution of the data \(X^{\infty}\). 
The main result in Martin and Tokdar (2011) states that, under some conditions (Appendix A below), with \(P^{\star}\)-probability 1, \[n^{-1}\log\hat{q}^{\mbox{\tiny{\sc PR}}}(X^{n})=n^{-1}\log p^{\star}(X^{n})-K(P ^{\star},\mathscr{Q})+o(1),\quad n\to\infty, \tag{7}\] where \(K(P^{\star},Q)\) is the Kullback-Leibler divergence of \(Q\) from \(P^{\star}\) and \[K(P^{\star},\mathscr{Q})=\inf_{Q\in\mathscr{Q}}K(P^{\star},Q), \tag{8}\] with \(\mathscr{Q}=\mbox{co}(\{P_{u}:u\in\mathbb{U}\})\), the set of all mixtures having densities \(q^{\Psi}\) with the mixing distribution \(\Psi\) free to vary. Since minimizing Kullback-Leibler divergence \(Q\mapsto K(P^{\star},Q)\) over \(\mathscr{Q}\) is what the maximum likelihood estimator aims to do, we can conclude from (7) that PR is effectively maximizing the mixture model likelihood asymptotically. Next, concerning the null model \(\mathscr{P}_{0}\), we assume that \[\liminf_{n\to\infty}n^{-1}\log\{p^{\star}(X^{n})/\hat{p}_{0,n}(X^{n})\}\geq K( P^{\star},\mathscr{P}_{0}),\quad\mbox{with $P^{\star}$-probability 1}, \tag{9}\] where \(\hat{p}_{0,n}(X^{n})=\sup_{p\in\mathscr{P}_{0}}p(X^{n})\) is the maximum likelihood estimator under the null \(\mathscr{P}_{0}\). Of course, if \(\mathscr{P}_{0}=\{P_{0}\}\) is a singleton and \(K(P^{\star},P_{0})<\infty\), then it follows from the standard law of large numbers that \[n^{-1}\log\{p^{\star}(X^{n})/p_{0}(X^{n})\}\to K(P^{\star},P_{0})\quad\mbox{as $n\to\infty$, with $P^{\star}$-probability 1}.\] More generally, for a composite \(\mathscr{P}_{0}\), a uniform law of large numbers establishes (9)--with equality and "\(\liminf\)" replaced by "\(\lim\)"--if \(\mathscr{P}_{0}\) is a Glivenko-Cantelli class (e.g., Kosorok 2008; van de Geer 2000; van der Vaart 1998; van der Vaart and Wellner 1996). Sections 4-5 consider two nonparametric problems: testing monotonicity and testing log-concavity. That these form suitable Glivenko-Cantelli classes is demonstrated in van der Vaart (1998, Example 19.11) and Doss and Wellner (2016, Theorem 3.1), respectively. Putting everything together, we have that, with \(P^{\star}\)-probability 1, the PRe-process (6) satisfies the following asymptotic inequality: \[n^{-1}\log E_{n}^{\mbox{\tiny{\sc PR}}} =n^{-1}\log\{\hat{q}^{\mbox{\tiny{\sc PR}}}(X^{n})/\hat{p}_{0,n}(X ^{n})\}\] \[=n^{-1}\log\{p^{\star}(X^{n})/\hat{p}_{0,n}(X^{n})\}+n^{-1}\log\{ \hat{q}^{\mbox{\tiny{\sc PR}}}(X^{n})/p^{\star}(X^{n})\}\] \[\geq K(P^{\star},\mathscr{P}_{0})-K(P^{\star},\mathscr{Q})+o(1).\] This effectively proves the following theorem; sufficient conditions for (7) and the detailed proof are provided in Appendix A below. **Theorem 2**.: _Consider \(X_{1},X_{2},\ldots\) iid with distribution \(P^{\star}\). If (7) and (9) hold, then with \(P^{\star}\)-probability 1, the PRe-process has asymptotic growth rate_ \[\Delta(P^{\star};\mathscr{P}_{0},\mathscr{Q})=K(P^{\star},\mathscr{P}_{0})-K( P^{\star},\mathscr{Q}). \tag{10}\] _That is, the PRe-process satisfies_ \[E_{n}^{\mbox{\tiny{\sc PR}}}\geq\exp\bigl{[}n\bigl{\{}\Delta(P^{\star}; \mathscr{P}_{0},\mathscr{Q})+o(1)\bigr{\}}\bigr{]},\quad n\to\infty,\quad P^{ \star}\mbox{-probability 1}.\] A few remarks concerning the above conclusions are in order. First, consider the case where \(K(P^{\star},\mathscr{P}_{0})>0\), which is our primary interest here. Our claim above is that the growth rate \(\Delta(P^{\star};\mathscr{P}_{0},\mathscr{Q})\) is "optimal" which deserves some explanation. 
Start with the simple case of a simple-versus-simple test, i.e., \(\mathscr{P}_{0}=\{P_{0}\}\) and \(\mathscr{P}_{1}=\mathscr{Q}=\{P_{1}\}\). Then the usual likelihood ratio is an e-process and, if \(P^{\star}=P_{1}\), then the growth rate is \(K(P_{1},P_{0})\), which agrees with \(\Delta\) in (10). That one cannot get a faster growth rate by replacing the alternative \(P_{1}\) in the e-process by something else is a consequence of Gibbs's inequality (e.g., Shafer 2021). As Ramdas et al. (2022a) write: "in a simple versus simple test, the likelihood ratio is growth rate optimal." More generally, if we are committed to defining an e-process with "\(\sup_{p_{0}\in\mathscr{P}_{0}}p_{0}(X^{n})\)" in the denominator, then the leading term \(K(P^{\star},\mathscr{P}_{0})\) in the growth rate (10) is unavoidable. In that case, the only input the user has in the construction is the choice of "\(\hat{q}(X^{n})\)" in the numerator. If a commitment is made to choose "\(\hat{q}\)" in the convex hull \(\mathscr{Q}\), then there is no choice of numerator that can achieve a larger growth rate than that in (10). This can also be compared to the growth rates in Grunwald et al. (2019). Indeed, if the true \(P^{\star}\) was _known_ and used as the "alternative" in the construction of their class of candidate e-values, then their solution is, by definition, growth rate optimal and its growth rate agrees with that in Theorem 2. The rate cannot be better when the true \(P^{\star}\) is unknown, hence our PRe-process must be growth rate optimal too. Of course, other methods may have the same or similar asymptotic growth rate, but their rate cannot be faster than ours. Second, suppose \(K(P^{\star},\mathscr{P}_{0})=0\) and \(K(P^{\star},\mathscr{Q})>0\), i.e., the null is definitively true. In this case, we actually want and expect the PRe-process to be small, even vanishing with \(n\). Theorem 2 gets us close to this, the only snag is that it generally only gives a lower bound on the PRe-process, and a vanishing lower bound does not imply a vanishing PRe-process. However, if (9) holds with "\(\liminf\)" replaced by "\(\lim\)" and "\(\geq\)" replaced by "\(=\)," as would often be the case, then Theorem 2 gives an exact asymptotic description of the behavior of \(E_{n}^{\textsc{pr}}\). So, if the null is true, then indeed we get that the PRe-process is vanishing as \(n\to\infty\), exactly as one would hope/expect. The same vanishing-under-the-null property holds more generally for non-negative martingales, as shown in Section 8.2 of Ramdas et al. (2022b). For the (literal) edge case where \(K(P^{\star},\mathscr{P}_{0})=K(P^{\star},\mathscr{Q})=0\), e.g., if \(\mathscr{P}_{0}\) is the family of normal distributions and \(\mathscr{Q}\) consists of mixtures thereof, then a \(P^{\star}\in\mathscr{P}_{0}\) is also on the boundary of \(\mathscr{Q}\), and Theorem 2 offers no guidance. ## 4 Illustrations Here we consider two illustrations of the procedure described in Section 3: testing for monotonicity and testing a specific parametric null model. Validity of the PRe-process procedure is guaranteed by the general theory in Section 3.2, so the focus here is on efficiency, in particular, on comparing the empirical performance of the PRe-process procedure with that predicted by Theorem 2. So, in what follows, the data \(X_{1},X_{2},\ldots\) will be generated from a distribution \(P^{\star}\) that does not belong to \(\mathscr{P}_{0}\). 
To implement the PRe-process methodology, we need to compute the numerator and denominator of the ratio that defines \(E_{n}^{\textsc{pr}}\). The denominator requires maximizing the likelihood function over the null, and for the examples we have in mind here, there are existing algorithms and software to carry this out. The numerator is where the PR algorithm is needed. It is well-known that location-scale Gaussian mixtures are quite flexible and this would be our general recommendation for the family \(\mathscr{Q}\). There are exceptions, however, like the monotonicity example below where the data are supported on the positive half-line; in that case, of course, a Gaussian mixture would be inappropriate, but we can easily substitute in a similarly-flexible gamma mixture model. Going with the recommendation in Martin and Tokdar (2009), we take the weight sequence for PR as \(w_{i}=(i+1)^{-0.67}\) for the examples here and in Section 5. One important advantage of PR is its computational efficiency. The calculation in equation (6) can be done recursively as new observations become available. This aligns well with the "anytime" aspect of the solution. In both illustrations, the PRe-process is computationally efficient and the empirical performance closely matches the predictions made by the asymptotic efficiency results presented in Section 3.3. _Example 1_. Monotone densities are often encountered in biomedical, engineering, and astronomy applications. The most common estimator of a monotone density is the nonparametric maximum likelihood estimator, also known as the _Grenander estimator_ (e.g., Grenander, 1956). Of course, there are many other estimators, including Bayesian estimators based on Dirichlet process (Salomond, 2014) or Bernstein polynomial priors (Turnbull and Ghosh, 2014), and even a method based on PR (Dixit and Martin, 2023b). Similarly, a number of procedures are available for testing for monotonicity, including Hall and Heckman (2000), Ghosal et al. (2000), and Chakraborty and Ghosal (2022). Here we construct a new, anytime valid, PRe-process test for monotonicity. To compute the PRe-process, the Grenander estimator takes care of the denominator; in our case, we use the Grenander function in the R package REBayes (Koenker and Gu, 2017). For the numerator, we use a mixture model with a gamma kernel, i.e., \(p_{u}\) is a gamma density with \(u\) consisting of the shape and rate/scale parameter pair. Then we use the PR algorithm to fit a mixture of \(p_{u}\) over \(\mathbb{U}=[1,15]\times[10^{-5},5]\) with the initial guess taken to be a uniform distribution over \(\mathbb{U}\) and weight sequence as given above. For our experiments, the true distribution \(P^{\star}\), which has a non-monotone density, is a gamma distribution with unit rate parameter and varying shape parameter. The idea is that, if the shape parameter is \(1\), then \(P^{\star}\) would be exponential, which is monotone; so as the shape parameter varies from \(2\), to \(5\), and to \(10\), the density becomes "less monotone." We generate \(100\) data sets of size \(5000\) from the true \(P^{\star}\) and calculate the PRe-process \(E_{n}^{\textsc{pr}}\) at the increments \(n=100,200,\ldots,5000\). A plot of the \(\log E_{n}^{\textsc{pr}}\) versus \(n\) is displayed in Figure 1(a), along with a reference line having slope \(K(P^{\star},\mathscr{P}_{0})\).
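For concreteness, the numerator of this monotonicity test can be set up as follows, reusing the predictive_recursion sketch from earlier; the gamma-kernel grid over \(\mathbb{U}=[1,15]\times[10^{-5},5]\), the uniform initial guess, and the weights \(w_{i}=(i+1)^{-0.67}\) follow the text, while the grid resolution and the reading of the second coordinate as a rate parameter are our own assumptions. The Grenander-estimator denominator is not reproduced here (the text computes it with the R package REBayes).

```python
import numpy as np
from scipy.stats import gamma

# Grid over U = [1, 15] x [1e-5, 5]; we read the second coordinate as a rate parameter
# (the text says "rate/scale"), and the grid resolution is an arbitrary choice of ours.
shapes = np.linspace(1.0, 15.0, 30)
rates = np.linspace(1e-5, 5.0, 30)
u_grid = np.array([(a, b) for a in shapes for b in rates])

def gamma_kernel(xi, u):
    # p_u(x) with u = (shape, rate); scipy parameterizes the gamma by scale = 1/rate.
    return gamma.pdf(xi, a=u[:, 0], scale=1.0 / u[:, 1])

rng = np.random.default_rng(5)
x = rng.gamma(shape=5.0, scale=1.0, size=5000)   # true P*: gamma with unit rate, shape 5

# Numerator of (6): log q^PR(x^n), via the predictive_recursion helper sketched earlier.
_, log_q_pr = predictive_recursion(x, u_grid, gamma_kernel)
print(log_q_pr)
```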
Note, first, that the growth rate increases as the true density gets "less monotone" and, second, that the PRe-process test attains the theoretical growth rate across all three scenarios. _Example 2_. Next, consider a testing problem where the null lies on the boundary of the alternative. That is, the null hypothesis \(\mathscr{P}_{0}\) is the Gaussian family indexed by mean and variance parameters, while the alternative hypothesis \(\mathscr{P}_{1}\) is the family of Gaussian mixtures. This is a challenging problem, as discussed in, e.g., Tokdar et al. (2010) and Tokdar and Martin (2021). Here we propose an anytime valid PRe-process test. In this case, computation of the PRe-process's denominator is straightforward, since we have readily available closed-form expressions for the maximum likelihood estimators under a normal model. For the numerator, let \(p_{u}\) denote a Gaussian kernel with \(u\) the mean and standard deviation pair, and apply the PR algorithm with support \(\mathbb{U}=[-10,20]\times[0.01,3]\) and initial guess \(\Psi_{0}\) uniform over \(\mathbb{U}\). For our simulation, we generate data from a distribution \(P^{\star}\), which has density \[p^{\star}(x)=\tfrac{3}{4}\,\mathsf{N}(x\mid 0,2)+\tfrac{1}{4}\,\mathsf{N}(x\mid\mu,2),\quad x\in\mathbb{R}, \tag{11}\] where, again, the distance between the two modes \(\{0,\mu\}\) acts as a measure of the "degree of non-normality," with large distances corresponding to higher degree of non-normality. The three specific cases we consider here are \(\mu=6,10,14\). We generate \(100\) data sets of size \(5000\) under each configuration, calculate \(E_{n}^{\textsc{pr}}\) at increments \(n=100,200,\ldots,5000\), and plot \(\log E_{n}^{\textsc{pr}}\) versus \(n\) in Figure 1(b), along with a reference line having slope \(K(P^{\star},\mathscr{P}_{0})\). As expected, the slope increases in the "degree of non-normality," and the log-PRe-process paths closely follow this trend.

## 5 Comparison: testing log-concavity

The collection of log-concave densities on the real line includes the familiar Gaussian, logistic, and Laplace distributions, among others. Naturally, since log-concavity is a common structure, applications are abundant. Efficient maximum likelihood estimation techniques have been developed for estimating a log-concave density (e.g., Carpenter et al., 2018; Cule and Samworth, 2010; Dümbgen and Rufibach, 2011). As for valid testing of log-concavity, the literature is relatively scarce. Some recent advances have been made in Gangrade et al. (2023) and Dunn et al. (2021). Here, we present a test for log-concavity based on the PRe-process methodology described above. For the PRe-process construction, first consider the denominator. Under the null model \(\mathscr{P}_{0}\), we need the log-concave maximum likelihood estimator, which we get using the logConDens function from the R package logcondens (Dümbgen and Rufibach, 2011). For the numerator, we consider a mixture model with a Gaussian kernel \(p_{u}\), with \(u\) the mean and standard deviation pair. For PR, we take \(\mathbb{U}=[-10,20]\times[0.01,3]\), initial guess \(\Psi_{0}\) a uniform distribution over \(\mathbb{U}\), and weights \(w_{i}\) as above. An R package for PRe-process testing of log-concavity can be found at [https://github.com/vdixit005/PReprocess](https://github.com/vdixit005/PReprocess).
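Looking back at Example 2 for a moment, the reference slope \(K(P^{\star},\mathscr{P}_{0})\) for the line in Figure 1(b) can be computed directly: the Gaussian closest to \(P^{\star}\) in Kullback-Leibler divergence matches the mean and variance of \(P^{\star}\), so the slope reduces to a one-dimensional integral. The sketch below does this by quadrature for the density (11) with \(\mu=10\), under our assumption (the text does not state the convention) that the second argument of \(\mathsf{N}(\cdot\mid m,2)\) is a variance.

```python
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

mu, var = 10.0, 2.0            # density (11); we read "2" as a variance (our assumption)
w1, w2 = 0.75, 0.25
pstar = lambda x: w1 * norm.pdf(x, 0.0, np.sqrt(var)) + w2 * norm.pdf(x, mu, np.sqrt(var))

# The KL-closest Gaussian matches the first two moments of the mixture.
m1 = w1 * 0.0 + w2 * mu
m2 = w1 * (var + 0.0**2) + w2 * (var + mu**2)
sd0 = np.sqrt(m2 - m1**2)

# K(P*, P_0) = E_{P*}[ log p*(X) - log N(X | m1, sd0^2) ], by one-dimensional quadrature.
integrand = lambda x: pstar(x) * (np.log(pstar(x)) - norm.logpdf(x, m1, sd0))
slope, _ = quad(integrand, -20.0, 30.0, points=[0.0, mu], limit=200)
print(slope)   # reference slope for Figure 1(b) at mu = 10
```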
For Dunn et al.'s test statistic, which we denote by \(E_{n}^{\textsc{\tiny UI}}\), we use the code available at [https://github.com/RobinMDunn/LogConcaveUniv](https://github.com/RobinMDunn/LogConcaveUniv) with its default settings. For the test statistic proposed in Gangrade et al. (2023), which we denote by \(E_{n}^{\textsc{\tiny ULR}}\), we follow their recommendation and use an iteratively fit Gaussian kernel density estimator. For the simulation, we generate data from a bimodal distribution \(P^{\star}\) with density in (11), where in this case \(\mu\) acts as a measure of the "degree of non-log-concavity," i.e., if \(\mu\) is close to \(0\), then \(p^{\star}\) is only mildly non-log-concave; otherwise, \(p^{\star}\) is more severely non-log-concave. The three cases we consider in our experiments are \(\mu=6,10,14\). We generate \(100\) data sets of size \(5000\) under each, calculate \(E_{n}^{\textsc{\tiny PR}}\), \(E_{n}^{\textsc{\tiny UI}}\), and \(E_{n}^{\textsc{\tiny ULR}}\) at increments \(n=100,200,\ldots,5000\). Plots of \(\log E_{n}\) versus \(n\), along with a pointwise average over replications for each method, are shown in Figure 2.

Figure 2: Plots of \(\log E_{n}^{\textsc{\tiny PR}}\) (blue), \(\log E_{n}^{\textsc{\tiny ULR}}\) (pink) and \(\log E_{n}^{\textsc{\tiny UI}}\) (green) versus \(n\) overlaid with the average line under each method based on the not-log-concave density \(p^{\star}\) in (11). Panel (a) corresponds to \(p^{\star}\) “closest” to log-concave while (c) is “farthest.”

The key takeaways are as follows. First, in terms of statistical efficiency, our PRe-process's growth rate is faster than Dunn et al.'s and no slower than Gangrade et al.'s in each scenario. Interestingly, the slopes of the \(n\) vs. \(\log E_{n}^{\textsc{\tiny UI}}\) lines in Figure 2 are roughly half that of the \(n\) vs. \(\log E_{n}^{\textsc{\tiny PR}}\) lines, which suggests that the loss of efficiency is due to the 50-50 data-splitting Dunn et al. employ. Second, in terms of stability, the log-PRe-process has considerably smaller variance than Gangrade et al.'s log-e-process. Third, in terms of computational efficiency, both \(E_{n}^{\textsc{\tiny UI}}\) and \(E_{n}^{\textsc{\tiny ULR}}\) need to process the entire observed data sequence each time a batch of data arrives because the previous calculation cannot be directly updated. The PRe-process, on the other hand, can be updated in \(O(1)\) many steps when a new data point arrives.

## 6 Conclusion

In this paper we proposed a novel e-process construction based on the PR algorithm--a fast, recursive procedure for fitting flexible, nonparametric mixture models. It is shown that the corresponding PRe-process is a genuine e-process and, therefore, offers provable, finite-sample, anytime valid inference. Further, it is shown that the PRe-process attains the asymptotically optimal growth rate under the alternative relative to the posited mixture model fit by PR. Numerical results demonstrate that the PRe-process's empirical performance closely agrees with the behavior predicted by the asymptotic growth rate optimality theorem and, moreover, that our PRe-process is no less efficient than a recently proposed, anytime valid test in Gangrade et al. (2023), but is more stable in terms of variance and computationally much more efficient. The setup and theory presented here are general, but all of our illustrations consider only univariate data, i.e., testing for structure in a density function supported on the real line.
The extension to the multivariate case is purely a computational problem--the bottleneck is PR's need to fit a mixture model over a relatively high-dimensional support. This is a challenge for the original PR algorithm where the normalization in (4) was based on quadrature. Recently, however, we developed a Monte Carlo-driven _PRticle filtering_ algorithm (Dixit and Martin 2023a) that can readily accommodate multivariate mixture models. The incorporation of this Monte Carlo strategy into the PRe-process framework for anytime valid inference in multivariate settings is the focus of ongoing work. ## Acknowledgments Thanks to Aaditya Ramdas for feedback on and corrections to a previous version. This work is partially supported by the U.S. National Science Foundation, SES-2051225. ## Appendix A Technical details ### Setup of Theorem 2 There are a few inputs that play key roles in the asymptotic properties of the PR algorithm. These include some user-specified inputs, namely, the family of kernel densities \(\{p_{u}:u\in\mathbb{U}\}\) that determine the set of mixtures \(\mathscr{Q}\), and PR's weights \((w_{i})\) and initial guess \(\Psi_{0}\); two more relevant quantities that are mostly, if not entirely, out of the user's control are the true distribution \(P^{\star}\), with density \(p^{\star}\), and the corresponding Kullback-Leibler projection of \(P^{\star}\) onto the set \(\mathscr{Q}\) of mixtures. This latter projection is defined as the distribution \(Q^{\star}\) with density \(q^{\star}\) that satisfies \[K(P^{\star},Q^{\star})=K(P^{\star},\mathscr{Q}):=\inf_{Q\in\mathscr{Q}}K(P^{ \star},Q).\] The existence of \(Q^{\star}\) is ensured by various sets of conditions; see, e.g., Liese and Vajda (1987, Ch. 8). In particular, existence of \(Q^{\star}\) is implied by Condition 1 below, as shown in Lemma 3.1 of Martin and Tokdar (2009). The following conditions, on which Theorem 2 is based, concern the basic properties of these individual inputs and their interplay. _Condition 1_.: \(\mathbb{U}\) is compact and \(u\mapsto p_{u}(x)\) is continuous for almost all \(x\). _Condition 2_.: The PR weight sequence \((w_{i})\) satisfies (3). _Condition 3_.: The kernel density \(p_{u}(x)\) satisfies \[\sup_{u_{1},u_{2}\in\mathbb{U}}\int\Bigl{\{}\frac{p_{u_{1}}(x)}{p_{u_{2}}(x)} \Bigr{\}}^{2}\,p^{\star}(x)\,dx<\infty \tag{12}\] _Condition 4_.: The Kullback-Leibler projection \(Q^{\star}\) in (8), with density \(q^{\star}\), satisfies \[\int\Bigl{\{}\frac{p^{\star}(x)}{q^{\star}(x)}\Bigr{\}}^{2}\,p^{\star}(x)\,dx<\infty. \tag{13}\] Here we offer some explanation and intuition. First, compactness of the mixing distribution support \(\mathbb{U}\) in Condition 1 is difficult to relax, but the fact that \(\mathbb{U}\) can be taken arbitrarily large means that this imposes effectively no practical constraints on the user. Continuity of the kernel can be relaxed, but at the expense of a much more complicated condition; the reader interested in this can consult Equation (7) in Dixit and Martin (2023b) and the relevant discussion. Condition 2 says simply that the weights must be vanishing to ensure convergence but not too quickly since the algorithm needs an opportunity to learn; the requirement in (3) is just right for this. Condition 3 is non-trivial but holds for Gaussian and other exponential family distributions thanks to the compactness of \(\mathbb{U}\) in Condition 1. Finally, Condition 4 concerns the quality of the mixture model itself. 
One cannot hope to achieve quality estimation/inference in any sense if the "best" member of the mixture model differs considerably from the true density \(p^{\star}\). Equation (13) is just a particular way to say that \(p^{\star}\) and \(q^{\star}\) do not differ by too much. Once the user makes his/her specification of the mixture model, Condition 3 determines a set of true densities \(p^{\star}\) for which the PR algorithm will provide consistent estimation. Indeed, Theorem 1 in Martin and Tokdar (2011) states that, under Conditions 1-3, the PR estimator \(\hat{q}_{X^{n}}\) satisfies \(K(p^{\star},\hat{q}_{X^{n}})\to\inf_{q\in\mathscr{Q}}K(p^{\star},q)\) with \(P^{\star}\)-probability 1 as \(n\to\infty\). The user, of course, can vary the mixture model specification to tailor the PR algorithm toward what they expect \(p^{\star}\) to look like. But we need more than consistency for our purposes here, and Condition 4 further restricts the set of true densities to those for which the PR algorithm can give us the "more" that we need here.

### Proof of Theorem 2

Theorem 2 follows immediately from (7) and (9) as explained in the main text. The goal here is to show that (7) holds under Conditions 1-4 as stated above. The argument closely follows that in Martin and Tokdar (2011), but we provide the details here for completeness since their context and notation are different. The strategy of the proof is as follows. First, simplify the notation by writing \(\hat{q}_{i-1}(X_{i})=\hat{q}_{X^{i-1}}(X_{i})\) for each \(i\). Next, define the sequence of random variables \[K_{n}=\frac{1}{n}\sum_{i=1}^{n}\log\frac{p^{\star}(X_{i})}{\hat{q}_{i-1}(X_{i})},\] which might be interpreted as a sort of empirical Kullback-Leibler divergence. If \(K^{\star}=K(P^{\star},\mathscr{Q})\), then (7) is equivalent to \(K_{n}\to K^{\star}\) with \(P^{\star}\)-probability 1 as \(n\to\infty\). It is this latter claim that we will prove below, using a martingale strong law in Teicher (1998). Towards this, define a sequence of random variables \(Z_{i}\) as \[Z_{i}=\log\frac{p^{\star}(X_{i})}{\hat{q}_{i-1}(X_{i})}-K(p^{\star},\hat{q}_{i-1}),\quad i\geq 1.\] Recall that \(\mathscr{A}_{i-1}=\sigma(X^{i-1})\), so \(\mathsf{E}(Z_{i}\mid\mathscr{A}_{i-1})=0\) and, therefore, \(\{(Z_{i},\mathscr{A}_{i}):i\geq 1\}\) is a zero mean martingale sequence. For \(q^{\star}\) as in (8), we have \[\mathsf{E}(Z_{i}^{2}\mid\mathscr{A}_{i-1}) \leq\int\left\{\log\frac{p^{\star}(x)}{\hat{q}_{i-1}(x)}\right\}^{2}p^{\star}(x)\,dx\] \[=\int\left\{\log\frac{q^{\star}(x)}{\hat{q}_{i-1}(x)}+\log\frac{p^{\star}(x)}{q^{\star}(x)}\right\}^{2}p^{\star}(x)\,dx\] \[\leq 2\int\left\{\log\frac{q^{\star}(x)}{\hat{q}_{i-1}(x)}\right\}^{2}p^{\star}(x)\,dx+2\int\left\{\log\frac{p^{\star}(x)}{q^{\star}(x)}\right\}^{2}p^{\star}(x)\,dx\] \[=2A_{i}+2B.\] Of course, \(B\) is a finite constant according to Condition 4. To bound \(A_{i}\) let us first define \(\mathbb{X}_{0}=\{x:q^{\star}(x)<\hat{q}_{i-1}(x)\}\).
Using basic properties of the logarithm, we get \[A_{i} =\int\left\{\log\frac{q^{\star}(x)}{\hat{q}_{i-1}(x)}\right\}^{2 }p^{\star}(x)\,dx\] \[=\int_{\mathbb{X}_{0}}\left\{\log\frac{\hat{q}_{i-1}(x)}{q^{ \star}(x)}\right\}^{2}p^{\star}(x)\,dx+\int_{\mathbb{X}_{0}^{c}}\left\{\log \frac{q^{\star}(x)}{\hat{q}_{i-1}(x)}\right\}^{2}p^{\star}(x)\,dx\] \[\leq 2+\int\left[\left\{\frac{\hat{q}_{i-1}(x)}{q^{\star}(x)} \right\}^{2}+\left\{\frac{q^{\star}(x)}{\hat{q}_{i-1}(x)}\right\}^{2}\right]p ^{\star}(x)\,dx.\] Since both \(\hat{q}_{i-1}\) and \(q^{\star}\) in the two numerators in the above display are mixtures of the kernel \(p_{u}\), we can say that \[A_{i}\leq 2+2\sup_{u_{1},u_{2}\in\mathbb{U}}\int\Bigl{\{}\frac{p_{u_{1}}(x)}{p_{u _{2}}(x)}\Bigr{\}}^{2}p^{\star}(x)\,dx\] which is bounded by Condition 3. Therefore, \(\mathsf{E}(Z_{i}^{2}\mid\mathscr{A}_{i-1})\) is bounded too. Switching from index "\(i\)" to the more natural "\(n\)," we have that \[\frac{\mathsf{E}(Z_{n}^{2}\mid\mathscr{A}_{n-1})}{n^{2}(\log\log n)^{-1}} \lesssim n^{-2}(\log\log n)\to 0.\] Therefore, by Markov's inequality, with \(P^{\star}\)-probability 1 we have, \[\sum_{n=1}^{\infty}P^{\star}\Bigl{(}|Z_{n}|>\frac{n}{\log\log n}\Bigm{|}\mathscr {A}_{n-1}\Bigr{)}\lesssim\sum_{n=1}^{\infty}\frac{(\log\log n)^{2}}{n^{2}}<\infty.\] From this, it follows by Corollary 2 of Teicher (1998) that \(n^{-1}\sum_{i=1}^{n}Z_{i}\to 0\) with \(P^{\star}\)-probability 1. Therefore, also with \(P^{\star}\)-probability 1, \[\Bigl{|}K_{n}-\frac{1}{n}\sum_{i=1}^{n}K(p^{\star},\hat{q}_{i-1})\Bigr{|}= \Bigl{|}(K_{n}-K^{\star})-\frac{1}{n}\sum_{i=1}^{n}\{K(p^{\star},\hat{q}_{i-1}) -K^{\star}\}\Bigr{|}\to 0.\] From Theorem 1 in Martin and Tokdar (2011) and Cesaro's theorem, we have \(K_{n}-K^{\star}\to 0\) with \(P^{\star}\)-probability 1.
2303.18229
Supermassive black hole mass in the massive elliptical galaxy M87 from integral-field stellar dynamics using OASIS and MUSE with adaptive optics: assessing systematic uncertainties
The massive elliptical galaxy M87 has been the subject of several supermassive black hole mass measurements from stellar dynamics, gas dynamics, and recently the black hole shadow by the Event Horizon Telescope (EHT). This uniquely positions M87 as a benchmark for alternative black hole mass determination methods. Here we use stellar kinematics extracted from integral-field spectroscopy observations with Adaptive Optics (AO) using MUSE and OASIS. We exploit our high-resolution integral field spectroscopy to spectrally decompose the central AGN from the stars. We derive an accurate inner stellar-density profile and find it is flatter than previously assumed. We also use the spectrally-extracted AGN as a reference to accurately determine the observed MUSE and OASIS AO PSF. We then perform Jeans Anisotropic Modelling (JAM), with a new flexible spatially-variable anisotropy, and measure the anisotropy profile, stellar mass-to-light variations, inner dark matter fraction, and black hole mass. Our preferred black hole mass is $M_{\rm BH}=(8.7\pm1.2 [\text{random}] \pm1.3 [\text{systematic}]) \times 10^9 \ M_\odot $. However, using the inner stellar density from previous studies, we find a preferred black hole mass of $M_{\rm BH} = (5.5^{+0.5}_{-0.3}) \times 10^9 \ M_\odot $, consistent with previous work. We find that this is the primary cause of the difference between our results and previous work, in addition to smaller contributions due to kinematics and modelling method. We conduct numerous systematic tests of the kinematics and model assumptions and conclude that uncertainties in the black hole mass of M87 from previous determinations may have been underestimated and further analyses are needed.
David A. Simon, Michele Cappellari, Johanna Hartke
2023-03-31T17:40:35Z
http://arxiv.org/abs/2303.18229v2
Supermassive black hole mass in the massive elliptical galaxy M87 from integral-field stellar dynamics using OASIS and MUSE with adaptive optics: assessing systematic uncertainties ###### Abstract The massive elliptical galaxy M87 has been the subject of several supermassive black hole mass measurements from stellar dynamics, gas dynamics, and recently the black hole shadow by the Event Horizon Telescope (EHT). This uniquely positions M87 as a benchmark for alternative black hole mass determination methods. Here we use stellar kinematics extracted from integral-field spectroscopy observations with Adaptive Optics (AO) using MUSE and OASIS. We exploit our high-resolution integral field spectroscopy to spectrally decompose the central AGN from the stars. We derive an accurate inner stellar-density profile and find it is flatter than previously assumed. We also use the spectrally-extracted AGN as a reference to accurately determine the observed MUSE and OASIS AO PSF. We then perform Jeans Anisotropic Modelling (JAM), with a new flexible spatially-variable anisotropy, and measure the anisotropy profile, stellar mass-to-light variations, inner dark matter fraction, and black hole mass. Our preferred black hole mass is \(M_{\rm BH}=(8.7\pm 1.2)\times 10^{9}\ M_{\odot}\). However, using the inner stellar density from previous studies, we find a preferred black hole mass of \(M_{\rm BH}=(5.5^{+0.5}_{-0.3})\times 10^{9}\ M_{\odot}\), consistent with previous work. We conduct numerous systematic tests of the kinematics and model assumptions and conclude that uncertainties in the black hole mass of M87 from previous determinations may have been underestimated and further analyses are needed. keywords: black hole physics - instrumentation: adaptive optics - galaxies: elliptical and lenticular, cD - galaxies: individual: M87 - galaxies: kinematics and dynamics ## 1 Introduction Supermassive black holes play an important role in galaxy evolution. This is shown through empirical relations between black hole mass and luminosity (Kormendy & Richstone, 1995; Magorrian et al., 1998), as well as black hole mass and stellar velocity dispersion (Ferrarese & Merritt, 2000; Gebhardt et al., 2000). The reliability of these relationships depends on accurate black hole mass measurements. M87 is one of the most massive galaxies of the Virgo cluster and sits at the centre of the main sub-cluster (e.g. Cappellari et al., 2011, fig. 7). It is a prototypical massive slow rotator early-type galaxy, with a large Sersic (Sersic, 1963) index and a core in the nuclear surface brightness profile (Kormendy et al., 2009), and fits all characteristics for having assembled most of its mass by dry mergers (see review by Cappellari, 2016). Like other galaxies of its type, M87 contains a supermassive black hole at its centre (Kormendy & Ho, 2013) whose sphere of influence has the largest angular size of any known black hole outside of the Milky Way, making it a valuable target for black hole studies. The measurement of the black hole shadow by the Event Horizon Telescope (EHT) (Event Horizon Telescope Collaboration et al., 2019) makes M87 the first galaxy for which direct imaging of the supermassive black hole has taken place. This has the potential to serve as a powerful test of the general theory of relativity: but only if we can confirm through independent measurements that the black hole mass recovered assuming general relativity is correct. 
The measurement of the black hole shadow from the EHT (Event Horizon Telescope Collaboration et al., 2019) assuming general relativity determined the black hole mass to be \((6.5\pm 0.7)\times 10^{9}\)M\({}_{\odot}\). In the case of M87 there are two other such classes of measurement. Gas dynamical measurements by Harms et al. (1994), Macchetto et al. (1997), and Walsh et al. (2013) have measured the masses \((2.7\pm 0.8)\times 10^{9}\)M\({}_{\odot}\), \((3.6\pm 1.0)\times 10^{9}\)M\({}_{\odot}\), and \((3.3^{+0.8}_{-0.7})\times 10^{9}\)M\({}_{\odot}\), respectively. Stellar dynamical measurements (Gebhardt & Thomas, 2009; Gebhardt et al., 2011; Liepold et al., 2023) using the orbital superposition method of Schwarzschild (Schwarzschild, 1979) have been made, which found black hole masses of \((6.0\pm 0.5)\times 10^{9}\)M\({}_{\odot}\), \((6.2\pm 0.4)\times 10^{9}\)M\({}_{\odot}\), and \((5.37^{+0.37}_{-0.25})\times 10^{9}\)M\({}_{\odot}\), respectively. There is thus a discrepancy in the recovered black hole masses by a factor of two depending on the method used. Jeter et al. (2019) and Jeter & Broderick (2021) propose that this may be due to unrealistic assumptions made in the gas modelling. They find that more detailed accounting of the radial motion of the gas as well as allowing for a thick gas disk can alleviate the discrepancy. The agreement between stellar dynamical measurements and the measurement of the black hole shadow is reassuring, but it is still important to continue testing and independently verifying stellar dynamical models in order to fully understand possible systematics and to improve the robustness of the measurement. In this paper we derive new independent measurements of the black hole mass using stellar dynamics with two different high-resolution integral-field datasets from two different telescopes and a different dynamical modelling approach than previously used. This paper is laid out as follows: in section 2 we introduce the integral field data and photometric data used in this study. In section 3 we describe our spectral fitting and discuss a number of tests we performed and several methods of extracting the kinematics that we use. In section 4 we use a combination of IFU data with photometry to accurately measure the stellar density profile for M87 down to the region dominated by the AGN. In section 5 and section 6 we describe the details of our Jeans modelling and present our black hole mass constraints. We compare these with previous observations and discuss a number of systematic uncertainties. Lastly, in section 7 we summarize our results and comment on the future landscape for studies of M87. We take the distance to M87 to be 16.8 Mpc (Event Horizon Telescope Collaboration et al., 2019). All black hole masses quoted are scaled to this distance. This corresponds to a spatial scale of 81.1 pc per 1 arcsecond.

## 2 Data

### Integral Field Spectroscopy

We use integral field observations from the Optically Adaptive System for Imaging Spectroscopy (OASIS) spectrograph made on the Canada-France-Hawaii Telescope (CFHT) (McDermid et al., 2006) and the Multi Unit Spectroscopic Explorer (MUSE) (Bacon et al., 2010) in narrow field mode (NFM) on the Very Large Telescope (VLT) with adaptive optics (AO). The OASIS integral field spectrograph (IFS) has a 10''\(\times\)8'' field of view with a 0''.27\(\times\)0''.27 pixel scale.
For this measurement the spectrograph was configured to cover the wavelength range of 4760-5558 A with a resolution of 5.4 A FWHM (corresponding to an instrumental dispersion of \(\sigma_{\rm inst}\approx 134\) km s\({}^{-1}\)) sampled at 1.95 A per pixel. Three observations were made for 2700 seconds each, which were then combined to form the final image (see McDermid et al. (2006) for a detailed description of the observations and data reduction). The MUSE observations were made as part of program 0103.B-0581 (PI: N. Nagar) in NFM. The NFM covers a field of view of 7.5''\(\times\)7.5'' with a pixel scale of 0.025''/pix and is used together with the adaptive optics facility GALACSI (Arsenault et al., 2008; Ströbele et al., 2012), providing laser tomographic AO corrections. The observations were carried out on 20 February 2021 and consist of nine dithered exposures of 700s, resulting in a total exposure time of 6300s. The data were reduced with the standard MUSE data reduction pipeline (Weilbacher et al., 2020) in the ESO reflex environment (Freudling et al., 2013) with the default parameters for sky subtraction. The parameters for source detection and image alignment were optimized to align the individual exposures based on a combination of point sources and the knots of the jet of M87. MUSE covers a wavelength range from 4650-9300 A with a gap between 5780-6050 A due to a Na notch filter blocking the light from the four laser guide stars facility (4LGSF). The spectrum is sampled at 1.25 A per pixel with a resolution of about 2.6 A FWHM, corresponding to an instrumental dispersion \(\sigma_{\rm inst}\approx 63\) km s\({}^{-1}\). For our analysis, we restrict the spectral range to cover only 4800-5700 A. We introduce this restriction on the right end of the spectrum to account for noise due to skylines. We restrict the left side of the spectrum due to template mismatch. Note that this wavelength range allows both datasets to be analyzed in very similar ways.

### Photometry

We use HST imaging from Côté et al. (2004) (HST proposal 9401) to construct our stellar surface brightness model while carefully removing the AGN. This is an F850LP ACS/WFC observation covering a field of view of approximately 211''\(\times\)212'' with a pixel scale of 0''.05. The exposure was made for 90 seconds, guaranteeing that the nucleus does not become saturated. We also use an r-band SDSS mosaic generated with the software Montage1 to constrain the stellar surface brightness at larger radii. The SDSS image covers a spatial scale of approximately 713''\(\times\)713'' with a pixel scale of 0''.396.

Footnote 1: Available from [http://montage.ipac.caltech.edu/](http://montage.ipac.caltech.edu/)

Footnote 2: Available from [https://pypi.org/project/vorbin/](https://pypi.org/project/vorbin/)

## 3 Kinematic Extraction

### Spectral Fitting

We bin the galaxy spectra spatially using the Voronoi tessellation algorithm and VorBin software package2 described in Cappellari and Copin (2003). This algorithm takes the x and y coordinates of a set of data along with the assigned signal and noise and bins neighboring points to a target signal to noise ratio. We define the signal to be the median spectral flux and the noise to be the median error (this is done before logarithmically rebinning the spectra). For the OASIS data, we bin to a target signal to noise ratio of 50 per 1.95 A spectral pixel. This leaves all of the spaxels in the innermost arcsecond unbinned, allowing for maximum spatial resolution.
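As a rough illustration of the binning step just described, a VorBin call might look like the sketch below; the input table, file name, and keyword settings are placeholders of ours, and the details should be checked against the package documentation rather than taken as the exact configuration used here.

```python
import numpy as np
from vorbin.voronoi_2d_binning import voronoi_2d_binning

# x, y: spaxel coordinates (arcsec); signal, noise: median flux and median error per spaxel,
# computed from the cubes before logarithmic rebinning. Placeholder input file (hypothetical).
x, y, signal, noise = np.loadtxt("spaxel_table.txt", unpack=True)

target_sn = 50.0   # per 1.95 A (OASIS) or 1.25 A (MUSE) spectral pixel, as in the text
bin_num, x_gen, y_gen, x_bar, y_bar, sn, n_pixels, scale = voronoi_2d_binning(
    x, y, signal, noise, target_sn, plot=False, quiet=True)

# bin_num assigns each spaxel to a Voronoi bin; spectra within a bin are then co-added
# before the pPXF fit described in the next step.
```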
MUSE has a much higher spatial resolution, requiring some spatial binning in the innermost parts of the galaxy to have sufficient signal for kinematic fitting. We start by masking the innermost 0''.1 as these spectra are entirely dominated by the AGN. We tested binning with a target signal to noise ratio of 50, 40, 30, 20, 15, and 10 per 1.25 A spectral pixel. The recovered black hole mass is consistent for 50, 40, 30, and 20, but increases sharply for lower signal to noise. This is due to the fact that with this level of noise, a flat fit to the spectra is allowed in the innermost regions, resulting in many central spectra having anomalously large dispersion. For the rest of this work, we use the case with a signal to noise ratio of 50 per 1.25 A spectral pixel. For both datasets, we remove all data points for which the fraction of the flux due to stars is less than 50%.

Footnote 3: Available from [https://pypi.org/project/ppxf/](https://pypi.org/project/ppxf/)

We logarithmically resample the spectra to a velocity scale \(\Delta V=c\Delta\ln\lambda\) of 105 and 66 km s\({}^{-1}\) for OASIS and MUSE respectively and fit the binned spectra using the penalized pixel-fitting method and pPXF software package\({}^{3}\) of Cappellari and Emsellem (2004) and Cappellari (2017, 2022). This method allows for the simultaneous fitting of template stellar spectra, template gas spectra, and continuum contributions/template mismatch with the addition of additive polynomials. We can write an observed galaxy spectrum as
\[G=\sum_{i}w_{i}[T_{i}^{s}(x)*\mathcal{L}_{i}^{s}(cx)]+\sum_{j}u_{j}[T_{j}^{g}(x)*\mathcal{L}_{j}^{g}(cx)]+\sum_{k}b_{k}\mathcal{P}_{k}(x) \tag{1}\]
Here \(T^{s}\) represents the stellar templates, \(T^{g}\) the gas templates, \(\mathcal{L}\) the corresponding line of sight velocity distribution, and \(\mathcal{P}\) additive polynomials (in this case taken to be Legendre polynomials). We assume that the line of sight velocity distribution can be treated as a Gaussian without the inclusion of higher order Gauss-Hermite moments. We made this assumption because (i) Cappellari et al. (2007, sec. 2) found using synthetic galaxy models that the sigma obtained from a Gaussian fit (moments=2) with pPXF provides a better approximation to the second velocity moment than computing the second moment by integrating the LOSVD from a fit which includes higher Gauss-Hermite moments (e.g. moments=4); (ii) making this assumption one is able to accurately predict with JAM the observed \(V_{\mathrm{rms}}\) of hundreds of real galaxies (e.g. Cappellari et al., 2015, fig. 1); and (iii) in the specific case of the stellar kinematics of M87, previous studies have found that, within the range of our data, the lowest order Gauss-Hermite moments are either consistent with zero or small (Liepold et al., 2023, fig. D2), implying that the LOSVD is essentially Gaussian. The strongest absorption feature in the OASIS and MUSE spectra that contributes to the fits of the stellar kinematics is Mgb around 5200 A. However, in the innermost arcsecond of the galaxy this feature is contaminated by gas emission from [NI]\(\lambda\)\(\lambda\)5197,5200 (see Fig. 1, Fig. 2, and Fig. 3).
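Before turning to how this contamination is handled, the following sketch illustrates the type of pPXF call implied by Equation 1 and the moments=2 choice above. It is not the actual pipeline of this work: the template matrix, spectra, velocity scale, starting guesses, and number of gas templates are placeholders, and the keyword names are those of recent pPXF versions.

```python
# Minimal sketch of a pPXF fit of the form of Equation 1: one stellar template
# plus gas emission templates, a purely Gaussian LOSVD (moments=2 per component),
# and additive Legendre polynomials. All arrays and file names are hypothetical.
import numpy as np
from ppxf.ppxf import ppxf

templates = np.load("templates.npy")   # shape (n_pix_template, 1 + n_gas); column 0 = star
galaxy = np.load("binned_spectrum.npy")
noise = np.load("binned_noise.npy")
velscale = 66.0                        # km/s per pixel for MUSE (105 km/s for OASIS)

component = [0, 1, 1, 1]               # illustrative: 1 stellar + 3 gas templates
gas_component = np.array(component) > 0
moments = [2, 2]                        # (V, sigma) only, i.e. a Gaussian LOSVD
start = [[0.0, 300.0], [0.0, 100.0]]    # placeholder starting (V, sigma) in km/s

pp = ppxf(templates, galaxy, noise, velscale, start,
          moments=moments, degree=4,    # additive Legendre polynomials of degree 4
          component=component, gas_component=gas_component, quiet=True)

v_star, sigma_star = pp.sol[0]          # stellar velocity and dispersion
```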
When it comes to fitting the absorption feature from Mgb, there is then a degeneracy between the stellar dispersion and the gas kinematics. This is compounded by the fact that it is precisely in the innermost regions that the relative flux in each spectrum due to the AGN increases, further increasing the uncertainty in the kinematic extraction. In order to test how the treatment of this affects the extracted kinematics, we consider two separate scenarios. In the first scenario we simply fit the full spectrum. This has the advantage that outside of the innermost arcsecond the kinematic extraction is very reliable, with the disadvantage that the innermost arcsecond is less reliable. In the second scenario we restrict the spectra to begin at 5250 A (these spectra are referred to as spectra to the right of [NI], or RNI spectra) so that we exclude the contaminated region. This has the benefit of avoiding any uncertainty from the fit to Mgb at the cost of further restricting the spectral range. This is done for both OASIS and MUSE.

Figure 1: Plot of the MUSE spectra as a function of radius. The black curve is the data, the red curve is the best pPXF fit to the stars, and the orange curve is the best fit to the gas. The \(r=0^{\prime\prime}\) spectrum is the combined innermost \(0\aas@@fstack{\prime\prime}1\) of MUSE. This is not fit as we do not include it in the final analysis. The double peaked structure of the gas appears intermittently at a variety of radii. The bottom spectrum shows the single stellar template that we use to fit the stellar kinematics for all of the MUSE spectra.

Figure 2: Plots of the OASIS spectra as a function of radius. The black curve is the data, the red curve is the best pPXF fit to the stars, and the orange curve is the best fit to the gas. The \(r=0^{\prime\prime}\) spectrum is the central spaxel for OASIS. This is not fit as we do not include it in the final analysis. The double peaked structure of the gas appears intermittently at a variety of radii. The bottom spectrum shows the single stellar template that we use to fit the stellar kinematics for all of the OASIS spectra.

The details of the kinematic extraction for each of these scenarios are significantly different. Spectral fitting over the full wavelength range is challenging to perform due to the presence of strong gas emission lines from H\(\beta\), [OIII]\(\lambda\lambda\)4959,5007, and [NI]\(\lambda\lambda\)5197,5200. Furthermore, we observed multiple gas kinematic components for H\(\beta\) and [OIII] in the innermost part of the galaxy (see Fig. 3). We start by assuming that the stellar population is constant in the region we are considering (we fit one stellar template). This template is determined by co-adding the spectra with no gas emission lines and fitting this with stars from the MILES stellar library (Sanchez-Blazquez et al., 2006; Falcon-Barroso et al., 2011). We use the stellar library consisting of nearly 1000 individual stellar spectra. We also tried this using SPS models from the MILES library (Vazdekis et al., 2010), including synthetic spectra with alpha enhancement (Vazdekis et al., 2015) allowing for maximum possible variation in the parameters, as well as the MUSE stellar library (Ivanov et al., 2019), consisting of 35 individual stellar spectra, but we ultimately find the best fit using the MILES stars.
The OASIS spectral resolution is much larger than that of the stars in the MILES stellar library which have an instrumental resolution of 2.51 A FWHM corresponding to \(\sigma_{\rm instr}\approx 61\) km s\({}^{-1}\) at 520nm. We account for this by degrading the resolution of the template spectra (before logarithmically rebinning) with a gaussian to a constant resolution per angstrom so that the spectral resolution for the two are the same. The spectral resolution for MUSE over the wavelength range we use is nearly the same as that of the MILES stars, so we do not apply a correction. We determine the single stellar template Figure 4: The top and centre panels show the extracted dispersion for M87 under a number of different assumptions for MUSE and OASIS, respectively. Note that in each case the dispersion profile is consistent up to a constant scaling outside of one arcsecond. Within one arcsecond, however, there is a large deviation in the extracted profiles. This is due to the uncertainty in the kinematic extraction caused by contamination from the AGN and the degeneracy in the fits between the fit to [NI] and the Mgb absorption. The bottom panel shows the dispersion profiles used in the final analysis scaled to match the SAURON profile. Data shown in the same color are combined in the final analysis. Figure 3: The top panels show a fit of the MILES stellar spectra to the M87 spectra at large radii without gas emissions. This is done separately for MUSE and OASIS. This fitted spectra is then used to fit the stellar kinematic component for each spaxel. The middle panels show a spectrum from the centre of M87 that has been fitted with the above stellar template, as well as with multiple gas templates. This shows strong gas emissions with a double peaked profile for H\(\beta\) and [OIII]. These must be carefully accounted for in order to produce a reliable kinematic fit. The bottom panels show a fit of the stellar template to a spectrum in the central parts of M87 where the spectra has been restricted to start at the right of [NI] (denoted RNI spectra). Here is no gas, simplifying the fitting, but the noise in the spectrum increases. separately for both OASIS and MUSE. We then allow gas templates in the rPXF fits for \(H\beta\), [OIII] and [NI], but allow H\(\beta\) and [OIII] to have two distinct kinematic components each. Given the number of templates (1 stellar spectrum + 2 H\(\beta\) + 2 [OIII] + 2 [NI] spectra) and the fact that the gas could be challenging to fit due to the possibility of there being multiple local best fits, we experimented with a number of constraints, such as treating the gas components of H\(\beta\) and [OIII] as a part of the same kinematic component by fixing their velocity and dispersion to be equal. Ultimately, we found that the most reliable fit to the stellar spectra is comes from allowing maximum freedom in the gas fit. That is, treating each gas template as having its own velocity and dispersion. This is because even slight offsets in the gas fits for central spectra result in large residuals that end up driving the stellar fit. This means having a total of six kinematic components: one for the stars, one for each set of H\(\beta\) and [OIII], and one for [NI]. One issue that we faced was some fits returning large stellar velocities in the centre of the galaxy. M87 is well known to be a slow rotator (Emsellem et al., 2007, 2014), so any large deviations in the velocity suggest an error in the fitting. 
We handle this by fixing the velocity across the field for both datasets to equal the recession velocity6. We calculate errors in the dispersion using wild bootstrapping (Davidson & Flachaire, 2008) of the spectra residuals and repeating the pPXF fits 100 times on the bootstrapped spectra. Lastly, we perform this analysis three separate times using Legendre polynomials of degrees 4, 5, and 6. We find that the extracted dispersion in the centre most region depends on the choice of Legendre polynomial (see 4). This is likely due to the degeneracy between [NI] and Mgb absorption. Different degrees of Legendre polynomial, especially those with very high degree given the total wavelength range, can go beyond accounting for template mismatch and can start reproducing parts of the stellar spectrum. Footnote 6: Note that Emsellem et al. (2014) finds evidence for a kinematically decoupled core with rotation velocity \(\pm\)5 km s\({}^{-1}\). This is within the errors of this analysis. Kinematic extraction redwards of [NI] is done using the same single stellar template as in the case for the full spectra, no gas templates, and additive Legendre polynomials of degree 1, 2, and 3. As in the previous case, we also fix the velocity of each spectra to the recession velocity, which we determined as the median over the field of a free fit. A dispersion map of the extracted kinematics in the case where we fit the full spectrum with additive Legendre polynomials of degree 4 is shown in Fig. 5. The map appears symmetric, as we expect since the core of M87 is highly spherical. As a result, we plot the dispersion profile for the remaining scenarios as a function of radius in Fig. 4. The supermassive black hole in M87 has the largest angular size from Earth of any known black hole outside of the Milky Way. We define the sphere of influence \(r_{\rm BH}\) as \[M_{\rm BH}=M^{*}(<r_{\rm BH}) \tag{2}\] i.e. the radius such that the black hole mass equals the mass in stars. Assuming the range of black hole masses and \(M/L\) values determined in this work, we determine the \(r_{\rm BH}\) is between \(\sim\)5 and 6\({}^{\prime\prime}\)and possibly even larger. Thus the black hole sphere of influence dominates much of the field of view for both OASIS and MUSE. This makes measuring parameters such as the stellar mass to light ratio challenging as, within this field of view, there is a large degeneracy between the black hole mass and the stellar mass. We can break this degeneracy by adding larger field kinematic data that is more sensitive to the mass of the stars. We do this by including SAURON data (Emsellem et al., 2004) for M87 as reanalysed for the ATLAS\({}^{3\rm D}\) project7 by Cappellari et al. (2011). When considering large field data, it is important to be conscious of the fact that any subsequent parameter studies will be heavily influenced by information provided at larger radii because there is a greater volume of data points at large radius than at small radius. To account for this we only consider SAURON data out to a radius of 15\({}^{\prime\prime}\). To avoid accounting for uncertainties in the SAURON PSF, we do not include any SAURON data within the innermost 2\({}^{\prime\prime}\)of the galaxy. We observe an offset between our extracted dispersions and the SAURON dispersion. This is expected due to systematic uncertainties in the kinematic extraction, especially due to template mismatch. 
To account for this, we apply a multiplicative scale factor to the MUSE/OASIS dispersions so that they match the SAURON data between 2\({}^{\prime\prime}\) and 4\({}^{\prime\prime}\). Scaling the velocity axis by a factor \(T\) is equivalent to changing the overall mass normalization of the model by a factor \(T^{2}\). For MUSE, the factor required to scale the dispersion to match the SAURON data varies between 0.95 and 1 (given the choice of Legendre polynomial). For OASIS, the range is between 0.87 and 0.9. These correspond to an increase in the black hole mass of up to \(\sim\)7 per cent.

Footnote 7: Available from [https://purl.org/atlas3d](https://purl.org/atlas3d)

Given the large number of dispersion profiles generated, we have to make a choice of which ones to study. We rule out using the degree 2 and 3 RNI spectra as we expect that using such a high degree of Legendre polynomial over such a small wavelength range will lead to an unreliable kinematic extraction (indeed, for both MUSE and OASIS, we see the dispersion profile either completely flattening out or decreasing in the centre). For the degree 4, 5, and 6 polynomials, note that the degree 5 polynomial for MUSE closely resembles the degree 6, and for OASIS the degree 5 dispersion profile closely resembles the degree 4 dispersion profile. As such, we choose to discard the degree 5 profile from each data set and are left with 6 dispersion profiles as shown in the bottom panel of Fig. 4. In the final analysis, we combine different data sets in order to draw a reliable result. We choose the combinations:

1. SAURON + OASIS degree 6 + MUSE degree 4
2. SAURON + OASIS degree 4 + MUSE degree 6
3. SAURON + OASIS RNI degree 1 + MUSE RNI degree 1

In the last case we refer to the kinematics as the RNI spectra.

### PSF from AGN Spectrum

The PSF from IFS data has often been determined by comparing convolved HST photometry to the flux from the IFS cube (e.g. McDermid et al., 2006; Krajnovic et al., 2018, fig. B1). This method is most reliable when the centre of the galaxy has a cusp since the observed profile will be more strongly affected by the PSF. Since the nuclear region of M87 has a core, we expect this method to produce a less reliable PSF. However, we can circumvent this by noting that M87 has a bright AGN that can be assumed to be unresolved and can be used as a point source to infer the PSF directly. The profile of the PSF can thus be measured if we can extract out the component of each spectrum due to the AGN. This can be done using the additive polynomials in our spectral fitting. The AGN likely has a flat non-thermal continuum that can be well approximated by polynomials while the underlying galaxy does not. We extract the shape of the PSF by performing a pPXF fit to the unbinned central spaxels in each dataset over the largest possible wavelength range (4800-5700 A for MUSE and 4760-5558 A for OASIS) using the fixed stellar template determined before from the gas-free and AGN-free spectra with multiple gas components and additive Legendre polynomials of degree 4. In this case we do not mask any of the central spaxels. In Fig. 6, we show the results for MUSE and OASIS, as well as a plot of the one dimensional profile as a function of radius to the left or right of the origin.
We parametrize the PSF using a multi-gaussian expansion in the form of
\[\mathrm{PSF(R)}=\sum_{i=1}^{Q}\frac{G_{i}}{2\pi\sigma_{i}^{2}}\exp\left(\frac{-\mathrm{R}^{2}}{2\sigma_{i}^{2}}\right) \tag{3}\]
with \(R\) the radius, \(\sigma_{i}\) the standard deviation of gaussian \(i\), and \(G_{i}\) the normalization of each gaussian satisfying \(\sum_{i=1}^{Q}G_{i}=1\). We model the observed AGN profile by integrating the PSF over the lenslet size of OASIS/MUSE. We do this by first noting that a PSF convolved observable \(S_{\mathrm{obs}}(x,y)\) can be written as (e.g. Qian et al., 1995, appendix D)
\[S_{\mathrm{obs}}(x,y)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}S(x^{\prime},y^{\prime})K(x-x^{\prime},y-y^{\prime})\,dx^{\prime}dy^{\prime} \tag{4}\]
with
\[\begin{split} K(x,y)=\sum_{i=1}^{Q}\frac{G_{i}}{4}&\left[\mathrm{erf}\left(\frac{\mathrm{L}_{x}/2-\mathrm{x}}{\sqrt{2}\sigma_{i}}\right)+\mathrm{erf}\left(\frac{\mathrm{L}_{x}/2+\mathrm{x}}{\sqrt{2}\sigma_{i}}\right)\right]\\ &\times\left[\mathrm{erf}\left(\frac{\mathrm{L}_{y}/2-\mathrm{y}}{\sqrt{2}\sigma_{i}}\right)+\mathrm{erf}\left(\frac{\mathrm{L}_{y}/2+\mathrm{y}}{\sqrt{2}\sigma_{i}}\right)\right]\end{split} \tag{5}\]
Note that \(K\) is the analytic expression of a gaussian integrated over a lenslet of size \(L_{x}\) by \(L_{y}\) centred on the point \((x,y)\). As our observable is unresolved, we can treat it as a delta function. Substituting that into Equation 4 gives that the model of the PSF is simply \(K(x,y)\), with the lenslet size substituted for that of MUSE or OASIS. We then fit the PSF parameters by matching this model to the observed spectrally determined AGN by employing the least_squares function of SciPy (Virtanen et al., 2020). We assume the errors are constant except in the centre, where we set them to be small so as to force a good fit at both large and small radii. The measured parameters for each PSF are given in Table 1. The FWHM for OASIS is \(0\aas@@fstack{\prime\prime}561\), which is consistent with the seeing on the night of the observation (McDermid et al., 2006). The FWHM for MUSE is \(0\aas@@fstack{\prime\prime}049\), which is close to diffraction limited.

## 4 Photometry and mass modeling

### Stellar Tracer Distribution

Modelling the stellar tracer distribution of M87 is challenging due to non-stellar contributions in the photometric data, namely the AGN and jet. It is sufficient to mask the jet, but masking the AGN would mean masking the location where the supermassive black hole is. This is the most important region to model for black hole studies, implying that we must take another approach. The way that we circumvent this is by using the information from our spectral fits. Spectral fitting determines the contribution of stars, gas, and the AGN (through additive polynomials) such that one can extract out the pure stellar contribution. To do this, we measure the flux due to the stars in the pPXF fit to the MUSE data from section 3.2. We do this by subtracting from each spectrum the best fit gas lines and additive Legendre polynomials (which approximate the AGN spectrum). The radial surface brightness before and after accounting for the AGN is shown in Fig. 7.
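Purely as an illustration of this decomposition, the sketch below shows how the star-only flux of a single fitted spectrum could be recovered from a pPXF fit object. The attribute names (`gas_bestfit`, `apoly`) are those documented for recent pPXF versions and should be treated as an assumption, not as the exact code used here.

```python
# Minimal sketch (assuming recent pPXF attribute names): recover the star-only
# and AGN-only flux of one spectrum from a fitted ppxf object "pp".
import numpy as np

def decompose_flux(pp, galaxy):
    """pp: fitted ppxf object; galaxy: the log-rebinned spectrum that was fit."""
    gas = pp.gas_bestfit if pp.gas_bestfit is not None else 0.0  # best-fitting gas lines
    agn = pp.apoly                                               # additive polynomial ~ AGN continuum
    stars = galaxy - gas - agn                                   # star-only spectrum
    # Integrated fluxes: the stellar flux enters the AGN-corrected surface
    # brightness profile, the AGN flux enters the spectrally determined PSF map.
    return np.sum(stars), np.sum(agn)
```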
\begin{table}
\begin{tabular}{c c c}
\hline \hline
Number & \(G_{i}\) & \(\sigma_{i}\) (arcsec) \\
\hline
1 & 0.515 & 0.215 \\
2 & 0.485 & 0.423 \\
\hline
1 & 0.00769 & 0.00793 \\
2 & 0.20493 & 0.03595 \\
3 & 0.59847 & 0.14573 \\
4 & 0.18891 & 0.79289 \\
\hline
1 & 0.34266 & 0.02454 \\
2 & 0.36807 & 0.08022 \\
3 & 0.13923 & 0.09772 \\
4 & 0.04820 & 0.21524 \\
5 & 0.10184 & 0.47409 \\
\hline
\end{tabular}
\end{table}
Table 1: Table of fitted MGE parameters for the OASIS, MUSE, and HST F850LP TinyTim PSFs, respectively. The FWHM for OASIS is \(0\aas@@fstack{\prime\prime}561\), for MUSE is \(0\aas@@fstack{\prime\prime}049\), and for HST is \(0\aas@@fstack{\prime\prime}063\). The HST PSF is larger than the MUSE PSF because the wavelength observed in the HST observation is longer than that in the MUSE observation.

Figure 5: Top panel: MUSE kinematics extracted over the full spectrum with additive Legendre polynomial degree of 4. The centermost \(0.1\arcsec\) is masked. Bottom panel: OASIS kinematics extracted over the full spectrum with additive Legendre polynomial degree of 4. Both are oriented so that north is up and east is to the left.

We fit this over the region of the MUSE data using a double power law and match it to the radial surface brightness from Hubble outside of the innermost 0.5''. From this we create a modified Hubble image which has the innermost arcsecond replaced with our fitted AGN-free profile. We parametrize the galaxy surface brightness using the Multi-Gaussian Expansion method (Emsellem et al., 1994; Cappellari, 2002). In order to model the full extent of the galaxy, we match an SDSS r-band mosaic of M87 to the Hubble image and fit them simultaneously using the robust MGE fitting algorithm and MgeFit software package\({}^{8}\) of Cappellari (2002). This algorithm fits the projected surface brightness using a multi-gaussian expansion of the form

Footnote 8: Available from [https://pypi.org/project/mgefit/](https://pypi.org/project/mgefit/)

\[\Sigma(x,y)=\sum_{j=1}^{N}I_{j}\,\exp\left[-\frac{1}{2\sigma_{j}^{2}}\left(x^{2}+\frac{y^{2}}{q_{j}^{2}}\right)\right] \tag{6}\]

We mask the jet and gap between the detectors in the HST image, and we mask a prominent star in the SDSS image. We also provide MgeFit with the Hubble ACS/WFC PSF in order to obtain the PSF-deconvolved stellar distribution as opposed to the observed distribution. We generate this Hubble PSF using the tool TinyTim (Krist et al., 2010). We record our MGE expansion for this PSF in Table 1. The MGE/galaxy contours are shown in Fig. 8 and the parameters we fit are shown in Table 2. We express the MGE in the AB photometric system and the F850LP band of HST, where we have used the absolute magnitude of the sun \(M_{\odot,\rm F850LP}=4.50\) mag from Willmer (2018) and the galactic extinction \(A=0.029\) mag from Schlafly & Finkbeiner (2011). The zero point at the time of the observation was \(ZP=24.873\). Our spectral-decomposition approach allows for a much more reliable measurement of the stellar surface brightness near the black hole than was possible before. The most recent stellar dynamical determinations of the black hole mass in M87 (Gebhardt & Thomas, 2009; Gebhardt et al., 2011; Liepold et al., 2023) have used the surface density profile provided in Kormendy et al. (2009). We find a much flatter core in the innermost region which, holding everything else constant, will result in a larger black hole mass. This is discussed in more detail later.
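For illustration, the sketch below shows the general shape of an MGE fit of Equation 6 with the MgeFit package of Cappellari (2002). It is a simplified stand-in for the simultaneous HST+SDSS fit described above: the image file, geometry values, and `ngauss` are placeholders, while the PSF Gaussians are taken from the HST block of Table 1.

```python
# Minimal sketch (not the exact pipeline of this work): MGE fit of Equation 6
# to an AGN-corrected image with MgeFit. File name and geometry are placeholders.
import numpy as np
from astropy.io import fits
from mgefit.sectors_photometry import sectors_photometry
from mgefit.mge_fit_sectors import mge_fit_sectors

img = fits.getdata("m87_f850lp_agn_corrected.fits")   # hypothetical modified HST image
eps, theta, xc, yc = 0.04, 0.0, 2100, 2100            # placeholder ellipticity, PA, centre (pixels)
scale = 0.05                                           # ACS/WFC pixel scale in arcsec/pixel

# Measure the surface brightness along sectors of constant angle
s = sectors_photometry(img, eps, theta, xc, yc, minlevel=0, plot=False)

# Fit the MGE while deconvolving an MGE model of the HST PSF (Table 1, third block)
sigmapsf = np.array([0.02454, 0.08022, 0.09772, 0.21524, 0.47409]) / scale  # arcsec -> pixels
normpsf = np.array([0.34266, 0.36807, 0.13923, 0.04820, 0.10184])
m = mge_fit_sectors(s.radius, s.angle, s.counts, eps,
                    sigmapsf=sigmapsf, normpsf=normpsf,
                    scale=scale, ngauss=12, plot=False)
print(m.sol)  # columns: total counts, sigma (pixels), axial ratio q for each Gaussian
```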
### Mass Modelling with M/L Gradients In order to perform dynamical modelling, one has to parametrize the gravitational potential. Sarzi et al. (2018) used stellar population models to measure gradients in the stellar \(M^{*}/L\) in M87 by allowing for both ages, metallicity and stellar initial mass function (IMF) variations. Although these measurements are assumption dependent and quite uncertain, they provide an estimate for a possible stellar \(M^{*}/L\) variation within the innermost 30-40'' of M87, implying that it is not sufficient to assume that mass follows light without testing the alternative. We allow for variations away from mass follows light in this region by allowing for a radially dependent \(M^{*}/L\) profile given by \[\left(\frac{M^{*}}{L}\right)(r)=\begin{cases}(M/L)_{1}&r<1\arcsec\\ \left[\left(\frac{M}{L}\right)_{1}-\left(\frac{M}{L}\right)_{2}\right]\frac{ \log(r/30)}{\log(1/30)}+\left(\frac{M}{L}\right)_{2}&1\arcsec\leq r\leq 30 \arcsec\\ (M/L)_{2}&r>30\arcsec\end{cases} \tag{7}\] Figure 6: The top panel shows a plot of the spectroscopic data of the AGN with the best fit multi-gaussian expansion of the PSF after being integrated over the MUSE/GASIS lenslet. This is presented as a one dimensional function where the x-axis is the radius of each data point, with those lying to the left of the origin having \(x<0\) and those lying to the right having \(x>0\). The colormap shows the 2D image of the spectrally extracted AGN on a log scale. The bottom panel shows this for OASIS. These results show the reliability of using the AGN contribution to the spectra as measurement of the PSF. This functional form is motivated by the fact that, within the error bars, figure 11 of Sarzi et al. (2018) is well fit by this function. The observed nearly linear variation in \(M^{*}/L\) with \(\lg(r)\) cannot represent the true \(M^{*}/L\) variation since it is unbounded at 0 and infinity, so we set it to be constant outside of the regions constrained by the data. We implement this within the MGE formalism in the following way: we evaluate \((M^{*}/L)(r)\) at each \(\sigma_{j}\) (where \(\sigma_{j}\) is the standard deviation of each gaussian in the MGE) and multiply the surface brightness \(I_{F850LP,j}\) by this value. In principle, \((M^{*}/L)_{1}\) and \((M^{*}/L)_{2}\) are free parameters, but we can determine possible upper and lower bounds by studying the effect of varying the IMF: Sarzi et al. (2018) finds that the largest possible M/L ratio assuming a Kroupa IMF is close to 5.0 in the r-band. From figure 2 of Cappellari et al. (2012), we see that, empirically in the galaxy population, the largest \(M^{*}/L\) increase one can expect due to the IMF normalization is a factor 2.6 heavier than the value corresponding to a Kroupa IMF. Taking the r-band \(M^{*}/L\) with Kroupa IMF as reference, the heaviest \(M^{*}/L\) one can realistically expect for M87, allowing for extreme IMF gradients, correspond to \(M^{*}/L\)=13 in the r-band. Lastly, we can convert this to the SDSS z-band (which we take to be approximately the same as the ACS/WFC F850LP band) using the conversion formula \[M/L_{z}=M/L_{r}\times 10^{0.4([z-r]_{\rm M87}-[z-r]_{\odot})} \tag{8}\] Using M87 colors \(z=9.92\) and \(r=10.70\) from SDSS DR7, and solar magnitudes \(z_{\odot}=4.50\) and \(r_{\odot}=4.65\) from Willmer (2018), we find that the upper bound is \((M^{*}/L)_{z}\lesssim\)7.23. 
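As a concrete transcription of the parametrization just described, the short sketch below evaluates the radial \(M^{*}/L\) profile of Equation 7 at the dispersion of each MGE Gaussian of Table 2, which is how the profile is applied within the MGE formalism. The values of \((M/L)_{1}\) and \((M/L)_{2}\) shown are illustrative free parameters, bounded above by the limit of \(\simeq 7.23\) derived above.

```python
# Direct transcription of Equation 7: stellar M/L constant inside 1" and outside
# 30", varying linearly with log(r) in between, applied to the MGE of Table 2.
import numpy as np

def stellar_ml(r, ml1, ml2):
    """r in arcsec; ml1 = (M/L)_1 inside 1", ml2 = (M/L)_2 outside 30"."""
    r = np.atleast_1d(r).astype(float)
    inner = ml1
    outer = ml2
    middle = (ml1 - ml2) * np.log10(r / 30.0) / np.log10(1.0 / 30.0) + ml2
    return np.where(r < 1.0, inner, np.where(r > 30.0, outer, middle))

# Gaussian dispersions sigma_j from Table 2 (lg sigma_j in arcsec)
sigma_j = 10.0**np.array([-0.834, -0.147, 0.246, 0.624, 0.820,
                          1.056, 1.314, 1.583, 1.745, 2.018, 2.317])

# Illustrative parameter values; each Gaussian's surface brightness I_j is then
# multiplied by the corresponding M/L to build the mass model.
ml_j = stellar_ml(sigma_j, ml1=5.0, ml2=3.0)
```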
This becomes important later in the analysis where we introduce this cut off when our modelling would otherwise prefer an unphysically large \(M^{*}/L\) ratio. This choice of parametrization restricts the steepest possible \(M^{*}/L\) variation. One could further increase the freedom of this parametrization by allowing the innermost radius at which the \(M^{*}/L\) ratio becomes constant to vary. We discuss the impact of this in subsection 6.2.

### Dark Matter

In addition to the mass contribution from stars, we include an NFW dark matter halo (Navarro et al., 1997)
\[\rho(r)=\frac{\rho_{0}}{(r/r_{s})(1+r/r_{s})^{2}} \tag{9}\]
The range of our data is much less than \(r_{s}\), so we set it arbitrarily to 20 kpc because its precise value is irrelevant for the modelling results. We parametrize the overall magnitude of the halo as the fraction of the total mass within a sphere of one half light radius that consists of dark matter, \(f_{\rm dm}\). We calculate the enclosed masses from the MGE analytically using the routine mge_radial_mass from JamPy\({}^{9}\). The value we adopt for the half light radius is \(70^{\prime\prime}\). We compute this from the MGE of Table 2 using the routine mge_half_light_radius in JamPy.

Footnote 9: Available from [https://pypi.org/project/jampy/](https://pypi.org/project/jampy/)

## 5 Dynamical Modeling

### Jeans Modelling

The Jeans Anisotropic Modelling (JAM) method (Cappellari, 2008, 2020) has been used to model the stellar dynamics of galaxies and study their stellar mass-to-light ratios and dark matter content in large integral-field spectroscopic surveys such as ATLAS\({}^{\rm 3D}\) (Cappellari et al., 2013), SAMI (Scott et al., 2015), and MaNGA (Li et al., 2018), as well as in surveys at high redshift, e.g. the LEGA-C survey (van Houdt et al., 2021). It was applied to the study of galaxies' total density profiles out to large radii (Cappellari et al., 2015) and in several studies of smaller galaxy samples. More recently, JAM was employed to accurately predict Gaia kinematics of the Milky Way using all six components of the stellar phase space (Nitschai et al., 2020, 2021). Tests of JAM using high-resolution N-body simulations (Lablanche et al., 2012) and lower-resolution but more extensive cosmological hydrodynamical simulations (Li et al., 2016) have shown that, with high-S/N data, JAM recovers accurate total density profiles with negligible bias. More recently, JAM was compared in detail against the Schwarzschild (1979) method using samples of both observed galaxies, with circular velocities from interferometric observations of the CO gas, and numerical simulations, respectively. Both studies consistently found that the JAM method produces even more accurate (smaller scatter vs the true values) density profiles (Leung et al., 2018, fig. 8) and enclosed masses (Jin et al., 2019, fig. 4) than the more general Schwarzschild models.
More quantitatively, between 0.8-1.6 effective radii, where the gas is well-resolved and \begin{table} \begin{tabular}{c c c c} \hline \(j\) & \(\lg(I_{j})\) & \(\lg(\sigma_{j})\) & \(q_{j}\) \\ & \((L_{\odot}\) pc\({}^{-2})\) & (arcsec) & \\ \hline 1 & 3.070 & -0.834 & 0.957 \\ 2 & 3.372 & -0.147 & 0.957 \\ 3 & 3.354 & 0.246 & 0.957 \\ 4 & 3.366 & 0.624 & 0.957 \\ 5 & 3.435 & 0.820 & 0.957 \\ 6 & 3.310 & 1.056 & 0.957 \\ 7 & 3.001 & 1.314 & 0.957 \\ 8 & 2.389 & 1.583 & 0.847 \\ 9 & 2.380 & 1.745 & 0.936 \\ 10 & 1.894 & 2.018 & 0.743 \\ 11 & 1.488 & 2.317 & 0.743 \\ \hline \end{tabular} \end{table} Table 2: MGE parameters for the deconvolved ACS/WFC F850LP surface brightness. This corresponds to the orange profile in Fig. 7 and the red contours in Fig. 8. Figure 7: Comparison of a scaled HST radial profile with a profile where the centre region is set by the stellar profile measured spectroscopically from MUSE (the orange line). We also compare with the profile used in the previous stellar dynamical determinations of Gebhardt & Thomas (2009); Gebhardt et al. (2011); Liepold et al. (2023). Their profile is a factor of 2 larger than ours in the centre. the \(V_{\rm c}\) is better determined, Leung et al. (2018) reports a mean 1 \(\sigma\) error 1.7\(\times\) smaller for JAM over the equivalent Schwarzschild model. Similarly, when considering all 45 model fits to the N-body simulations by Jin et al. (2019), the 68th percentile deviation (1 \(\sigma\) error) is 1.6\(\times\) smaller for JAM than the equivalent Schwarzschild model. The increased accuracy of JAM in extracting density distributions may be due to JAM assumptions acting as an empirically-motivated prior and reducing the degeneracies of the dynamical inversion. For supermassive black hole studies, JAM was found to accurately recover the "known" mass of the two most accurate benchmark black holes in NGC4258 (Prehmer et al., 2015) and the Milky Way (Feldmeier-Krause et al., 2017, sec. 4.1.2). Moreover, extensive tests of a few tens of galaxies have found that JAM and Schwarzschild methods recover black hole masses that are generally consistent with one another (Cappellari et al., 2010; Seth et al., 2014; Thater et al., 2017, 2022; Krajnovic et al., 2018). However, not all BH measurements from different methods agree within the uncertainties and further comparisons between different approaches are still needed. There are two implementations of JAM: one where the velocity ellipsoid is assumed to be aligned with the cylindrical-polar coordinate system (Cappellari, 2008), and one where the velocity ellipsoid is assumed to be aligned with the spherical-polar coordinate system (Cappellari, 2020). The choice of which implementation to use depends on the galaxy's intrinsic shape. M87 is a slow rotator early type galaxy (Emsellem et al., 2011) and slow rotators as a class are weakly triaxial, or nearly spherical, inside the half-light radius, becoming more triaxial at larger radii (see review by Cappellari (2016)). M87 has a specific angular momentum \(\lambda_{\rm R_{g}}\approx 0\) within the uncertainties (Emsellem et al., 2011). It showed some barely detectable misaligned stellar rotation from SAURON data (Emsellem et al., 2004), which became more clearly visible from high-S/N MUSE data (Emsellem et al., 2014). The ellipticity of M87 increases at large radii (see Fig. 8), where misaligned stellar rotation and triaxiality become evident (Liepold et al., 2023). 
Within the slow-rotators class, M87 was classified as a non-rotator (Krajnovic et al., 2011). As a class, these are massive early type galaxies generally found at the centre of clusters, which tend to be rounder than \(\epsilon\lesssim\) 0.15 in projection (Emsellem et al., 2011, fig.6), indicating that, although triaxial, they must be intrinsically close to spherical with a ratio between the minor to major axes of the ellipsoidal density \(c/a\gtrsim\) 0.85. This excludes the possibility that M87, which is nearly round in projection, may appear as such due to a special viewing angle. Instead, M87 must be intrinsically close to spherical in the region where it shows circular isophotes. JAM models, unlike the Schwarzschild models, do not require large-radii kinematics to constrain the models because they are nearly insensitive to the mass distribution at radii outside the region where kinematics are available. For this reason, one can expect axisymmetric JAM models with the velocity ellipsoid aligned with spherical-polar coordinates, to provide an accurate description of the inner dynamics of M87, even though the galaxy, like all slow rotators, becomes more strongly triaxial at much larger radii than those we model. ### Parameterization JAM takes as parameters the galaxy inclination, anisotropy profile, stellar tracer distribution, and mass distribution, including the black hole mass. The kinematic axis is poorly constrained due to M87 being dominated by unordered motion in the central region, (Emsellem et al., 2014; Sarzi et al., 2018) so we fix the kinematic axis to align with the galaxy photometric axis as given in Krajnovic et al. (2011). The stellar tracer and mass distributions are described in subsection 4.1. As the inner regions of M87 are nearly spherically symmetric with little ordered motion, we fix the inclination to 90 degrees. Changing this has a negligible effect on the measured black hole masses, because a nearly spherical model appears spherical from any inclination. We can exclude M87 being an intrinsically flat but nearly face-on disk, because of the general shape distribution of slow rotators (Cappellari, 2016; Li et al., 2018). The last thing to specify is the anisotropy profile. It is well known that for a spherical system, there is a degeneracy between the anisotropy and the density profile. This so-called mass-anisotropy Figure 8: Top panel: best fit MGE contours in the innermost 100 arcseconds of the HST image. The regions in yellow are the masked jet and gap between the detectors. Bottom panel: best fit MGE contours over the full SDSS field. We mask a prominent star at the top of the field. You can see that the MGE fit covers the full shape of the galaxy and provides an excellent fit in the innermost region where our data is the most sensitive. The galaxy is oriented so that north is at the top and east is to the left. degeneracy implies that for a range of assumed density profiles, one can adopt a corresponding anisotropy profile in such a way that the model reproduces the same profile of second velocity moments (Binney & Mamon, 1982; Gerhard, 1993). The degeneracy, however, is not complete, and the range of allowed profiles depends on the specific situation because the anisotropy is limited by the two extreme cases where the orbital distribution is fully radial or fully tangential respectively. For this reason, without further assumptions, one would generally expect large uncertainties in BH masses from spherical models based on the Jeans equations. 
The situation, however, has improved dramatically from the days when the mass-anisotropy degeneracy was first discovered. Since then, many studies have modelled the inner dynamics of galaxies using general models that allow one to account for the full shape of the line-of-sight velocity distribution, rather than the moments alone. We think we now even have a good understanding of the underlying physics of the orbital distributions we have measured. In particular, we have found that massive slow-rotator galaxies with a core in their surface brightness profile, like M87, are consistently characterized by a nearly isotropic, or just slightly radially anisotropic orbital distribution outside the break radius, while orbits start becoming tangentially biased inside that radius, reaching the peak tangential anisotropy well inside the BH sphere of influence (Gebhardt et al., 2003 fig.10; Cappellari et al., 2008 fig.2; Thomas et al., 2014). The observations are quantitatively well reproduced by models in which both the cores in the surface brightness and the tangentially biased orbits are due to gas-free mergers of galaxies with supermassive black holes in their centres. The black holes sink towards the centre of the gravitational potential via dynamical friction, while ejecting stars on radial orbits (e.g. Milosavljevic & Merritt, 2001; Milosavljevic et al., 2002; Rantala et al., 2019; Frigo et al., 2021). In the case of M87, due to its very flat inner core, one can place constraints on its orbital anisotropy even from theoretical arguments alone. The cusp-slope vs central anisotropy theorem by An & Evans (2006) states that for a spherical power-law tracer population \(\rho\propto r^{-\gamma}\) in a Keplerian potential, there is a relation between the anisotropy \(\beta=1-\sigma_{r}^{2}/\sigma_{T}^{2}\) and the logarithmic slope \(\gamma\) of the tracer, such that \(\beta<\gamma-1/2\). The inner slope of M87 varies from nearly flat in the centre (\(\gamma\approx 0\) see Fig. 7) to \(\gamma\approx 0.27\) between 1-5''(Lauer et al., 2007). We can thus conservatively conclude that the inner anisotropy has an upper limit of \(\sigma_{r}/\sigma_{T}\lesssim 0.9\) and possibly even \(\sigma_{r}/\sigma_{T}\lesssim 0.8\) for \(\gamma=0\). The assumptions of the theorem are satisfied well inside the sphere of influence of the BH of M87 and for this reason the theorem provides additional support for the expected significant tangential anisotropy near the BH of M87. For all these theoretical and empirical reasons, nowadays, it does not make sense to assume complete freedom in the orbital anisotropy of Jeans models as done in the past. Instead, the knowledge we accumulated on the galaxies anisotropy can be used as a Bayesian prior, which is easy to enforce to our JAM models. In the next section we describe a new way of specifying the anisotropy variations in JAM models, which is ideally suited to enforce anisotropy priors. ### Fitting a given anisotropy profile with JAM Let's consider for simplicity a spherical non-rotating JAM model. The intrinsic stellar dispersion of the model is given by the luminosity-weighted sum of the dispersion of the individual Gaussians making up the MGE \[\nu\sigma_{r}^{2}=\sum_{k}\left[\nu\sigma_{r}^{2}\right]_{k}. \tag{10}\] Fig. 
9 shows the contribution of the individual \(\left[\nu\sigma_{r}^{2}\right]_{k}\) for different anisotropies, for a Hernquist (1990) model with mass \(M_{*}=10^{11}\,\mathrm{M}_{\odot}\) containing a typical nuclear supermassive black hole of mass 0.5% that of the stellar mass (Kormendy & Ho, 2013, eq. 11). When the Gaussians in a JAM model are isotropic or have tangential anisotropy, each Gaussian essentially contributes to the total \(\nu\sigma_{r}^{2}\) only near a radius close to its dispersion \(r\approx\sigma_{k}\) (Fig. 9a). In these cases, one can construct a desired anisotropy profile \(\beta(r)\) by simply assigning the anisotropy \(\beta_{k}=\beta(\sigma_{k})\) to the Gaussians with dispersion \(\sigma_{k}\), as pointed out in Cappellari (2008, sec. 3.2.2). Fig. 9 shows that this approximation works quite well in general. However, when the Gaussians are significantly tangential anisotropic, or near the central supermassive black hole, the total \(\nu\sigma_{r}^{2}\) rises steeply at small radii and a single Gaussian does not contribute to the total JAM model only around \(r\approx\sigma_{k}\) (Fig. 9c,d). In those situations, there is no simple precise relation between the anisotropy of a given Gaussian and the total anisotropy of the JAM model at \(r\approx\sigma_{k}\). This is generally not a problem, if one is not interested in the fitted anisotropy. However, if one wants to quantitatively reproduce a specific total anisotropy profile one has to numerically fit for the anisotropies \(\beta_{k}\) of the different Gaussians. In this paper we parametrize the anisotropy using a rather flexible logistic function of logarithmic radius \[\beta(r)=\beta_{0}+\frac{\Delta\beta}{1+(r_{a}/r)^{\alpha}}, \tag{11}\] with \(\Delta\beta=\beta_{\infty}-\beta_{0}\). This anisotropy function was also used by Baes & van Hese (2007, eq. 30). For \(\beta_{0}=0,\beta_{\infty}=1\) and \(\alpha=2\) it reduces to the Osipkov-Merritt special form (Osipkov, 1979; Merritt, 1985). For \(\alpha=1\) it specializes to the homographic anisotropy function used by Bacon (1985). In the top panel of Fig. 10 we show how one can reproduce our logistic anisotropy profile with JAM. In the figure, we adopted a rather extreme anisotropy variation, with inner tangential anisotropy \(\sigma_{r}/\sigma_{\theta}=1/2\) (\(\beta_{0}=-3\)), outer radial anisotropy \(\sigma_{r}/\sigma_{\theta}=2\) (\(\beta_{\infty}=0.75\)), anisotropy radius \(r_{a}=a/2\), where \(a\) is the break radius of the Hernquist (1990) profile, and the sharpness of the transition is \(\alpha=1.5\). One can see that by fine-tuning the anisotropy of the different Gaussians, one can reproduce quite general anisotropy variations. The problem with this approach is that the anisotropy \(\beta_{k}\) of the individual Gaussians has to be fitted non-linearly in a least-square sense, while repeatedly computing the intrinsic velocity moments of the model, to obtain the total anisotropy for a given choice of \(\beta_{k}\) parameters. An alternative is to solve the original Jeans equations for an axisymmetric model with spherically-aligned velocity ellipsoid, in Cappellari (2020, eq. 8) by relaxing the assumption of a constant anisotropy per Gaussian. This is only possible analytically for special choices of the anisotropy function \(\beta(r,\theta)\). We found that, adopting the parametrization of Equation 11 for the anisotropy, the solution of Cappellari (2020, eq. 
10), using the same notations, generalizes to
\[\nu\overline{v_{r}^{2}}(r,\theta)=\int_{r}^{\infty}\left(\frac{r^{\prime}}{r}\right)^{2\beta_{0}}\left[\frac{1+(r^{\prime}/r_{a})^{\alpha}}{1+(r/r_{a})^{\alpha}}\right]^{\frac{2\Delta\beta}{\alpha}}\Psi(r^{\prime},\theta^{\prime})\ \mathrm{d}r^{\prime} \tag{12a}\]
\[\theta^{\prime}=\arcsin\left\{\left(\frac{r^{\prime}}{r}\right)^{\beta_{0}-1}\left[\frac{1+(r^{\prime}/r_{a})^{\alpha}}{1+(r/r_{a})^{\alpha}}\right]^{\frac{\Delta\beta}{\alpha}}\sin\theta\right\}. \tag{12b}\]
In the special case \(\alpha=1\), this solution reduces to the one using homographic functions in Bacon (1985, eq. 3)\({}^{10}\).

Footnote 10: The published expression has a typo, with an extra density \(\nu(\rho,\,\alpha)\).

The application of a generic varying anisotropy function to the axisymmetric cylindrically-aligned Jeans solution of Cappellari (2008, eqs. 8, 9) is straightforward if one makes the cylindrical anisotropy a function \(\beta_{z}(|z|)\) of the modulus of the cylindrical coordinate \(z\). One simply has to replace the constant axial anisotropy \(\beta_{z}\) with the corresponding varying expression \(\beta_{z}(z)\) parametrized by the function of Equation 11. In the case of the tangential anisotropy \(\gamma\), the situation is identical regardless of the alignment of the velocity ellipsoid, and one can just replace the constant with an arbitrary function of the coordinates \(\gamma(R,z)\), without having to change anything else. When applying the axisymmetric solution to a model with both the tracer distribution and the total density described by an MGE (Emsellem et al., 1994; Cappellari, 2002), following the steps outlined in Cappellari (2020, sec. 5.1), the resulting solution requires only minimal changes and the numerical algorithm can be left unchanged. One only needs to replace the following two expressions with the corresponding ones indicated by the arrows, in all the expressions of Cappellari (2020, eqs. 46-53)
\[\beta_{k}\to\beta_{0}+\frac{\Delta\beta}{1+(r_{a}/r)^{\alpha}} \tag{13}\]
\[\left(\frac{r^{\prime}}{r}\right)^{2\beta_{k}}\to\left(\frac{r^{\prime}}{r}\right)^{2\beta_{0}}\left[\frac{1+(r^{\prime}/r_{a})^{\alpha}}{1+(r/r_{a})^{\alpha}}\right]^{\frac{2\Delta\beta}{\alpha}}. \tag{14}\]
In the spherical limit, the expression for the intrinsic radial velocity dispersion in Cappellari (2020, eq. B2) generalizes to
\[\nu\overline{v_{r}^{2}}(r)=\int_{r}^{\infty}\left(\frac{r^{\prime}}{r}\right)^{2\beta_{0}}\left[\frac{1+(r^{\prime}/r_{a})^{\alpha}}{1+(r/r_{a})^{\alpha}}\right]^{\frac{2\Delta\beta}{\alpha}}\frac{G\,\nu(r^{\prime})M(r^{\prime})}{r^{\prime 2}}\,\mathrm{d}r^{\prime}\,. \tag{15}\]
Unlike the constant-anisotropy case, when projecting this model along the line-of-sight one cannot remove one of the two resulting integrals, except for some special cases (Mamon and Lokas, 2005), which are not very useful for practical applications. We implemented these changes in v7.0 of the Python JamPy package, which now allows one to compute axisymmetric or spherical models with the logistic radial anisotropy variation, for both the spherically-aligned and cylindrically-aligned solutions. An application in the spherical limit is shown in the middle panel of Fig. 10. As expected, the JAM model with the same variable anisotropy for all Gaussians produces the same dispersion profile as the model with constant anisotropy for each individual Gaussian, as they both follow by design the same given anisotropy profile.
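The following short sketch transcribes the logistic anisotropy profile of Equation 11 and the generalized radial weight of Equation 14 as plain functions; it is an illustration of the parametrization only, not an excerpt of JamPy.

```python
# Direct transcription of Equation 11 (logistic anisotropy) and of the radial
# weight of Equation 14 that replaces (r'/r)^(2*beta) in the Jeans solution.
import numpy as np

def beta_logistic(r, beta0, beta_inf, ra, alpha):
    """Equation 11: beta(r) = beta0 + (beta_inf - beta0) / (1 + (ra/r)^alpha)."""
    return beta0 + (beta_inf - beta0) / (1.0 + (ra / r)**alpha)

def radial_weight(r1, r, beta0, beta_inf, ra, alpha):
    """Equation 14: exp(2 * Integral_r^r1 beta(s)/s ds) for the logistic beta(r)."""
    dbeta = beta_inf - beta0
    return (r1 / r)**(2 * beta0) * (
        (1 + (r1 / ra)**alpha) / (1 + (r / ra)**alpha))**(2 * dbeta / alpha)

# Check: beta0=0, beta_inf=1, alpha=2 recovers the Osipkov-Merritt form r^2/(r^2 + ra^2)
r = np.geomspace(0.01, 100, 5)
print(beta_logistic(r, beta0=0.0, beta_inf=1.0, ra=1.0, alpha=2.0))
```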
Footnote 11: [https://pypi.org/project/jampy](https://pypi.org/project/jampy)

### MCMC Analysis

For each of the models and dispersion profiles we perform an MCMC analysis to carefully assess the influence of the modelling and systematic effects of the kinematic extraction on the recovered BH mass.

Figure 9: Top row: the contributions to the total luminosity weighted second moment from different MGE components. Middle row: intrinsic anisotropy ratio compared with the anisotropy assigned to each gaussian. Bottom row: projected velocity dispersion. The columns correspond to different choices of the total anisotropy, ranging from constant to radially varying.

We define the \(\chi^{2}\) to be
\[\chi^{2}=\sum_{\ell=1}^{N}\left(\frac{\sigma_{m}^{\ell}-\sigma_{d}^{\ell}}{\Delta\sigma^{\ell}}\right)^{2} \tag{16}\]
where \(\sigma_{d}^{\ell}\) is the extracted dispersion in binned spaxel \(\ell\), \(\sigma_{m}^{\ell}\) is the model dispersion, and \(\Delta\sigma^{\ell}\) is the bootstrapped uncertainty in the extracted dispersion. We thus express the \(\chi^{2}\) for the combined data sets as
\[\chi^{2}=\chi^{2}_{\rm MUSE}+\chi^{2}_{\rm OASIS}+\chi^{2}_{\rm SAURON} \tag{17}\]
From here, we perform an MCMC analysis using the code emcee of Foreman-Mackey et al. (2013). For each of the three combinations of different kinematic extractions described in subsection 3.1, we test four models, for a total of 12 different combinations of data and models. In each model we adopt as free parameters the four anisotropy parameters (\(\beta_{0},\beta_{\infty},r_{a},\alpha\)) described in subsection 5.2, the black hole mass \(M_{\rm BH}\), and make the following assumptions:

* Constant stellar \(M/L\) without NFW dark matter. This adds an extra parameter \((M/L)_{\rm tot}\) for a total of 6 model parameters.
* Constant stellar \(M/L\) with NFW dark matter. This adds two extra free parameters, the stellar \(M^{*}/L\) and the normalization of the halo, quantified by the dark matter fraction within one effective radius \(f_{\rm dm}(<R_{\rm e})\), for a total of 7 model parameters.
* Varying \(M/L\) without NFW dark matter. This adds two extra free parameters: \((M/L)_{1}\) and \((M/L)_{2}\), which parametrize the mass-to-light variation (see Equation 7), for a total of 7 parameters.
* Varying \(M/L\) with NFW dark matter. This combines an NFW dark matter halo, parametrized by the dark matter fraction \(f_{\rm dm}\), with the parameters \((M/L)_{1}\) and \((M/L)_{2}\) which parametrize the mass-to-light variation, for a total of 8 parameters.

These varying assumptions allow us to make contact with previous work which has made various assumptions on the parametrization of the gravitational potential, and also allow us to test the impact of each model assumption on the final result. In order to best sample the space of possible parameters, we re-express the anisotropy parameters in terms that result in a more efficient and uniform sampling of the model posterior. Namely, we sample the anisotropy parameters defined such that
\[(\sigma_{r}/\sigma_{t})_{0}=\frac{1}{\sqrt{1-\beta_{0}}} \tag{18}\]
\[(\sigma_{r}/\sigma_{t})_{\infty}=\frac{1}{\sqrt{1-\beta_{\infty}}} \tag{19}\]
\[a=\lg\alpha \tag{20}\]
Given what we know about the anisotropy profile in M87, we restrict these parameters to the ranges \(0.5\leq(\sigma_{r}/\sigma_{t})_{0}\leq 1\) and \(1\leq(\sigma_{r}/\sigma_{t})_{\infty}\leq 1.3\). The bound at 1 comes from the enforced condition that the anisotropy becomes tangential in the center and radial at large radii.
The bounds at 0.5 and 1.3 represent the largest range of anisotropy values reliably observed in early type galaxies (Gebhardt et al., 2003; Cappellari et al., 2008, 2009; McConnell et al., 2012; Thomas et al., 2014; Krajnovic et al., 2018), as well as in simulations of slow rotators (Rantala et al., 2019; Frigo et al., 2021). This range also encompasses the range of anisotropy profiles previously determined for M87 (Cappellari & McDermid, 2005; Gebhardt et al., 2011). In Table 3, we list all of the parameters along with the corresponding bounds on their values. We also restrict the parameter \(\lg\alpha\) to be between -0.3 and 0.6. This is due to the fact that small values of \(\alpha\) correspond to no anisotropy transition, which we want to exclude, and large values of \(\alpha\) give rise to an infinitely large parameter space where the spatial transition of the anisotropy takes place nearly instantaneously. In practice, the preferred range of parameter space almost always lies between -0.3 and 0.6, so this does not impact our final results.

Figure 10: The top panel shows the contributions to the radial velocity dispersion where the desired anisotropy profile is reached by fitting the anisotropy for each gaussian in the MGE. The center panel shows the same thing using the analytic implementation described in equations 12-14. The bottom panel shows the intrinsic anisotropy ratio and confirms that the result using the two methods is the same, the key difference being that the top one requires fitting the anisotropy for each gaussian while the center one requires only a small modification to the analytic solution.

\begin{table}
\begin{tabular}{c c c}
\hline
Parameter & Lower Bound & Upper Bound \\
\hline
\((\sigma_{r}/\sigma_{t})_{0}\) & 0.5 & 1 \\
\((\sigma_{r}/\sigma_{t})_{\infty}\) & 1 & 1.3 \\
\(r_{a}\) & 0 & \(\infty\) \\
\(\lg\alpha\) & -0.3 & 0.6 \\
\(M_{\rm BH}\) & 0 & \(\infty\) \\
\((M/L)_{1}\) & 0 & 7.23 \\
\((M/L)_{2}\) & 0 & 7.23 \\
\(f_{\rm dm}\) & 0 & 1 \\
\hline
\end{tabular}
\end{table}
Table 3: Table of free parameters and their permitted upper and lower bounds. The limits on \((\sigma_{r}/\sigma_{t})_{0}\) and \((\sigma_{r}/\sigma_{t})_{\infty}\) come from the large literature of observations and simulations of core galaxies. The limits on \(M/L\) come from considering the heaviest possible IMF combined with the results from Sarzi et al. (2018).

Running JAM for the required number of steps necessary to generate reliable posteriors for all of these combinations of models and data is very computationally expensive. The final contours exhibit strong covariances that require a long burn-in time for the walkers to sample and populate the posterior. The \(\chi^{2}\) surface also has multiple local minima, which further increases the run time if the chain is started far away from the global minimum. In order to speed up this process, we start by running emcee for 300000 steps with 100 walkers for the JAM model of a spherical galaxy using the routine jam_sph_proj in JamPy (see Footnote 12). This is a good approximation to the true Jeans solution as M87 is highly spherical, especially in the range of our data. Once this step is complete, we have a good approximation of the posterior. From there, we run emcee for 50000 steps with 100 walkers for the JAM model assuming axisymmetry and a spherically aligned velocity ellipsoid using the routine jam_axi_proj. These final 50000 steps are what we use for the final analysis.
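As an illustration of the sampling setup (not the code used in this work), the sketch below wires a \(\chi^{2}\) of the form of Equations 16-17, with flat priors inside bounds of the kind listed in Table 3, into emcee. The function `jam_dispersion`, the `data_sets` container, and the specific bound values marked "illustrative" are placeholders standing in for the JamPy model call and the binned kinematic data.

```python
# Minimal sketch of the MCMC driver: flat priors inside parameter bounds plus the
# chi^2 of Equations 16-17, sampled with emcee. jam_dispersion() and data_sets
# are placeholders for the JamPy model prediction and the binned dispersions.
import numpy as np
import emcee

bounds = {"sr_st_0": (0.5, 1.0), "sr_st_inf": (1.0, 1.3), "lg_alpha": (-0.3, 0.6),
          "lg_ra": (-1.0, 2.0), "lg_mbh": (9.0, 10.5), "ml": (0.0, 7.23)}  # illustrative

def log_prob(p, data_sets):
    # Flat prior: reject any walker position outside the allowed bounds
    if any(not (lo <= v <= hi) for v, (lo, hi) in zip(p, bounds.values())):
        return -np.inf
    chi2 = 0.0
    for sigma_obs, dsigma, xbin, ybin in data_sets:        # e.g. MUSE, OASIS, SAURON
        sigma_mod = jam_dispersion(p, xbin, ybin)            # placeholder JAM call
        chi2 += np.sum(((sigma_mod - sigma_obs) / dsigma)**2)  # Equation 16
    return -0.5 * chi2                                        # Equation 17 summed over sets

ndim, nwalkers = len(bounds), 100
p0 = np.array([np.mean(b) for b in bounds.values()])
start = p0 + 1e-3 * np.random.randn(nwalkers, ndim)

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob, args=(data_sets,))
sampler.run_mcmc(start, 50_000, progress=True)
```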
Footnote 12: Note that this is not the same as JAM with a spherically aligned velocity ellipsoid. The final routine used is jam_axi_proj which solves the axisymmetric Jeans equation assuming the velocity ellipsoid is spherically aligned. Here we assume that the system truly is spherically symmetric, thus simplifying the computation required to solve the Jeans equation.

We present the posteriors for this in Fig. 16. We also show the posteriors assuming the most general model for the individual MUSE RNI, OASIS RNI, and SAURON data sets in Fig. 11. Given our choice of physically motivated priors, some of the posteriors are not symmetric and even run into the boundary. As such, we report the posterior median and left and right \(1\sigma\) confidence intervals for all of the combinations of models and combined data in Table 4. As we expect, the results for M87 change very little between using spherical JAM and axisymmetric JAM. We also show the model fits to the observed dispersion in Fig. 13.

## 6 Discussion

### Black Hole Results

In Fig. 11 we show the full posteriors using the RNI spectra for each of the four models. In Fig. 12 we show \(1\sigma\) and \(3\sigma\) contours of the black hole mass and total \(M/L\) within one half light radius for the four different models using the RNI spectra. The results are similar for the other two sets of joint data. We find a large range of permissible black hole mass values, from nearly \(4\times 10^{9}M_{\odot}\) to greater than \(12\times 10^{9}M_{\odot}\) depending on the model assumptions. The value obtained using the most general model and the kinematics derived from RNI spectra is \(M_{\rm BH}=(8.7\pm 1.2)\times 10^{9}~{}M_{\odot}\). The final allowed range of black hole masses depends strongly on the model assumptions. The simplest model with only constant \(M^{*}/L\) has the smallest range of permitted black hole masses. Including dark matter slightly shifts this range to the right. This is because the SAURON data strongly constrains the kinematics at large radii, so there is a tight correlation between \(f_{\rm dm}\) and \(M^{*}/L\) which allows one to interchange stars and dark matter. Decreasing stars at large radii in favor of dark matter, however, must be compensated at small radii with an increased black hole mass. Throughout this, the anisotropy parameters remain fairly constant. This changes significantly with the introduction of varying \(M^{*}/L\). In the model with an \(M^{*}/L\) variation and no dark matter, this results in a much larger range of viable black hole masses at the lower mass end. This is due to a correlation between \((M/L)_{1}\) and the black hole mass which effectively results in the mass of the black hole being exchanged for mass in stars. The results for the black hole mass are similar for the most general model featuring both varying \(M/L\) and DM. The most interesting difference compared with previous studies is that the black hole mass we measure is larger than that found in previous stellar dynamical studies. One key difference between our analysis and previous work is that we directly measure the stellar distribution within the influence of the AGN and find the stellar profile to be flatter than that used in all previous black hole studies of M87 using stellar dynamics. In order to determine if this could explain the increase in the black hole mass, we performed an MCMC run using the stellar profile of Kormendy et al.
(2009) as this is used for the tracer distribution in previous studies (Gebhardt & Thomas, 2009; Gebhardt et al., 2011; Liepold et al., 2023). We do our modelling with the RNI spectra and in the model with DM and constant M/L, as this most closely approximates the models used in Gebhardt & Thomas (2009) and Gebhardt et al. (2011). The 1 and \(3\sigma\) posteriors for this are shown in Fig. 12. We find a preferred black hole mass of \(M_{\rm BH}=(5.5^{+0.5}_{-0.3})\times 10^{9}~{}M_{\odot}\), which agrees much more closely with previous measurements which range between \(5.4\times 10^{9}~{}M_{\odot}\)(Liepold et al., 2023) to \(6.2\times 10^{9}~{}M_{\odot}\)(Gebhardt et al., 2011). Naively one might think that the difference in the black hole mass is purely due to the difference in the gravitational potential between the two models. On its own, our stellar profile should increase the measured black hole mass over previous determinations as it is exchanging the mass of the stars in the central regions of the galaxy with the mass of the black hole. However, one can calculate the decrease in stellar mass in the centre of the galaxy between our model and the model of Kormendy et al. (2009) and we find that, assuming a constant stellar \(M/L\) ratio of 3.4 without DM, that within \(5^{\prime\prime}\)(approximately the sphere of influence) the decrease in stellar mass is \(5.6\times 10^{7}~{}M_{\odot}\), or around 1 per cent of the total stellar mass within \(5^{\prime\prime}\). This implies that the modification to the gravitational potential due to our model cannot be the sole cause of the difference between the two results. One possible alternative is that this is a result of the very flat core. As pointed out by Kormendy & Ho (2013, sec. 3.1), measuring BHs with stellar dynamics in galaxies with flat cores is intrinsically less accurate than in galaxies with steep inner profiles, because of the increase importance of orbital anisotropy. Additionally, in centrally cuspy galaxies, the line of sight velocity distribution along the photometric centre of the galaxy receives its largest contribution from the three dimensional origin of the galaxy. In the case of a very flat core, the line of sight velocity distribution along the photometric centre receives an even contribution from a larger range of radii. This further increases the uncertainty in the kinematic modelling (note the much larger uncertainties using our profile over the Kormendy et al. (2009) profile) and has the potential to impact the extracted kinematics. Further study of this will be required in future work. ### Mass to Light Ratio Constraints One important feature in this work is the inclusion of stellar \(M/L\) variations. In Fig. 15 we show 1000 \(M/L\) profiles randomly sampled from the posterior of the most general model using the RNI data. We find a strong preference for an increasing \(M/L\) ratio in the central regions of the galaxy. Our best fit \(M/L\) variation agrees with the bottom end of figure 11 of Sarzi et al. (2018) assuming a Kroupa IMF. Previous studies have either failed to take \(M/L\) variations into account (Gebhardt & Thomas, 2009; Gebhardt et al., 2011), or directly used the profile measured in Sarzi et al. (2018)(Liepold et al., 2023). Our result suggests that it is important to treat the profile as a free parameter. 
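To make the enclosed-mass comparison above concrete, the following toy calculation integrates two three-dimensional luminosity-density profiles out to 5''. Both profiles, the assumed distance of 16.8 Mpc, and the constant \(M^{*}/L=3.4\) are illustrative stand-ins, not the measured MGE models of this work or of Kormendy et al. (2009).

```python
import numpy as np
from scipy.integrate import quad

PC_PER_ARCSEC = 16.8e6 * np.pi / (180.0 * 3600.0)   # ~81 pc/arcsec at an assumed 16.8 Mpc
ML_STAR = 3.4                                       # assumed constant stellar M/L

def nu_flat(r):
    """Toy 'flat core' luminosity density in L_sun/pc^3 (illustrative only)."""
    return 4.0e3 / (1.0 + (r / 500.0) ** 2) ** 1.5

def nu_cuspy(r):
    """Toy 'cuspier' profile: a few times denser than nu_flat inside ~100 pc."""
    return nu_flat(r) * (1.0 + 60.0 / (r + 30.0))

def stellar_mass_within(nu, r_max_pc):
    return ML_STAR * quad(lambda r: 4.0 * np.pi * nu(r) * r ** 2, 0.0, r_max_pc)[0]

r5 = 5.0 * PC_PER_ARCSEC                            # 5 arcsec, roughly the sphere of influence
dm = stellar_mass_within(nu_cuspy, r5) - stellar_mass_within(nu_flat, r5)
frac = dm / stellar_mass_within(nu_cuspy, r5)
print(f"toy stellar-mass difference within 5 arcsec: {dm:.2e} M_sun ({100 * frac:.1f} per cent)")
```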
One important limitation of these results is the fact that the \(M/L\) variation parametrization we use does not permit particularly steep variations due to it fixing the \(M/L\) ratio at 1''and 30''. To test the effect of this, we ran a model where the inner radius at which the \(M/L\) ratio becomes fixed is free to vary. We find a preference for a range of values between 1''and 4''but do not find significant changes in the distribution of \(M/L\) profiles in the range of the Sarzi et al. (2018) data. Our work thus establishes a preference for a \(M/L\) gradient at the lower end of what was reported in Sarzi et al. (2018). However, we caution that the recovered \(M_{*}/L\) profile is degenerate with the slope of the dark matter profile and the one we derive relies on a fixed NFW halo. Obviously no dynamical model can distinguish between a variation in the total density due to the stellar \(M_{*}/L\) or the dark matter without assumptions. The total \(M/L\) ratio (defined as \(M/L\) within one \(R_{\rm e}\)) that we measure exhibits some model dependence. In Fig. 12, we see that the total \(M/L\) ranges between 3.2 up to 3.7 depending on the model assumptions. This range of values is primarily due to the uncertainty in the anisotropy profile at large radius. The model with the largest total \(M/L\) is the model with varying \(M/L\) and DM. We see in Fig. 16 that the preferred value of \((\sigma_{r}/\sigma_{t})_{\infty}\) is smaller than in the other models. This decrease in anisotropy at large radius must be made up by an increase in the mass. This further highlights the importance of the mass-anisotropy degeneracy when studying M87. The total \(M/L\) ratio of M87 in the I-band has been previously measured in Cappellari et al. (2006) and in the r-band from Cappellari et al. (2013) (using mass follows light models in both cases). Converting these to the SDSS-z band (as a proxy for the ACS/WFC F850LP band) gives \(M/L\) ratios of 4.2/4.813, and 4.0. This is slightly larger than our range of \(M/L\). In both previous determinations, the models were fit to the data out to R\(\approx\)35'', while we only fit the data out to R\(=\)15''. The slightly larger total \(M/L\) of previous determinations may be explained as due to both their smaller adopted BH and an increase in the dark matter fraction between these radii. 
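The text does not reproduce the functional form of the \(M^{*}/L\) variation between the two anchor radii, so the sketch below simply assumes a log-linear interpolation between the free values \((M/L)_{1}\) at 1'' and \((M/L)_{2}\) at 30'', held constant outside that range. This is an assumption made purely for illustration, not necessarily the parametrization used in the models.

```python
import numpy as np

def ml_profile(r_arcsec, ml_1, ml_2, r_in=1.0, r_out=30.0):
    """Assumed log-linear interpolation of M*/L between (M/L)_1 at r_in and (M/L)_2 at r_out,
    held constant outside that radial range (radii in arcsec)."""
    r = np.clip(np.atleast_1d(r_arcsec).astype(float), r_in, r_out)
    frac = (np.log10(r) - np.log10(r_in)) / (np.log10(r_out) - np.log10(r_in))
    return ml_1 + frac * (ml_2 - ml_1)

# e.g. the posterior medians of the most general RNI model in Table 4: (M/L)_1 ~ 6.5, (M/L)_2 ~ 1.5
print(ml_profile([0.5, 1.0, 3.0, 10.0, 30.0, 60.0], ml_1=6.5, ml_2=1.5))
```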
\begin{table}
\begin{tabular}{l c c c c c c c c c}
\hline
Data & Model & \((\sigma_{r}/\sigma_{t})_{0}\) & \((\sigma_{r}/\sigma_{t})_{\infty}\) & \(r_{a}\) & lg \(\alpha\) & \(M_{\rm bh}\) & \((M/L)_{1}\) & \((M/L)_{2}\) & \(f_{\rm dm}\) \\
\hline
M RNI + O RNI + S & Constant \(M/L\) & \(0.55^{+0.08}_{-0.04}\) & \(1.24^{+0.03}_{-0.05}\) & \(1.4^{+0.5}_{-0.3}\) & \(0.08^{+0.04}_{-0.04}\) & \((9.7^{+0.7}_{-0.9})\times 10^{9}\,M_{\odot}\) & \(3.4^{+0.1}_{-0.1}\) & N/A & N/A \\
M RNI + O RNI + S & NFW DM & \(0.54^{+0.07}_{-0.03}\) & \(1.26^{+0.03}_{-0.05}\) & \(1.6^{+0.5}_{-0.5}\) & \(0.09^{+0.04}_{-0.04}\) & \((10.2^{+0.8}_{-0.7})\times 10^{9}\,M_{\odot}\) & \(3.2^{+0.1}_{-0.2}\) & N/A & \(0.06^{+0.07}_{-0.04}\) \\
M RNI + O RNI + S & Varying \(M/L\) & \(0.58^{+0.14}_{-0.06}\) & \(1.21^{+0.06}_{-0.06}\) & \(1.2^{+0.5}_{-0.5}\) & \(0.04^{+0.07}_{-0.08}\) & \((8.2^{+1.7}_{-1.7})\times 10^{9}\,M_{\odot}\) & \(4.7^{+0.8}_{-0.9}\) & \(3.1^{+0.2}_{-0.2}\) & N/A \\
M RNI + O RNI + S & Varying \(M/L\) + NFW DM & \(0.56^{+0.08}_{-0.05}\) & \(1.19^{+0.07}_{-0.07}\) & \(1.1^{+0.5}_{-0.5}\) & \(0.02^{+0.08}_{-0.08}\) & \((8.7^{+1.2}_{-1.7})\times 10^{9}\,M_{\odot}\) & \(6.5^{+0.5}_{-0.8}\) & \(1.5^{+0.6}_{-0.5}\) & \(0.36^{+0.14}_{-0.14}\) \\
M d=4 + O d=6 + S & Constant \(M/L\) & \(0.53^{+0.08}_{-0.02}\) & \(1.24^{+0.04}_{-0.04}\) & \(1.1^{+0.5}_{-0.3}\) & \(0.05^{+0.07}_{-0.04}\) & \((9.2^{+1.0}_{-1.0})\times 10^{9}\,M_{\odot}\) & \(3.4^{+0.1}_{-0.1}\) & N/A & N/A \\
M d=4 + O d=6 + S & NFW DM & \(0.53^{+0.05}_{-0.02}\) & \(1.26^{+0.03}_{-0.05}\) & \(1.23^{+0.03}_{-0.04}\) & \((9.6^{+0.6}_{-0.7})\times 10^{9}\,M_{\odot}\) & \(3.3^{+0.1}_{-0.1}\) & N/A & \(0.03^{+0.05}_{-0.02}\) \\
M d=4 + O d=6 + S & Varying \(M/L\) & \(0.59^{+0.15}_{-0.07}\) & \(1.19^{+0.07}_{-0.07}\) & \(0.9^{+0.6}_{-0.04}\) & \(0.42^{+0.08}_{-0.08}\) & \((7.7^{+1.2}_{-1.6})\times 10^{9}\,M_{\odot}\) & \(5.0^{+0.8}_{-0.8}\) & \(3.1^{+0.2}_{-0.2}\) & N/A \\
M d=4 + O d=6 + S & Varying \(M/L\) + NFW DM & \(0.57^{+0.11}_{-0.06}\) & \(1.14^{+0.07}_{-0.06}\) & \(0.9^{+0.4}_{-0.4}\) & \(0.43^{+0.09}_{-0.09}\) & \((8.3^{+1.3}_{-1.5})\times 10^{9}\,M_{\odot}\) & \(6.5^{+0.5}_{-0.9}\) & \(2.0^{+0.6}_{-0.6}\) & \(0.24^{+0.14}_{-0.12}\) \\
M d=6 + O d=4 + S & Constant \(M/L\) & \(0.67^{+0.17}_{-0.09}\) & \(1.22^{+0.06}_{-0.06}\) & \(2.2^{+1.5}_{-0.7}\) & \(0.27^{+0.07}_{-0.06}\) & \((8.9^{+1.4}_{-1.4})\times 10^{9}\,M_{\odot}\) & \(3.5^{+0.1}_{-0.1}\) & N/A \\
M d=6 + O d=4 + S & NFW DM & \(0.54^{+0.04}_{-0.03}\) & \(1.26^{+0.02}_{-0.04}\) & \(1.2^{+0.3}_{-0.2}\) & \(0.05^{+0.04}_{-0.04}\) & \((9.6^{+0.6}_{-0.8})\times 10^{9}\,M_{\odot}\) & \(3.3^{+0.1}_{-0.1}\) & N/A & \(0.03^{+0.05}_{-0.02}\) \\
M d=6 + O d=4 + S & Varying \(M/L\) & \(0.68^{+0.19}_{-0.12}\) & \(1.17^{+0.07}_{-0.07}\) & \(2.0^{+1.2}_{-0.7}\) & \(0.14^{+0.08}_{-0.09}\) & \((8.7^{+1.6}_{-1.7})\times 10^{9}\,M_{\odot}\) & \(4.5^{+0.9}_{-0.9}\) & \(3.2^{+0.2}_{-0.2}\) & N/A \\
M d=6 + O d=4 + S & Varying \(M/L\) + NFW DM & \(0.66^{+0.13}_{-0.11}\) & \(1.16^{+0.09}_{-0.07}\) & \(1.7^{+1.1}_{-0.5}\) & \(0.08^{+0.09}_{-0.1}\) & \((8.9^{+1.4}_{-1.6})\times 10^{9}\,M_{\odot}\) & \(6.2^{+0 \\
\hline
\end{tabular}
\end{table}

Gebhardt
However, we can still conclude that a stellar \(M/L\) of 3.7, after including dark matter, will lead to a total \(M/L\) that is slightly above the range of what we have determined. Likewise, a stellar \(M/L\) of 2.8, combined with a dark matter fraction of \(\sim 15\) will result in a total \(M/L\) of \(\sim 3.2\), which is at the lower end of what we measure. In this work we measure the stellar distribution within the influence of the AGN and find the stellar profile to be flatter than in previous work. The difference to the enclosed mass within 5''is close to 1%, implying that this does not significantly modify the gravitational potential. However, M87 is a large galaxy where the AGN covers only a small fraction of the stellar distribution relative to the size of the black hole. For other galaxies, such as NGC 4151, one of the key uncertainties in the black hole mass determination is uncertainty on the cuspiness of the inner stellar distribution (Roberts et al., 2021). Applying this technique to that case or similar cases could significantly reduce the uncertainties in the final black hole mass measurement. ### Dark Matter In this work we assume a NFW dark matter halo with break radius equal to 20kpc and find, in our most general models, a preference for a dark matter fraction within one effective radius of around \(f_{\rm dm}(<R_{\rm e})\approx 0.2\). This closely agrees with a result from Murphy et al. (2011) which determined the dark matter fraction within one effective radius to be approximately 17%. Our results, however, are strongly model and data dependent. In the less general models we consistently find a preference for no or very little dark matter (see Fig. 16). Additionally, on their own, the MUSE, OASIS, and SAURON data do not have a preference for a dark matter halo (Fig. 11). This data only goes out to 1.2 kpc, so we do not expect to very strongly constrain the dark halo. This highlights the importance of including large scale kinematic data for constraining information on the dark matter halo. ### Anisotropy Profile Constraints In Fig. 14 we show 1000 anisotropy profiles randomly chosen from the posterior of the NFW DM + Varying \(M/L\) model using the RNI Figure 14: Plot of 1000 anisotropy profiles randomly sampled from the MCMC chain with MUSE RNI + OASIS RNI + SAURON and colored according to their supermassive black hole mass. The best fit anisotropy profile from Gebhardt et al. (2011) is shown in black. We find strong evidence for a radially increasing anisotropy ratio while varying strongly due to the mass anisotropy degeneracy. Figure 12: Plot of the 1 and 3\(\sigma\) confidence intervals for black hole mass and mass to light ratio within a sphere of one half light radius for each of the scenarios shown in Fig. 16. Figure 13: Plot of 1000 dispersion profiles randomly sampled from the MCMC chain with MUSE RNI + OASIS RNI + SAURON and colored according to their supermassive black hole mass. This is compared with a set of binned data points combining the MUSE RNI, OASIS RNI, and SAURON data. We also show a model with no black hole mass. It is technically possible to fit the data with no black hole mass if there is a highly radial anisotropy in the centre. To ensure this does not happen, we fix \(r_{a}=1\) and find the best fit model enforcing \(M_{\rm BH}=0\). spectra. We find a remarkable agreement with previous work. The profiles all display the radially increasing behavior expected of slow rotators. 
We also visually see that the profiles tend to transition from constant on the right-hand side to lower values near 10''. This is close to the size of the core of M87 (5.66'' according to Lauer et al. (2007)), and agrees with the results of previous studies demonstrating that the size of the core in slow rotators is close to the radius at which the velocity anisotropy ratio becomes tangential (Thomas et al., 2014). Another observation we should make is the strong correlation between the black hole mass and the anisotropy profile in Fig. 14. This clearly demonstrates the strong role that the mass-anisotropy degeneracy plays in this analysis. One important comment is that these results depend on our use of physically motivated priors. We see in Fig. 16 that in many cases the posteriors for \((\sigma_{r}/\sigma_{t})_{0}\) and \((\sigma_{r}/\sigma_{t})_{\infty}\) run into the imposed boundaries and thus are unable to explore the full range of parameter space capable of reproducing the data.

## 7 Conclusion and Future Prospects

We have studied the galaxy M87 using stellar kinematics from SAURON, OASIS, and MUSE using the code JamPy, and our primary conclusions are as follows:

* The stellar distribution of M87 can be measured directly within the influence of the AGN without needing to resort to extrapolating fitted profiles from outside of the AGN. This is done by directly measuring the fraction of the spectral flux due to stars during the kinematic extraction. The shape of the stellar distribution profile used in previous studies (Kormendy et al., 2009) overestimates the stellar density in the central region of M87 by a factor of 2 (see Fig. 7).
* For galaxies with an AGN, the PSF can be accurately measured in integral field spectroscopy by measuring the continuum flux during the kinematic extraction (see Fig. 6 and Table 1). This is due to the fact that the AGN spectral contribution is thought to be smooth, and hence is well approximated by additive polynomials. We measure the FWHM of the OASIS PSF to be 0.561'' and the FWHM of the MUSE PSF to be 0.049''. These are consistent with what we expect from the seeing for OASIS and the AO capabilities of MUSE.
* We use JAM dynamical models of the kinematics in a Bayesian fashion. We find a preferred black hole mass of \((8.96^{+1.04}_{-1.72})\times 10^{9}\,M_{\odot}\). This range is consistent with the EHT measurement and previous stellar dynamical models of M87, though with a distinct preference for a larger black hole mass. Our analysis also highlights the fact that, even with excellent data, the derived black hole mass is sensitive to a variety of assumptions on both the kinematic extraction and the M/L variation. The resulting uncertainties, when accounting for these systematics, are significantly larger than usually adopted.
* We find a strong preference for a radially decreasing \(M^{*}/L\) ratio at the lower end of what is found in Sarzi et al. (2018). This has the effect of expanding the range of allowed black hole masses to lower values.
* We measure the anisotropy profile of M87 assuming a new flexible analytic parametrization for the anisotropy, which is a logistic function of logarithmic radius, and find a strong preference for a radial increase. This also clearly shows the mass-anisotropy degeneracy, which strongly contributes to the uncertainty in the black hole mass.
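The logistic-in-log-radius anisotropy parametrization referred to in the conclusions can be written, for illustration, as below. The exact equations 12-14 are not reproduced in this excerpt, so the specific form (and the convention that \(\alpha\) sets the transition width in dex) is an assumed realisation using the four anisotropy parameters of Table 3.

```python
import numpy as np

def sigma_ratio(r, ratio_0, ratio_inf, r_a, lg_alpha):
    """Assumed logistic transition of sigma_r/sigma_t with log10(radius).

    ratio_0 and ratio_inf are the inner and outer limits, r_a the transition
    radius (arcsec) and alpha the transition width in dex; this is an
    illustrative form, not necessarily identical to equations 12-14.
    """
    alpha = 10.0 ** lg_alpha
    x = (np.log10(r) - np.log10(r_a)) / alpha
    return ratio_0 + (ratio_inf - ratio_0) / (1.0 + np.exp(-x))

r = np.logspace(-1, 2, 200)                       # 0.1 to 100 arcsec
# posterior medians of the most general RNI model (Table 4)
profile = sigma_ratio(r, 0.56, 1.19, 1.1, 0.02)
```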
We conclude that, contrary to what is sometimes assumed, one can obtain stringent constraints on both black hole masses and on the anisotropy profile, using the Jeans equations, by combining priors on the anisotropy, which is now well-understood in galaxy centres, with realistic parametrizations for the total density. There remain many important questions about M87 that are well suited to be studied in the near future. Recent work has suggested that improved modelling of the gas disk is able to bring the supermassive black hole measurements from gas dynamics into agreement with those from stellar kinematics and the EHT (Jeter et al., 2019; Jeter & Broderick, 2021). This, however, relies on resolving details of the gas kinematics within the innermost arcsecond of the galaxy. This has been challenging to perform as previous measurements have made use of H\(\alpha\) and [NII] lines which become very broad and overlapping in the centre of M87 and hence are challenging to decompose. Additionally, we find evidence for multiple gas kinematic components which further complicates the extraction and identification of the gas kinematics in the central regions of the galaxy. Furthermore, as this work shows, future studies of the supermassive black hole mass of M87 using stellar kinematics will also require detailed studying of systematic effects in order to produce reliable black hole mass results. High quality data for these tasks may not be far off. There is a cycle I WST proposal (2228, PI: Jonelle Walsh) to measure the central supermassive black hole of M87 using NIRSpec. This will cover a wavelength range including the CO bandhead that is similar to the wavelength range covered in Gebhardt et al. (2011) though it will be unaffected by skylines and will have much higher spatial resolution and signal to noise. This will provide the most detailed view of M87's inner kinematics to date. ## Acknowledgements This research is based on observations collected at the European Southern Observatory under ESO program 0103.B-0581. This research is based on observations obtained at the Canada-France-Hawaii Telescope (CFHT) which is operated by the National Research Council of Canada, the Institut National des Sciences de l'Univers of the Centre National de la Recherche Scientifique of France, and the University of Hawaii. This research is based on observations made with the NASA/ESA Hubble Space Telescope Figure 15: Plot of 1000 M/L profiles randomly sampled from the MCMC chain with MUSE RNI + OASIS RNI + SAURON and colored according to their supermassive black hole mass. This is compared with range of M/L variations assuming Kroupa IMF from Sarzi et al. (2018) which is shown in gray and outlined with black lines. Our data strongly prefers an increasing M/L variation towards the centre of the galaxy consistent with that in Sarzi et al. (2018). obtained from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555. These observations are associated with program GO-9401. Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS Web Site is [http://www.sdss.org/](http://www.sdss.org/). 
The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max-Planck-Institute for Astronomy (MPIA), the Max-Planck-Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington. This research has made use of the NASA/IPAC Extragalactic Database (NED), which is funded by the National Aeronautics and Space Administration and operated by the California Institute of Technology. This research has made use of NASA's Astrophysics Data System Bibliographic Services. This research made use of Montage. It is funded by the National Science Foundation under Grant Number ACI-1440620, and was previously funded by the National Figure 16: Corner plots for the joint MUSE RNI + OASIS RNI + SAURON data for each of our four assumed models. Starting from the top left and going clockwise the models are: Varying M/L + NFW DM, Varying M/L without NFW DM, Constant M/L without NFW DM, and constant M/L with NFW DM. Aeronautics and Space Administration's Earth Science Technology Office, Computation Technologies Project, under Cooperative Agreement Number NCC5-626 between NASA and the California Institute of Technology. This research made use of Jupyter (Perez & Granger, 2007; Kluyver et al., 2016), NumPy (Harris et al., 2020), SciPy (Virtanen et al., 2020), Matplotlib (Hunter, 2007), Astropy, a community-developed core Python package for astronomy (Astropy Collaboration et al., 2013, 2018), emcee (Forcman-Mackey et al., 2013), Schwimmbad (Price-Whelan & Foreman-Mackey, 2017). D.A.S. is supported by a STFC/UKRI doctoral studentship. ## Data Availability The MUSE integral field data underlying this article is publicly available through the ESO science archive facility ([http://archive.eso.org/cms.html](http://archive.eso.org/cms.html)). The SAURON integral field data is available on the ATLAS\({}^{3}\)b website ([https://www-astro.physics.ox.ac.uk/atlas3d/](https://www-astro.physics.ox.ac.uk/atlas3d/)). The HST ACS/WFC F850LP image is publicly available through the Mikulski Archive for Space Telescopes ([https://archive.stsci.edu/](https://archive.stsci.edu/)). The r-band SDSS image is available from [https://drl2.sdss.org/](https://drl2.sdss.org/) We also include the OASIS RNI degree 1 and MUSE RNI degree 1 kinematics in the online supplementary materials.
2309.17141
Narrow and ultra-narrow transitions in highly charged Xe ions as probes of fifth forces
Optical frequency metrology in atoms and ions can probe hypothetical fifth-forces between electrons and neutrons by sensing minute perturbations of the electronic wave function induced by them. A generalized King plot has been proposed to distinguish them from possible Standard Model effects arising from, e.g., finite nuclear size and electronic correlations. Additional isotopes and transitions are required for this approach. Xenon is an excellent candidate, with seven stable isotopes with zero nuclear spin, however it has no known visible ground-state transitions for high resolution spectroscopy. To address this, we have found and measured twelve magnetic-dipole lines in its highly charged ions and theoretically studied their sensitivity to fifth-forces as well as the suppression of spurious higher-order Standard Model effects. Moreover, we identified at 764.8753(16) nm a E2-type ground-state transition with 500 s excited state lifetime as a potential clock candidate further enhancing our proposed scheme.
Nils-Holger Rehbehn, Michael K. Rosner, Julian C. Berengut, Piet O. Schmidt, Thomas Pfeifer, Ming Feng Gu, José R. Crespo López-Urrutia
2023-09-29T11:12:12Z
http://arxiv.org/abs/2309.17141v1
# Narrow and ultra-narrow transitions in highly charged Xe ions as probes of fifth forces

###### Abstract

Optical frequency metrology in atoms and ions can probe hypothetical fifth-forces between electrons and neutrons by sensing minute perturbations of the electronic wave function induced by them. A generalized King plot has been proposed to distinguish them from possible Standard Model effects arising from, e.g., finite nuclear size and electronic correlations. Additional isotopes and transitions are required for this approach. Xenon is an excellent candidate, with seven stable isotopes with zero nuclear spin; however, it has no known visible ground-state transitions for high-resolution spectroscopy. To address this, we have found and measured twelve magnetic-dipole lines in its highly charged ions and theoretically studied their sensitivity to fifth forces as well as the suppression of spurious higher-order Standard Model effects. Moreover, we identified at 764.8753(16) nm an E2-type ground-state transition with a 500 s excited-state lifetime as a potential clock candidate, further enhancing our proposed scheme.

Indirect evidence from galactic rotation, gravitational lensing and cosmological evolution suggests the existence of dark matter (DM) [1; 2]. Its constituents, by coupling to Standard Model (SM) particles, could also influence the neutrino-mass hierarchy [3] and explain open physics questions. Additional fields could cause a fifth-force coupling of electrons with neutrons, inducing small but measurable effects [4; 5; 6] in atomic systems. In an electronic transition, sensitivity to a fifth force arises because the overlap with the nucleus changes between the ground and excited state, reflecting interactions as small as a fraction of the linewidth. However, since the transition energies cannot be calculated accurately, isotope shift spectroscopy is employed. In the classical King plot (KP) method, two transitions in at least three different isotope pairs are used. Plotting the isotope shifts scaled by the nuclear-mass parameter eliminates the charge radius, typically leading to a linear behavior [7] from which atomic constants characterising atomic recoil (mass shift) and the overlap of the electronic wavefunction with the nucleus (field shift) can be derived. Optical frequency metrology has reduced uncertainties in the determination of transition energies of forbidden transitions by orders of magnitude, making KP methods far more sensitive. On top of such isotope shifts (IS), hypothetical fifth forces would add minute perturbations causing a deviation from linearity [5; 6; 8; 9; 10; 11]. However, unknown SM effects of higher order [6; 8] (sometimes dubbed 'spurions') could also induce sizable non-linearities that are hard to distinguish from those of the hypothetical forces. A recently devised generalized King plot (GKP) [12; 13] overcomes this by using more transitions and isotope pairs to build a set of linear equations determining the higher-order effects, and even dispensing with the need for exact nuclear masses ('no-mass GKP'). Recent dedicated experiments in ytterbium measured a deviation from King linearity [14], which has since been confirmed using other transitions [15; 16; 17]. The most likely reason for the deviation is changes in the nuclear deformation of Yb between isotopes [18]; however, the analysis shows that a second, yet unidentified source of nonlinearity is also present.
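The classical King-plot construction described above is easy to illustrate with synthetic numbers: with only mass and field shifts present, the mass-scaled ("modified") isotope shifts of any two transitions fall on a straight line. In the sketch below, the electronic constants and charge-radius differences are made up for the purpose of the demonstration, and the isotope masses are only approximate.

```python
import numpy as np

# Approximate atomic masses (u) of the stable spin-zero Xe isotopes
masses = {124: 123.906, 126: 125.904, 128: 127.904, 130: 129.904,
          132: 131.904, 134: 133.905, 136: 135.907}
pairs = [(136, a) for a in (124, 126, 128, 130, 132, 134)]
mu = np.array([1.0 / masses[a] - 1.0 / masses[b] for a, b in pairs])   # mass parameter
dr2 = np.array([0.30, 0.26, 0.21, 0.16, 0.10, 0.05])                   # <r^2> differences (made up, fm^2)

K1, F1 = 2.0e3, -1.5e3          # made-up mass- and field-shift constants, transition 1
K2, F2 = 1.1e3, -4.0e3          # made-up constants, transition 2
nu1 = K1 * mu + F1 * dr2        # isotope shifts (GHz) of transition 1
nu2 = K2 * mu + F2 * dr2

m1, m2 = nu1 / mu, nu2 / mu     # modified isotope shifts: the nuclear factor is now common
slope, offset = np.polyfit(m1, m2, 1)
print("residuals from King linearity:", m2 - (slope * m1 + offset))    # numerically ~0
```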
In contrast, a measurement in calcium [19], where higher-order effects are expected to be smaller, was consistent with King linearity. To further enhance the GKP sensitivity, as many transitions and even isotopes as possible are sought. An ideal candidate is xenon (\(Z=54\)). It has seven natural zero-nuclear-spin isotopes (124, 126, 128, 130, 132, 134, 136), and four more radio-isotopes (118, 120, 122 and 138) with lifetimes longer than minutes. Its mass reduces Doppler shifts in comparison with lighter elements, and its nuclear charge magnifies relativistic and QED effects [20], which is in itself a key field of research [21]. In this Letter, we find twelve xenon ground-state, magnetic dipole (M1) transitions in the optical region, with wavelength uncertainties on the order of 0.1 pm, and one optical electric quadrupole (E2) clock transition in charge states Xe\({}^{9+}\) through Xe\({}^{17+}\). We have calculated and evaluated theoretical King plots to find which combination of transitions leads to the highest possible sensitivity to a hypothetical fifth force between neutrons and electrons.

Highly charged ions (HCI) such as those studied here are very well suited for high-precision experiments [22], since their very low polarizability suppresses effects of external electromagnetic perturbations. Moreover, the reduced number of bound electrons reduces their theoretical complexity. Recent experimental developments like sympathetic cooling [23], application of quantum logic spectroscopy [24] to HCI [25], and algorithmic cooling [26] have made clocks based on HCI with a sub-Hz uncertainty possible [27]. They will help extend GKP applications into beyond-the-SM (BSM) parameter regions not yet constrained by scattering experiments [28; 29; 30; 31; 32] and fifth-force studies [33; 34; 35; 36; 37; 38].

We produced the ions of interest with an electron beam ion trap (EBIT) [40; 41], the Heidelberg EBIT (HD-EBIT) [42], from a differentially pumped atomic beam of Xe interacting with electrons at selected energies. For producing the HCI of interest Xe\({}^{9+}\) through Xe\({}^{33+}\), we scanned the electron beam energy in the range 100-2500 eV in 10-eV increments. Every time the ionization potential of a given charge state is surpassed, the next higher one appears in the trap, and with it a different spectrum. In each cycle, we kept these ions trapped for 60 s, and dumped them at the end by briefly inverting the axial trapping potential set by the trap electrodes. This removes impurity ions that slowly accumulate due to evaporation of barium and tungsten from the electron-gun cathode. Electron-impact excitation populates the upper levels of the observed transitions both directly and through cascades. The electron and ion density conditions in the EBIT are such that optical magnetic-dipole transitions with Einstein coefficients higher than \(\approx 10\) s\({}^{-1}\) can be measured [43; 44; 45; 46; 47]. For this purpose, a set of in-vacuo lenses projects an image of the cylindrical ion cloud through a quartz vacuum window. This intermediate image is rotated by 90 degrees by a periscope and relayed by two lenses to the entrance slit of an optical spectrometer, as in Refs. [43; 44]. In the present work, we used a Czerny-Turner spectrometer with 2 m focal length [45; 46; 47] to record the wavelength range 250-800 nm, and calibrated it with hollow-cathode lamps of different elements.
In its focal plane, a CCD camera, cooled to \(-80^{\circ}\)C, took several images for averaging with an exposure time of 60 minutes each. Pixels showing high signals due to cosmic muons were identified and removed from the images. Stray-light background was also subtracted. After obtaining overview spectra with a 150 grooves/mm grating, we performed measurements at higher resolution using two gratings with 1800 and 3600 grooves/mm, respectively. The results are shown in Figure 1.

Figure 1: Measured optical ground-state transitions of Xe\({}^{9+}\) through Xe\({}^{17+}\). The Zeeman structure (arrows) was fitted based on line identification with fac calculations. For Xe\({}^{10+}\) the Grotrian diagram has been expanded. Dotted line: calculated E2 transition; dashed line: calculated M1 transition (see Supplemental Materials [URL will be inserted by publisher] for details and further Grotrian diagrams).

For identification, we calculated for each ion the electronic structure and transition rates with the Flexible Atomic Code (fac) [48] and ambit [39]. The advantage of fac is its calculation speed, while ambit is used for its more precise results. Since the 8-T field of the EBIT separates the Zeeman components to a resolvable extent, we fitted for each line its centroid, experimental linewidth, the g-factors of the upper and lower state, \(\pi\) and \(\sigma\) amplitudes, and compared the results with theory. We corrected the identification of the 436.2 nm line in Ref. [49] from Xe\({}^{18+}\) to Xe\({}^{17+}\) based on the Zeeman splitting. Table 1 presents the key parameters of the thirteen discovered ground-state transitions. Wavelength uncertainties are the quadratic average of those from the Zeeman fits and the spectrometer dispersion. Results of ambit calculations are also given there: ab-initio wavelength \(\lambda\), Einstein coefficient \(A_{ki}\), and expected g-factors. Furthermore, we tabulate in the Supplemental Material [URL will be inserted by publisher] many other identified lines not involving the electronic ground-state. A key result of our search is the identification of an E2 (electric quadrupole) ground-state clock transition in Xe\({}^{10+}\) from the lowest excited state. By means of Ritz-Rydberg combinations (see Supplemental Materials [URL will be inserted by publisher]), we determine for the \(4d^{8}\;{}^{3}\!F_{2}\rightarrow{}^{3}\!F_{4}\) transition a vacuum wavelength of 764.8753(16) nm. We compare this with an ambit calculation yielding 735.6 nm and an E2 transition rate of \(A_{ki}=0.002\) s\({}^{-1}\), which is within the expected calculation uncertainty. The low transition rate suggests that this is a suitable candidate for an optical clock with a sensitivity to fifth forces that can be fully exploited due to its narrow linewidth of 0.3 mHz. We also have preliminary assignments for a few additional E2 candidates in other charge states in the Supplemental Materials [URL will be inserted by publisher], but a conclusive identification will require complementary measurements. Following an established approach, we checked the suitability of the found ground-state transitions for GKP studies.
For this, we added to the electromagnetic interaction potentials used for the fac calculations an additional term for the hypothetical fifth-force as a Yukawa \begin{table} \begin{tabular}{c c c c c c c c c c c} \hline \hline & & \multicolumn{4}{c}{Observed values} & \multicolumn{4}{c}{ambit calculations} \\ Ion & Transition & Energy (eV) & \(\lambda_{\text{vac}}\) (nm) & \(g_{\text{upper}}\) & \(g_{\text{lower}}\) & \(A_{ki}\) (s\({}^{-1}\)) & \(\lambda\) (nm) & \(g_{\text{upper}}\) & \(g_{\text{lower}}\) & Type \\ \hline Xe\({}^{9+}\) & \(4d^{9}\) & \({}^{2}\!D_{3/2}\) & - & \({}^{2}\!D_{5/2}\) & 2.07151156(28) & 598.520419(90) & 0.792(2) & 1.189(1) & 67.5 & 595.4 & 0.8 & 1.2 & M1 \\ Xe\({}^{10+}_{10}\) & \(4d^{8}\) & \({}^{3}\!F_{3}\) & - & \({}^{3}\!F_{4}\) & 1.8808627(14) & 659.18789(56) & 1.082(4) & 1.238(3) & 88.4 & 665.1 & 1.0833 & 1.2426 & M1 \\ Xe\({}^{10+}_{10}\) & \(4d^{8}\) & \({}^{3}\!F_{2}\) & - & \({}^{3}\!F_{4}\) & 1.6209726(34) & 764.8753(16)(Ritz) & & & 0.002 & 735.6 & 0.9792 & 1.2426 & **E2** \\ Xe\({}^{10+}_{10}\) & \(4d^{8}\) & \({}^{3}\!F_{4}\) & - & \({}^{3}\!F_{4}\) & 5.0567231(73) & 245.18684(36)(Ritz) & & & 67.1 & 242.9 & 1.0074 & 1.2426 & M1 \\ Xe\({}^{10+}_{10}\) & \(4d^{7}\) & \({}^{3}\!F_{0/2}\) & - & \({}^{3}\!F_{9/2}\) & 3.9393517(50) & 315.16451(45) & 1.105(37) & 1.321(35) & 107.7 & 313.3 & 1.0823 & 1.3054 & M1 \\ Xe\({}^{11+}_{10}\) & \(4d^{7}\) & \({}^{3}\!F_{2/2}\) & - & \({}^{3}\!F_{9/2}\) & 1.73135855(57) & 723.53667(37) & 1.242(9) & 1.306(7) & 82.8 & 727.4 & 1.2276 & 1.3054 & M1 \\ Xe\({}^{12+}_{12}\) & \(4d^{6}\) & \({}^{3}\!H_{4}\) & - & \({}^{3}\!D_{4}\) & 3.7252183(8) & 332.82398(79) & 1.058(32) & 1.455(31) & 110.8 & 328.6 & 1.0438 & 1.462 & M1 \\ Xe\({}^{15+}_{15}\) & \(4d^{3}\) & \({}^{4}\!H_{1/2}\) & - & \({}^{3}\!F_{3/2}\) & 4.2747781(74) & 290.03657(50) & 2.17(20) & 0.78(9) & 16.8 & 297.7 & 2.2137 & 0.47304 & M1 \\ Xe\({}^{15+}_{15}\) & \(4d^{3}\) & \({}^{4}\!H_{3/2}\) & - & \({}^{3}\!F_{3/2}\) & 4.0448609(24) & 306.52277(20) & 0.856(57) & 0.799(28) & 101.7 & 337.1 & 1.4357 & 0.47304 & M1 \\ Xe\({}^{16+}_{10}\) & \(4d^{2}\) & \({}^{1}\!D_{2}\) & - & \({}^{3}\!F_{2}\) & 4.4207651(39) & 280.45869(25) & 1.197(11) & 0.704(15) & 84.4 & 271.3 & 1.2027 & 0.70681 & M1 \\ Xe\({}^{16+}_{16}\) & \(4d^{2}\) & \({}^{3}\!F_{3}\) & - & \({}^{3}\!F_{2}\) & 2.24105833(38) & 553.23949(11) & 1.073(1) & 0.702(2) & 137.6 & 555.4 & 1.0833 & 0.70681 & M1 \\ Xe\({}^{16+}_{16}\) & \(4d^{2}\) & \({}^{3}\!F_{1}\) & - & \({}^{3}\!F_{2}\) & 4.9243614(65) & 251.77721(33)(Ritz) & & & 10.3 & 244.3 & 1.5 & 0.70681 & M1 \\ Xe\({}^{17+}_{17}\) & \(4d^{2}\) & \({}^{3}\!D_{5/2}\) & - & \({}^{3}\!D_{3/2}\) & 2.84178291(47) & 436.290179(73) & 1.184(2) & 0.785(3) & 123.7 & 433.8 & 1.2 & 0.8 & M1 \\ \end{tabular} \end{table} Table 1: Transitions in highly charged Xe ions: Measured energies, wavelengths (\(\lambda_{\text{vac}}\), vacuum) and g-factors of upper and lower energy levels obtained through fitting of the Zeeman-structure with their corresponding uncertainties. Theoretical transition probabilities \(A_{ki}\), ab-intio wavelengths and g-factors were calculated with ambit[39]. Figure 2: (a) Isotopic shift from a hypothetical fifth-force for the forbidden ground-state transitions in Xe\({}^{9+}\) through Xe\({}^{17+}\) for a fixed coupling constant \(y_{e}y_{n}=1\times 10^{-13}\) and varying mediator mass. (b) Theoretical King plot for the six possible Xe-isotope pairs using the Xe\({}^{11+}_{b}\), Xe\({}^{12+}_{b}\) pair as example. 
We set the coupling constant to \(y_{e}y_{n}=1\times 10^{-13}\) and the mediator mass to \(m_{\Phi}=1\times 10^{5}\frac{eV}{\varepsilon^{2}}\). The error bars represent 100 mHz measurement uncertainty modified by the mass parameter \(\mu_{a}\). potential [12]: \[V_{\Phi}(r)=y_{\mathrm{e}}y_{\mathrm{n}}(A-Z)\frac{\hbar c}{4\pi r}\exp\left(- \frac{c}{\hbar}m_{\Phi}r\right), \tag{1}\] where \(y_{\mathrm{e}}y_{\mathrm{n}}\) is the coupling strength between electrons and neutrons; \(A\) the nuclear mass, \(Z\) the nuclear charge; \(\hbar\) the reduced Planck constant, and \(c\) the speed of light. The range of the force is defined by the mass parameter \(m_{\Phi}\). Using an automatized script, we performed fac calculations varying the force range, its strength, as well as the nuclear charge radius and mass, and calculated the corresponding perturbation on the isotope-shifts. Between two isotopes, the total isotope shift (IS) \(\delta\nu_{i}^{a}=\nu_{A_{\mathrm{ref}}}-\nu_{A}\) is expressed by the following equation [12] for the transition \(i\) and the isotope pair \(a=(A_{\mathrm{ref}},A)\): \[\delta\nu_{i}^{a}=K_{i}\mu_{a}+F_{i}\delta(r^{2})_{a}+y_{\mathrm{e}}y_{\mathrm{ n}}X_{i}\gamma_{a}. \tag{2}\] Here, the first two terms represent the mass shift and the field shift, respectively. The former depends on the difference of the inverse of the isotope masses \(\mu_{a}=1/m_{A_{\mathrm{ref}}}-1/m_{A}\), and the latter on the mean-square charge radii difference \(\delta(r^{2})_{a}=(r^{2})_{A_{\mathrm{ref}}}-(r^{2})_{A}\). The third term is caused by the fifth force, and depends on the coupling strength \(y_{\mathrm{e}}y_{\mathrm{n}}\) between electrons and neutrons, the electronic constant \(X_{i}=X_{i}(V_{\Phi})\) due to the Yukawa potential, and a factor \(\gamma_{a}=(A_{\mathrm{ref}}-Z)-(A-Z)\) depending on the difference in number of neutrons. We plot the fifth-force shift versus varying mediator masses \(m_{\Phi}\) in Fig. 2(a). To study the effect of a fifth-force, we used a King plot [7], where the isotope shift in Eq. 2 is divided by the mass parameter \(\mu_{a}\), which then yields a common nuclear parameter \(\delta(r^{2})_{a}/\mu_{a}\) in the SM part. Between two modified transitions, the SM parts lead to a linear behavior, but a fifth-force would break it. For the six Xe-isotope pairs this is shown as theoretical prediction in Fig. 2(b). The mediator mass \(m_{\Phi}=1\times 10^{5}\frac{\mathrm{eV}}{\mathrm{c}^{2}}\) and the coupling parameter \(y_{\mathrm{e}}y_{\mathrm{n}}=10^{-13}\) are fixed at arbitrary, but theoretically possible values [6]. Mass uncertainties are neglected. The fifth-force shift in Fig. 2(a) is on the order of hundreds of Hz, but only a small fraction remains as a non-linearity due to the alignment of the SM linearity with the fifth-force contributions [6]. Although each pairing of transitions experiences a different non-linearity, similar fine-structure transitions share common-mode shifts, reducing the total sensitivity as in Refs. [13; 47]. Following these works, we quantified non-linearities in the King plot with the area NL spanned by the isotope pairs. Error propagation of the assumed measurement uncertainty on the isotope-shifts yielded its uncertainty \(\Delta\mathrm{NL}\). With this, we defined the resolution \(R\) as \(R=\mathrm{NL}/\Delta\mathrm{NL}\). A value \(R=1\) sets the lower bound of the coupling parameter \(y_{\mathrm{e}}y_{\mathrm{n}}\) that can be resolved, as shown for given transition pairs in Fig. 3. 
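Extending the toy construction shown earlier, the sketch below adds the fifth-force term of Eq. (2) and shows how it displaces the modified isotope shifts away from a common straight line. The electronic constants \(X_i\) used here are arbitrary placeholder numbers in convenient units; in practice they come from the fac calculations described above, and the quoted non-linearity area NL and its uncertainty would be computed from such residuals.

```python
import numpy as np

masses = {124: 123.906, 126: 125.904, 128: 127.904, 130: 129.904,
          132: 131.904, 134: 133.905, 136: 135.907}          # u, approximate
pairs = [(136, a) for a in (124, 126, 128, 130, 132, 134)]
mu = np.array([1.0 / masses[a] - 1.0 / masses[b] for a, b in pairs])
gamma = np.array([a - b for a, b in pairs])                   # neutron-number differences
dr2 = np.array([0.30, 0.26, 0.21, 0.16, 0.10, 0.05])          # made-up <r^2> differences (fm^2)

ye_yn = 1e-13                                                 # coupling strength, as in Fig. 2
K1, F1, X1 = 2.0e3, -1.5e3, 3.0e9                             # made-up constants; X_i chosen so
K2, F2, X2 = 1.1e3, -4.0e3, 8.0e9                             # that ye_yn * X_i * gamma is in GHz

def modified_shift(K, F, X):
    dnu = K * mu + F * dr2 + ye_yn * X * gamma                # Eq. (2)
    return dnu / mu                                           # divide by the mass parameter

m1, m2 = modified_shift(K1, F1, X1), modified_shift(K2, F2, X2)
slope, offset = np.polyfit(m1, m2, 1)
print("King-plot residuals with a fifth force:", m2 - (slope * m1 + offset))
```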
We assume a frequency uncertainty of \(\Delta\nu=100\,\mathrm{mHz}\) as recently achieved [25; 27] and discussed in Ref. [47]. Dashed lines in Fig. 3 depict King plots neglecting mass uncertainty and higher-order SM effects. Inclusion of the former in one of these pairs, e.g., Xe\({}^{12+}\), Xe\({}^{11+}_{\mathrm{e}}\), for isotopes 130 to 136 (relative mass uncertainties \(\approx 10^{-10}\)) and 124 to 128 (between \(10^{-8}\) and \(10^{-7}\)) [50] diminishes sensitivity by three orders of magnitude. Another order of magnitude is lost if nuclear deformation causes a second-order field shift of \(1\,\mathrm{kHz}\) (dash-dotted line) [51; 18] estimated by evaluating calculated higher-order shifts in other elements [8]. We can overcome these losses with a no-mass generalized King plot [13] (shown as solid line) by adding more transitions to expand the King plot into a higher dimension. This restores the sensitivity by three orders of magnitude, as shown in Fig. 3 and brings it to the level of the ytterbium GKP with an assumed frequency uncertainty of \(100\,\mathrm{mHz}\) from Ref. [13]. A different xenon pairing would improve the sensitivity around \(10^{5}\) eV. Full sensitivity is not recovered due to the error propagation of the four transitions needed. Note that either a reduction of the \(\nu\)-uncertainty down to \(5\) mHz, as already achieved by optical clocks [52], or a separate, sufficiently accurate mass measurement would lead to better sensitivity. If more higher-order SM contributions are expected, more transitions can be used to suppress their spurious effects. The seven stable even isotopes of Xe can tackle up to three such'spurious' by expanding the KP into five dimensions. With thirteen ground-state transitions, one can use various pairings to optimize sensitivity to NP. By contrast, other candidates such as calcium or ytterbium only have enough even isotopes to suppress one'spurion'. Figure 3: Sensitivity limit for \(y_{\mathrm{e}}y_{\mathrm{n}}\) for selected transition pairs with assumed frequency uncertainty of \(100\) mHz. Dashed lines: Classical King plot (KP) neglecting mass uncertainty. Calcium King plot from Solaro _et al._[19] with \(100\) mHz uncertainty included for comparison. Dash-dotted line: Inclusion of a second-order field shift of \(1\,\mathrm{kHz}\) and mass uncertainty. Solid lines: Predicted no-mass GKP for Yb [13] and for the present Xe, using four transitions.
2309.08061
Density Analysis for coupled forward-backward SDEs with non-Lipschitz drifts and Applications
We explore the existence of a continuous marginal law with respect to the Lebesgue measure for each component $(X,Y,Z)$ of the solution to coupled quadratic forward-backward stochastic differential equations (QFBSDEs) for which the drift coefficient of the forward component is either bounded and measurable or Hölder continuous. Our approach relies on a combination of the existence of a weak decoupling field (see \cite{Delarue2}), integration with respect to space-time local time (see \cite{Ein2006}), and the analysis of the backward Kolmogorov equation associated to the forward component along with an Itô-Tanaka trick (see \cite{FlanGubiPrio10}). The framework of this paper is beyond all existing papers on density analysis for Markovian BSDEs and constitutes a major refinement of the existing results. We also derive a comonotonicity theorem for the control variable in this framework, thus extending the works \cite{ChenKulWei05}, \cite{DosDos13}. As applications of our results, we first analyse the regularity of densities of solutions to coupled FBSDEs. In the second example, we consider regime-switching term structure interest rate models (see, e.g., \cite{ChenMaYin17}) for which the corresponding FBSDE has a discontinuous drift. Our results enable us, firstly, to study classical and Malliavin differentiability of the solutions of such models and, secondly, to establish the existence of densities of such solutions. Lastly, we consider a pricing and hedging problem of contingent claims on a non-tradable underlying whose dynamics are given by a regime-switching SDE (i.e., the drift coefficient is allowed to be discontinuous). We obtain a representation of the derivative hedge as the weak derivative of the indifference price function, thus extending the result in \cite{ArImDR10}.
Rhoss Likibi Pellat, Olivier Menoukeu Pamen
2023-09-14T23:23:13Z
http://arxiv.org/abs/2309.08061v2
# Density analysis for coupled forward-backward SDEs with non-Lipschitz drifts ###### Abstract. We explore the existence of a continuous marginal law with respect to the Lebesgue measure for each component \((X,Y,Z)\) of the solution to coupled quadratic forward-backward stochastic differential equations (QFBSDEs) for which the drift coefficient of the forward component is either bounded and measurable or Holder continuous. Our approach relies on a combination of the existence of a weak decoupling field, the integration with respect to space time local time (see [18]), the analysis of the backward Kolmogorov equation associated to the forward component along with an Ito-Tanaka trick (see [20]). The framework of this paper is beyond all existing papers on density analysis for Markovian BSDEs and constitutes a major refinement of the exisiting results. As an independent result, we also derive a comonotonicity theorem for the control variable in this frame and thus extending the works [10], [46]. Key words and phrases:Forward-Backward SDEs; Quasi-linear PDE; Non-smooth drifts; Regularity of Density; quadratic drivers; Integration with Local time; Malliavin calculus 2010 Mathematics Subject Classification: Primary: 60H10, 35K59, Secondary: 35K10, 60H07, 60H30 R. Likibi Pellat was supported by DAAD under the programme PhD-study at the African Institute for Mathematical Sciences, Ghana. O. Menoukeu Pamen acknowledges the funding provided by the Alexander von Humboldt Foundation, under the program financed by the German Federal Ministry of Education and Research entitled German Research Chair No 01DG15010. when the terminal value and the driver are both functionals of a diffusion process \(X\) which satisfy: \[X_{t}=x+\int_{0}^{t}b(s,X_{s})\mathrm{d}s+\int_{0}^{t}\sigma(s,X_{s})\mathrm{d}W _{s} \tag{1.2}\] with \(b,\sigma,\phi,f\) deterministic functions satisfying their standing Assumptions (X) (see [37, page 2823]). Assuming that \(f\) is of quadratic growth (in the sense of [29]) and twice continuous differentiable, the authors claim that the solution \((X,Y,Z)\) of the FBSDE (1.1)-(1.2) is twice Malliavin differentiable. They then use this fact to analyse the density of the control variable \(Z\). We do not however understand why the solution \(X\) of the forward (1.2) belongs to \(\mathbb{D}^{2,p}\) for \(p\geq 1\), under only Lipschitz assumption of its coefficients \(b\) and \(\sigma\). We believe that, some assumptions that guarantee the second order Malliavin differentiability of the system (1.1)-(1.2) are provided in [24, Theorem 4.1]. The first result on the analysis for coupled FBSDEs was recently obtained in [42]. As opposed to the traditional approach, the authors exploited the regularity properties of the solutions to the associated quasi-linear parabolic PDE to derive the existence, Gaussian-type bounds and the tail estimates of the law of each component \((X,Y,Z)\) solution to a scalar FBSDE, on the entire real line. We do, however, point out an inaccuracy in their statement Theorem 3.1 about the density estimates for the law of the forward component \(X_{t}\). An inspection of its proof suggests that additional assumptions are required to obtain the sought estimates. Indeed, the Lamperti transformation utilized by the authors (see their subsection 2.6) to obtain the latter requires more regularity on the coefficients. 
For example, the drift function must be at least Lipschitz continuous in all variables, whereas the diffusion coefficient must be non-degenerate and admit a finite derivative in time and a bounded second order derivative in space. Let us also mention that, the analysis of densities for BSDEs driven by Gaussian processes (fractional Brownian motion in particular) was addressed in [19] whereas the multidimensional setting was treated in [11]. While the globally Lipschitz condition is a necessary requirement to show the existence of the density of \(Y\) in the above works, we do not need this assumption in the present paper. We use the decoupling field along with the regularisation effect of the Brownian motion to show that the solution to the forward SDE has a density, despite the roughness of the drift coefficient(and even the discontinuity of the drift). The decoupling field once more enables us to transfer the smoothing properties from the SDEs to the BSDEs. We consider the following coupled forward-backward system \[\begin{cases}X_{t}=x+\int_{0}^{t}b(s,X_{s},Y_{s},Z_{s})\mathrm{d}s+\int_{0}^{ t}\sigma(s,X_{s})\mathrm{d}W_{s},\\ Y_{t}=\phi(X_{T})+\int_{t}^{T}f(s,X_{s},Y_{s},Z_{s})\mathrm{d}s-\int_{t}^{T}Z_ {s}\mathrm{d}W_{s},\end{cases} \tag{1.3}\] assuming that the coefficients are Holder continuous and that the diffusion \(\sigma\) is regular enough (see Assumption 5.1, therein), we first improve the result on the well posedness of forward-backward SDEs established in Section 3 by proving that the system (1.3) admits a unique solution on any arbitrarily prescribed Brownian set-up even when the forward component lies in a multiple dimension frame. Moreover, the solution process \((X,Y,Z)\) has a continuous version which is differentiable with respect to the initial state of the forward component and also Malliavin regular. As a matter of fact, the latter leads to the study of existence and regularity of densities of the marginal laws of each component solution to the system (1.3). We will briefly describe our strategy below. The circular dependence of solutions of both equations and the roughness of the coefficients involved, pose a significant challenge for solving FBSDE (1.3). Via the weak decoupling field argument obtained in Section 3, we deduce that the forward equation in (1.3) reduces to the following SDE: \[X_{t}=x+\int_{0}^{t}\tilde{b}(s,X_{s})\mathrm{d}s+\int_{0}^{t}\sigma(s,X_{s}) \mathrm{d}W_{s}, \tag{1.4}\] \(\tilde{b}(t,x):=b(t,x,v(t,x),\nabla_{x}v(t,x))\) and \(v\) is solution to the associated PDE (3.2). Unfortunately, the transformed drift \(\tilde{b}\) is only \(C^{\theta}_{b}\) with \(\theta\in(0,1)\), thus non differentiable and does not satisfy the standard requirement of most of the scholars. Hence, locating certain results in the literature that deal with the differentiability (Malliavin or classical sense) of \(X\) solution to (1.4) as well as the existence and smoothness of its density, are not straightforward. To overcome this issue of roughness of the drift, we use a version of the Ito-Tanaka trick (or Zvonkin transform) developed in [20, 21] to establish a one to one correspondence between the solution to (1.4) and to the solution to the following SDE \[\tilde{X}_{t}=\zeta+\int_{0}^{t}\tilde{b}_{1}(s,\tilde{X}_{s})\mathrm{d}s+\int_ {0}^{t}\tilde{\sigma}_{1}(s,\tilde{X}_{s})\mathrm{d}W_{s}. 
\tag{1.5}\] Observe that the above equation has more regular coefficients and therefore the solution \(\tilde{X}_{t}\) admits an absolutely continuous density with respect to the Lebesgue measure. The latter property is then transferred to the solution \(X\) of the SDE (1.4) via the above correspondence between both solutions. Therefore the density property can finally be transferred to the \((Y,Z)\) solution to the backward SDE in (1.3), thanks to the non-linear Feynman-Kac relation (3.3); a schematic numerical illustration of this strategy is given at the end of this introduction. Although this technique was used in [43] and [47] to establish the existence of densities of solutions to SDEs when the drift coefficient is beyond the Cauchy-Lipschitz framework, as far as we know it has not yet been implemented in the context of fully coupled FBSDEs (1.3) (or even for decoupled FBSDEs) with rough coefficients.

Below is a summary of this paper's main contributions:

1. We extend the existence and uniqueness results in [45] (and [13]) to the case of more general quadratic drivers. In particular, in the one-dimensional case and when \(\sigma\) is \(\alpha\)-Holder continuous with \(\alpha\geq 1/2\), we show the existence of a unique strong solution of the QFBSDEs (see Theorem 3.4).
2. We derive a comonotonicity property of coupled QFBSDEs when the drift is merely bounded and measurable in the forward variable. More precisely, we provide conditions that guarantee the non-negativity of the control process \(Z\), a component of the solution to the coupled FBSDE (3.1) with discontinuous drift, thus extending the results derived in [10, 46].
3. We show that the solution \((X,Y)\) of the system (1.3) has an absolutely continuous distribution with respect to the Lebesgue measure. We also provide Gaussian-type bounds and tail estimates that are satisfied by the probability densities of \(X\) and \(Y\), respectively. Finally, we prove that the control variable \(Z\) has an absolutely continuous law with respect to the Lebesgue measure on \(\mathbb{R}\). The second-order Malliavin differentiability result (Theorem 5.13 therein), which is important in this analysis, is novel for coupled FBSDEs with quadratic drivers and Lipschitz continuous drifts.
4. We refine the results obtained in the third point by assuming that the drift coefficient is discontinuous and the diffusion is the identity. More explicitly, we obtain a density result for \(X\) and \(Y\) when the drift \(b\) and the driver \(g\) are only bounded and measurable in space and time.
5. As an application, we prove that the density of each component of the solution \((X,Y,Z)\) to (1.3) is Holder-continuous, when the diffusion coefficient is the identity and the drift is weakly differentiable in \(x\) and Lipschitz continuous in \(y\) and \(z\). To our knowledge, such a smoothing result on the density of solutions to coupled FBSDEs has not been derived before. As such, our result constitutes a major improvement of those in [1, 2, 37, 42]. Moreover, an explicit representation of the density of \(Y\) (resp. \(X\)) in the sense of [15] is also obtained, that is, if \(\rho_{Y_{t}}\) denotes the density of \(Y_{t}\), then for any point \(x_{0}\) in the interior of \(\mathrm{supp}(\rho_{Y_{t}})\) \[\rho_{Y_{t}}(x)=\rho_{Y_{t}}(x_{0})\exp\Big(-\int_{x_{0}}^{x}\mathbb{E}\Big[\delta\Big(\int_{0}^{T}D_{s}Y_{t}\mathrm{d}s\Big)^{-1}\big|Y_{t}=z\Big]\mathrm{d}z\Big),\,x\in\mathrm{supp}(\rho_{Y_{t}}).\]

The rest of the paper is outlined as follows: some notations and a non-exhaustive presentation on the theory of Malliavin calculus and density analysis are given in Section 2.
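As a brief aside before continuing with the outline, here is a crude numerical illustration of the reduction (1.3) to (1.4) described above: given a decoupling field \(v\) and its gradient, the forward component is simulated by an Euler-Maruyama scheme and the pair \((Y,Z)\) is read off through the Markovian representation \(Y_{t}=v(t,X_{t})\), \(Z_{t}=\sigma(t,X_{t})\nabla_{x}v(t,X_{t})\). The field, the coefficients and the parameters below are placeholders chosen for illustration only; in particular, \(v\) is not the solution of the PDE (3.2).

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, x0 = 1.0, 1000, 0.5
dt = T / n

def v(t, x):        # placeholder decoupling field (NOT the solution of the PDE (3.2))
    return np.tanh(x) * (1.0 + 0.1 * (T - t))

def v_x(t, x):      # its spatial derivative
    return (1.0 - np.tanh(x) ** 2) * (1.0 + 0.1 * (T - t))

def b(t, x, y, z):  # forward drift; bounded and merely measurable in x is allowed
    return np.sign(x) + 0.5 * y - 0.2 * z

def sigma(t, x):
    return 1.0 + 0.1 * np.cos(x)

X = np.empty(n + 1)
X[0] = x0
for k in range(n):
    t = k * dt
    y, z = v(t, X[k]), sigma(t, X[k]) * v_x(t, X[k])
    X[k + 1] = X[k] + b(t, X[k], y, z) * dt + sigma(t, X[k]) * np.sqrt(dt) * rng.standard_normal()

times = np.arange(n + 1) * dt
Y = v(times, X)                         # candidate backward component
Z = sigma(times, X) * v_x(times, X)     # candidate control process
```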
The Section 3 is devoted to the solvability of coupled FBSDE (3.1) via its corresponding quasi-linear parabolic PDE (3.2). In Section 4, we use the results obtained in Section 3 to derive a comonotonicity theorem for the system (3.1). Section 5 is the main core of this paper, we provide conditions under which the law of solutions to coupled quadratic FBSDEs with Holder continuous drift and with bounded and measurable drift admit respectively an absolute continuous density. At last, the Section 6 deals with some applications on the regularity of densities for solution to coupled FBSDEs. ## 2. Preliminaries In this section, we introduce the framework which will serve throughout this paper. We work on a filtered probability space \((\Omega,\mathfrak{F},\{\mathfrak{F}_{t}\}_{t\geq 0},\mathbb{P}),\) on which a \(d\)-dimensional Brownian motion \(\{W_{t}\}_{t\geq 0}\) is defined and \(\mathbb{F}=\{\mathfrak{F}_{t}\}_{t\geq 0}\) is the standard filtration generated by \(\{W_{t}\}_{t\geq 0}\) augmented by all \(\mathbb{P}\)-null sets of \(\mathfrak{F}.\) The expectations will be denoted by \(\mathbb{E}\) and \(\mathbb{E}^{\mathbb{Q}}\) respectively under \(\mathbb{P}\) and under any other probability measure \(\mathbb{Q}\). The maturity time \(T\in(0,\infty)\) is fixed and \(d\in\mathbb{N},\)\(p\in[2,\infty).\) ### Spaces of stochastic processes and spaces of functions Below we define some known space of stochastic processes and functions. * \(L^{p}\) denotes the space of \(\mathfrak{F}_{T}\)-adapted random variables \(X\) such that \(\|X\|_{L^{p}}^{p}:=\mathbb{E}|X|^{p}<\infty\) * \(L^{\infty}\) denotes the space of bounded random variables with norm \(\|X\|_{L^{\infty}}:=\operatorname{essup}_{\omega\in\Omega}|X(\omega)|.\) * \(\mathcal{S}^{p}(\mathbb{R}^{d})\) is the space of all adapted continuous \(\mathbb{R}^{d}\)-valued processes \(X\) such that \(\|X\|_{\mathcal{S}^{\infty}(\mathbb{R}^{d})}^{p}:=\mathbb{E}\sup_{t\in[0,T]}| X_{t}|^{p}<\infty\) * \(\mathcal{S}^{\infty}(\mathbb{R}^{d})\) the space of continuous \(\{\mathfrak{F}_{t}\}_{0\leq t\leq T}\)-adapted processes \(Y:\Omega\times[0,T]\to\mathbb{R}^{d}\) such that \(\|Y\|_{\infty}:=\operatorname{essup}_{\omega\in\Omega}\sup_{t\in[0,T]}|Y_{t}( \omega)|<\infty.\) * \(\mathcal{H}^{p}(\mathbb{R}^{d})\) stands for the space of all predictable \(\mathbb{R}^{d}\)-valued processes \(Z\) such that \(\|Z\|_{\mathcal{H}^{p}(\mathbb{R}^{d})}^{p}:=\mathbb{E}(\int_{0}^{T}|Z_{s}|^{ 2}\mathrm{d}s)^{p/2}<\infty.\) * \(C^{1,2}_{b}([0,T]\times\mathbb{R}^{d},\mathbb{R})\) is the usual space of maps \(v:[0,T]\times\mathbb{R}^{d}\to\mathbb{R}\) that are continuously differentiable in the first variable and twice continuously differentiable w.r.t to its second variable with bounded derivatives. 
* \(W^{1,2,d+1}_{\mathrm{loc}}([0,T[\times\mathbb{R}^{d},\mathbb{R})\) denotes the Sobolev space of classes of functions \(v:[0,T[\times\mathbb{R}^{d}\to\mathbb{R}\) such that \(|v|,|\partial_{t}v|,|\nabla_{x}v|,|\nabla_{x,x}^{2}v|\in L^{d+1}_{\mathrm{loc}}([0,T[\times\mathbb{R}^{d},\mathbb{R}).\)
* For fixed \(\beta\in(0,1]\), \(C^{0,\beta}\) is the space of Holder continuous functions \(\phi:[0,T]\times\mathbb{R}^{d}\to\mathbb{R}\) endowed with the norm
\[\|\phi\|_{C^{0,\beta}}=\sup_{t\in[0,T]}\sup_{x\neq y\in\mathbb{R}^{d}}\frac{|\phi(t,x)-\phi(t,y)|}{|x-y|^{\beta}}.\]

### The BMO space

The space \(\mathrm{BMO}(\mathbb{P})\) is the Banach space of continuous and square integrable martingales \(M\) starting at \(0\) under the probability measure \(\mathbb{P}\), endowed with the norm
\[\|M\|_{\mathrm{BMO}(\mathbb{P})}=\sup_{\tau\in[0,T]}\big{\|}\mathbb{E}\big{[}\langle M\rangle_{T}-\langle M\rangle_{\tau}\,\big{|}\,\mathfrak{F}_{\tau}\big{]}\big{\|}_{\infty}^{1/2},\]
where the supremum is taken over all stopping times \(\tau\in[0,T].\) Moreover, \(\mathcal{H}_{\mathrm{BMO}}\) stands for the space of \(\mathbb{R}^{d}\)-valued \(\mathcal{H}^{p}\)-integrable predictable processes \((Z_{t})_{t\in[0,T]}\) such that \(Z*W:=\int_{0}^{\cdot}Z_{s}\mathrm{d}W_{s}\) belongs to \(\mathrm{BMO}(\mathbb{P}).\) We define \(\|Z\|_{\mathcal{H}_{\mathrm{BMO}}}:=\|\int Z\mathrm{d}W\|_{\mathrm{BMO}(\mathbb{P})}\). For every martingale \(M\in\mathrm{BMO}(\mathbb{P})\), the Doleans-Dade exponential process \((\mathcal{E}(M)_{t})_{0\leq t\leq T}\) is defined by
\[\mathcal{E}(M)_{t}:=\exp\Big{(}M_{t}-\frac{1}{2}\langle M\rangle_{t}\Big{)},\]
and it satisfies \(\mathbb{E}(\mathcal{E}(M)_{T})=1\). Moreover, the measure \(\mathbb{Q}\) with density
\[\frac{\mathrm{d}\mathbb{Q}}{\mathrm{d}\mathbb{P}}\Big{|}_{\mathfrak{F}_{t}}:=\mathcal{E}(M)_{t},\]
defines a probability measure equivalent to \(\mathbb{P}\). We refer the reader to [28] for these and further properties of continuous \(\mathrm{BMO}(\mathbb{P})\) martingales.

### Basic notions on Malliavin calculus and density analysis

Here, we briefly introduce some elements of Malliavin calculus and of the analysis of densities. We refer the interested reader to [41] for a clear exposition of the subject. Let \(\mathcal{S}\) be the space of random variables \(\xi\) of the form
\[\xi=F\Big{(}(\int_{0}^{T}h_{s}^{1,i}\mathrm{d}W_{s}^{1})_{1\leq i\leq n},\cdots,(\int_{0}^{T}h_{s}^{d,i}\mathrm{d}W_{s}^{d})_{1\leq i\leq n}\Big{)},\]
where \(F\in C^{\infty}_{b}(\mathbb{R}^{n\times d}),h^{1},\ldots,h^{n}\in L^{2}([0,T];\mathbb{R}^{d})\) and \(n\in\mathbb{N}.\) For simplicity, we assume that all \(h^{j}\) are written as row vectors.
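As a simple illustration (a standard textbook example rather than anything specific to our setting), take \(d=n=1\) and \(h\equiv 1\): then \(\xi=\sin(W_{T})\) belongs to \(\mathcal{S}\) and, with the derivative operator \(D\) introduced just below,
\[D_{\theta}\xi=\cos(W_{T})\,h_{\theta}=\cos(W_{T}),\qquad 0\leq\theta\leq T.\]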
For \(\xi\in\mathcal{S},\) let \(D\) be the operator defined by \(D=(D^{1},\cdots,D^{d}):\mathcal{S}\to L^{2}(\Omega\times[0,T])^{d}\) by \[D^{i}_{\theta}\xi:=\sum_{j=1}^{n}\frac{\partial F}{\partial x_{i,j}}\Big{(} \int_{0}^{T}h^{1}_{t}\mathrm{d}W_{t},\cdots,\int_{0}^{T}h^{n}_{t}\mathrm{d}W_{ t}\Big{)}h^{i,j}_{\theta},0\leq\theta\leq T,1\leq i\leq d,\] and for \(k\in\mathbb{N}\) and \(\theta=(\theta_{1},\cdots,\theta_{k})\in[0,T]^{k}\) its \(k\)-fold iteration is given by \[D^{(k)}=D^{i_{1}}\cdots D^{i_{k}}_{1\leq i_{1},\cdots,i_{k}\leq d}.\] For \(k\in\mathbb{N}\) and \(p\geq 1,\) let \(\mathbb{D}^{k,p}\) be the closure of \(\mathcal{S}\) with respect to the norm \[\|\xi\|_{k,p}^{p}=\|\xi\|_{L^{p}}^{p}+\sum_{i=1}^{k}\||D^{(i)}\xi\|\|_{( \mathcal{H}^{p})^{i}}^{p}.\] The operator \(D^{(k)}\) is a closed linear operator on the space \(\mathbb{D}^{k,p}.\) Observe that if \(\xi\in\mathbb{D}^{1,2}\) is \(\mathfrak{F}_{t}\)-measurable then \(D_{\theta}\xi=0\) for \(\theta\in(t,T].\) Further denote \(\mathbb{D}^{k,\infty}=\bigcap_{p>1}\mathbb{D}^{k,p}.\) The following result is the chain rule for Malliavin derivative (see for example [41, Proposition 1.2.4]). **Proposition 2.1**.: _Let \(\varphi:\mathbb{R}^{d}\to\mathbb{R}\) be a globally Lipschitz continuous function. Suppose \(\xi=(\xi^{1},\cdots,\xi^{d})\) is a random vector whose components belong to the space \(\mathbb{D}^{1,2}\). Then \(\varphi(\xi)\in\mathbb{D}^{1,2}\) and there exists a random vector \(\varpi=(\varpi^{1},\cdots,\varpi^{d})\) bounded by the Lipschitz constant of \(\varphi\) such that \(D\varphi(\xi)=\sum_{i=1}^{d}\varpi^{i}D\xi^{i}\). In particular, if the gradient of \(\varphi\) denoted by \(\varphi^{\prime}\) exists, then \(\varpi=\varphi^{\prime}(\xi).\)_ For a stochastic process \(\mathcal{X}\in\mathrm{Dom}(\delta)\) (not necessarily adapted to \(\{\mathfrak{F}_{t}\}_{t\in[0,T]}\)) we define the Skorohod integral of \(\mathcal{X}\) by \[\delta(\mathcal{X})=\int_{0}^{T}\mathcal{X}_{t}\delta W_{t}, \tag{2.1}\] which is an anticipative stochastic integral. It turns out that all \(\{\mathfrak{F}_{t}\}_{t\in[0,T]}\)-adapted processes in \(L^{2}(\Omega\times[0,T])\) are in the domain of \(\delta\) and can be written as (2.1). Thus, the Skorohod integral can be viewed as an extension of the Ito integral to non-adapted integrand. The following equation known as the duality formula, gives the dual relation between the Malliavin derivative and the Skorohod integral: \[\mathbb{E}\Big{[}\xi\int_{0}^{T}\mathcal{X}_{t}\delta W_{t}\Big{]}=\mathbb{E} \Big{[}\int_{0}^{T}\mathcal{X}_{t}D_{t}\xi\mathrm{d}t\Big{]}\] for any \(\xi\in\mathbb{D}^{1,2}\) and \(\mathcal{X}\in\mathrm{Dom}(\delta)\). For \(F=(F^{1},\cdot,F^{d})\in(\mathbb{D}^{1,2})^{d},\) we define the Malliavin covariance \(d\times d\)-matrix \(\Gamma_{F}\) by \[\Gamma_{F}^{ij}:=\langle DF^{i},DF^{j}\rangle_{L^{2}[0,T]}.\] The random vector \(F\) is said to be non-degenerate if for all \(p\geq 1\) \[\mathbb{E}\Big{[}\Big{(}\det\Gamma_{F}\Big{)}^{-p}\Big{]}<\infty.\] For \(k\in\mathbb{N},p\geq 1,\) denote by \(\mathbb{L}_{k,p}(\mathbb{R}^{m})\) the set of \(\mathbb{R}^{d}\)-valued progressively measurable processes \(\zeta=(\zeta^{1},\cdots,\zeta^{m})\) on \([0,T]\times\Omega\) such that 1. For Lebesgue a.a. \(t\in[0,T],\zeta(t,\cdot)\in(\mathbb{D}^{k,p})^{m};\) 2. \((t,\omega)\to D^{k}_{s}\zeta(t,\omega)\in(L^{2}([0,T]^{1+k}))^{d\times m}\) admits a progressively measurable version; 3. 
\(\|\zeta\|_{k,p}^{p}=\||\zeta\||_{\mathcal{H}^{p}}^{p}+\sum_{i=1}^{k}\||D^{(i)} \zeta\||_{(\mathcal{H}^{p})^{1+i}}^{p}<\infty.\) For example if a process \(\zeta\in\mathbb{L}_{2,2}(\mathbb{R})\), we have \[\|\zeta\|_{\mathbb{L}_{1,2}}^{2} =\mathbb{E}\Big{[}\int_{0}^{T}|\zeta_{t}|^{2}\mathrm{d}t+\int_{0} ^{T}\int_{0}^{T}|D_{s}\zeta_{t}|^{2}\mathrm{d}s\mathrm{d}t\Big{]},\] \[\|\zeta\|_{\mathbb{L}_{2,2}}^{2} =\|\zeta\|_{\mathbb{L}_{1,2}}^{2}+\mathbb{E}\Big{[}\int_{0}^{T}\int _{0}^{T}\int_{0}^{T}|D_{s^{\prime}}D_{s}\zeta_{t}|^{2}\mathrm{d}s^{\prime} \mathrm{d}s\mathrm{d}t\Big{]}.\] Moreover, the following follows from Jensen's inequality \[\mathbb{E}\Big{[}\Big{(}\int_{0}^{T}\int_{0}^{T}|D_{s}\zeta_{t}|^{2}\mathrm{d}s \mathrm{d}t\Big{)}^{\frac{p}{2}}\Big{]}\leq T^{\frac{p}{2}-1}\int_{0}^{T}\|D_{s }\zeta_{t}\|_{\mathcal{H}^{p}}^{p}\mathrm{d}s,\forall p\geq 2,\] and provide an alternative way to control the term of the left-hand side. Below, we recall the criterion for absolute continuity of the law of a random variable \(F\) with respect to the Lebesgue measure. **Theorem 2.2** (Bouleau-Hirsch, Theorem 2.1.2 in [41]).: _Let \(F\) be in \(\mathbb{D}^{1,2}.\) Assume that \(\|DF\|_{\mathcal{H}}>0,\)\(\mathbb{P}\)-a.s.. Then \(F\) has a probability distribution which is absolutely continuous with respect to the Lebesgue measure on \(\mathbb{R}.\)_ In one dimension, it is possible to give a representation of the density in the sense of Nourdin and Viens. If \(\rho_{F}\) denotes the density of a random variable \(F\in\mathbb{D}^{1,2}\) we set \[h_{F}(x):=\int_{0}^{\infty}e^{-\theta}\mathbb{E}\left[\mathbb{E}^{*}[\langle DF,\bar{D}\bar{F}^{\theta}\rangle_{\mathcal{H}}]|F-\mathbb{E}(F)=x\right] \mathrm{d}\theta, \tag{2.2}\] where for any random variable \(X\), we define \(\tilde{X}^{\theta}:=X(e^{-\theta}W+\sqrt{1-e^{-2\theta}}W^{*})\) with \(W^{*}\) an independent copy of \(W\) given on a probability space \((\Omega^{*},\mathfrak{F}^{*},\mathbb{P}^{*}),\) and \(\mathbb{E}^{*}\) denotes the expectation under \(\mathbb{P}^{*}.\) We have **Corollary 2.3** (Nourdin-Viens' formula).: _Under assumptions in Theorem 2.2, the support of \(\rho_{F},\) denoted by \(\mathrm{supp}(\rho_{F})\) is a closed interval of \(\mathbb{R}\) and for all \(x\in\mathrm{supp}(\rho_{F})\)_ \[\rho_{F}(x):=\frac{\mathbb{E}(|F-\mathbb{E}(F)|)}{2h_{F}(x-\mathbb{E}(F))} \exp\Big{(}-\int_{0}^{x-\mathbb{E}(F)}\frac{\theta\mathrm{d}\theta}{h_{F}( \theta)}\Big{)}.\] We will end this section by recalling the following useful result that can be found respectively in [16]. **Theorem 2.4** (Theorem 2.4 in [16]).: _Let \(F\in\mathbb{D}^{1,2}\) be a random variable such that:_ \[0<l\leq\int_{0}^{\infty}D_{s}F\mathbb{E}\left[D_{s}F|\mathfrak{F}_{s}\right] \mathrm{d}s\leq L\quad\text{ a.s.}, \tag{2.3}\] _where, \(l\) and \(L\) are positive constants. Then, \(F\) possesses a density \(\rho_{F}\) with respect to the Lebesgue measure. Moreover, for almost all \(x\in\mathbb{R}\), the density satisfies_ \[\frac{\mathbb{E}|F-\mathbb{E}[F]|}{2L}\exp\Big{(}-\frac{(x-\mathbb{E}[F])^{2}} {2l}\Big{)}\leq\rho_{F}(x)\leq\frac{\mathbb{E}|F-\mathbb{E}[F]|}{2l}\exp \Big{(}-\frac{(x-\mathbb{E}[F])^{2}}{2L}\Big{)}. \tag{2.4}\] _Furthermore for all \(x>0\), the tail probabilities satisfy_ \[\mathbb{P}(F\geq x)\leq\exp\Big{(}-\frac{(x-\mathbb{E}[F])^{2}}{2L}\Big{)}\text { and }\mathbb{P}(F\leq-x)\leq\exp\Big{(}-\frac{(x+\mathbb{E}[F])^{2}}{2L}\Big{)} \tag{2.5}\] ## 3. 
Existence and Uniqueness of solution to quadratic FBSDEs Let \((\Omega,\mathfrak{F},\mathbb{F}=\{\mathfrak{F}_{s}\}_{s\geq 0},\mathbb{P},W_{s})\), be a stochastic basis, satisfying the usual conditions. In this section,we wish to show that the following FBSDE admits a unique solution. \[\begin{cases}X_{s}^{t,x}=x+\int_{t}^{s}b(r,X_{r}^{t,x},Y_{r}^{t,x},Z_{r}^{t,x}) \mathrm{d}r+\int_{t}^{s}\sigma(r,X_{r}^{t,x},Y_{r}^{t,x})\mathrm{d}W_{r},\\ Y_{s}^{t,x}=\phi(X_{T}^{t,x})+\int_{s}^{T}f(r,X_{r}^{t,x},Y_{r}^{t,x},Z_{r}^{t,x })\mathrm{d}s-\int_{s}^{T}Z_{r}^{t,x}\mathrm{d}M_{r}^{X},\end{cases} \tag{3.1}\] where \(b,\sigma,\phi\) and \(f\) are deterministic functions of appropriate dimensions taking values in appropriate spaces that will be made precise below, \(\mathrm{d}M_{r}^{X}:=\sigma(r,X_{r}^{t,x},Y_{r}^{t,x})^{\mathbf{T}}\mathrm{d}W_ {r}\) stands for the martingale part of the semi-martingale \(X^{t,x}\) and the superscript \((t,x)\) will refer to its initial condition. It is also known that, such an FBSDE provides a probabilistic interpretation for the following terminal value problem of quasi-linear partial differential equation (PDE, for short) (see for example [35]) \[\frac{\partial v}{\partial t}(t,x)+\mathcal{L}v(t,x)+f(t,x,v(t,x),\nabla_{x}v(t,x))=0,\quad v(T,x)=\phi(x), \tag{3.2}\] where the differential operator \(\mathcal{L}\) is given by \[\mathcal{L}v(t,x):=\frac{1}{2}\sum_{i,j}^{d}(\sigma\sigma^{\mathbf{T}})_{ij}(t,x, v(t,x))\frac{\partial^{2}v}{\partial x_{i}\partial x_{j}}(t,x)+\sum_{i}^{d}b_{i}(t,x,v( t,x),\nabla_{x}v(t,x))\frac{\partial v}{\partial x_{i}}(t,x).\] Moreover, both solutions for the FBSDE (3.1) and the PDE (3.2), respectively denoted by \((X^{t,x},Y^{t,x},Z^{t,x})\) and \(v\in C^{1,2}([0,T]\times\mathbb{R}^{d};\mathbb{R})\) are linked by the so called nonlinear Feynman-Kac formula: \[Y_{s}^{t,x}=v(s,X_{s}^{t,x});\qquad Z_{s}^{t,x}=\nabla_{x}v(s,X_{s}^{t,x}), \tag{3.3}\] in this case, \(v\) is called the "_decoupling field_ " since it breaks the mutual influence between the forward and backward components in equation (3.1). **Definition 3.1**.: _We say that, a triple \((X^{t,x},Y^{t,x},Z^{t,x})\) is a weak solution to equation (3.1), if there exists a standard setup \((\Omega,\mathfrak{F},\mathbb{F}=\{\mathfrak{F}\},\mathbb{P},W),\) such that \((X^{t,x},Y^{t,x},Z^{t,x})\in\mathcal{S}^{2}(\mathbb{R}^{d})\times\mathcal{S}^ {\infty}(\mathbb{R})\times\mathcal{H}_{\text{BMO}}(\mathbb{R}^{d})\) and \(\mathbb{P}\)-a.s. (3.1) is satisfied. This weak solution is called a strong solution if \(\mathbb{F}=\mathbb{F}^{W}.\)_ We say that the coefficients \(b,\sigma,f\) and \(\phi\) satisfy Assumption 3.2 if there exist nonnegative constants \(\Lambda,\lambda,K,K_{0},\) and \(\alpha_{0},\beta\in(0,1)\) such that **Assumption 3.2**.: * \(\forall t\in[0,T],\forall(x,y,z)\in\mathbb{R}^{d}\times\mathbb{R}\times \mathbb{R}^{d},\)__\(\forall\xi\in\mathbb{R}^{d},\)__ \[|b(t,x,y,z)|\leq\Lambda(1+|y|+|z|),\quad|\sigma(t,x,y)|\leq\Lambda(1+|y|),\] \[\langle\xi,a(t,x,y)\xi\rangle\geq\lambda|\xi|^{2},\quad a=\sigma \sigma^{\mathbf{T}}.\] * _For all_ \((t,x,y,z),(t,x,y^{\prime},z^{\prime})\in[0,T]\times\mathbb{R}^{d}\times \mathbb{R}\times\mathbb{R}^{d}\)_,_ \[|b(t,x,y,z)-b(t,x,y^{\prime},z^{\prime})|\leq K(|y-y^{\prime}|+|z-z^{\prime}|),\] \[|a(t,x,y)-a(t,x,y^{\prime})|\leq K|y-y^{\prime}|,\quad|a(t,x,y)-a (t,x^{\prime},y)|\leq K_{0}|x-x^{\prime}|^{\alpha_{0}}.\] * 1. 
\(\forall t\in[0,T],\forall(x,y,z)\in\mathbb{R}^{d}\times\mathbb{R}\times \mathbb{R}^{d},\)__ \[|f(t,x,y,z)|\leq\Lambda(1+|y|+\ell(y)|z|^{2}),\] _where_ \(\ell\in L^{1}_{loc}(\mathbb{R},\mathbb{R}_{+})\) _is locally bounded and increasing._ 2. _For all_ \((t,x,y,z),(t,x,y^{\prime},z^{\prime})\in[0,T]\times\mathbb{R}^{d}\times \mathbb{R}\times\mathbb{R}^{d}\)__ \[|f(t,x,y,z)-f(t,x,y^{\prime},z^{\prime})|\leq K\Big{(}1+\ell(|y-y^{\prime}|^{2})(| z|+|z^{\prime}|)\Big{)}\Big{(}|y-y^{\prime}|+|z-z^{\prime}|\Big{)}\] 3. _For all_ \((x,x^{\prime})\in\mathbb{R}^{d}\times\mathbb{R}^{d}\)__ \[|\phi(x)|\leq\Lambda,\quad|\phi(x)-\phi(x^{\prime})|\leq K_{0}|x-x^{\prime}|^{ \beta}.\] **Remark 3.3**.: _When the function \(\ell\) is a constant, the solvability of the FBSDE (3.1) is derived in [13]. The condition (AY)(ii) will be useful to establish both the uniqueness in law and the pathwise uniqueness of solutions to the FBSDE (3.1)._ Below we state the main result of this section **Theorem 3.4**.: _Let Assumption 3.2 be in force. Then the PDE (3.2) has a unique solution \(v\in W^{1,2,d+1}_{\text{Loc}}([0,T[\times\mathbb{R}^{d},\mathbb{R})\) such that \((t,x)\mapsto v(t,x)\) and \((t,x)\mapsto\nabla_{x}v(t,x)\) are continuous._ _Furthermore, the FBSDE (3.1) admits a unique weak solution \((\Omega,\{\mathfrak{F}\},\mathbb{P}),(B,\{\mathfrak{F}_{t}\}_{0\leq t\leq T})\), \((X,Y,Z)\in\mathcal{S}^{2}(\mathbb{R}^{d})\times\mathcal{S}^{\infty}(\mathbb{R} )\times\mathcal{H}_{\text{BMO}}(\mathbb{R}^{d})\) with initial value \((t,x)\in[0,T]\times\mathbb{R}^{d}\) and the uniqueness is understood in the sense of probability law._ _Assume that \(d=1\) and \(\alpha_{0}\geq 1/2\), for any \(\delta\in(0,T)\) the FBSDE (3.1) has a unique strong solution for all \(t\in[0,T-\delta]\)._ Proof.: The first statement of the Theorem requires several auxiliary results both from PDE theory and stochastic analysis. For the sake of simplicity, we are not going to reproduce the details here. We refer the reader to [13] (see also [45]) for a similar treatment. We start with the existence result for the PDE (3.2) (i). **Solvability of PDE (3.2):** The result is proved in three steps. In the first step, we approximate the coefficients \((b,\sigma,\phi,g)\) by smooth functions (via mollifiers for instance) and derive the solution \((v_{n})_{n\geq 1}\) of the regularized PDE (3.2). In the second step we provide uniform bounds of \((v_{n})_{n\geq 1}\) and those of their derivatives. In the last step, we use a compactness argument to construct the decouple field of the original PDE. **Step 1:** Approximation of coefficients \((b,\sigma,f,\phi)\) by smooth functions \((b_{n},\sigma_{n},f_{n},\phi_{n})\). Let us denote by \((b_{n})_{n\in\mathbb{N}},(\sigma_{n})_{n\in\mathbb{N}},(\phi_{n})_{n\in \mathbb{N}}\) and \((g_{n})_{n\in\mathbb{N}}\) the approximating sequences of \(b,\sigma,\phi\) and \(g\) respectively that can be obtained via the usual mollifiers (see for instance [45]). Then \(b_{n},\sigma_{n},\phi_{n}\) and \(g_{n}\) are infinitely differentiable with respect to each of their respective components, bounded with bounded derivatives of any order with compact support. 
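As a purely illustrative aside, the mollification used in this step can be sketched numerically. The discontinuous toy drift, the grid and the helper functions below are our own choices (they are not the coefficients of (3.1)); the sketch only shows that the smoothed drift stays bounded by the uniform bound of the original one.

```python
import numpy as np

def bump(u):
    """Standard bump function supported in (-1, 1), unnormalised."""
    out = np.zeros_like(u, dtype=float)
    inside = np.abs(u) < 1.0
    out[inside] = np.exp(-1.0 / (1.0 - u[inside] ** 2))
    return out

def mollify(b_vals, x, n):
    """Approximate (b * rho_n)(x) on the grid x, where rho_n(u) = n * rho(n u)."""
    dx = x[1] - x[0]
    kernel = n * bump(n * x)
    kernel /= kernel.sum() * dx          # normalise so that rho_n integrates to 1
    # discrete convolution approximating  (b * rho_n)(x) = int b(x - u) rho_n(u) du
    return np.convolve(b_vals, kernel, mode="same") * dx

x = np.linspace(-2.0, 2.0, 2001)
b = np.sign(x)                            # a bounded, discontinuous toy drift
for n in (2, 8, 32):
    b_n = mollify(b, x, n)
    # b_n is smooth, bounded by sup|b| = 1, and converges to b a.e. as n grows
    print(n, float(np.max(np.abs(b_n))))
```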
Thus, from [32] the regularized PDE (3.2) with coefficients \((b_{n},\sigma_{n},f_{n},\phi_{n})\) in lieu of \((b,\sigma,f,\phi)\) admits a unique bounded classical solution \((v_{n})_{n\geq 1}\in C^{1,2}([0,T]\times\mathbb{R}^{d},\mathbb{R})\) such that \((\nabla_{x}v_{n})_{n\geq 1}\), \((\partial v_{n})_{n\geq 1}\) and \((\nabla^{2}_{xx}v_{n})_{n\geq 1}\) are also bounded and Holder continuous on \([0,T]\times\mathbb{R}^{d}\). In addition for any prescribed probability space \((\Omega,\mathfrak{F},\mathbb{P})\) the following FBSDE: \[\begin{cases}X^{t,x}_{s}=x+\int_{t}^{s}b_{n}(r,X^{t,x}_{r},Y^{t,x}_{r},Z^{t,x} _{r})\mathrm{d}r+\int_{t}^{s}\sigma_{n}(r,X^{t,x}_{t},Y^{t,x}_{r})\mathrm{d}W_ {r},\\ Y^{t,x}_{s}=\phi_{n}(X^{t,x}_{r})+\int_{s}^{T}f_{n}(r,X^{t,x}_{r},Y^{t,x}_{r},Z ^{t,x}_{r})\mathrm{d}s-\int_{s}^{T}Z^{t,x}_{r}\mathrm{d}M^{X}_{r},\end{cases} \tag{3.4}\] has a unique strong solution \((X^{t,x,n},Y^{t,x,n},Z^{t,x,n})\in\mathcal{S}^{2}(\mathbb{R}^{d})\times \mathcal{S}^{\infty}(\mathbb{R})\times\mathcal{H}_{\mathrm{BMO}}(\mathbb{R}^{d})\) such that the following relations hold (see [34]) \[Y^{t,x,n}_{\cdot}=v_{n}(\cdot,X^{t,x,n}_{\cdot}),\ Z^{t,x,n}=\nabla_{x}v_{n}( \cdot,X^{t,x,n}_{\cdot}). \tag{3.5}\] **Step 2:** A-priori estimates for \((v_{n})_{n\geq 1}\) and for its derivatives We will start by establishing the following uniform bound of the solution \((v_{n})_{n\in\mathbb{N}}\) \[\sup_{n\in\mathbb{N}}\sup_{(t,x)\in[0,T]\times\mathbb{R}^{d}}|v_{n}(t,x)|<\infty. \tag{3.6}\] Thanks to the probabilistic representation given by (3.5), then it is enough to prove that \[\mathbb{P}\text{-a.s., }\sup_{n\in\mathbb{N}}(\|Y^{t,x,n}\|_{\infty}+\|Z^{t,x,n} \|_{\mathcal{H}_{BMO}})<\infty. \tag{3.7}\] To ease the notation, in the following we will omit the superscript \((t,x)\). Let \((f_{n})_{n\geq 1}\) be the usual mollifier of the function \(f\). Then from Assumption 3.2 (AY) (see [45]), there exists \(C>0\) such that for all \((t,x,y,y^{\prime},z,z^{\prime})\in\mathbb{R}\times\mathbb{R}^{d}\times\mathbb{ R}\times\mathbb{R}\times\mathbb{R}^{d}\times\mathbb{R}^{d}\) \[|f_{n}(t,x,y,z)| \leq C\Lambda(1+|y|+\ell_{n}(y)|z|^{2}),\] \[|f_{n}(t,x,y,z)-f_{n}(t,x,y^{\prime},z^{\prime})| \leq C(1+\ell(|y-y^{\prime}|^{2})(1+|z|+|z^{\prime}|))(|y-y^{ \prime}|+|z-z^{\prime}|),\] where \((\ell_{n})_{n\geq 1}\) stands for the mollifier of the function \(\ell\). Therefore, the process \(A^{n}_{t}\) defined by: \[|Z^{n}_{t}|^{2}A^{n}_{t}=\sigma_{n}^{-1}(t,X^{n}_{t},Y^{n}_{t})(f_{n}(t,X^{n} _{t},Y^{n}_{t},Z^{n}_{t})-f_{n}(t,X^{n}_{t},Y^{n}_{t},0))Z^{n}_{t}1_{\{Z^{n}_{t }\neq 0\}}\] belongs to the space \(\mathcal{H}_{\mathrm{BMO}}(\mathbb{R}^{d})\), since \((Y^{n},Z^{n})\in\mathcal{S}^{\infty}(\mathbb{R})\times\mathcal{H}_{\mathrm{BMO }}(\mathbb{R}^{d})\) for each \(n\in\mathbb{N}\). The Girsanov's theorem ensures that \(W^{n}_{t}=W_{t}-\int_{0}^{t}A^{n}_{s}\mathrm{d}s\) is a Brownian motion under the probability measure \(\mathbb{P}^{n}\) defined by \(\mathrm{d}\mathbb{P}^{n}:=\mathcal{E}(\int_{0}A^{n}_{t}\mathrm{d}W_{t}) \mathrm{d}\mathbb{P}\). 
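(Purely as a numerical aside, one can check by Monte Carlo that a stochastic exponential of this type has unit expectation, which is what makes \(\mathbb{P}^{n}\) a genuine probability measure; the bounded integrand below is a toy stand-in, chosen by us, for the process \(A^{n}\).)

```python
import numpy as np

rng = np.random.default_rng(0)

T, n_steps, n_paths = 1.0, 200, 50_000
dt = T / n_steps
sqdt = np.sqrt(dt)

log_E = np.zeros(n_paths)      # log of the Doleans-Dade exponential along each path
W = np.zeros(n_paths)

for _ in range(n_steps):
    a = np.tanh(W)             # bounded toy integrand, |a| <= 1
    dW = sqdt * rng.standard_normal(n_paths)
    log_E += a * dW - 0.5 * a ** 2 * dt
    W += dW

print(np.exp(log_E).mean())    # close to 1, consistent with E[ E(M)_T ] = 1
```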
Therefore, \[Y^{n}_{t}=\phi_{n}(X^{n}_{T})+\int_{t}^{T}f_{n}(s,X^{n}_{s},Y^{n}_{s},0) \mathrm{d}s-\int_{t}^{T}Z^{n}_{s}\sigma_{n}^{\mathbf{T}}(s,X^{n}_{s},Y^{n}_{s}) \mathrm{d}W^{n}_{s}.\] By taking the conditional expectation on both sides of the above equation, we deduce that \[|Y^{n}_{t}| \leq\mathbb{E}^{\mathbb{P}^{n}}\Big{[}|\phi_{n}(X^{n}_{T})|+\int_ {t}^{T}|f_{n}(s,X^{n}_{s},Y^{n}_{s},0)|\mathrm{d}s/\mathfrak{F}_{t}\Big{]}\] \[\leq\Lambda+C\Lambda T+C\Lambda\mathbb{E}^{\mathbb{P}^{n}}[\int_{t} ^{T}|Y^{n}_{s}|\mathrm{d}s/\mathfrak{F}_{t}].\] The first bound in (3.7) then follows by applying the stochastic Gronwall's lemma (see for instance [49]). Let us turn now to the second estimate in (3.7). For this, we consider the \(W^{1,2}_{\rm loc}(\mathbb{R})\)-function \[I(x):=\int_{0}^{x}K(y)\exp\Big{(}2\int_{0}^{y}\ell(u){\rm d}u\Big{)}{\rm d}y,\] where \(K(y):=\int_{0}^{y}\exp\Big{(}-2\int_{0}^{z}\ell(u){\rm d}u\Big{)}{\rm d}z\) and the function \(\ell\) satisfies almost everywhere: \(1/2I^{\prime\prime}(x)-\ell(x)I^{\prime}(x)=1/2\) (see [3]). Then, using the Ito-Krylov formula for BSDE (see [4, Theorem 2.1]) we obtain that \[I(|Y_{\tau}^{n}|) =I(|Y_{T}^{n}|)+\int_{\tau}^{T}{\rm sgn}(Y_{u}^{n})v^{\prime}(|Y_ {u}^{n}|)f(u,X_{u}^{n},Y_{u}^{n},Z_{u}^{n})-1/2I^{\prime\prime}(|Y_{u}^{n}|)|Z _{u}^{n}|^{2}{\rm d}u+M_{\tau}^{n}\] \[\leq I(|Y_{T}^{n}|)-\frac{1}{2}\int_{\tau}^{T}|Z_{u}^{n}|^{2}{ \rm d}u+\Lambda\int_{\tau}^{T}(1+|Y_{u}^{n}|)I^{\prime}(|Y_{u}^{n}|){\rm d}u+ M_{\tau}^{n}\] for any stopping time \(\tau\) and \(M_{\tau}^{n}\) represents the martingale part. By taking the conditional expectation with respect to \(\mathfrak{F}_{\tau}\), we deduce that \[\frac{1}{2}\mathbb{E}\Big{[}\int_{\tau}^{T}|Z_{u}^{n}|^{2}{\rm d}u/\mathfrak{F }_{\tau}\Big{]}\leq\mathbb{E}\Big{[}I(|Y_{T}^{n}|)+\Lambda\int_{\tau}^{T}(1+ |Y_{u}^{n}|)I^{\prime}(|Y_{u}^{n}|){\rm d}u/\mathfrak{F}_{\tau}\Big{]}.\] Then, the sought estimate follows from the uniform boundedness of \(Y^{n}\). Moreover, by using the classical machinery from the theory of SDEs along with Assumption 3.2 and (3.7), one can obtain the following \[\mathbb{P}\text{-a.s.,}\ \sup_{n\in\mathbb{N}}\|X^{t,x,n}\|_{\mathcal{S}^{2}( \mathbb{R}^{d})}<\infty. \tag{3.8}\] On the other hand, from (3.6), one can derive the uniform Holder estimate for \((v_{n})_{n\geq 1}\) in the same way as in [45, Lemma 2.2] i.e., there exist \(\beta_{0}\in(0,\beta],C>0\) such that for all \((t,x),(s,y)\in[0,T]\times\mathbb{R}^{d}\), it holds \[\sup_{n\in\mathbb{N}}|v_{n}(t,x)-v_{n}(s,y)|\leq C(|t-s|^{\frac{\beta_{0}}{2} }+|x-y|^{\beta_{0}}). \tag{3.9}\] In the same spirit, there exists \(\gamma>0\), such that \[\sup_{n\in\mathbb{N}}\sup_{(t,x)\in[0,T[\times\mathbb{R}^{d}}(T-t)^{1-\gamma} |\nabla_{x}v_{n}(t,x)|<\infty \tag{3.10}\] follows and as in [45, Lemma 2.4 and Lemma 2.5 ]), there exists universal constants \(C\) and \(\beta^{\prime}\) such that \[\sup_{n\in\mathbb{N}}|\nabla_{x}v_{n}(t,x)-\nabla_{x}v_{n}(s,y)|\leq C(T-t)^ {(-1+\beta^{\prime})}(|t-s|^{\frac{\beta^{\prime}}{2}}+|x-y|^{\beta^{\prime}}), \tag{3.11}\] Note that the coefficients \(b,\sigma,g\) of the FBSDE (3.1) are not smooth in the time variable and the terminal condition is only Holder continuous, thus (3.10) and (3.11) cannot be obtained for \((\partial_{t}v_{n})_{n\geq 1}\) and \((\nabla_{x,x}^{2}v_{n})_{n\geq 1}\). 
Nevertheless we can derive as in [13, 45]), the following Calderon-Zygmung types estimate: there exists \(\alpha>0\), such that \[\sup_{n\in\mathbb{N}}\int_{T-\delta}^{T}\int_{B(\zeta,R)}[(T-s)^{1-\alpha}(| \partial_{t}v_{n}(s,y)|+|\nabla_{x,x}^{2}v_{n}(s,y)|)]^{p}{\rm d}s{\rm d}y<\infty, \tag{3.12}\] where \(B(\zeta,R)\) denotes the \(\mathbb{R}^{d}\) ball of centre \(\zeta\) and radius \(R\). **Step 3:** Arzela-Ascoli arguments From (3.6), (3.9), (3.10) and (3.11), we observe that the sequences \((v_{n})_{n\geq 1}\) and \((\nabla_{x}v_{n})_{n\geq 1}\) are uniformly bounded and equicontinuous respectively on \([0,T]\times\mathbb{R}^{d}\) and \([0,T-\delta]\times\mathbb{R}^{d}\) for every \(\delta>0\). Thus, by Arzela-Ascoli theorem, there exist converging subsequences respectively denoted by \((v_{n})_{n\geq 1}\) and \((\nabla_{x}v_{n})_{n\geq 1}\) (still indexed by \(n\)) such that their limits \(v\) and \(v^{\prime}\) respectively are continuous. From the uniqueness of the limit we note that \(v^{\prime}=\nabla_{x}v\). On the other hand, (3.12) suggests that both \((\partial_{t}v_{n})_{n\geq 1}\) and \((\nabla_{x}^{2}v_{n})_{n\geq 1}\) are respectively converging in distribution for subsequences to \(\partial_{t}v\) and \(\nabla^{2}_{xx}v\). Therefore, \(v\) is continuous and satisfies almost everywhere the PDE(3.2). (ii). **Weak solution to FBSDE** (3.1): This follows by combining the arguments developed in [13, section 4.2] along with bounds (3.6) and (3.10). We denote by \((\Omega,\{\mathfrak{F}\},\mathbb{P}),(W,\{\mathfrak{F}_{t}\}_{0\leq t\leq T}), (X,Y,Z)\) this solution with initial condition \((t,x)\) such that \(X_{t}\) is the unique weak solution to the equation \[X_{t}=x+\int_{0}^{t}\tilde{b}(s,X_{s})\mathrm{d}s+\int_{0}^{t}\tilde{\sigma}(s,X_{s})\mathrm{d}W_{s}, \tag{3.13}\] where \(\tilde{b}(t,x)=b(t,x,v(t,x),\nabla_{x}v(t,x))\) and \(\tilde{\sigma}(t,x)=\sigma(t,x,v(t,x))\) and the relationship (3.3) holds. Moreover, due to the bounds (3.7), we have that \(Z\in\mathcal{H}_{BMO}\). (iii). **Uniqueness in probability law:** Let \((\bar{\Omega},(\bar{\mathfrak{F}}),\bar{\mathbb{P}}),(\tilde{W},\{\bar{ \mathfrak{F}}_{t}\}_{0\leq t\leq T})),(\bar{X},\bar{Y},\bar{Z})\) be another solution to FBSDE (3.1) with initial condition given by \((0,x),x\in\mathbb{R}^{d}\) i.e.: \[\begin{cases}\bar{X}_{t}=x+\int_{0}^{t}b(s,\bar{X}_{s},\bar{Y}_{s},\bar{Z}_{s} )\mathrm{d}s+\int_{0}^{t}\sigma(s,\bar{X}_{s},\bar{Y}_{s})\mathrm{d}\tilde{W} _{s},\\ \bar{Y}_{t}=\phi(\bar{X}_{T})+\int_{t}^{T}f(s,\bar{X}_{s},\bar{Y}_{s},\bar{Z}_{ s})\mathrm{d}s-\int_{t}^{T}\bar{Z}_{s}\mathrm{d}M_{s}^{\bar{X}}.\end{cases} \tag{3.14}\] Our aim is to show that \(\mathbb{P}\circ(X,Y,Z,W)^{-1}=\bar{\mathbb{P}}\circ(\bar{X},\bar{Y},\bar{Z}, \tilde{W})^{-1}\), where \((X,Y,Z)\) stands for the solution we obtained in point (ii) above, with the same initial condition \((0,x)\), for all \(x\in\mathbb{R}^{d}\). More precisely, following the weak formulation of the _"four step scheme"_ developed in [13, section 4.3], it is enough to prove that \(\bar{\mathbb{P}}\)-a.s. \((\bar{Y},\bar{Z})\equiv(\mathcal{Y},\mathcal{Z})\), where the processes \(\mathcal{Y}\) and \(\mathcal{Z}\) given by \[\mathcal{Y}_{t}:=v(t,\bar{X}_{t}),\quad\mathcal{Z}_{t}:=\nabla_{x}v(t,\bar{X}_ {t}),\forall t\in[0,T[.\] Next, we first need to write \(\mathcal{Y}\) as the solution of a backward equation. 
We recall that the function \(v\) solution to the PDE (3.2) belongs to \(W^{1,2,d+1}_{\mathrm{loc}}([0,T[\times\mathbb{R}^{d},\mathbb{R})\) then the Ito-Krylov formula can be applied in order to derive the desired backward equation, provided that the drift term from the forward equation is uniformly bounded(see [30, Chapter 2, Section 10, Theorem 1] ). Unfortunately, here the drift \(b\) does not satisfied this requirement under Assumption 3.2 (AX), since the process \(\bar{Z}\) is only bounded in the \(\mathcal{H}_{\mathrm{BMO}}\) norm. To overcome this situation, we will use the fact that the Doleans-Dade exponential of a BMO martingale is uniformly integrable and it defines a proper density, in contrast to [13], where the authors rather used a localization arguments. More precisely, thanks to Assumption 3.2 (AX) and the boundedness of \(\bar{Y}\) (since \(\bar{Y}\in\mathcal{S}^{\infty}\) by Definition 3.1 ) we observe that: \[|\sigma^{-1}(s,\bar{X}_{s},\bar{Y}_{s})b(s,\bar{X}_{s},\bar{Y}_{s},\bar{Z}_{s} )|\leq C\left(1+|\bar{Z}_{s}|\right)\in\mathcal{H}_{\mathrm{BMO}}.\] Thus, the measure \(\bar{\mathbb{Q}}\) defined by \[\frac{\mathrm{d}\bar{\mathbb{Q}}}{\mathrm{d}\bar{\mathbb{P}}}:=\exp\Big{(}- \int_{0}^{t}\langle\sigma^{-1}(s,\bar{X}_{s},\bar{Y}_{s})b(s,\bar{X}_{s},\bar{ Y}_{s},\bar{Z}_{s}),\mathrm{d}\tilde{W}_{s}\rangle-\frac{1}{2}\int_{0}^{t} \big{|}\sigma^{-1}(s,\bar{X}_{s},\bar{Y}_{s})b(s,\bar{X}_{s},\bar{Y}_{s},\bar{ Z}_{s})\big{|}^{2}\mathrm{d}s\Big{)},\] is uniformly integrable for all \(t\in[0,T]\). Moreover, from the Girsanov theorem the process \(\bar{W}\) defined for all \(t\in[0,T]\) by \[\bar{W}_{t}:=\tilde{W}_{t}+\int_{0}^{t}\sigma^{-1}(s,\bar{X}_{s},\bar{Y}_{s})b (s,\bar{X}_{s},\bar{Y}_{s},\bar{Z}_{s})\mathrm{d}s \tag{3.15}\] is an \(\{\bar{\mathfrak{F}}\}_{0\leq t\leq T}\)-Brownian motion under the measure \(\bar{\mathbb{Q}}.\) Hence, the FBSDE (3.14) takes the following form under the measure \(\bar{\mathbb{Q}}\): \[\begin{cases}\bar{X}_{t}=x+\int_{0}^{t}\sigma(s,\bar{X}_{s},\bar{Y}_{s}) \mathrm{d}\bar{W}_{s},\\ \bar{Y}_{t}=\phi(\bar{X}_{T})+\int_{t}^{T}G(s,\bar{X}_{s},\bar{Y}_{s},\bar{Z}_ {s})\mathrm{d}s-\int_{t}^{T}\langle\bar{Z}_{s},\sigma(s,\bar{X}_{s},\bar{Y}_{s })\mathrm{d}\bar{W}_{s}\rangle,\end{cases} \tag{3.16}\] where \(G(t,x,y,z)=\langle b(t,x,y),z\rangle+f(t,x,y,z)\). Using once more Assumption 3.2, one can show that, the latter satisfies for all \((t,x,y,y^{\prime},z,z^{\prime})\in[0,T]\times\mathbb{R}^{d}\times\mathbb{R} \times\mathbb{R}\times\mathbb{R}^{d}\times\mathbb{R}^{d}\) \[|G(t,x,y,z)|\leq C\Big{(}1+|y|+(1+\ell(y))|z|^{2}\Big{)}, \tag{3.17}\] \[|G(t,x,y,z)-G(t,x,y^{\prime},z^{\prime})|\] \[\leq C\Big{(}1+|y|+2|z|+|z-z^{\prime}|+\ell(|y-y^{\prime}|^{2}) \big{(}2|z|+|z^{\prime}-z|\big{)}\Big{)}\times(|y-y^{\prime}|+|z-z^{\prime}|). \tag{3.18}\] We further apply the Ito-Krylov formula to the semimartingale \(v(t,\bar{X}_{t})\), to obtain for all \(t\in[0,\tau]\) \[\mathrm{d}\mathcal{Y}_{t}=\left(\frac{\partial v}{\partial t}+\frac{1}{2}a \nabla_{xx}^{2}v\right)(t,\bar{X}_{t})\mathrm{d}t+\langle\nabla_{x}v(t,\bar{ X}_{t}),\sigma(t,\bar{X},\mathcal{Y}_{t})\mathrm{d}\bar{W}_{t}\rangle\quad \bar{\mathbb{Q}}\text{-a.s.}, \tag{3.19}\] where \(\tau\) is an \(\bar{\mathfrak{F}}_{t}\) stopping time. 
Then, by using equation (3.2) with the fact that \(\mathcal{Z}_{t}:=\nabla_{x}v(t,\bar{X}_{t})\), the dynamics of the process \((\bar{Y}_{t}-\mathcal{Y}_{t})_{0\leq t\leq\tau}\) is given by \[\mathrm{d}(\bar{Y}-\mathcal{Y})_{t}= -\frac{1}{2}\left(a(t,\bar{X}_{t},\bar{Y}_{t})-a(t,\bar{X}_{t}, \mathcal{Y}_{t})\right)\nabla_{xx}^{2}v(t,\bar{X}_{t})\mathrm{d}t\] \[-\Big{(}G(t,\bar{X}_{t},\bar{Y}_{t},\bar{Z}_{t})-G(t,\bar{X}_{t}, \mathcal{Y}_{t},\mathcal{Z}_{t})\Big{)}\mathrm{d}t+\langle\bar{Z}_{t}- \mathcal{Z}_{t},\sigma(t,\bar{X}_{t},\bar{Y}_{t})\mathrm{d}\bar{W}_{t}\rangle. \tag{3.20}\] We recall that the processes \(\mathcal{Y}\) and \(\bar{Y}\) are both uniformly bounded, hence we define \(L_{0}\) by \(L_{0}:=2(\|\mathcal{Y}\|_{\infty}^{2}+\|\bar{Y}\|_{\infty}^{2}).\) Let us consider the following positive and locally bounded function \(h\in L^{1}_{\mathrm{Loc}}(\mathbb{R})\): \[h(y):=\kappa_{1}+\kappa_{2}\ell(y),\ \ \kappa_{1},\kappa_{2}\in\mathbb{R}_{+}.\] For all \(y\in[0,L_{0}]\), consider the function \(\Phi_{h}\in W^{1,2}_{\mathrm{loc}}(\mathbb{R})\) defined by: \[\Phi_{h}(z):=\int_{0}^{z}\!\!\exp\left(\kappa\int_{0}^{y}\!\!h(t)\mathrm{d}t \right)\mathrm{d}y,\] with \(\kappa\) a free nonnegative parameter. The function \(\Phi_{h}\) satisfies almost everywhere the differential equation \(\Phi_{h}^{\prime\prime}(z)-\kappa h(z)\Phi_{h}^{\prime}(z)=0\). Moreover for \(|z|\leq L_{0}\), we have \(0\leq z\Phi_{h}^{\prime}(z)\leq\exp\left(\kappa\|h\|_{L^{1}([0,L_{0}])}\right) \Phi_{h}(z)\). Then from the Ito's formula applied the function \(\Phi_{h}\) (see for example [4, Theorem 2.1]) we deduce for all \(t\in[0,\tau]\): \[\Phi_{h}(|\bar{Y}_{t}-\mathcal{Y}_{t}|^{2})\] \[= \Phi_{h}(|\bar{Y}_{\tau}-\mathcal{Y}_{\tau}|^{2})+\int_{t}^{\tau} \Phi_{h}^{\prime}(|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2})(\bar{Y}_{s}-\mathcal{Y}_{ s})\left(a(s,\bar{X}_{s},\bar{Y}_{s})-a(s,\bar{X}_{s},\mathcal{Y}_{s}) \right)\nabla_{xx}^{2}v(s,\bar{X}_{s})\mathrm{d}s\] \[+2\int_{t}^{\tau}\Phi_{h}^{\prime}(|\bar{Y}_{s}-\mathcal{Y}_{s}|^{ 2})(\bar{Y}_{s}-\mathcal{Y}_{s})\left(G(t,\bar{X}_{s},\bar{Y}_{s},\bar{Z}_{s}) -G(s,\bar{X}_{s},\mathcal{Y}_{s},\mathcal{Z}_{s})\right)\mathrm{d}s\] \[-2\int_{t}^{\tau}\Phi_{h}^{\prime}(|\bar{Y}_{s}-\mathcal{Y}_{s}|^ {2})(\bar{Y}_{s}-\mathcal{Y}_{s})\langle\bar{Z}_{s}-\mathcal{Z}_{s},\sigma(s, \bar{X}_{s},\bar{Y}_{s})\mathrm{d}\bar{W}_{s}\rangle\] \[-\int_{t}^{\tau}\Phi_{h}^{\prime}(|\bar{Y}_{s}-\mathcal{Y}_{s}|^{ 2})\langle\bar{Z}_{s}-\mathcal{Z}_{s},a(s,\bar{X}_{s},\bar{Y}_{s})(\bar{Z}_{s}- \mathcal{Z}_{s})\rangle\mathrm{d}s\] \[-2\int_{t}^{\tau}\Phi_{h}^{\prime\prime}(|\bar{Y}_{s}-\mathcal{Y} _{s}|^{2})\langle\bar{Z}_{s}-\mathcal{Z}_{s},a(s,\bar{X}_{s},\bar{Y}_{s})(\bar{Z} _{s}-\mathcal{Z}_{s})\rangle\mathrm{d}s-\int_{t}^{\tau}\mathrm{d}\mathcal{M}_{s}, \tag{3.21}\] where \(\mathrm{d}\mathcal{M}_{s}=21_{\{s\leq\tau\}}\Phi_{h}^{\prime}(|\bar{Y}_{s}- \mathcal{Y}_{s}|^{2})(\bar{Y}_{s}-\mathcal{Y}_{s}|^{2})|\bar{Y}_{s}-\mathcal{Z} _{s},\sigma(s,\bar{X}_{s},\bar{Y}_{s})\mathrm{d}\bar{W}_{s}\rangle\) is a square integrable martingale. 
Using Assumption 3.2 and (3.18), we obtain that \[\Phi_{h}(|\bar{Y}_{t}-\mathcal{Y}_{t}|^{2})\] \[\leq \Phi_{h}(|\bar{Y}_{\tau}-\mathcal{Y}_{\tau}|^{2})+K\int_{t}^{\tau }\Phi_{h}^{\prime}(|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2})|\bar{Y}_{s}-\mathcal{Y}_ {s}|^{2}|\nabla^{2}_{xx}v(s,\bar{X}_{s})|\mathrm{d}s\] \[+2C\int_{t}^{\tau}\Phi_{h}^{\prime}(|\bar{Y}_{s}-\mathcal{Y}_{s}|^ {2})|\bar{Y}_{s}-\mathcal{Y}_{s}|\Big{(}1+|\mathcal{Y}_{s}|+2|\mathcal{Z}_{s} |+|\bar{Z}_{s}-\mathcal{Z}_{s}|\] \[+\ell(|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2})\Big{\{}2|\mathcal{Z}_{s }|+|\bar{Z}_{s}-\mathcal{Z}_{s}|\Big{\}}\Big{)}\Big{(}|\bar{Y}_{s}-\mathcal{Y }_{s}|+|\bar{Z}_{s}-\mathcal{Z}_{s}|\Big{)}\mathrm{d}s\] \[-\lambda\int_{t}^{\tau}2\kappa|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2}| \bar{Z}_{s}-\mathcal{Z}_{s}|^{2}h(|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2})\Phi_{h}^ {\prime}(|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2})\mathrm{d}s\] \[-\lambda\int_{t}^{\tau}\Phi_{h}^{\prime}(|\bar{Y}_{s}-\mathcal{Y} _{s}|^{2})|\bar{Z}_{s}-\mathcal{Z}_{s}|^{2}\mathrm{d}s-\int_{t}^{\tau}\mathrm{ d}\mathcal{M}_{s}. \tag{3.22}\] From the local boundedness and positivity of \(\ell\), there exists \(M_{1}>0\) such that \(\ell(|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2})\leq M_{1}\). Using (3.10) and applying repeatedly the Young inequality, (3.22) becomes \[\Phi_{h}(|\bar{Y}_{t}-\mathcal{Y}_{t}|^{2})+\lambda\int_{t}^{\tau }\Phi_{h}^{\prime}(|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2})|\bar{Z}_{s}-\mathcal{Z} _{s}|^{2}\Big{(}1+2\kappa\kappa_{1}|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2}+2\kappa \kappa_{2}|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2}\ell(|\bar{Y}_{s}-\mathcal{Y}_{s}| ^{2})\Big{)}\mathrm{d}s\] \[\leq \Phi_{h}(|\bar{Y}_{\tau}-\mathcal{Y}_{\tau}|^{2})+K_{\epsilon_{1},\epsilon_{2},M_{1},\epsilon_{5},L_{0}}(1+L_{0})\int_{t}^{\tau}\Phi_{h}^{ \prime}(|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2})|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2}(1+ (T-s)^{-1+\gamma}+|\nabla^{2}_{xx}v(s,\bar{X}_{s})|)\mathrm{d}s\] \[+K(\epsilon_{1}+\frac{1}{\epsilon_{3}})\int_{t}^{\tau}\Phi_{h}^{ \prime}(|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2})|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2}| \bar{Z}_{s}-\mathcal{Z}_{s}|^{2}\mathrm{d}s\] \[+K(\epsilon_{2}+\epsilon_{3}+M\epsilon_{5}+M\epsilon_{6})\int_{t}^{ \tau}\Phi_{h}^{\prime}(|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2})|\bar{Z}_{s}-\mathcal{ Z}_{s}|^{2}\mathrm{d}s\] \[+K(\epsilon_{4}+\frac{1}{\epsilon_{6}})\int_{t}^{\tau}\Phi_{h}^{ \prime}(|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2})|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2}\ell (|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2})|\bar{Z}_{s}-\mathcal{Z}_{s}|^{2}\mathrm{d} s-\int_{t}^{\tau}\mathrm{d}\mathcal{M}_{s}, \tag{3.23}\] where \(K\) is a positive constant that may change from line to line. 
Choosing \(\epsilon_{2}=\epsilon_{3}=\frac{\lambda}{16K},\epsilon_{5}=\epsilon_{6}=\frac{ 1}{16K}M_{1}\), \(\kappa,\kappa_{1}\) and \(\kappa_{2}\) such that \(2\kappa\kappa_{1}=\epsilon_{1}+\frac{1}{\epsilon_{3}}\) and \(2\kappa\kappa_{2}=\epsilon_{4}+\frac{1}{\epsilon_{6}}\) and using the relation \(z\Phi_{h}^{\prime}(z)\leqslant\exp(\kappa||h||_{L^{1}([0,L_{0}])})\Phi_{h}(z)\) for all \(z\in[0,L_{0}]\), we get \[\Phi_{h}(|\bar{Y}_{t}-\mathcal{Y}_{t}|^{2})+\frac{3\lambda}{4} \lambda\int_{t}^{\tau}\Phi_{h}^{\prime}(|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2})| \bar{Z}_{s}-\mathcal{Z}_{s}|^{2}\mathrm{d}s\] \[\leq \Phi_{h}(|\bar{Y}_{\tau}-\mathcal{Y}_{\tau}|^{2})+K\int_{t}^{\tau} \Phi_{h}(|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2})(1+(T-s)^{-1+\gamma}+|\nabla^{2}_{xx}v (s,\bar{X}_{s})|)\mathrm{d}s-\int_{t}^{\tau}\mathrm{d}\mathcal{M}_{s}, \tag{3.24}\] where \(K\) is a constant depending on \(\epsilon_{1},\epsilon_{2},M_{1},\epsilon_{5},L_{0}\) and \(\|h\|_{L^{1}([0,L_{0}])}\). Taking the conditional expectation on both sides of (3.24), we have \[1_{\{t\leq\tau\}}\Phi_{h}(|\bar{Y}_{t}-\mathcal{Y}_{t}|^{2})+ \frac{3\lambda}{4}\bar{\mathbb{E}}^{\bar{\mathbb{Q}}}\Big{[}1_{\{t\leq\tau\}}\int_{t }^{\tau}\Phi_{h}^{\prime}(|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2})|\bar{Z}_{s}- \mathcal{Z}_{s}|^{2}\mathrm{d}s\Big{|}\bar{\mathfrak{F}}_{t}\Big{]}\] \[\leq \bar{\mathbb{E}}^{\bar{\mathbb{Q}}}\Big{[}1_{\{t\leq\tau\}}\Phi_{h} (|\bar{Y}_{\tau}-\mathcal{Y}_{\tau}|^{2})\Big{|}\bar{\mathfrak{F}}_{t}\Big{]}\] \[+K\bar{\mathbb{E}}^{\bar{\mathbb{Q}}}\Big{[}1_{\{t\leq\tau\}}\int_ {t}^{\tau}\Phi_{h}(|\bar{Y}_{s}-\mathcal{Y}_{s}|^{2})(1+(T-s)^{-1+\gamma}+| \nabla^{2}_{xx}v(s,\bar{X}_{s})|)\mathrm{d}s\Big{|}\bar{\mathfrak{F}}_{t}\Big{]}. \tag{3.25}\] The bound (3.25) is analogous to the one obtained in [13, Section 4.3.5, equation(4.14)]. We underline the fact that the second order derivatives of \(v\) appearing on the right hand side of (3.25) are not uniformly controlled on \([0,T]\times\mathbb{R}^{d}\) (see (3.12)) and this prevent one to directly applying the classical Gronwall inequality in (3.25). Thus, one needs to apply the Krylov's inequality to control efficiently the latter (see [13, Lemma 4.1]) and then using the non trivial discrete Gronwall's lemma developed in [13, Section 4.3.5] to derive : \[\mathrm{essusp}_{\omega\in\Omega}\Phi_{h}(|\bar{Y}_{t}-\mathcal{Y}_{t}|^{2})=0, \text{ for all }t\in[0,T].\] Since \(\Phi_{h}\) is nonnegative, and thanks to the continuity of \(\bar{Y}\) and \(\mathcal{Y}\) we deduce \(\bar{\mathbb{P}}\)-a.s. \(\bar{Y}_{t}=\mathcal{Y}_{t}\) for all \(t\in[0,T]\). Moreover, using the fact \(\Phi_{h}^{\prime}\) is bounded from below by a positive constant and from (3.25) we derive \[\bar{\mathbb{E}}^{\bar{\mathbb{Q}}}\Big{[}1_{\{t\leq\tau\}}\int_{t}^{\tau}| \bar{Z}_{s}-\mathcal{Z}_{s}|^{2}\mathrm{d}s\Big{|}\widetilde{\mathfrak{F}}_{t }\Big{]}=0,\] which implies that \(\bar{Z}_{t}=\mathcal{Z}_{t}\ \mathrm{d}t\otimes\bar{\mathbb{P}}\)-a.s. This completes the proof. (iv). **Strong solution to FBSDE** (3.1) We follow the idea developed in [45]. We restrict ourselves to \(d=1\) and set \(\alpha_{0}\geq 1/2\). Using properties of local time, we will prove that the solution to SDE (3.13) is pathwise unique. Let \(X_{t}\) and \(X_{t}^{\prime}\) denote two weak solutions to SDE (3.13) with the same underlying Brownian motion \(\{W_{t}\}_{t\geq 0}\) for all \(t\in[0,T]\) we can show as in [45, Theorem 4.1] that \(L^{0}_{t}(X-X^{\prime})=0\) (where \(L^{0}_{t}\) stands for the local time at level \(0\)). 
For the sake of completeness, we will briefly reproduce the proof here. Suppose there is \(t\in[0,T],\varepsilon>0\) and a set \(\mathcal{A}\in\mathfrak{F}\), with \(\mathbb{P}(\mathcal{A})>0\) such that \(L^{0}_{t}(X^{1}-X^{2})(\omega)>\varepsilon\) for \(\omega\in\mathcal{A}\), where \(L^{0}_{t}\) denotes the local time at time \(t\) and level \(0\). Since the map \(\alpha\mapsto L^{\alpha}_{t}(X^{1}-X^{2})\) is right continuous, then there exists \(\tilde{\delta}>0\) such that for all \(\alpha\in[0,\tilde{\delta}],\ L^{\alpha}_{t}(X^{1}-X^{2})\geq\varepsilon/2\), on \(\mathcal{A}\). Therefore, using the occupation-times formula we deduce that: \[\int_{0}^{t}\frac{\mathrm{d}\langle X^{1}-X^{2}\rangle_{s}}{|X_{s}^{1}-X_{s}^{ 2}|^{2\alpha_{0}}}=\int_{0}^{+\infty}\frac{1}{|\alpha|^{2\alpha_{0}}}L^{\alpha }_{t}(X^{1}-X^{2})\mathrm{d}\alpha\geq\frac{\varepsilon}{2}\int_{0}^{\tilde{ \delta}}\frac{1}{\alpha^{2\alpha_{0}}}\mathrm{d}\alpha=+\infty,\ \ \text{on}\ \mathcal{A}. \tag{3.26}\] On the other hand: \[\int_{0}^{t}\frac{\mathrm{d}\langle X^{1}-X^{2}\rangle_{s}}{|X_{s}^{1}-X_{s}^{ 2}|^{2\alpha_{0}}}=\int_{0}^{t}\frac{(\tilde{\sigma}(s,X_{s}^{1})-\tilde{ \sigma}(s,X_{s}^{2}))^{2}}{|X_{s}^{1}-X_{s}^{2}|^{2\alpha_{0}}}\mathrm{d}s,\ \ \text{on}\ \mathcal{A}, \tag{3.27}\] the latter is bounded on \(\mathcal{A}\) provided \[|\tilde{\sigma}(t,X_{t}^{1})-\tilde{\sigma}(t,X_{t}^{2})|\leq C|X_{t}^{1}-X_{ t}^{2}|^{\alpha_{0}},\] and this is true only for all \(t\in[0,T-\delta]\). Thus, \(\mathbb{P}(\mathcal{A})=0\), which is a contradiction. Therefore, the pathwise uniqueness holds for SDE (3.13) (see [45, Theorem 4.2]). Let \((X,Y,Z)\) and \((X^{\prime},Y^{\prime},Z^{\prime})\) be two weak solutions to FBSDE (3.1) with the same underlying stochastic basis \((\Omega,\mathfrak{F},(\mathfrak{F})_{s\leq t\leq T},W,\mathbb{P}).\) Since, the solution is unique in law (see the above subsection), it holds \(\mathbb{P}\circ(X,Y,Z,W)^{-1}=\mathbb{P}\circ(X^{\prime},Y^{\prime},Z^{\prime },W)^{-1}.\) Moreover, using the pathwise uniqueness of (3.13), we have that \(X_{t}=X_{t}^{\prime},\forall t\in[0,T]\)\(\mathbb{P}\)-a.s. The continuity of the function \(v\) and relation (3.3), gives \[\begin{cases}Y_{t}^{\prime}=v(t,X_{t}^{\prime})=v(t,X_{t})=Y_{t}\quad\forall t \in[0,T],\mathbb{P}\text{-a.s.}\\ Z_{t}^{\prime}=\nabla_{x}v(t,X_{t}^{\prime})=\nabla_{x}v(t,X_{t})=Z_{t}\quad \mathrm{d}\mathbb{P}\otimes\mathrm{d}t\text{-a.e.}\end{cases} \tag{3.28}\] This concludes the proof. **Remark 3.5**.: _In the remainder of the paper, one can relax the uniform Lipschitz condition of the drift \(b\) in \(y\) in Assumption 3.2 to a locally locally Lipschitz condition that is, a Lipschitz condition in the region \([0,T]\times\mathbb{R}^{d}\times\{|y|\leq\Gamma\}\times\mathbb{R}^{d}\), where the constant \(\Gamma>0\) can be taken as the uniform bound coming from (3.7)._ **Remark 3.6**.: _Note that, as in [12, Theorem 2.9], under Assumption 3.2 with \(\beta=1\) and \(\sigma=\mathbb{I}_{d\times d}\), there exists \(\Upsilon>0\) only depending on the coefficients appearing in the assumptions such that the gradient of \(v\) solution to PDE (3.2) satisfies:_ \[\forall(t,x)\in[0,T]\times\mathbb{R}^{d},\,|\nabla_{x}v(t,x)|\leq\Upsilon\] ## 4. A comonotonicity theorem for FBSDE (3.1) In this section we establish a type of comparison theorem for the control component of the solution to the coupled FBSDE (3.1). This is known as the comonotonicity theorem. 
It was first introduced in [10] in the Lipschitz framework and then extended to the quadratic framework in [46] for decoupled FBSDEs. The comonotonicity theorem finds interesting applications in the context of economic models of equilibrium pricing in the framework of forward-backward SDEs ([10],[46]). Below, we recall the meaning of two functions being comonotonic.

**Definition 4.1**.: _Two functions \(h_{1}\) and \(h_{2}\) are said to be comonotonic if they have the same monotonicity, that is, if \(h_{1}\) is increasing (resp. decreasing), so is \(h_{2}.\) Moreover, \(h_{1}\) and \(h_{2}\) are said to be strictly comonotonic if \(h_{1}\) and \(h_{2}\) are strictly monotonic._

In this section, we assume that \(d=1\) and we recall from Theorem 3.4 that the equation (3.1) admits a unique strong solution \((X^{x},Y^{x},Z^{x})\) such that the function \(v\in W^{1,2,2}_{\rm loc}([0,T[\times\mathbb{R},\mathbb{R})\) solves the quasi-linear PDE (3.2) and the following relations hold
\[Y^{t,x}_{s}=v(s,X^{t,x}_{s}),\quad Z^{t,x}_{s}=v^{\prime}(s,X^{t,x}_{s}). \tag{4.1}\]
Moreover, the FBSDE (3.1) can be written as
\[\begin{cases}X^{t,x}_{s}=x+\int_{t}^{s}\tilde{b}(r,X^{t,x}_{r})\mathrm{d}r+\int_{t}^{s}\tilde{\sigma}(r,X^{t,x}_{r})\mathrm{d}W_{r},\\ Y^{t,x}_{s}=\phi(X^{t,x}_{T})+\int_{s}^{T}f(r,X^{t,x}_{r},Y^{t,x}_{r},Z^{t,x}_{r})\mathrm{d}r-\int_{s}^{T}Z^{t,x}_{r}\mathrm{d}M^{X}_{r},\end{cases} \tag{4.2}\]
where \(\tilde{b}(\cdot,\cdot)=b(\cdot,\cdot,v(\cdot,\cdot),v^{\prime}(\cdot,\cdot))\) and \(\tilde{\sigma}(\cdot,\cdot)=\sigma(\cdot,\cdot,v(\cdot,\cdot)).\) We are now in a position to give the main result of this section. It can be viewed as an extension of the results in [10, 46] to the case of coupled FBSDEs, under weaker assumptions on the coefficients and with a more general type of quadratic driver. We will consider the following additional assumption:

* (AX') The function \((t,x)\mapsto b(t,x,y,z)\) is continuous on \([0,T]\times\mathbb{R}^{d}\) for all \(y\) and \(z\).

**Theorem 4.2**.: _Let the assumptions of Theorem 3.4 hold for \(d=1\) and \(\alpha_{0}\geq 1/2\). Assume further that (AX') is valid. For all \((t,x)\in[0,T)\times\mathbb{R},\) let \((X^{t,x,i},Y^{t,x,i},Z^{t,x,i})\) be the solution to FBSDE (3.1) with drift \(b_{i}\), generator \(f_{i},\) terminal value \(\phi_{i}\) and the same dispersion coefficient \(\sigma\), \(i\in\{1,2\}.\) Suppose that \(x\mapsto\phi_{i}(x)\) and \(x\mapsto f_{i}(\cdot,x,\cdot,\cdot)\) are comonotonic for all \(i\in\{1,2\}\). Then for any \((t,x)\in[0,T)\times\mathbb{R}\) and \(s\in[t,T)\)_
\[Z^{t,x,1}_{s}\cdot Z^{t,x,2}_{s}\geq 0,\quad\text{$\mathbb{P}$-a.s.} \tag{4.3}\]

The proof of the theorem is based on a careful application of the comparison theorem for both SDEs and quadratic BSDEs, since it is known that such results amount to a kind of monotonicity of the solutions \(X\) and \(Y\) respectively. One of the difficulties here comes from the non-Lipschitz continuity of the drift \(b\): a comparison theorem in this framework is still an open question in the literature. However, thanks to the additional assumption (AX') and the continuity of the process \(X\) in \(T\) (see Theorem 3.4), we can invoke the refined comparison theorem for SDEs stated in [27, Proposition 5.2.18] to claim that, for fixed \(t\) and \(T\), the mapping \(x\mapsto X^{t,x}_{T}\) is increasing.
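Before turning to the proof, let us record an elementary illustration; the data below are our own toy choices and are not taken from [10, 46]. Take \(\sigma(t,x,y)=1+\frac{1}{2}\sin^{2}(y)\), \(b_{1}=b_{2}=\sin(y)+\arctan(z)\), \(\phi_{1}=\tanh\), \(\phi_{2}=\arctan\), \(f_{1}(t,x,y,z)=\tanh(x)\) and \(f_{2}(t,x,y,z)=\arctan(x)+\frac{1}{2}z^{2}\). These coefficients satisfy Assumption 3.2 (with \(\ell\) constant, and with the Holder condition in \(x\) holding for any \(\alpha_{0}\) since \(\sigma\) does not depend on \(x\)) as well as (AX'); moreover \(\phi_{i}\) and \(x\mapsto f_{i}(\cdot,x,\cdot,\cdot)\) are increasing for \(i\in\{1,2\}\). Theorem 4.2 then gives \(Z^{t,x,1}_{s}\cdot Z^{t,x,2}_{s}\geq 0\), \(\mathbb{P}\)-a.s.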
Proof.: Since \(\phi_{1}\) and \(\phi_{2}\) are comonotonic, for fixed \(t\) and \(T\) the functions \(x\mapsto\phi_{1}(X^{t,x,1}_{T})\) and \(x\mapsto\phi_{2}(X^{t,x,2}_{T})\) are almost surely comonotonic. The same reasoning shows that \(x\mapsto f_{1}(\cdot,X^{t,x,1},\cdot,\cdot)\) and \(x\mapsto f_{2}(\cdot,X^{t,x,2},\cdot,\cdot)\) are almost surely comonotonic. Under the assumptions of the theorem, a comparison theorem for quadratic BSDEs is valid (see Theorem B.2). Combining this with the monotonicity (and comonotonicity) of \(x\mapsto\phi_{i}(X^{t,x,i}_{T})\) and \(x\mapsto f_{i}(\cdot,X^{t,x,i}_{T},\cdot,\cdot)\), we obtain that \(x\mapsto Y^{t,x,i}\) is almost surely monotone. From the comonotonicity of \(x\mapsto\phi_{1}(X^{t,x,1}_{T}),\) \(x\mapsto\phi_{2}(X^{t,x,2}_{T}),\) \(x\mapsto f_{1}(\cdot,X^{t,x,1},\cdot,\cdot),\) and \(x\mapsto f_{2}(\cdot,X^{t,x,2},\cdot,\cdot)\), we invoke once more Theorem B.2 to conclude that the mappings \(x\mapsto Y^{t,x,1}\) and \(x\mapsto Y^{t,x,2}\) are also comonotonic a.s. Therefore from (4.1), we deduce that
\[Z^{t,x,1}_{s}\cdot Z^{t,x,2}_{s}=(\partial_{x}v_{1})(s,X^{t,x,1}_{s})\cdot(\partial_{x}v_{2})(s,X^{t,x,2}_{s})\geq 0,\ \mathbb{P}\text{-a.s.}\]
This concludes the proof.

The next result is a refinement of the preceding one. It is derived without the additional assumption (AX') (continuity of the drift \(b\) in \(t\) and \(x\)) and assuming further that the diffusion coefficient \(\sigma\equiv 1\). In this context, the system of interest is given by
\[\begin{cases}X_{s}^{t,x}=x+\int_{t}^{s}\tilde{b}(r,X_{r}^{t,x})\mathrm{d}r+W_{s}-W_{t},\\ Y_{s}^{t,x}=\phi(X_{T}^{t,x})+\int_{s}^{T}f(r,X_{r}^{t,x},Y_{r}^{t,x},Z_{r}^{t,x})\mathrm{d}r-\int_{s}^{T}Z_{r}^{t,x}\mathrm{d}M_{r}^{X},\end{cases} \tag{4.4}\]
and the drift coefficient \(\tilde{b}\) is now allowed to be discontinuous both in its temporal and spatial variables.

**Theorem 4.3**.: _Let Assumption 3.2 be in force for \(d=1\). For all \((t,x)\in[0,T)\times\mathbb{R},\) denote by \((X^{t,x,i},Y^{t,x,i},Z^{t,x,i})\) the solution to FBSDE (4.4) associated with the data \((b_{i},f_{i},\phi_{i})\), \(i\in\{1,2\}.\) Suppose that \(x\mapsto\phi_{i}(x)\) and \(x\mapsto f_{i}(\cdot,x,\cdot,\cdot)\) are comonotonic for all \(i\in\{1,2\},\) and furthermore that \(\phi_{1},\phi_{2}\) are also comonotonic. Then for any \((t,x)\in[0,T)\times\mathbb{R}\) and \(s\in[t,T),\)_
\[Z_{s}^{t,x,1}\cdot Z_{s}^{t,x,2}\geq 0,\quad\mathbb{P}\text{-a.s.} \tag{4.5}\]

Proof.: Using the comparison theorem for SDEs provided in [48, Theorem 2], we can apply the machinery developed in the proof of Theorem 4.2. This ends the proof.

The next result provides conditions under which one can determine the sign of the control process \(Z\) of the solution to a scalar coupled FBSDE with quadratic growth. The main ingredient in the proof of this result is the differentiability of the forward process with respect to its starting point \(x\). In our framework, this is provided in Section 5, Proposition 5.23. The complete proof of this corollary can be carried out as in [48, Corollary 3.5]; we will not reproduce it here.

**Corollary 4.4**.: _Let the assumptions of Theorem 4.3 be in force. Let \((X^{x},Y^{x},Z^{x})\) be the solution to FBSDE (4.4) with initial value \((t,x)\in[0,T)\times\mathbb{R}\). If \(x\mapsto\phi(x)\) and \(x\mapsto f(\cdot,x,\cdot,\cdot)\) are increasing (resp. decreasing) functions, then \(Z_{t}\geq 0\) \(\mathbb{P}\)-a.s. (resp. \(Z_{t}\leq 0\) \(\mathbb{P}\)-a.s.
) for all \(t\in[0,T)\)._ As pointed out in [37, Remark 4.3], a strict comonotonicity condition on the data in Corollary 4.4 is not sufficient to conclude that \(Z_{t}>0\)\(\mathbb{P}\)-a.s., hence \(Y_{t}\) will have an absolute continuous density with respect to the Lebesgue measure from Bouleau-Hirsch's criterion (Theorem 2.2). Finding the right conditions to ensure the latter result will constitute the main objective in the following section. ## 5. Density analysis for quadratic FBSDEs with rough coefficients In this section we analyse the density of the solution \(\mathbf{X}_{r}^{x}=(X_{r}^{x},Y_{r}^{x},Z_{r}^{x})\) to the equation (3.1). The notation \(\mathcal{L}(X_{T}^{x})\) will stand for the law of the process \(X_{T}^{x}\). Sometimes, we will omit the superscript. ### The Holder continuous case In this subsection, we carry out the analysis of densities of the FBSDE (3.1), by assuming further that the coefficients are Holder continuous in their time and space variables respectively and the dispersion \(\sigma\) will only depend on the forward process \(X\)(see equation (5.1) below). We are going to provide sufficient conditions under which the forward component \(X\) and the first component solution of the Backward one \(Y\) admit respectively an absolutely continuous law with respect to the Lebesgue measure, despite the roughness of the drift \(b\), the driver \(f\) and of the terminal value function \(\phi\). An important ingredient in this analysis relies on the Ito-Tanaka trick, as developed in [20] and further extended in [21] and [26]. This method (originated from [50]) consists in constructing a one to one transformation of a phase space that allows one to pass from an SDE with bad drift coefficient to a diffusion process with smooth coefficients. More precisely, we will consider the following system \[\begin{cases}X_{s}^{t,x}=x+\int_{t}^{s}b(r,X_{r}^{t,x},Y_{r}^{t,x},\sigma^{-1}( r,X_{r}^{t,x})Z_{r}^{t,x})\mathrm{d}r+\int_{t}^{s}\sigma(r,X_{r}^{t,x})\mathrm{d}W_{r},\\ Y_{s}^{t,x}=\phi(X_{T}^{t,x})+\int_{s}^{T}f(r,X_{r}^{t,x},Y_{r}^{t,x},\sigma^{- 1}(r,X_{r}^{t,x})Z_{r}^{t,x})\mathrm{d}r-\int_{s}^{T}Z_{r}^{t,x}\mathrm{d}W_{r },\end{cases} \tag{5.1}\] which is an equivalent version of (3.1) (via an obvious change of variables) and we will assume further that the coefficients \(b,\sigma,\Phi\) and \(f\) satisfy the the following set of assumptions: **Assumption 5.1**.: 1. _Assumption_ _3.2_ _is valid with_ \(\sigma(t,x,y)\equiv\sigma(t,x)\)_. The coefficients_ \(b,\sigma,f\) _are Holder continuous in_ \((t,x)\in[0,T]\times\mathbb{R}^{d}\) _uniformly in_ \(y\) _and_ \(z\)_. In particular, there exists a constant_ \(\theta\in(0,1)\) _such that_ \(b(t,\cdot,y,z)\in C^{\theta}(\mathbb{R}^{d};\mathbb{R})\)_,_ \((t,y,z)\in[0,T]\times\mathbb{R}\times\mathbb{R}^{d}\)_._ 2. 
_In addition,_ \(\sigma(t,\cdot)\in C_{b}^{3}(\mathbb{R}^{d},\mathbb{R})\)_,_ \(t\in[0,T]\)_,_ \(\sup_{t\in[0,T]}\|\sigma(t,\cdot)\|_{C_{b}^{3}(\mathbb{R},\mathbb{R})}<\infty\) _and for all_ \((t,x)\in[0,T]\times\mathbb{R}\)_,_ \(\|\sigma^{-2}\|_{0}:=\sup_{x\in\mathbb{R}^{d},t\in[0,T]}|\sigma^{-2}(t,x)|<\infty\)_._ In the above context, the FBSDE (5.1) has a weak solution which is unique in probability law (see Theorem 3.4) and the relation (3.3) writes \[Y_{s}^{t,x}=v(s,X_{s}^{t,x});\qquad Z_{s}^{t,x}=\sigma(t,X_{s}^{t,x})\nabla_{x }v(s,X_{s}^{t,x}), \tag{5.2}\] where we will maintain \(v\) as the unique solution to the PDE(3.2)1 Footnote 1: this holds with a slightly different second order operator \(\mathcal{L}\), \(f(t,x,y,\sigma^{-1}z)\) instead of \(f(t,x,y,z)\) and \(b(t,x,y,\sigma^{-1}z)\) in lieu of \(b(t,x,y,z)\). It is worth noting that, under the aforementioned assumptions, the weak solution \(v\) to the PDE (3.2) turns out to be a classical one i.e. \(v\in C^{1,2}([0,T]\times\mathbb{R}^{d};\mathbb{R})\) (see Section 8 in [13]). Moreover, there is a suitable constant \(\gamma>0\) such that the following pointwise estimate of \(\nabla_{xx}^{2}v(t,x)\) holds \[\forall(t,x)\in[0,T[\times\mathbb{R}^{d},\,(T-t)^{1-\gamma}|\nabla_{x,x}^{2} v(t,x)|<\infty. \tag{5.3}\] In addition, the FBSDE (5.1) can be written as follows \[\begin{cases}X_{s}^{t,x}=x+\int_{t}^{s}\tilde{b}(r,X_{r}^{t,x})\mathrm{d}r+ \int_{t}^{s}\sigma(r,X_{r}^{t,x})\mathrm{d}W_{r},\\ Y_{s}^{t,x}=\phi(X_{T}^{t,x})+\int_{s}^{T}f(r,X_{r}^{t,x},Y_{r}^{t,x},\sigma^{- 1}(r,X_{r}^{t,x})Z_{r}^{t,x})\mathrm{d}r-\int_{s}^{T}Z_{r}^{t,x}\mathrm{d}W_{r },\end{cases} \tag{5.4}\] where, \(\tilde{b}(t,x)=b(t,x,v(t,x),\sigma^{-1}(t,x)\nabla_{x}v(t,x))\) and \(\nabla_{x}v(t,x)\) denotes the gradient of the classical solution \(v\) to the PDE (3.2). The following lemma, provides a smoothness result gains by the transformed drift \(\tilde{b}\) under the additional assumption 5.1. **Lemma 5.2**.: _Under Assumption 5.1, the transformed drift \(\tilde{b}\) is \(\theta^{\prime}\)-Holder continuous for all \(t<T\)._ Proof.: Let \((t,x,x^{\prime},y,z)\in[0,T)\times\mathbb{R}^{d}\times\mathbb{R}^{d}\times \mathbb{R}\times\mathbb{R}^{d}\). Then from the assumptions of the Lemma we obtain that \[|\tilde{b}(t,x)-\tilde{b}(t,x^{\prime})|\] \[=|b(t,x,v(t,x),\sigma^{-1}(t,x)v^{\prime}(t,x))-b(t,x^{\prime},v(t,x^{\prime}),\sigma^{-1}(t,x^{\prime})\nabla_{x}v(t,x^{\prime}))|\] \[\leq\Lambda_{b}|x-x^{\prime}|^{\theta}+K|v(t,x)-v(t,x^{\prime})|+ K|\sigma^{-1}(t,x)\nabla_{x}v(t,x)-\sigma^{-1}(t,x^{\prime})\nabla_{x}v(t,x^{ \prime})|\] \[\leq\Lambda_{b}|x-x^{\prime}|^{\theta}+K|x-x^{\prime}|^{\beta_{0}} +K|\sigma^{-1}(t,x)||x-x^{\prime}|^{\beta^{\prime}}+K|\nabla_{x}v(t,x^{ \prime})||\sigma^{-1}(t,x)-\sigma^{-1}(t,x^{\prime})|\] \[\leq 4\max(\Lambda_{b},K)|x-x^{\prime}|^{\theta^{\prime}},\] where we used (3.9) and (3.11) to derive the third inequality above and \(\theta^{\prime}=\min(\theta,\beta_{0},\beta^{\prime})\). #### 5.1.1. Strong Solvability and Regularity of FBSDE (5.1) From Assumption 5.1, we notice that the transformed drift \(\tilde{b}\) is Holder continuous and bounded and the diffusion coefficient \(\sigma\) is Lipschitz continuous and bounded. Then from [50, Theorem 4] the forward equation in (5.4) has a unique strong solution. Hence, the solution of the FBSDE built in Theorem 3.4 is also strong for all \(d\geq 1\). 
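As an informal illustration of how the decoupling relation (5.2) is used once a strong solution is available, the sketch below simulates the forward component by an Euler scheme and reads off \(Y\) and \(Z\) through a decoupling field. The drift, the diffusion and the field \(v\) are arbitrary toy functions chosen by us (the true \(v\) solves the PDE (3.2) and is not explicit in general), so the output only illustrates the mechanics and does not solve (5.4).

```python
import numpy as np

rng = np.random.default_rng(1)

T, n = 1.0, 500
dt = T / n

def b_tilde(x):                 # bounded, 1/2-Holder toy drift (not the paper's b)
    return np.minimum(np.sqrt(np.abs(x)), 1.0)

def sigma(x):                   # bounded, uniformly elliptic toy diffusion
    return 1.0 + 0.5 * np.cos(x) ** 2

def v(t, x):                    # hypothetical decoupling field, stand-in only
    return np.tanh(x) * np.exp(-(T - t))

def dv_dx(t, x):
    return (1.0 - np.tanh(x) ** 2) * np.exp(-(T - t))

X = 0.3
path = []
for k in range(n):
    t = k * dt
    dW = np.sqrt(dt) * rng.standard_normal()
    X = X + b_tilde(X) * dt + sigma(X) * dW
    # along the path, Y_t = v(t, X_t) and Z_t = sigma(X_t) * dv/dx(t, X_t) as in (5.2)
    path.append((t + dt, X, v(t + dt, X), sigma(X) * dv_dx(t + dt, X)))

print(path[-1])                 # (t, X_T, Y_T, Z_T) for this toy configuration
```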
We further notice that, the transformed drift \(\tilde{b}\) and the diffusion \(\sigma\) fulfill the requirements of [20, Remark 10]. Then, from [21, Theorem 7] (see also [20, Theorem 5]), the forward equation in (5.4) admits a stochastic flow of diffeomorphisms of class \(C^{1,\theta^{\prime\prime}}\) where \(\theta^{\prime\prime}\in(0,\theta^{\prime})\). So, the variational differentiability in the sense of Malliavin of the solution process \(X^{t,x}\) follows as well. Indeed, following the same strategy as in [20] and [21], we know that \(X^{t,x}\) is linked to the following process \(\tilde{X}^{t,\zeta}\) solution of the SDE \[\tilde{X}^{t,\zeta}_{s}=\zeta+\int_{t}^{s}\tilde{b}_{1}(r,\tilde{X}^{t,\zeta}_{r })\mathrm{d}r+\int_{t}^{s}\tilde{\sigma}_{1}(r,\tilde{X}^{t,\zeta}_{r})\mathrm{ d}W_{r}, \tag{5.5}\] via the transformation \[X^{t,x}_{s}=\Psi^{-1}(s,\tilde{X}^{t,\zeta}_{s}), \tag{5.6}\] where \(\Psi^{-1}\) denotes the inverse of the non-singular diffeomorphisms \(\Psi\) of class \(C^{2}\) defined by (see [20, Lemma 6]) \[\Psi(t,x)=x+U(t,x), \tag{5.7}\] where \(U\) is the solution to the following Backward Kolmogorov equation \[\begin{cases}\partial_{t}U(t,x)+\frac{1}{2}\Delta U(t,x)+(\tilde{b}_{1}\cdot DU )(t,x)-\mu U(t,x)=-\tilde{b}_{1}(t,x),\\ U(T,x)=0,\quad(t,x)\in[0,T)\times\mathbb{R}^{d}.\end{cases} \tag{5.8}\] with \(\mu>0\) is fixed. The drift coefficient \(\tilde{b}_{1}(t,\zeta)=-\mu U(t,\Psi^{-1}(t,\zeta))\) belongs to \(L^{\infty}(0,T;C^{1+\theta}_{b}(\mathbb{R};\mathbb{R}))\), the diffusion \(\tilde{\sigma}_{1}(t,\zeta)=D\Psi(t,\Psi^{-1}(t,\zeta))\sigma(t,\Psi^{-1}(t, \zeta))\) obviously inherits some of the smoothness properties of \(\sigma\) and \(U\). Thus \(\tilde{b}_{1}\) and \(\tilde{\sigma}_{1}\) are both Lipschitz continuous, we can then invoke the result from [41] to claim that the SDE (5.5) is Malliavin differentiable and so is also \(X^{t,x}\) via the above representation and a straightforward application of the chain rule for Malliavin calculus. Thanks to the relation (5.2) and using the chain rules (classical and Malliavin calculus), we deduce that the couple solution \((Y,Z)\) is differentiable in the classical and Malliavin sense. Moreover, the following representations hold for all \(0\leq s\leq t<T\) \[D_{s}Y_{t}=\nabla_{x}v(t,X_{t})D_{s}X_{t},\quad D_{s}Z_{t}=\Big{(}\sigma(t,X_{ t})\nabla_{x,x}^{2}v(t,X_{t})+\sigma_{x}(t,X_{t})\nabla_{x}v(t,X_{t})\Big{)}D_{ s}X_{t}. \tag{5.9}\] **Remark 5.3**.: _Due to the lack of regularity of the drift \(b\), the equation satisfied by the Malliavin derivative or the classical derivative with respect to the initial value for the triple solution \((X,Y,Z)\) still missing in the literature. However, in the one dimensional case, we can obtain the following for all \(0\leq s\leq t<T\) we obtain that_ \[D_{s}X_{t} =D\Psi^{-1}(t,\tilde{X}_{t})\tilde{\sigma}_{1}(s,\tilde{X}_{s}) \exp\left(\int_{s}^{t}\Big{\{}\tilde{b}_{1}(r,\tilde{X}_{r})-\frac{1}{2}( \tilde{\sigma}^{\prime}_{1}(r,\tilde{X}_{r}))^{2}\Big{\}}\mathrm{d}r+\int_{s}^ {t}\tilde{\sigma}^{\prime}_{1}(r,\tilde{X}_{r})\mathrm{d}W_{r}\right)\] \[=D\Psi^{-1}(t,\tilde{X}_{t})\tilde{\sigma}_{1}(s,\tilde{X}_{s}) \tilde{\xi}_{t}(\tilde{\xi}_{s})^{-1} \tag{5.10}\] #### 5.1.2. Existence of Density of the forward \(X\) **Theorem 5.4**.: _Let Assumption 5.1 be in force. 
Then the law of \(X_{t}\), the solution to FBSDE (5.1), is absolutely continuous with respect to the Lebesgue measure._ Proof.: We first note that the diffusion \(\tilde{a}:=\tilde{\sigma}_{1}\tilde{\sigma}_{1}^{\mathbf{T}}\) is non-degenerate, i.e., there is a constant \(C>0\) such that \(\forall\xi\in\mathbb{R}^{d}\), \(\langle\xi,\tilde{a}(t,\zeta)\xi\rangle\geq C|\xi|^{2}\), where \(\tilde{\sigma}_{1}(t,\zeta)=D\Psi(t,\Psi^{-1}(t,\zeta))\sigma(t,\Psi^{-1}(t,\zeta))\). Indeed, by choosing \(\mu\) as in [20, Lemma 4] such that \(|DU|\leq 1/2\), we obtain from Assumption 3.2 and the triangle inequality that \[\langle\xi,\tilde{a}(t,\zeta)\xi\rangle =\langle\xi,\big{(}D\Psi(t,\Psi^{-1}(t,\zeta))\big{)}^{2}\sigma\sigma^{\mathbf{T}}(t,\Psi^{-1}(t,\zeta))\xi\rangle\] \[\geq\lambda|\big{(}D\Psi(t,\Psi^{-1}(t,\zeta))\big{)}^{2}||\xi|^{2}\] \[=\lambda|I_{d}+DU(t,\Psi^{-1}(t,\zeta))|^{2}|\xi|^{2}\] \[\geq\lambda(1-|DU(t,\Psi^{-1}(t,\zeta))|)^{2}|\xi|^{2}\geq\lambda/2|\xi|^{2}.\] Therefore, from [41], the solution process \(\tilde{X}^{t,\zeta}\) to equation (5.5) has an absolutely continuous law with respect to the Lebesgue measure, whose density will be denoted by \(\rho_{\tilde{X}_{t}(\zeta)}\). Hence, for every bounded and continuous function \(\varphi:\mathbb{R}^{d}\to\mathbb{R}\) the following is well defined \[\mathbb{E}\varphi(X_{t}^{x}) :=\mathbb{E}\varphi\big{(}\Psi^{-1}(t,\tilde{X}_{t}^{\zeta})\big{)} =\int_{\mathbb{R}^{d}}\varphi\big{(}\Psi^{-1}(t,z)\big{)}\cdot\rho_{\tilde{X}_{t}(\zeta)}(z)\mathrm{d}z\] \[=\int_{\mathbb{R}^{d}}\varphi(z)\big{(}\det J\Psi(t,z)\big{)}\cdot\rho_{\tilde{X}_{t}(\zeta)}(\Psi(t,z))\mathrm{d}z,\] where \(J\) stands for the Jacobian with respect to the spatial variable \(x\). Hence, the process \(X^{x}\) has a density \(\rho_{X_{\cdot}(x)}\) with respect to the Lebesgue measure given by \[\rho_{X_{t}(x)}=\big{(}\det J\Psi\big{)}\cdot\rho_{\tilde{X}_{t}(\zeta)}(\Psi).\] #### 5.1.3. Density of the backward component \(Y\) In this subsection, we provide some conditions under which the law of the backward component \(Y\) is absolutely continuous with respect to the Lebesgue measure. To do this, we consider the following additional set of assumptions. **Assumption 5.5**.: 1. _Assumption 5.1 is valid in one dimension (\(d=1\)) and_ \(\beta=1\)_, i.e., the terminal value_ \(\phi\) _is Lipschitz continuous. There exist_ \(\Lambda,\lambda>0\) _such that for all_ \((t,x)\in[0,T]\times\mathbb{R}\)_,_ \(\lambda\leq\sigma(t,x)\leq\Lambda\)_._ 2. \((x,y,z)\mapsto f(t,x,y,z)\) _is continuously differentiable for every_ \(t\in[0,T]\) _and for all_ \((t,x,y,z)\in[0,T]\times\mathbb{R}^{3}\) _and_ \(\alpha\in(0,1)\)__ \[|f^{\prime}_{x}(t,x,y,z)| \leq\Lambda(1+|y|+\ell(|y|)|z|^{\alpha})\] \[|f^{\prime}_{y}(t,x,y,z)| \leq\Lambda(1+|z|^{\alpha})\] \[|f^{\prime}_{z}(t,x,y,z)| \leq\Lambda(1+\ell(y)|z|)\] 3. \(\phi\) _is differentiable_ \(\mathcal{L}(X_{T}^{x})\)_-a.e._ The method consists in proving that the Malliavin covariance matrix of \(Y_{t}\) is invertible, i.e. for all \(0\leq s\leq t<T\) \[\Gamma_{Y_{t}}=\langle D_{s}Y_{t},D_{s}Y_{t}\rangle_{L^{2}([0,t])}>0,\quad\mathbb{P}\text{-a.s.}\] For this, one needs first to derive the linear equation (BSDE) satisfied by a version \((D_{s}Y_{t},D_{s}Z_{t})\) of the Malliavin derivatives of the solution to equation (5.1).
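Before stating this linear BSDE (Proposition 5.6 below), we note that the invertibility criterion \(\Gamma_{Y_{t}}>0\) can be probed by simulation through the representations (5.2) and (5.9). The following is a minimal Monte Carlo sketch; the coefficients and the decoupling field used here are toy assumptions of the sketch (not the coefficients of this section), and it only illustrates how the criterion would be checked numerically.

```python
import numpy as np

# Monte Carlo sketch of the criterion Gamma_{Y_t} > 0, using
# D_s Y_t = v_x(t, X_t) D_s X_t and D_s X_t ~ sigma(s, X_s) J_t / J_s,
# where J is the first-variation process of the forward SDE.
# All coefficients and the field v are TOY choices made only so the script runs.

def v_x(t, x):
    return 1.0 / np.cosh(x) ** 2          # spatial derivative of a toy field v = tanh

def b_tilde(t, x):
    return np.sin(x)                       # bounded toy transformed drift

def db_tilde(t, x):
    return np.cos(x)

def sigma(t, x):
    return 1.2 + 0.3 * np.cos(x)           # bounded and bounded away from zero

def dsigma(t, x):
    return -0.3 * np.sin(x)

T, n, m, x0 = 1.0, 400, 5_000, 0.0
dt = T / n
rng = np.random.default_rng(1)

X, J = np.full(m, x0), np.ones(m)          # J_0 = dX_0 / dx = 1
sig_over_J = []                             # stores sigma(s, X_s) / J_s on the time grid
for k in range(n):
    t = k * dt
    sig_over_J.append(sigma(t, X) / J)
    dW = np.sqrt(dt) * rng.standard_normal(m)
    X, J = (X + b_tilde(t, X) * dt + sigma(t, X) * dW,
            J + db_tilde(t, X) * J * dt + dsigma(t, X) * J * dW)

DsXT = np.array(sig_over_J) * J             # D_s X_T estimated on the grid of s
Gamma_Y = np.sum((v_x(T, X) * DsXT) ** 2, axis=0) * dt
print("min / mean of the estimated Malliavin covariance of Y_T:",
      Gamma_Y.min(), Gamma_Y.mean())
```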
**Proposition 5.6**.: _Under Assumption 5.5, a version of \((D_{s}Y_{t},D_{s}Z_{t})\) satisfies_ \[D_{s}Y_{t}=0,\quad D_{s}Z_{t}=0,\quad t<s\leq T,\] \[D_{s}Y_{t} =\phi^{\prime}(X_{T})D_{s}X_{T}+\int_{t}^{T}f^{\prime}_{x}(r,\mathbf{X}_{r})D_{s}X_{r}+f^{\prime}_{y}(r,\mathbf{X}_{r})D_{s}Y_{r}\mathrm{d}r\] \[\quad+\int_{t}^{T}\Big{(}Z_{r}\sigma_{x}^{-1}(r,X_{r})D_{s}X_{r}+\sigma^{-1}(r,X_{r})D_{s}Z_{r}\Big{)}f^{\prime}_{z}(r,\mathbf{X}_{r})\mathrm{d}r-\int_{t}^{T}D_{s}Z_{r}\mathrm{d}W_{r} \tag{5.11}\] _where \(\mathbf{X}_{\cdot}=(X_{\cdot},Y_{\cdot},\sigma^{-1}(\cdot,X_{\cdot})Z_{\cdot})\) and \(D_{s}X_{t}^{x}\) is given by (5.10). Moreover, \((D_{t}Y_{t})_{0\leq t\leq T}\) is a continuous version of \((Z_{t})_{0\leq t\leq T}\)._ Proof.: For the representation, we appeal to the result from [23, Theorem 5.12]2, since \(\tilde{b}\) is uniformly bounded and Holder continuous for all \(t\in[0,T)\). The proof is completed. Footnote 2: Note that the result is still valid with non constant \(\sigma\) satisfying the conditions (A1) and (A2). In the sequel, we will adopt the following notations as in [2] (see also [37]): For any \(\mathcal{A}\in\mathcal{B}(\mathbb{R})\), and \(t\in[0,T]\) such that \(\mathbb{P}(X_{T}\in\mathcal{A}|\mathfrak{F}_{t})>0:\) \[\begin{cases}\overline{\phi}:=\sup_{x\in\mathbb{R}}\phi^{\prime}(x),&\overline{\phi}^{\mathcal{A}}:=\sup_{x\in\mathcal{A}}\phi^{\prime}(x),\\ \underline{\phi}:=\inf_{x\in\mathbb{R}}\phi^{\prime}(x),&\underline{\phi}^{\mathcal{A}}:=\inf_{x\in\mathcal{A}}\phi^{\prime}(x),\end{cases} \tag{5.12}\] \[\overline{f}(t):=\sup_{s\in[t,T],(x,y,z)\in\mathbb{R}^{3}}f^{\prime}_{x}(s,x,y,z),\qquad\underline{f}(t):=\inf_{s\in[t,T],(x,y,z)\in\mathbb{R}^{3}}f^{\prime}_{x}(s,x,y,z). \tag{5.13}\] Below is stated the main result of this subsection. **Theorem 5.7**.: _Suppose Assumption 5.5 holds. Suppose in addition there is \(\mathcal{A}\in\mathcal{B}(\mathbb{R})\) such that \(\mathbb{P}(X_{T}\in\mathcal{A}|\mathfrak{F}_{t})>0\) and one of the following assumptions holds_ * **(A+)** \(\phi^{\prime}\geq 0\)_,_ \(\phi^{\prime}_{|\mathcal{A}}>0,\mathcal{L}(X_{T})\)_-a.e. and_ \(\underline{f}\geq 0\)_,_ \(\sigma^{-1}_{x}f^{\prime}_{z}\geq 0\)_,_ * **(A-)** \(\phi^{\prime}\leq 0\)_,_ \(\phi^{\prime}_{|\mathcal{A}}<0,\mathcal{L}(X_{T})\)_-a.e. and_ \(\overline{f}\leq 0\)_,_ \(\sigma^{-1}_{x}f^{\prime}_{z}\leq 0\)_._ _Then, \(Y_{t}\) possesses an absolutely continuous law with respect to the Lebesgue measure on \(\mathbb{R}\)._ Proof.: From Assumption 5.5, we deduce that \[|\sigma^{-1}(s,X_{s})f^{\prime}_{z}(s,\mathbf{X}_{s})|\leq C\big{(}1+\ell(Y_{s})|Z_{s}|\big{)}\in\mathcal{H}_{\mathrm{BMO}}. \tag{5.14}\] Let \(\tilde{\mathbb{P}}\) be the probability measure defined by \[\frac{\mathrm{d}\tilde{\mathbb{P}}}{\mathrm{d}\mathbb{P}}\Big{|}_{\mathfrak{F}_{t}}=\mathcal{E}\Big{(}\sigma^{-1}(t,X_{t})f^{\prime}_{z}(t,\mathbf{X}_{t})*W\Big{)}_{t}=\mathcal{E}_{t}. \tag{5.15}\] It follows from (5.14) that \(\mathcal{E}_{t}\) is uniformly integrable.
Therefore, taking the conditioning expectation with respect to \(\tilde{\mathbb{P}}\) on both sides of (5.11) gives for all \(0\leq s\leq t<T\) \[D_{s}Y_{t}= \mathbb{E}^{\tilde{\mathbb{P}}}\Big{(}\phi^{\prime}(X_{T})D_{s}X _{T}\] \[+\int_{t}^{T}\Big{[}\Big{(}f^{\prime}_{x}(r,\mathbf{X}_{r})+Z_{r} \sigma^{-1}_{x}(r,X_{r})f^{\prime}_{z}(r,\mathbf{X}_{r})\Big{)}D_{s}X_{r}+f^{ \prime}_{y}(r,\mathbf{X}_{r})D_{s}Y_{r}\Big{]}\,\mathrm{d}r\Big{|}\mathfrak{F} _{t}\Big{)} \tag{5.16}\] Using (5.10) and the well known linearisation method, \(D_{s}Y_{t}\) can be rewritten: \[D_{s}Y_{t}\] \[= \mathbb{E}^{\tilde{\mathbb{P}}}\Big{[}e^{\int_{t}^{T}f^{\prime}_{ y}(u,\mathbf{X}_{u})\mathrm{d}u}\phi^{\prime}(X_{T})D\Psi^{-1}(T,\tilde{X}_{T}) \tilde{\sigma}_{1}(s,\tilde{X}_{s})\tilde{\xi}_{T}(\tilde{\xi}_{s})^{-1}\] \[+\int_{t}^{T}e^{\int_{t}^{T}f^{\prime}_{y}(u,\mathbf{X}_{u}) \mathrm{d}u}\Big{(}f^{\prime}_{x}(r,\mathbf{X}_{r})+Z_{r}\sigma^{-1}_{x}(r,X_ {r})f^{\prime}_{z}(r,\mathbf{X}_{r})\Big{)}D\Psi^{-1}(r,\tilde{X}_{r})\tilde{ \sigma}_{1}(s,\tilde{X}_{s})\tilde{\xi}_{r}(\tilde{\xi}_{s})^{-1}\mathrm{d}r \Big{|}\mathfrak{F}_{t}\Big{]}\] \[= \mathbb{E}\Big{[}\tilde{\psi}_{T}\phi^{\prime}(X_{T})\tilde{\xi }_{T}\] \[+\int_{t}^{T}\tilde{\psi}_{r}\Big{(}f^{\prime}_{x}(r,\mathbf{X}_{r })+Z_{r}\sigma^{-1}_{x}(r,X_{r})f^{\prime}_{z}(r,\mathbf{X}_{r})\Big{)}D\Psi^{ -1}(r,\tilde{X}_{r})\tilde{\xi}_{r}\mathrm{d}r\Big{|}\mathfrak{F}_{t}\Big{|}( \tilde{\psi}_{t})^{-1}(\tilde{\xi}_{s})^{-1}\tilde{\sigma}_{1}(s,\tilde{X}_{s}),\] where the last equality is due to Bayes' rule with \(\tilde{\psi}\) given by \[\tilde{\psi}_{t}:=\exp\Big{(}\int_{0}^{t}(f^{\prime}_{y}(u,\mathbf{X}_{u})- \frac{1}{2}|\sigma^{-1}(u,X_{u})f^{\prime}_{z}(u,\mathbf{X}_{u})|^{2})\mathrm{d }u+\int_{0}^{t}\sigma^{-1}(u,X_{u})f^{\prime}_{z}(u,\mathbf{X}_{u})\mathrm{d} W_{u}\Big{)},\] and the expression of \(\tilde{\xi}\) is given by (5.11). 
Therefore, from the above computations, the Malliavin covariance \(\Gamma_{Y_{t}}\) of \(Y_{t}\) is given by \[\Gamma_{Y_{t}}=(\tilde{\psi}_{t}^{-1})^{2}\int_{0}^{t}(\tilde{ \sigma}_{1}(s,\tilde{X}_{s})\tilde{\xi}_{s}^{-1})^{2}\mathrm{d}s\] \[\quad\Big{(}\mathbb{E}\Big{[}\tilde{\psi}_{T}\phi^{\prime}(X_{T}) \tilde{\xi}_{T}+\int_{t}^{T}\tilde{\psi}_{r}\Big{(}f^{\prime}_{x}(r,\mathbf{X}_ {r})+Z_{r}\sigma^{-1}_{x}(r,X_{r})f^{\prime}_{z}(r,\mathbf{X}_{r})\Big{)}D\Psi^{ -1}(r,\tilde{X}_{r})\tilde{\xi}_{r}\mathrm{d}r\Big{|}\mathfrak{F}_{t}\Big{]} \Big{)}^{2}.\] To obtain the desired result, we show under assumption (A+) that \[\mathbb{E}\Big{[}\tilde{\psi}_{T}\phi^{\prime}(X_{T})\tilde{\xi}_{T}+\int_{t}^{ T}\tilde{\psi}_{r}\Big{(}f^{\prime}_{x}(r,\mathbf{X}_{r})+Z_{r}\sigma^{-1}_{x}(r,X_ {r})f^{\prime}_{z}(r,\mathbf{X}_{r})\Big{)}D\Psi^{-1}(r,\tilde{X}_{r})\tilde{\xi}_ {r}\mathrm{d}r\Big{|}\mathfrak{F}_{t}\Big{]}\neq 0.\] Using Ito's formula, the product \(\tilde{\psi}_{t}\tilde{\xi}_{t}\) can be rewritten as \[\tilde{\psi}_{t}\tilde{\xi}_{t} =\exp\Big{\{}\int_{0}^{t}\big{[}\tilde{b}_{1}^{\prime}(u,\tilde{X}_ {u})\mathrm{d}u+f^{\prime}_{y}(u,\mathbf{X}_{u})+\tilde{\sigma}^{\prime}_{1}(u, \tilde{X}_{u})\sigma^{-1}(u,X_{u})f^{\prime}_{z}(u,\mathbf{X}_{u})\big{]} \mathrm{d}u\Big{\}}\] \[\quad\times\exp\Big{\{}\int_{0}^{t}\Big{(}\tilde{\sigma}^{\prime} _{1}(u,\tilde{X}_{u})+\sigma^{-1}(u,X_{u})f^{\prime}_{z}(u,\mathbf{X}_{u}) \Big{)}\mathrm{d}W_{u}\] \[\qquad\qquad-\frac{1}{2}\int_{0}^{t}\Big{(}\tilde{\sigma}^{\prime }_{1}(u,\tilde{X}_{u})+\sigma^{-1}(u,X_{u})f^{\prime}_{z}(u,\mathbf{X}_{u}) \Big{)}^{2}\mathrm{d}u\Big{\}}=\tilde{H}_{t}\times\tilde{M}_{t}\] From the assumptions of the theorem, the stochastic integral \(\int_{0}^{T}\Big{(}\tilde{\sigma}^{\prime}_{1}(u,\tilde{X}_{u})+\sigma^{-1}(u,X_{u})f^{\prime}_{z}(u,\mathbf{X}_{u})\Big{)}\mathrm{d}W_{u}\) is a BMO martingale. Hence, the measure \(\mathbb{Q}\) defined by \(\frac{\mathrm{d}\mathbb{Q}}{\mathrm{d}\mathbb{F}}\Big{|}_{\tilde{\mathcal{F}}_ {t}}=\tilde{M}_{t}\) is an equivalent probability measure to \(\mathbb{P}\). Therefore, using the the Bayes' rule once more we obtain that \[\mathbb{E}\Big{[}\tilde{\psi}_{T}\phi^{\prime}(X_{T})\tilde{\xi}_ {T}+\int_{t}^{T}\psi_{r}\Big{(}f^{\prime}_{x}(r,\mathbf{X}_{r})+Z_{r}\sigma^{- 1}_{x}(r,X_{r})f^{\prime}_{z}(r,\mathbf{X}_{r})\Big{)}D\Psi^{-1}(r,\tilde{X}_{ r})\tilde{\xi}_{r}\mathrm{d}r\Big{|}\mathfrak{F}_{t}\Big{]}\] \[= \mathbb{E}^{\mathbb{Q}}\Big{[}\phi^{\prime}(X_{T})\tilde{H}_{T}+ \int_{t}^{T}\Big{(}f^{\prime}_{x}(r,\mathbf{X}_{r})+Z_{r}\sigma^{-1}_{x}(r,X_{ r})f^{\prime}_{z}(r,\mathbf{X}_{r})\Big{)}D\Psi^{-1}(r,\tilde{X}_{r})\tilde{H}_{r} \mathrm{d}r\Big{|}\mathfrak{F}_{t}\Big{]}\tilde{M}_{t}.\] Let us remark that \(D\Psi^{-1}(t,\xi)=I+\sum_{k\geq 1}(-DU(t,\Psi^{-1}\xi))^{k}\geq I+\sum_{k\geq 1 }(-1/2)^{k}\geq 0\) for all \(\xi\in\mathbb{R}^{d}\) and \(I\) represents the identity matrix in \(\mathbb{R}^{d}\). On the other hand, we also notice from the assumptions of the theorem and thanks to [46, Corollary 3.5] that \(Z_{r}\tilde{\sigma}_{1}(r,\tilde{X}_{r})\geq 0\), where \(\tilde{X}\) stands for the solution to equation (5.5). 
Hence, \[\mathbb{E}\Big{[}\tilde{\psi}_{T}\phi^{\prime}(X_{T})\tilde{\xi}_{T}+\int_{t}^{T}\psi_{r}\Big{(}f^{\prime}_{x}(r,\mathbf{X}_{r})+Z_{r}\sigma^{-1}_{x}(r,X_{r})f^{\prime}_{z}(r,\mathbf{X}_{r})\Big{)}D\Psi^{-1}(r,\tilde{X}_{r})\tilde{\xi}_{r}\mathrm{d}r\Big{|}\mathfrak{F}_{t}\Big{]}\] \[\geq\mathbb{E}^{\mathbb{Q}}\Big{[}1_{\{X_{T}\in\mathcal{A}\}}\Big{(}\underline{\phi}\tilde{H}_{T}+\int_{t}^{T}\Big{(}\underline{f}+Z_{r}\tilde{\sigma}_{1}(r,\tilde{X}_{r})\tilde{\sigma}^{-1}_{1}(r,\tilde{X}_{r})\sigma^{-1}_{x}(r,X_{r})f^{\prime}_{z}(r,\mathbf{X}_{r})\Big{)}D\Psi^{-1}(r,\tilde{X}_{r})\tilde{H}_{r}\mathrm{d}r\Big{)}\Big{|}\mathfrak{F}_{t}\Big{]}\tilde{M}_{t}\] \[\geq\mathbb{E}^{\mathbb{Q}}\Big{[}1_{\{X_{T}\in\mathcal{A}\}}\Big{(}e^{-KT}\underline{\phi}^{\mathcal{A}}e^{-\Lambda\int_{0}^{T}(2+|Z_{u}|^{2})\mathrm{d}u}e^{-K\sqrt{T}\sqrt{\int_{0}^{T}|f^{\prime}_{z}(u,\mathbf{X}_{u})|^{2}\mathrm{d}u}}\] \[\quad+\int_{t}^{T}\Big{(}\underline{f}+Z_{r}\tilde{\sigma}_{1}(r,\tilde{X}_{r})\tilde{\sigma}^{-1}_{1}(r,\tilde{X}_{r})\sigma^{-1}_{x}(r,X_{r})f^{\prime}_{z}(r,\mathbf{X}_{r})\Big{)}D\Psi^{-1}(r,\tilde{X}_{r})\tilde{H}_{r}\mathrm{d}r\Big{)}\Big{|}\mathfrak{F}_{t}\Big{]}\tilde{M}_{t},\] where we used the bound \(|f^{\prime}_{y}(s,\mathbf{X}_{s})|\leq\Lambda(1+|Z_{s}|^{\alpha})\), the Cauchy-Schwarz inequality and the fact that \(\tilde{\sigma}_{1}(r,\tilde{X}_{r})\geq 0\) (from its definition and the fact that \(D\Psi>0\)). Provided that the law of \(X_{T}\) is absolutely continuous with respect to the Lebesgue measure, we deduce that \[\mathbb{E}\Big{[}\tilde{\psi}_{T}\phi^{\prime}(X_{T})\tilde{\xi}_{T}+\int_{t}^{T}\psi_{r}\Big{(}f^{\prime}_{x}(r,\mathbf{X}_{r})+Z_{r}\sigma^{-1}_{x}(r,X_{r})f^{\prime}_{z}(r,\mathbf{X}_{r})\Big{)}D\Psi^{-1}(r,\tilde{X}_{r})\tilde{\xi}_{r}\mathrm{d}r\Big{|}\mathfrak{F}_{t}\Big{]}>0.\] This concludes the proof. **Remark 5.8**.: * _Even though we consider a fully coupled FBSDE under much weaker conditions, the strict monotonicity of the terminal condition and of the driver guarantees the existence of a density for the backward component solution_ \(Y\) _of the equation (compare with [37] for the case of decoupled FBSDE under Cauchy-Lipschitz drift)._ * _Note that the extra assumption_ \(\sigma^{-1}_{x}f^{\prime}_{z}\geq 0\) _or_ \(\sigma^{-1}_{x}f^{\prime}_{z}\leq 0\) _here follows from the special structure of the backward equation considered. These assumptions naturally disappear when one considers FBSDEs with the same structure as in, for instance, [2, 37]._ * _A similar result was obtained in [42] for coupled FBSDEs with non constant diffusion coefficient but under rather strong smoothness requirements on the coefficients. For instance, the drift is assumed Holder continuous in_ \(t\)_, differentiable and uniformly Lipschitz continuous in_ \(x\) _with the derivative in_ \(x\) _being bounded and Holder continuous in all its components_ \(t,x,y\) _and_ \(z\)_._
#### 5.1.4. Gaussian-type bounds of densities for FBSDE (5.1) Here, we will provide upper and lower bounds for the densities of the forward component \(X\) and the backward component \(Y\) of the solution to (5.1), respectively. We will further assume that * **(A6)** Assumption 5.5 holds and the drift \(b:=b(t,x,y,z)\) is weakly differentiable in \(x\) such that \(b^{\prime}_{x}(t,\cdot,y,z)\in L^{\infty}(\mathbb{R})\) for all \(t,y\) and \(z\); moreover, \(\partial_{t}\sigma\) exists and is uniformly bounded. **Lemma 5.9**.: _Let Assumption_ **(A6)** _be in force. Then for all \((t,x)\in[0,T)\times\mathbb{R}\) the transformed drift \(\tilde{b}\) is bounded and Lipschitz continuous in \(x\)._ Proof.: Let \((t,x,x^{\prime})\in[0,T)\times\mathbb{R}\times\mathbb{R}\). We have: \[|\tilde{b}(t,x)-\tilde{b}(t,x^{\prime})| =|b(t,x,v(t,x),v^{\prime}(t,x))-b(t,x^{\prime},v(t,x^{\prime}),v^{\prime}(t,x^{\prime}))|\] \[\leq|b^{\prime}_{x}||x-x^{\prime}|+K\sup_{x}|v^{\prime}(t,x)||x-x^{\prime}|+K\sup_{x}|v^{\prime\prime}(t,x)||x-x^{\prime}|\] \[\leq\Lambda_{\tilde{b}}|x-x^{\prime}|,\] where we used the mean value theorem, the bounds of \(\nabla_{x}b\in L^{\infty}(\mathbb{R})\), \(v^{\prime}\) (see (3.12)), \(v^{\prime\prime}\) (see (5.3)) and \(\Lambda_{\tilde{b}}:=\max\{\|b^{\prime}_{x}\|_{\infty},K\|v^{\prime}\|_{\infty},K\|v^{\prime\prime}\|_{\infty}\}\). **Remark 5.10**.: _Under Assumption_ **(A6)**_, the FBSDE (5.1) has a unique strong solution which is classically and Malliavin differentiable and, from [14, Theorem 2.1], the Malliavin derivative of the forward \(X_{t}\) is given explicitly for all \(0\leq s\leq t\) by_ \[D_{s}X_{t}=\sigma(s,X_{s})\exp\Bigg{[}\int_{s}^{t}\Big{(}\partial_{x}\tilde{b}-\frac{\tilde{b}\partial_{x}\sigma+\partial_{t}\sigma}{\sigma}-\frac{1}{2}(\partial_{xx}^{2}\sigma)\sigma\Big{)}(r,X_{r})\mathrm{d}r\Bigg{]}. \tag{5.17}\] _Moreover, for all \(p\geq 1\)_ \[\mathbb{E}\Big{[}\sup_{0\leq s\leq t\leq T}\big{(}|\nabla_{x}X_{t}|^{p}+|(\nabla_{x}X_{t})^{-1}|^{p}+|D_{s}X_{t}|^{p}\big{)}\Big{]}<\infty.\] _In particular, using the well-known representation \(\nabla_{x}Y_{t}\nabla_{x}X_{s}=D_{s}Y_{t}\) for all \(0<s\leq t\), one can obtain for all \(p\geq 1\)_ \[\mathbb{E}\big{[}\sup_{0\leq t\leq T}|Z_{t}|^{p}\big{]}<\infty.\] **Theorem 5.11**.: _Let Assumption_ **(A6)** _and one of the assumptions (A+) or (A-) be in force. If there is \(\mathcal{A}\in\mathcal{B}(\mathbb{R})\) such that \(\mathbb{P}(X_{T}\in\mathcal{A}|\mathfrak{F}_{t})>0\), then:_
* _The probability density and the tail probabilities of the forward component \(X_{t}\), solution to FBSDE (5.1), satisfy for all \(t\in[0,T)\) and \(x>0\)_ \[\frac{\mathbb{E}|X_{t}-\mathbb{E}[X_{t}]|}{2C(\Lambda)te^{2Kt}}\exp\Big{(}-\frac{(x-\mathbb{E}[X_{t}])^{2}}{2C(\Lambda,\lambda)te^{-2Kt}}\Big{)}\leq\rho_{X_{t}}(x)\leq\frac{\mathbb{E}|X_{t}-\mathbb{E}[X_{t}]|}{2C(\Lambda,\lambda)te^{-2Kt}}\exp\Big{(}-\frac{(x-\mathbb{E}[X_{t}])^{2}}{2C(\Lambda)te^{2Kt}}\Big{)},\] (5.18) _and_ \[\mathbb{P}(X_{t}\geq x)\leq\exp\Big{(}-\frac{(x-\mathbb{E}[X_{t}])^{2}}{2C(\Lambda)te^{2Kt}}\Big{)}\text{ and }\mathbb{P}(X_{t}\leq-x)\leq\exp\left(-\frac{(x+\mathbb{E}[X_{t}])^{2}}{2C(\Lambda,\lambda)te^{2Kt}}\right),\] (5.19) _respectively._ * _The density of the process \(Y_{t}\), solution to FBSDE (5.1), satisfies for all \(t\in[0,T)\) and \(y>0\)_ \[\frac{\mathbb{E}|Y_{t}-\mathbb{E}[Y_{t}]|}{2t\left(\Upsilon e^{Kt}\right)^{2}}\exp\Big{(}-\frac{(y-\mathbb{E}[Y_{t}])^{2}}{2t\left(\alpha(t)e^{-Kt}\right)^{2}}\Big{)}\leq\rho_{Y_{t}}(y)\leq\frac{\mathbb{E}|Y_{t}-\mathbb{E}[Y_{t}]|}{2t\left(\alpha(t)e^{-Kt}\right)^{2}}\exp\Big{(}-\frac{(y-\mathbb{E}[Y_{t}])^{2}}{2t\left(\Upsilon e^{Kt}\right)^{2}}\Big{)}.\] (5.20) _Furthermore, for all \(y>0\), the tail probabilities satisfy_ \[\mathbb{P}(Y_{t}\geq y)\leq\exp\Big{(}-\frac{(y-\mathbb{E}[Y_{t}])^{2}}{2C(\Lambda)t\left(\Upsilon e^{Kt}\right)^{2}}\Big{)}\text{ and }\mathbb{P}(Y_{t}\leq-y)\leq\exp\Big{(}-\frac{(y+\mathbb{E}[Y_{t}])^{2}}{2C(\Lambda)t\left(\Upsilon e^{Kt}\right)^{2}}\Big{)}, \tag{5.21}\] _where the constant \(\Upsilon\) can be derived from (3.10)._ Proof.: It is enough to establish a bound similar to (2.3) for each of the processes \(X_{t}\) and \(Y_{t}\) for all \(t\in[0,T)\). We will start with the forward process \(X\). Under the assumption of the theorem, \(X_{t}\) is Malliavin differentiable and, thanks to Lemma 5.9, we deduce that there is a constant \(K:=K(\Lambda_{\tilde{b}},\|\partial_{x}\sigma\|_{\infty},\|\partial_{t}\sigma\|_{\infty},\|\partial_{xx}^{2}\sigma\|_{\infty})\) such that \[0<\frac{\lambda}{\Lambda}e^{-Kt}\leq D_{s}X_{t}\leq\Lambda e^{Kt},\text{ for all }s\leq t. \tag{5.22}\] Therefore, \[0<C(\lambda,\Lambda)te^{-2Kt}\leq\int_{0}^{t}D_{s}X_{t}\mathbb{E}[D_{s}X_{t}|\mathfrak{F}_{s}]\mathrm{d}s\leq C(\Lambda)te^{2Kt}.\] This implies that the probability density and the tail probability of \(X_{t}\) satisfy (5.18) and (2.5) respectively. Let us turn now to the backward process \(Y\), and we recall that for all \(t\in[0,T]\), \(Y_{t}=v(t,X_{t})\), where \(v\) still denotes the classical solution to PDE (3.2). Under the assumptions of the theorem, we deduce that the process \(Y_{t}\) has an absolutely continuous law with respect to the Lebesgue measure (see Theorem 5.7). Then for all \(0\leq s\leq t<T\) \[\Gamma_{Y_{t}}=\langle D_{s}Y_{t},D_{s}Y_{t}\rangle_{L^{2}([0,t])}=(v^{\prime}(t,X_{t}))^{2}\langle D_{s}X_{t},D_{s}X_{t}\rangle_{L^{2}([0,t])}>0\quad\mathbb{P}\text{-a.s.},\] which implies that \((v^{\prime}(t,X_{t}))^{2}>0\) and \(\langle D_{s}X_{t},D_{s}X_{t}\rangle_{L^{2}([0,t])}>0\quad\mathbb{P}\text{-a.s.}\) But the latter is always true, since all the terms inside the exponential in (5.17) are finite and \(\sigma\) is bounded from below by \(\lambda>0\). Then, there is a function \(\alpha(t)>0\) such that the following holds \[v^{\prime}(t,X_{t})\geq\alpha(t)\text{ or }v^{\prime}(t,X_{t})\leq-\alpha(t)\quad\mathbb{P}\text{-a.s. 
for all }t\in[0,T).\] First let us assume that \(v^{\prime}(t,X_{t})\geq\alpha(t)\,\mathbb{P}\text{-a.s.}\). From (5.22) we deduce that \[0<\frac{\lambda}{\Lambda}\alpha(t)e^{-Kt}\leq D_{s}Y_{t}=v^{\prime}(t,X_{t})D_{s}X_{t}\leq\Lambda\Upsilon e^{Kt}, \tag{5.23}\] where the constant \(\Upsilon\geq 0\) is an upper bound of \(v^{\prime}\) that can be derived from (3.10). Therefore, \[0<C(\lambda,\Lambda)t\left(\alpha(t)e^{-Kt}\right)^{2}\leq\int_{0}^{t}D_{s}Y_{t}\mathbb{E}[D_{s}Y_{t}|\mathfrak{F}_{s}]\mathrm{d}s\leq C(\Lambda)t\left(\Upsilon e^{Kt}\right)^{2}. \tag{5.24}\] On the other hand, let us assume now \(v^{\prime}(t,X_{t})\leq-\alpha(t)\quad\mathbb{P}\text{-a.s.}\). Using the bound (5.22) once more, we deduce that \[0>-\alpha(t)e^{-Kt}\geq D_{s}Y_{t}=v^{\prime}(t,X_{t})D_{s}X_{t}\geq v^{\prime}(t,X_{t})e^{Kt}. \tag{5.25}\] From the linearity of the expected value, we have \[0>-\alpha(t)e^{-Kt}\geq\mathbb{E}[D_{s}Y_{t}|\mathfrak{F}_{s}]\geq v^{\prime}(t,X_{t})e^{Kt}. \tag{5.26}\] Combining (5.25) and (5.26) gives (5.24). Then, one can show that the density and the tail probability of the backward component \(Y_{t}\) satisfy the bounds (5.20) and (5.21). The proof is completed. ### Existence of density for \(Z\) with weakly differentiable drift The major aim in this part is to establish a result on the existence of a density for the control process \(Z\) under minimal regularity assumptions on the drift. The first conclusion in this direction was achieved in [37, Theorem 4.2] for decoupled FBSDEs with quadratic drivers (assuming suitable smoothness conditions on the drift and diffusion of the forward equation, as given for instance in [24]). In the context of coupled FBSDEs, this result was obtained in [42] by assuming that the drift is twice continuously differentiable with bounded second derivatives in all spatial variables and with a non-trivial diffusion coefficient. We claim that the result obtained in [42, Theorem 3.6] is quite optimal, in the sense that it will be difficult to obtain a refinement of this result when the diffusion coefficient is non constant. We consider in particular the following system \[\begin{cases}X_{t}^{x}=x+\int_{s}^{t}b(r,X_{r}^{x},Y_{r}^{x},Z_{r}^{x})\mathrm{d}r+W_{t}-W_{s},\\ Y_{t}^{x}=\phi(X_{T}^{x})+\int_{t}^{T}f(r,X_{r}^{x},Y_{r}^{x},Z_{r}^{x})\mathrm{d}r-\int_{t}^{T}Z_{r}^{x}\mathrm{d}W_{r},\end{cases} \tag{5.27}\] and we assume the drift \(b\) is weakly differentiable in its forward component \(x\) such that the weak derivative \(b_{x}^{\prime}(t,\cdot,y,z)\) is bounded uniformly in \(t,y\) and \(z\). Note that this case is not covered by [42, Theorem 3.6]. In this context, the associated PDE (3.2) (which remains a quasi-linear parabolic PDE) takes the particular form \[\frac{\partial v}{\partial t}+\frac{1}{2}\sum_{i=1}^{d}\frac{\partial^{2}v}{\partial x_{i}^{2}}+\sum_{i=1}^{d}b_{i}(t,x,v,\nabla_{x}v)\frac{\partial v}{\partial x_{i}}+f(t,x,v,\nabla_{x}v)=0, \tag{5.28}\] where \(v:=v(t,x)\) and \(v(T,x)=\phi(x)\) for all \((t,x)\in[0,T]\times\mathbb{R}^{d}\). We will assume further that: **Assumption 5.12**.: 1. \((x,y,z)\mapsto f(t,x,y,z)\) _is twice continuously differentiable for every_ \(t\in[0,T]\)_. There exists an adapted process_ \((\mathcal{K}_{t})_{0\leq t\leq T}\) _belonging to_ \(\mathcal{S}^{2p}(\mathbb{R})\) _for all_ \(p\geq 1\) _such that for any_ \(t\in[0,T]\) _all second order derivatives of_ \(f\) _at_ \((t,\mathbf{X}_{t})=(t,X_{t},Y_{t},Z_{t})\) _are bounded by_ \(\mathcal{K}_{t}\) _a.s._ 2.
\(\phi\) _is twice continuously differentiable_ \(L(X_{T}^{x})\)_-a.e. and_ \(\phi^{\prime\prime}\) _has a polynomial growth._ We start with the second order Malliavin differentiability of coupled FBSDEs with quadratic type drivers. The setup proposed here covers the one in [24], since the coefficients satisfy weaker assumptions. **Theorem 5.13**.: _Let assumptions_ **(A6)**_,_**(A7)** _and_ **(A8)** _hold. Then, the solution process \(\mathbf{X}=(X,Y,Z)\) of the FBSDE (5.27) is twice Malliavin differentiable, i.e., for each \(s,t\in[0,T]\) the processes \((D_{s}X_{t},D_{s}Y_{t},D_{s}Z_{t})\in\mathbb{D}^{2,p}\times\mathbb{L}_{1,2} \times\mathbb{L}_{1,2}\), for all \(p\geq 1\). Moreover, a version of \((D_{s^{\prime}}D_{s}Y_{t},D_{s^{\prime}}D_{s}Z_{t})_{0\leq s^{\prime}\leq s \leq t\leq T}\) satisfies:_ \[D_{s^{\prime}}D_{s}Y_{t}= D_{s^{\prime}}D_{s}\xi-\int_{t}^{T}D_{s^{\prime}}D_{s}Z_{r} \mathrm{d}W_{r}\] \[+\int_{t}^{T}\left([D_{s^{\prime}}\mathbf{X}_{r}]^{\mathbf{T}}[Hf ](r,\mathbf{X}_{r})D_{s}\mathbf{X}_{r}+\langle\nabla f(r,\mathbf{X}_{r}),D_{ s^{\prime}}D_{s}\mathbf{X}_{r}\rangle\right)\mathrm{d}r, \tag{5.29}\] _where \(A^{\mathbf{T}}\) is the transpose of the matrix \(A\), the symbol \([Hf]\) denotes the Hessian matrix of the function \(f\), \(\xi=\phi(X_{T})\), \(D_{s^{\prime}}D_{s}\mathbf{X}_{r}=(D_{s^{\prime}}D_{s}X_{r},D_{s^{\prime}}D_{ s}Y_{r},D_{s^{\prime}}D_{s}Z_{r})\) and \(D_{s^{\prime}}D_{s}X_{r}\) stands for the second order Malliavin derivative of the forward process \(X\)._ _Moreover, \((D_{t}D_{s}Y_{t})_{0\leq s\leq t\leq T}\) is a version of \((D_{s}Z_{t})_{0\leq s\leq t\leq T}\)._ **Remark 5.14**.: _From_ **(A6)** _and Lemma 5.9, the transformed drift \(\tilde{b}\) is bounded and Lipschitz continuous, thus satisfies the linear growth condition with bounded weak derivative. Hence, the second order Malliavin differentiability of the forward process \(X_{t}\) follows by similar technique as in [8, Theorem 3.4]. This result can be viewed as an extension of the latter theorem to the case of forward SDEs coupled with solution to a BSDE._ _Moreover, by means of compactness arguments, one can show that the second order Malliavin derivative of \(X_{t}\) has moments of any order i.e. for all \(p>1\)_ \[\sup_{0\leq s^{\prime},s\leq t\leq T}\mathbb{E}|D_{s^{\prime}}D_{s}X_{t}|^{p}<\infty. 
\tag{5.30}\] _Furthermore, Theorem 5.13 is an optimal result in the sense that if the drift \(b\) is bounded in the space variable \(x\), Lipschitz continuous in \(y\) and \(z\) with bounded (weak) derivative in \(x\), and the driver \(f\) and the terminal value \(\phi\) satisfy assumptions (A7) and (A8) respectively, then \((X_{t},Y_{t},Z_{t})\in\mathbb{D}^{2,p}\times\mathbb{L}_{2,2}\times\mathbb{L}_{2,2}\) for all \(p\geq 1\) and \((X_{t},Y_{t},Z_{t})\notin\mathbb{D}^{3,p}\times\mathbb{L}_{2,2}\times\mathbb{L}_{2,2}\) for all \(p\geq 1\)._ **Remark 5.15**.: _Assuming further that the drift \(b\) is continuously differentiable in \(t\) with bounded derivatives, the following explicit representation of the second order Malliavin derivative of the forward process \(X_{t}\) is valid for all \(0\leq s^{\prime}\leq s\leq t\)_ \[D_{s^{\prime}}D_{s}X_{t}= 2D_{s}X_{t}\Big{(}\tilde{b}(t,X_{t})D_{s^{\prime}}X_{t}-\tilde{b}(s,X_{s})D_{s^{\prime}}X_{s}-\tilde{b}(s^{\prime},X_{s^{\prime}})-\int_{s\lor s^{\prime}}^{t}\partial_{u}\tilde{b}(u,X_{u})D_{s^{\prime}}X_{u}\mathrm{d}u\] \[-\int_{s\lor s^{\prime}}^{t}\partial_{x}\tilde{b}(u,X_{u})\cdot b(u,X_{u})D_{s^{\prime}}X_{u}\mathrm{d}u-\int_{s\lor s^{\prime}}^{t}\partial_{x}\tilde{b}(u,X_{u})D_{s^{\prime}}X_{u}\mathrm{d}W_{u}\Big{)}.\] _Indeed, let_ \[F(t,x)=\int_{0}^{x}\tilde{b}(t,y)\mathrm{d}y-\tilde{b}(t,0).\] _Then,_ \[\partial_{t}F(t,x)=\int_{0}^{x}\partial_{t}\tilde{b}(t,y)\mathrm{d}y-\frac{\partial}{\partial t}\tilde{b}(t,0).\] _Notice that for all \(0\leq s\leq t\) the Malliavin derivative \(D_{s}X_{t}\) of \(X_{t}\) is given by_ \[D_{s}X_{t}=\exp\left(\int_{s}^{t}\partial_{x}\tilde{b}(u,X_{u})\mathrm{d}u\right).\] _By Ito's formula:_ \[\int_{s}^{t}\partial_{x}\tilde{b}(u,X_{u})\mathrm{d}u=2(F(t,X_{t})-F(s,X_{s}))-2\int_{s}^{t}\partial_{u}F(u,X_{u})\mathrm{d}u-2\int_{s}^{t}\tilde{b}(u,X_{u})\mathrm{d}X_{u}.\] _Hence the sought equation is derived by a direct computation._ To prove Theorem 5.13 we follow the strategy developed in [24, Theorem 4.1], which is based on a careful application of [24, Lemma 2.1] with a slight modification. In contrast to [24], here we consider a more general setting. The system (5.27) is (weakly) coupled (whereas [24] considered a decoupled FBSDE), the drift \(b\) has a bounded weak derivative in the spatial variable \(x\) instead of being twice continuously differentiable as in [24], and \(f^{\prime}_{y}\) and \(f^{\prime}_{z}\) are bounded by \(\Lambda(1+|z|^{\alpha})\) and \(\Lambda(1+\ell(y)|z|)\), respectively (whereas they are uniformly bounded by a constant \(C>0\) in [24]). The latter means that the linear BSDE (5.48) has non-uniform Lipschitz coefficients on both the backward and the control components. We then approximate the BSDE (5.48) by truncating the stochastic constants appearing in both components. Note in addition that the second Malliavin derivative of the forward process \(X\) is not continuous and thus does not satisfy the standard requirement \(\mathbb{E}[\sup_{0\leq s^{\prime},s\leq T}|D_{s^{\prime}}D_{s}X_{t}|^{p}]<\infty\), as is often assumed in the literature.
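To make the truncation step concrete, the following minimal sketch implements one admissible family \((\rho_{n})\): the identity on \(|x|\leq n\), saturation at \(\pm(n+1)\) for \(|x|\geq n+2\), and a \(C^{1}\) bridge in between. The cubic Hermite bridge used here is an assumption of the sketch (the text only prescribes the values outside the transition region, see (5.33) below), chosen so that \(|\rho_{n}(x)|\leq\min(|x|,n+1)\) and \(|\rho_{n}^{\prime}|\leq 1\).

```python
import numpy as np

def rho(n, x):
    """One admissible C^1 truncation: identity on |x| <= n, equal to sign(x)*(n+1)
    for |x| >= n+2.  The bridge on n < |x| < n+2 is a cubic Hermite choice
    (an assumption of this sketch, not prescribed by the text)."""
    x = np.asarray(x, dtype=float)
    s, a = np.sign(x), np.abs(x)
    u = np.clip((a - n) / 2.0, 0.0, 1.0)      # rescaled position inside the bridge
    bridge = n + 2.0 * u - u ** 2             # value n at u=0, n+1 at u=1, slope in [0, 1]
    out = np.where(a <= n, a, np.where(a >= n + 2.0, n + 1.0, bridge))
    return s * out

# quick checks of the required pointwise bounds: |rho_n| <= |x|, |rho_n| <= n+1, |rho_n'| <= 1
xs = np.linspace(-10, 10, 100001)
for n in (1, 3, 5):
    r = rho(n, xs)
    d = np.gradient(r, xs)
    assert np.all(np.abs(r) <= np.abs(xs) + 1e-12)
    assert np.all(np.abs(r) <= n + 1 + 1e-12)
    assert np.all(np.abs(d) <= 1 + 1e-6)
print("rho_n satisfies the listed bounds on the sampled grid")
```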
Subsequently, let us consider the following family of truncated BSDEs: For \(0\leq s\leq t\leq T\) and \(n\in\mathbb{N}\) \[U^{n}_{s,t}=D_{s}\xi+\int_{t}^{T}G^{n}(r,\Theta^{n}_{s,r})\mathrm{d}r-\int_{ t}^{T}V^{n}_{s,r}\mathrm{d}W_{r},\quad\Theta^{n}_{s,r}=(D_{s}X_{r},U^{n}_{s,r},V^ {n}_{s,r}), \tag{5.31}\] where, \(G^{n}:[0,T]\times\mathbb{R}\times\mathbb{R}\times\mathbb{R}\to\mathbb{R}\) is given by \[G^{n}(t,x,u,v)=f^{\prime}_{x}(t,\mathbf{X}_{t})x+f^{\prime}_{y}(t,\mathbf{X} ^{n}_{t})u+f^{\prime}_{z}(t,\mathbf{X}^{n}_{t})v,\quad\mathbf{X}^{n}_{t}=(X_{ t},Y_{t},\rho_{n}(Z_{t})), \tag{5.32}\] and \(\rho_{n}:\mathbb{R}\to\mathbb{R}\) is continuously differentiable and satisfies the following properties3 Footnote 3: Note that, it is always possible to build such functions see for instance [24] * \((\rho_{n})_{n\in\mathbb{N}}\) converges uniformly to the identity. For all \(n\in\mathbb{N}\) and \(x\in\mathbb{R}\) \[\rho_{n}(x)=\begin{cases}n+1,&x>n+2,\\ x,&|x|\leq n,\\ -(n+1),&x<-(n+2).\end{cases}\] (5.33) In addition \(|\rho_{n}(x)|\leq|x|\) and \(|\rho_{n}(x)|\leq n+1\). 2. The derivative \(\nabla\rho_{n}\) is absolutely uniformly bounded by \(1\), and converges to \(1\) locally uniformly. **Remark 5.16**.: _One can show using the properties of the functions \(\rho_{n}\) and \(\ell\) that:_ \[|f^{\prime}_{y}(X_{t},Y_{t},\rho_{n}(Z_{t}))| \leq\Lambda(1+|Z_{t}|^{\alpha})\] \[|f^{\prime}_{z}(X_{t},Y_{t},\rho_{n}(Z_{t}))| \leq\Lambda(1+\ell(|Y_{t}|)|Z_{t}|).\] _Moreover, the boundedness of \(Y_{t}\) gives_ \[\sup_{n\in\mathbb{N}}\|f^{\prime}_{z}(t,\mathbf{X}^{n}_{t})|*W\|_{BMO}\leq \Lambda\|(1+\ell(|Y_{t}|)|Z_{t}|)*W\|_{BMO}<\infty.\] _Hence, the family of drivers \((G_{n})_{n\in\mathbb{N}}\) given by (5.32) satisfies the same conditions as the one of the BSDE (5.48)._ We have the following a-priori estimates **Lemma 5.17**.: _Suppose Assumption 5.12 is valid and \((U^{n},V^{n})\) solves the BSDE (5.31). Then for any \(p>1\) we have_ \[\sup_{n\in\mathbb{N}}\sup_{0\leq s\leq T}\mathbb{E}\Big{(}\int_{0}^{T}|U^{n}_ {s,t}|^{2}+|V^{n}_{s,t}|^{2}\mathrm{d}t\Big{)}^{p}<\infty. \tag{5.34}\] Proof.: From Remark 5.16, we know that the family of drivers \((G^{n})_{n\in\mathbb{N}}\) satisfies a stochastic Lipschitz type condition. Then, from [23, Lemma 3.8] there exist a constant \(q\in(1,\infty)\) which only depends on the BMO norm of \(Z*W\) and a constant \(C\) independent of \(n\in\mathbb{N}\) such that \[\sup_{n\in\mathbb{N}}\sup_{0\leq s\leq T}\mathbb{E}\Big{[}\Big{(}\int_{0}^{T}| U^{n}_{s,t}|^{2}\mathrm{d}t\Big{)}^{p}+\Big{(}\int_{0}^{T}|V^{n}_{s,t}|^{2} \mathrm{d}t\Big{)}^{p}\Big{]}\leq C\mathbb{E}\Big{[}|D_{s}\xi|^{2pq}+\Big{(} \int_{0}^{T}|f^{\prime}_{x}(t,\mathbf{X}_{t})D_{s}X_{t}|\mathrm{d}t\Big{)}^{2 pq}\Big{]}^{\frac{1}{q}}.\] For any \(\gamma>2\), we have \[\mathbb{E}|D_{s}\xi|^{\gamma}=\mathbb{E}|\phi^{\prime}(X_{T})D_{s}X_{T}|^{ \gamma}\leq C(1+\mathbb{E}|X_{T}|^{2\gamma})+C\mathbb{E}|D_{s}X_{T}|^{2\gamma }<\infty.\] On the other hand, using **(A2)**, Remark 5.10 and the fact that \(Y\) is uniformly bounded and \(\ell\) is locally bounded, we deduce that \[\mathbb{E}\Big{(}\int_{0}^{T}|f^{\prime}_{x}(t,\mathbf{X}_{t})D_{s}X_{t}| \mathrm{d}t\Big{)}^{\gamma}\leq C\mathbb{E}\Big{(}\sup_{t\in[0,T]}|D_{s}X_{t}| ^{2\gamma}+\Big{(}\int_{0}^{T}(1+|Z_{t}|^{2})\mathrm{d}t\Big{)}^{2\gamma} \Big{)}<\infty.\] Combining the above bounds, provide the desired result. The next lemma gives an existence, uniqueness and Malliavin differentiability result of solution processes to BSDE (5.31). 
It can be seen as an extension of [24, Lemma 4.2]. **Lemma 5.18**.: _For each \(n\in\mathbb{N}\), the BSDE (5.31) has a unique solution \((U^{n},V^{n})\in\mathcal{S}^{2p}([0,T]\times[0,T])\times\mathcal{H}^{2p}([0,T]\times[0,T])\) for any \(p>1\). Moreover for \(0\leq s\leq t\leq T\) the random variables \((U^{n}_{s,t},V^{n}_{s,t})\) are Malliavin differentiable and a version \((D_{s^{\prime}}U^{n}_{s,t},D_{s^{\prime}}V^{n}_{s,t})_{0\leq s^{\prime}\leq s\leq t\leq T}\) satisfies_ \[D_{s^{\prime}}U^{n}_{s,t} =D_{s^{\prime}}D_{s}\xi-\int_{t}^{T}D_{s^{\prime}}V^{n}_{s,r}\mathrm{d}W_{r}\] \[+\int_{t}^{T}\Big{[}\mathcal{R}^{n}_{s^{\prime},s,r}+f^{\prime}_{y}(r,\mathbf{X}^{n}_{r})D_{s^{\prime}}U^{n}_{s,r}+f^{\prime}_{z}(r,\mathbf{X}^{n}_{r})D_{s^{\prime}}V^{n}_{s,r}\Big{]}\mathrm{d}r, \tag{5.35}\] _where \(\mathcal{R}^{n}_{s^{\prime},s,t}=(D_{s^{\prime}}G^{n})(t,\Theta^{n}_{s,t})+f^{\prime}_{x}(t,\mathbf{X}_{t})D_{s^{\prime}}D_{s}X_{t}\) and \(\Theta^{n}_{s,r}=(D_{s}X_{r},U^{n}_{s,r},V^{n}_{s,r})\). Furthermore for any \(p>1\)_ \[\sup_{n\in\mathbb{N}}\int_{0}^{T}\mathbb{E}\left\{\|D_{s^{\prime}}U^{n}\|^{2p}_{\mathcal{S}^{2p}([0,T]\times[0,T])}+\|D_{s^{\prime}}V^{n}\|^{2p}_{\mathcal{H}^{2p}([0,T]\times[0,T])}\right\}\mathrm{d}s^{\prime}<\infty. \tag{5.36}\] The first term in the expression of \(\mathcal{R}^{n}_{s^{\prime},s,t}\) reads: \[(D_{s^{\prime}}G^{n})(t,\Theta^{n}_{s,t})=D_{s^{\prime}}[f^{\prime}_{x}(t,\mathbf{X}_{t})]D_{s}X_{t}+D_{s^{\prime}}[f^{\prime}_{y}(t,\mathbf{X}_{t}^{n})]U^{n}_{s,t}+D_{s^{\prime}}[f^{\prime}_{z}(t,\mathbf{X}_{t}^{n})]V^{n}_{s,t},\] and, by the chain rule, each term is explicitly given by: \[\begin{cases}D_{s^{\prime}}[f^{\prime}_{x}(t,\mathbf{X}_{t})]D_{s}X_{t}=\Big{(}f^{\prime\prime}_{xx}(t,\mathbf{X}_{t})D_{s^{\prime}}X_{t}+f^{\prime\prime}_{xy}(t,\mathbf{X}_{t})D_{s^{\prime}}Y_{t}+f^{\prime\prime}_{xz}(t,\mathbf{X}_{t})D_{s^{\prime}}Z_{t}\Big{)}D_{s}X_{t},\\ D_{s^{\prime}}[f^{\prime}_{y}(t,\mathbf{X}_{t}^{n})]U^{n}_{s,t}=\Big{(}f^{\prime\prime}_{yx}(t,\mathbf{X}_{t}^{n})D_{s^{\prime}}X_{t}+f^{\prime\prime}_{yy}(t,\mathbf{X}_{t}^{n})D_{s^{\prime}}Y_{t}+f^{\prime\prime}_{yz}(t,\mathbf{X}_{t}^{n})\nabla\rho_{n}(Z_{t})D_{s^{\prime}}Z_{t}\Big{)}U^{n}_{s,t},\\ D_{s^{\prime}}[f^{\prime}_{z}(t,\mathbf{X}_{t}^{n})]V^{n}_{s,t}=\Big{(}f^{\prime\prime}_{zx}(t,\mathbf{X}_{t}^{n})D_{s^{\prime}}X_{t}+f^{\prime\prime}_{zy}(t,\mathbf{X}_{t}^{n})D_{s^{\prime}}Y_{t}+f^{\prime\prime}_{zz}(t,\mathbf{X}_{t}^{n})\nabla\rho_{n}(Z_{t})D_{s^{\prime}}Z_{t}\Big{)}V^{n}_{s,t}.\end{cases}\] Proof of Theorem 5.13.: The key step is to establish the convergence of the process \((U^{n}_{s,\cdot},V^{n}_{s,\cdot})\) to \((D_{s}Y_{\cdot},D_{s}Z_{\cdot})\).
Using [23, Lemma 3.8] we deduce that there exists \(q\in(1,\infty)\), depending only on \(\sup_{n\in\mathbb{N}}\|f^{\prime}_{z}(\cdot,\cdot,\cdot,\rho_{n}(Z_{\cdot}))*W\|_{BMO}\), such that: \[\sup_{0\leq s\leq T}\mathbb{E}\Big{[}\int_{0}^{T}|D_{s}Y_{t}-U^{n}_{s,t}|^{2}\mathrm{d}t+\int_{0}^{T}|D_{s}Z_{t}-V^{n}_{s,t}|^{2}\mathrm{d}t\Big{]}\] \[\leq C\sup_{0\leq s\leq T}\mathbb{E}\Big{[}\Big{(}\int_{0}^{T}|f^{\prime}_{y}(t,\mathbf{X}_{t})-f^{\prime}_{y}(t,\mathbf{X}^{n}_{t})||U^{n}_{s,t}|+|f^{\prime}_{z}(t,\mathbf{X}_{t})-f^{\prime}_{z}(t,\mathbf{X}^{n}_{t})||V^{n}_{s,t}|\mathrm{d}t\Big{)}^{2pq}\Big{]}^{\frac{1}{q}}\] \[\leq C\sup_{0\leq s\leq T}(II_{1}+II_{2}).\] We only show that \(II_{1}\) is bounded uniformly in \(n\); the treatment of \(II_{2}\) follows similarly. Using Holder's inequality, we have \[II_{1} =\mathbb{E}\Big{[}\Big{(}\int_{0}^{T}|f^{\prime}_{y}(t,\mathbf{X}_{t})-f^{\prime}_{y}(t,\mathbf{X}^{n}_{t})||U^{n}_{s,t}|\mathrm{d}t\Big{)}^{2pq}\Big{]}^{\frac{1}{q}}\] \[\leq\mathbb{E}\Big{[}\Big{(}\int_{0}^{T}|f^{\prime}_{y}(t,\mathbf{X}_{t})-f^{\prime}_{y}(t,\mathbf{X}^{n}_{t})|^{2}\mathrm{d}t\Big{)}^{2pq}\Big{]}^{\frac{1}{2q}}\sup_{0\leq s\leq T}\mathbb{E}\Big{[}\Big{(}\int_{0}^{T}|U^{n}_{s,t}|^{2}\mathrm{d}t\Big{)}^{2pq}\Big{]}^{\frac{1}{2q}}.\] Recall that \(\mathbf{X}_{\cdot}=(X_{\cdot},Y_{\cdot},Z_{\cdot})\) and \(\mathbf{X}^{n}_{\cdot}=(X_{\cdot},Y_{\cdot},\rho_{n}(Z_{\cdot}))\). From the assumptions of the theorem (continuity of \(f^{\prime}_{y}\)) combined with the boundedness of \(Y_{t}\), Remark 5.10 and Remark 5.16, we deduce that the first term of the above inequality is finite, while the second one is finite thanks to (5.34). The result follows by applying the dominated convergence theorem. Then, similar arguments as in [24] can be applied here to show that \((D_{s^{\prime}}D_{s}Y,D_{s^{\prime}}D_{s}Z)\) is the unique solution to the linear BSDE (5.29), as well as that \(D_{t}D_{s}Y_{t}\) is a version of \(D_{s}Z_{t}\). We omit the details. The proof is completed. We are now in a position to establish the main result of this section. **Theorem 5.19**.: _Suppose that the assumptions of Theorem 5.13 are valid. Then the law of the control component \(Z_{t}\) solution to FBSDE (5.27) has a density which is continuous with respect to the Lebesgue measure if:_ 1. _There exists \(c>0\) such that_ \(0\leq D_{s^{\prime}}D_{s}X_{t}\leq c\) _for all_ \(0<s^{\prime},s<t\leq T\)_._ 2. \(f^{\prime}_{x},f^{\prime}_{y},f^{\prime\prime}_{xx},f^{\prime\prime}_{yy},f^{\prime\prime}_{zz}\geq 0\) _and_ \(f^{\prime\prime}_{xz}=f^{\prime\prime}_{yz}=0\)_._ 3. \(f^{\prime\prime}_{xy}=0\) _or_ (\(f^{\prime\prime}_{xy}\geq 0\) _and_ \(\phi^{\prime}\geq 0\)_,_ \(\mathcal{L}(X_{T})\)_-a.e.).
_If there exists \(A\in\mathcal{B}(\mathbb{R})\) such that_ \[(1_{\{\underline{\phi^{\prime\prime}}<0\}}\underline{\phi^{\prime\prime}}e^{2\Lambda_{b}t}+\underline{\phi^{\prime}}1_{\{\underline{\phi^{\prime}}<0\}}c)+(1_{\{\underline{\phi^{\prime\prime}}\geq 0\}}\underline{\phi^{\prime\prime}}+\underline{f^{{}^{\prime\prime}}_{xx}}(t)(T-t))e^{-2\Lambda_{b}t}\geq 0,\] _and_ \[1_{\{\underline{\phi^{\prime\prime}}<0\}}\underline{\phi^{\prime\prime}}^{A}e^{2\Lambda_{b}t}+\underline{\phi^{\prime}}^{A}1_{\{\underline{\phi^{\prime\prime}}<0\}}c+(1_{\{\underline{\phi^{\prime\prime}}\geq 0\}}\underline{\phi^{\prime\prime}}^{A}+\underline{f^{{}^{\prime\prime}}_{xx}}(t)(T-t))e^{-2\Lambda_{b}t}>0.\] Proof.: Under the assumptions of the theorem and by the standard linearisation technique we obtain \[D_{s^{\prime}}D_{s}Y_{t}= \mathbb{E}^{\mathbb{Q}}\Big{[}e^{\int_{t}^{T}f^{\prime}_{y}(u,\mathbf{X}_{u})\mathrm{d}u}\,(\phi^{\prime\prime}(X_{T})D_{s^{\prime}}X_{T}D_{s}X_{T}+\phi^{\prime}(X_{T})D_{s^{\prime}}D_{s}X_{T})\] \[+\int_{t}^{T}e^{\int_{t}^{T}f^{\prime}_{y}(u,\mathbf{X}_{u})\mathrm{d}u}\Big{(}f^{\prime}_{x}(r,\mathbf{X}_{r})D_{s^{\prime}}D_{s}X_{r}+f^{{}^{\prime\prime}}_{xx}(r,\mathbf{X}_{r})D_{s^{\prime}}X_{r}D_{s}X_{r}\] \[+f^{{}^{\prime\prime}}_{xy}(r,\mathbf{X}_{r})D_{s^{\prime}}Y_{r}D_{s}X_{r}+f^{{}^{\prime\prime}}_{xy}(r,\mathbf{X}_{r})D_{s^{\prime}}X_{r}D_{s}Y_{r}\] \[+f^{{}^{\prime\prime}}_{yy}(r,\mathbf{X}_{r})D_{s^{\prime}}Y_{r}D_{s}Y_{r}+f^{{}^{\prime\prime}}_{zz}(r,\mathbf{X}_{r})D_{s^{\prime}}Z_{r}D_{s}Z_{r}\Big{)}\mathrm{d}r\Big{|}\mathfrak{F}_{t}\Big{]},\] where the probability measure \(\mathbb{Q}\) is given by (5.50). Then, one can use a similar technique as in the proof of [37, Theorem 4.2] to deduce the result. This ends the proof. **Corollary 5.20**.: _Under the assumptions of Theorem 5.19, the probability density of the control component \(Z_{t}\) solution to FBSDE (5.27) satisfies for all \(t\in[0,T)\) and \(z>0\)_ \[\frac{\mathbb{E}|Z_{t}-\mathbb{E}[Z_{t}]|}{2t(\Upsilon^{(1)}e^{-Kt})^{2}}\exp\Big{(}-\frac{(z-\mathbb{E}[Z_{t}])^{2}}{2te^{-2Kt}}\Big{)}\leq\rho_{Z_{t}}(z)\leq\frac{\mathbb{E}|Z_{t}-\mathbb{E}[Z_{t}]|}{2t(\omega(t)e^{-Kt})^{2}}\exp\Big{(}-\frac{(z-\mathbb{E}[Z_{t}])^{2}}{2te^{2Kt}}\Big{)}.\] _Moreover, its tail probabilities satisfy the following Gaussian-type bounds_ \[\mathbb{P}(Z_{t}\geq z)\leq\exp\Big{(}-\frac{(z-\mathbb{E}[Z_{t}])^{2}}{2te^{2Kt}}\Big{)},\ \mathbb{P}(Z_{t}\leq-z)\leq\exp\left(-\frac{(z+\mathbb{E}[Z_{t}])^{2}}{2te^{2Kt}}\right).\] Proof.: It is enough to establish a bound similar to (2.3). We recall that \(Z_{t}=v^{\prime}(t,X_{t})\), for all \(t\in[0,T)\), where \(v^{\prime}\) stands for the derivative with respect to the spatial component of the classical solution \(v\) to PDE (3.2). From the above theorem, we also know that the law of \(Z_{t}\) is absolutely continuous with respect to the Lebesgue measure, since \(\Gamma_{Z_{t}}>0\) \(\mathbb{P}\)-a.s., and this implies that \((v^{\prime\prime}(t,X_{t}))^{2}>0\) and \(\langle D_{s}X_{t},D_{s}X_{t}\rangle_{L^{2}([0,t])}>0\quad\mathbb{P}\text{-a.s.}\) The latter is always true; then there is a function \(\omega(t)>0\) such that \[v^{\prime\prime}(t,X_{t})\geq\omega(t)\text{ or }v^{\prime\prime}(t,X_{t})\leq-\omega(t)\quad\mathbb{P}\text{-a.s. for all }t\in[0,T).\] We assume that \(v^{\prime\prime}(t,X_{t})\geq\omega(t)\,\mathbb{P}\)-a.s., since the other case can be treated similarly.
From (5.22) with \(\sigma=1\), we deduce that \[0<\omega(t)e^{-Kt}\leq D_{s}Z_{t}=v^{\prime\prime}(t,X_{t})D_{s}X_{t}\leq\Lambda\Upsilon^{(1)}e^{Kt},\] where the constant \(\Upsilon^{(1)}\geq 0\) is an upper bound of \(v^{\prime\prime}\) that can be derived from (5.3). Therefore, \[0<t\left(\omega(t)e^{-Kt}\right)^{2}\leq\int_{0}^{t}D_{s}Z_{t}\mathbb{E}[D_{s}Z_{t}|\mathfrak{F}_{s}]\mathrm{d}s\leq t\Big{(}\Upsilon^{(1)}e^{Kt}\Big{)}^{2}.\] **Remark 5.21**.: 1. _It is worth mentioning that the results in Section 5.1 and 5.2 can be recovered if we remove the Holder continuity of the drift \(b\) in \((t,x)\). However, the price to pay is to further assume that \(\sigma=1\) and that the drift \(b\) does not depend on the control variable \(z\), i.e. \(b(t,x,y,z)=b(t,x,y)\), all the other assumptions remaining the same._ 2. _Comparing our results to those obtained in [1, 2], the bounds exhibit a dependence with respect to the driver of the BSDE and the solution to the quasi-linear PDE associated to the FBSDE._ _Example_.: Let \(b(t,x,y,z)=0,\,\sigma(t,x)=1,\,f(t,x,y,z)=y+z\) and \(\phi(x)=\frac{e^{x}}{1+e^{x}}\). Then the conditions of Theorem 5.11 are satisfied and thus the corresponding FBSDE has a unique strong solution. In addition, the density of the backward component satisfies the following bounds: for all \(t\in[0,T)\) and \(y>0\) it holds: \[\frac{\mathbb{E}|Y_{t}-\mathbb{E}[Y_{t}]|}{2t\left(\Upsilon e^{\Lambda_{2,b}t}\right)^{2}}\exp\Big{(}-\frac{(y-\mathbb{E}[Y_{t}])^{2}}{2t\left(\alpha(t)e^{-\Lambda_{2,b}t}\right)^{2}}\Big{)}\leq\rho_{Y_{t}}(y)\leq\frac{\mathbb{E}|Y_{t}-\mathbb{E}[Y_{t}]|}{2t\left(\alpha(t)e^{-\Lambda_{2,b}t}\right)^{2}}\exp\Big{(}-\frac{(y-\mathbb{E}[Y_{t}])^{2}}{2t\left(\Upsilon e^{\Lambda_{2,b}t}\right)^{2}}\Big{)},\] where \(\Lambda_{2,b}=\max(1,\|v^{\prime}\|_{\infty},\|v^{\prime\prime}\|_{\infty})\) and \(v\) is the solution to the following quasilinear PDE \[\begin{cases}\frac{\partial v}{\partial t}(t,x)+\frac{1}{2}\frac{\partial^{2}v}{\partial x^{2}}(t,x)=-v(t,x)-\nabla_{x}v(t,x)\\ v(T,x)=\phi(x).\end{cases} \tag{5.40}\] ### The discontinuous case In this subsection, by considering a trivial diffusive coefficient, the goal is to relax **(A1)** by removing the Holder continuity assumptions on the coefficients. We propose a refinement of Theorem 5.4 and Theorem 5.7 as well, when the drift coefficient is allowed to be merely discontinuous in the first and second component (see Assumption 3.2). The FBSDE of interest is given by \[\begin{cases}X_{t}^{x}=x+\int_{s}^{t}b(r,X_{r}^{x},Y_{r}^{x},Z_{r}^{x})\mathrm{d}r+W_{t}-W_{s},\\ Y_{t}^{x}=\phi(X_{T}^{x})+\int_{t}^{T}f(r,X_{r}^{x},Y_{r}^{x},Z_{r}^{x})\mathrm{d}r-\int_{t}^{T}Z_{r}^{x}\mathrm{d}W_{r}.\end{cases} \tag{5.41}\] We first study the regularity of solutions to (5.41) in this context. #### 5.3.1. Smoothness of solutions of QFBSDEs with discontinuous coefficients This subsection is devoted to the study of both the classical and the Malliavin differentiability of solutions to equation (5.41). For simplicity, we assume that equation (5.41) has initial value given by \((0,x)\in[0,T]\times\mathbb{R}^{d}.\) **Proposition 5.22** (Malliavin differentiability).: _Let Assumption (3.2) be in force with \(\sigma(t,x,y)\equiv\mathbb{I}_{d\times d}\).
Then for every \(\delta>0,\) the FBSDE (5.41) has a unique strong Malliavin differentiable solution for all \(t\in[0,T-\delta],T>0.\)_ _In particular, for \(d=1\), and for all \(0<s\leq t<T,\) a version of \((D_{s}X_{t}^{x},D_{s}Y_{t}^{x})\) has the following explicit representation_ \[D_{s}X_{t}^{x} =\exp\Big{(}-\int_{s}^{t}\int_{\mathbb{R}}\tilde{b}\left(u,z \right)L^{X^{x}}(\mathrm{d}u,\mathrm{d}z)\Big{)}, \tag{5.42}\] \[D_{s}Y_{t}^{x} =v^{\prime}(t,X_{t}^{x})D_{s}X_{t}^{x}, \tag{5.43}\] _where \(L^{X^{x}}(\mathrm{d}u,\mathrm{d}z)\) stands for the integration in space and time with respect to the local time of \(X^{x}\) and \(v^{\prime}\) is the spatial partial derivative of \(v\) solution to the PDE (5.28)_ Proof.: The proof follows as in [45, Theorem 4.4] by using the result of Malliavin differentiability of SDEs with irregular drift from [38]. The explicit representation (5.42) is well known (see for example [7, 44]), while (5.43) follows by applying the chain rule for Malliavin calculus (see [41, Proposition 1.2.4]) by noticing that \(v\) is Lipschitz continuous for all \(t<T\) (see (3.10)). This ends the proof. Next, we study the regularity of \(x\mapsto(X_{t}^{x},Y_{t}^{x})\) in the Sobolev sense. Let us consider the weighted Sobolev space \(W^{1,p}(\mathbb{R}^{d},\nu)\) which consists of functions \(\mu\in L^{p}(\mathbb{R}^{d},\nu)\) admitting weak derivative of first order denoted by \(\partial_{x_{i}}\mu\). We equip this space with the complete norm \[\|\mu\|_{W^{1,p}(\mathbb{R}^{d},\nu)}^{p}:=\|\mu\|_{L^{p}(\mathbb{R}^{d},\nu) }^{p}+\sum_{i=1}^{d}\|\partial_{x_{i}}\mu\|_{L^{p}(\mathbb{R}^{d},\nu)}^{p}.\] The space \(L^{p}(\mathbb{R}^{d},\nu)\) is the weighted Lebesgue space of measurable functions \(\mu:\mathbb{R}^{d}\to\mathbb{R}^{d}\) such that \(\|\mu\|_{L^{p}(\mathbb{R}^{d},\nu)}^{p}:=\int_{\mathbb{R}^{d}}|\mu(x)|^{p}\nu (x)\mathrm{d}x<\infty\) and the function \(\nu\) stands for the weight possessing a \(p\)th amount with respect to the Lebesgue measure on \(\mathbb{R}^{d},\) i.e., \(\nu:\mathbb{R}^{d}\to[0,\infty)\) such that \(\int_{\mathbb{R}^{d}}(1+|x|^{p})\nu(x)\mathrm{d}x<\infty\) for some \(p>1.\) **Proposition 5.23**.: _Let assumptions of Proposition 5.22 be in force. Then for every \(\delta>0,\) and for any bounded open set \(\mathcal{O}\subset\mathbb{R}^{d}\), the solution \((X^{x},Y^{x},Z^{x})\) to equation (5.41) satisfies:_ \[(X_{t}^{x},Y_{t}^{x})\in L^{2}(\Omega,W^{1,p}(\mathbb{R}^{d},\nu))\times L^{2 }(\Omega,W^{1,p}(\mathcal{O}))\text{ for all }t\in[0,T-\delta]. 
\tag{5.44}\] _Moreover for \(d=1\), and for all \(0\leq t<T,\) the following explicit representations hold \(\mathrm{d}t\otimes\mathrm{d}\mathbb{P}\)-a.s._ \[\xi_{t}^{x}:=\frac{\partial}{\partial x}X_{t}^{x} =\exp\Big{(}-\int_{0}^{t}\int_{\mathbb{R}}\tilde{b}\left(u,z\right) L^{X^{x}}(\mathrm{d}u,\mathrm{d}z)\Big{)}, \tag{5.45}\] \[\eta_{t}^{x}:=\frac{\partial}{\partial x}Y_{t}^{x} =v^{\prime}(t,X_{t}^{x})\xi_{t}^{x}, \tag{5.46}\] \(L^{X^{x}}(\mathrm{d}u,\mathrm{d}z)\) _stands for the integration in space and time with respect to the local time of \(X^{x}\) and \(v^{\prime}\) is the spatial partial derivative of \(v\) solution to PDE (5.28)._ Proof.: We recall that, Theorem 3.4 ensures the unique weak solvability of FBSDE (5.41) and there is a _"decoupling field"_\(v\in W^{1,2,d+1}_{loc}([0,T[\times\mathbb{R}^{d},\mathbb{R})\) solution to PDE (5.28) such that the forward equation in (5.27) can be written as: \[X_{t}^{x}=x+\int_{s}^{t}\tilde{b}(r,X_{r}^{x})\mathrm{d}r+W_{t}-W_{s}, \tag{5.47}\] where \(\tilde{b}(t,x):=b(t,x,v(t,x),\nabla_{x}v(t,x)).\) Using (3.6), (3.10) and the assumptions of Proposition 5.22 the drift \(\tilde{b}\) is bounded for all \(t\in[0,T-\delta]\). Thus, using [40, Theorem 3] the forward process \(X^{x}\) is Sobolev differentiable. On the other hand, the function \(v\) is Lipschitz continuous for all \(t<T\) (see (3.10) ) then for all \(\omega\) outside every \(\mathbb{P}\)-null set, the process \(Y_{t}^{x}(\omega)=v(t,X_{t}^{x}(\omega))\) belongs to \(W_{Loc}^{1,1}(\mathbb{R})\) (see [33]). Then (5.46) follows by applying the chain rule developed in the same paper [33, Theorem 1.1]. This concludes the proof. **Remark 5.24**.: _The following relation between the Sobolev and the Malliavin derivatives of the forward and backward components hold respectively \(\mathrm{d}t\otimes\mathbb{P}\)-a.s._ \[D_{s}X_{t}^{x} =\xi_{t}^{x}(\xi_{s}^{x})^{-1},\] \[D_{s}Y_{t}^{x} =\eta_{t}^{x}(\xi_{s}^{x})^{-1}\] _for any \(s,t\in[0,T),s\leq t.\)_ #### 5.3.2. Density analysis of FBSDEs with discontinuous drift case The main objective in this section is to conduct the analysis of densities obtained in the preceding section by assuming now **Assumption 5.25**.: 1. _Assumption_ 3.2 _is valid in one dimension_ \((d=1)\) _with_ \(\sigma\equiv 1\) _and_ \(b\) _bounded in the forward and the control variable i.e.,_ \(\forall(t,x,y,z)\in[0,T]\times\mathbb{R}\times\mathbb{R}\times\mathbb{R},|b( t,x,y,z)|\leq\Lambda(1+|y|)\)_._ 2. _Assumptions_ **(A4)** _and_ **(A5)** _hold._ **Theorem 5.26** (Existence of a Density for \(X_{t}\)).: _Under_ **(A9)**_, the finite dimensional laws of the strong solution to the forward SDE (5.47) are absolutely continuous with respect to the Lebesgue measure. Precisely for fixed \(t\in(0,T],\) the law of \(X_{t}^{x}\) denoted by \(\mathcal{L}(X_{t}^{x})\) has a density with respect to the Lebesgue measure._ Proof.: This follows by showing that the exponential term in equation (5.42) is finite almost surely. The latter is obtained by using Girsanov theorem and Lemma A.1. This ends the proof. _Existence of a Density for the backward component \(Y_{t}\)._ We first derive the linear BSDE satisfied by a version \((D_{s}Y_{t}^{x},D_{s}Z_{t}^{x})\) of the Malliavin derivatives of the solution to (5.41). 
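As an aside, the conclusion of Theorem 5.26 can be probed numerically: even with a bounded, discontinuous drift, the marginal law of the forward component spreads out like a diffuse density. The sketch below uses an Euler-Maruyama scheme with a toy sign-type drift (an illustrative assumption, not the drift of this section) and simply reports sample quantiles of \(X_{T}\); it is an illustration only.

```python
import numpy as np

# Minimal Euler-Maruyama sketch for the forward equation (5.47) with sigma = 1
# and a bounded, discontinuous TOY drift b~(t, x) = sign(x); this drift is an
# illustrative assumption, not the drift of the paper.  The spread of the
# sample quantiles of X_T is consistent with X_T having a diffuse law.

T, n_steps, n_paths, x0 = 1.0, 500, 50_000, 0.0
dt = T / n_steps
rng = np.random.default_rng(2)

X = np.full(n_paths, x0)
for _ in range(n_steps):
    X = X + np.sign(X) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)

print("quantiles of X_T:", np.quantile(X, [0.05, 0.25, 0.5, 0.75, 0.95]))
```

With this illustration in mind, we return to the linear BSDE announced above.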
**Proposition 5.27**.: _Under Assumption 5.25, a version of \((D_{s}Y_{t},D_{s}Z_{t})\) satisfies_ \[D_{s}Y_{t}=0,\quad D_{s}Z_{t}=0,\quad t<s\leq T,\] \[D_{s}Y_{t} =\phi^{\prime}(X_{T})D_{s}X_{T}+\int_{t}^{T}\langle\nabla f(r,\mathbf{X}_{r}),D_{s}\mathbf{X}_{r}\rangle\mathrm{d}r-\int_{t}^{T}D_{s}Z_{r}\mathrm{d}W_{r}, \tag{5.48}\] _where \(D_{s}X_{t}\) is given by (5.42). Moreover, \((D_{t}Y_{t})_{0\leq t\leq T}\) is a continuous version of \((Z_{t})_{0\leq t\leq T}\)._ Proof.: The first part of the proof follows from Proposition 5.22. As for the representation, we appeal to [23, Theorem 5.3] since \(\tilde{b}\) is uniformly bounded for all \(t\in[0,T]\). The proof is completed. Below is stated the main result of this subsection. **Theorem 5.28**.: _Suppose Assumption 5.25 is in force. Suppose in addition there is \(\mathcal{A}\in\mathcal{B}(\mathbb{R})\) such that \(\mathbb{P}(X_{T}\in\mathcal{A}|\mathfrak{F}_{t})>0\) and one of the following assumptions holds_ 1. **(H+)** \(\phi^{\prime}\geq 0\)_,_ \(\phi^{\prime}_{|\mathcal{A}}>0,\mathcal{L}(X_{T})\)_-a.e. and_ \(\underline{f}\geq 0\)_,_ 2. **(H-)** \(\phi^{\prime}\leq 0\)_,_ \(\phi^{\prime}_{|\mathcal{A}}<0,\mathcal{L}(X_{T})\)_-a.e. and_ \(\overline{f}\leq 0\)_._ _Then, \(Y_{t}\) possesses an absolutely continuous law with respect to the Lebesgue measure on \(\mathbb{R}\)._ Proof.: From Assumption 5.25, we deduce that \[|f^{\prime}_{z}(s,\mathbf{X}_{s})|\leq\Lambda\big{(}1+\ell(Y_{s})|Z_{s}|\big{)}\in\mathcal{H}_{\mathrm{BMO}}. \tag{5.49}\] Let \(\mathbb{Q}\) be the probability measure defined by \[\frac{\mathrm{d}\mathbb{Q}}{\mathrm{d}\mathbb{P}}\Big{|}_{\mathfrak{F}_{t}}=\mathcal{E}\Big{(}f^{\prime}_{z}(t,\mathbf{X}_{t})*W\Big{)}_{t}=M_{t}. \tag{5.50}\] It follows from (5.49) that \(M_{t}\) is uniformly integrable. Therefore, taking the conditional expectation with respect to \(\mathbb{Q}\) on both sides of (5.48) gives for all \(0\leq s\leq t<T\) \[D_{s}Y_{t}=\mathbb{E}^{\mathbb{Q}}\Big{(}\phi^{\prime}(X_{T})D_{s}X_{T}+\int_{t}^{T}\big{(}f^{\prime}_{x}(r,\mathbf{X}_{r})D_{s}X_{r}+f^{\prime}_{y}(r,\mathbf{X}_{r})D_{s}Y_{r}\big{)}\,\mathrm{d}r\Big{|}\mathfrak{F}_{t}\Big{)} \tag{5.51}\] Using Remark 5.24 and the well known linearisation method, \(D_{s}Y_{t}\) can be rewritten: \[D_{s}Y_{t} =\mathbb{E}^{\mathbb{Q}}\Big{[}e^{\int_{t}^{T}f^{\prime}_{y}(u,\mathbf{X}_{u})\mathrm{d}u}\phi^{\prime}(X_{T})\xi_{T}+\int_{t}^{T}e^{\int_{t}^{T}f^{\prime}_{y}(u,\mathbf{X}_{u})\mathrm{d}u}f^{\prime}_{x}(r,\mathbf{X}_{r})\xi_{r}\mathrm{d}r\Big{|}\mathfrak{F}_{t}\Big{]}(\xi_{s})^{-1}\] \[=\mathbb{E}\Big{[}\psi_{T}\phi^{\prime}(X_{T})\xi_{T}+\int_{t}^{T}\psi_{r}f^{\prime}_{x}(r,\mathbf{X}_{r})\xi_{r}\mathrm{d}r\Big{|}\mathfrak{F}_{t}\Big{]}(\psi_{t})^{-1}(\xi_{s})^{-1},\] where the last equality is due to Bayes' rule, with \(\psi\) given by \[\psi_{t}:=\exp\Big{(}\int_{0}^{t}(f^{\prime}_{y}(u,\mathbf{X}_{u})-\frac{1}{2}|f^{\prime}_{z}(u,\mathbf{X}_{u})|^{2})\mathrm{d}u+\int_{0}^{t}f^{\prime}_{z}(u,\mathbf{X}_{u})\mathrm{d}W_{u}\Big{)},\] and the expression of \(\xi\) is given in (5.45).
Therefore, from the computations above, the Malliavin covariance \(\Gamma_{Y_{t}}\) of \(Y_{t}\) is given by \[\Gamma_{Y_{t}}=\Big{(}\mathbb{E}\Big{[}\psi_{T}\phi^{\prime}(X_{T})\xi_{T}+ \int_{t}^{T}\psi_{r}f^{\prime}_{x}(r,\mathbf{X}_{r})\xi_{r}\mathrm{d}r\Big{|} \mathfrak{F}_{t}\Big{]}\Big{)}^{2}\times(\psi_{t}^{-1})^{2}\int_{0}^{t}(\xi_{ s}^{-1})^{2}\mathrm{d}s.\] It is enough to prove that \(\mathbb{E}\Big{[}\psi_{T}\phi^{\prime}(X_{T})\xi_{T}+\int_{t}^{T}\psi_{r}f^{ \prime}_{x}(r,\mathbf{X}_{r})\xi_{r}\mathrm{d}r\Big{|}\mathfrak{F}_{t}\Big{]}\neq 0\) under assumption (H+) to deduce the desired result. Let us remark that, the product \(\psi_{T}\xi_{T}\) can be rewritten as follow: \[\psi_{T}\xi_{T} =\exp\Big{(}-\int_{0}^{T}\int_{\mathbb{R}}\tilde{b}\left(u,z \right)L^{X^{x}}(\mathrm{d}u,\mathrm{d}z)+\int_{0}^{T}f^{\prime}_{y}(u, \mathbf{X}_{u})\mathrm{d}u\Big{)}\] \[\quad\times\exp\Big{(}\int_{0}^{T}f^{\prime}_{z}(u,\mathbf{X}_{u} )\mathrm{d}W_{u}-\frac{1}{2}\int_{0}^{T}|f^{\prime}_{z}(u,\mathbf{X}_{u})|^{2} \mathrm{d}u\Big{)}=H_{T}\times M_{T}\] The Bayes' rule once more yields \[\mathbb{E}\Big{[}\psi_{T}\phi^{\prime}(X_{T})\xi_{T}+\int_{t}^{T}\psi_{r}f^{ \prime}_{x}(r,\mathbf{X}_{r})\xi_{r}\mathrm{d}r\Big{|}\mathfrak{F}_{t}\Big{]} =\mathbb{E}^{\mathbb{Q}}\Big{[}\phi^{\prime}(X_{T})H_{T}+\int_{t}^{T}f^{ \prime}_{x}(r,\mathbf{X}_{r})H_{r}\mathrm{d}r\Big{|}\mathfrak{F}_{t}\Big{]}M_{t}\] Using the fact that \(\phi^{\prime}+f^{\prime}_{x}\geq\underline{\phi}+\underline{f}\), we deduce: \[\mathbb{E}\Big{[}\psi_{T}\phi^{\prime}(X_{T})\xi_{T}+\int_{t}^{T} \psi_{r}f^{\prime}_{x}(r,\mathbf{X}_{r})\xi_{r}\mathrm{d}r\Big{|}\mathfrak{F}_ {t}\Big{]}\geq \mathbb{E}^{\mathbb{Q}}\Big{[}1_{\{X_{T}\in\mathcal{A}\}}\Big{(} \underline{\phi}H_{T}+\underline{f}\int_{t}^{T}H_{r}\mathrm{d}r\Big{)}\Big{|} \mathfrak{F}_{t}\Big{]}M_{t}\] Using the bound \(|f^{\prime}_{y}(s,\mathbf{X}_{s})|\leq\Lambda(1+|Z_{s}|^{\alpha})\) and Young inequality \(|z|^{\alpha}\leq c(1+|z|^{2})\), for \(\alpha\in(0,1)\) we deduce \[\mathbb{E}\Big{[}\psi_{T}\phi^{\prime}(X_{T})\xi_{T}+\int_{t}^{T} \psi_{r}f^{\prime}_{x}(r,\mathbf{X}_{r})\xi_{r}\mathrm{d}r\Big{|}\mathfrak{F}_ {t}\Big{]}\geq \mathbb{E}^{\mathbb{Q}}\Big{[}1_{\{X_{T}\in\mathcal{A}\}}\Big{(} \underline{\phi}^{\prime}e^{-\int_{0}^{T}\int_{\mathbb{R}}\tilde{b}\left(u,z \right)L^{X^{x}}(\mathrm{d}u,\mathrm{d}z)+\int_{0}^{T}f^{\prime}_{y}(u, \mathbf{X}_{u})\mathrm{d}u}\] \[+\underline{f}\int_{t}^{T}e^{-\int_{0}^{r}\int_{\mathbb{R}}\tilde{b} \left(u,z\right)L^{X^{x}}(\mathrm{d}u,\mathrm{d}z)}e^{\int_{0}^{T}f^{\prime}_{y} (u,\mathbf{X}_{u})\mathrm{d}u}\mathrm{d}r\Big{)}\big{|}\mathfrak{F}_{t}\Big{]}M _{t}\] \[\geq \mathbb{E}^{\mathbb{Q}}\Big{[}1_{\{X_{T}\in\mathcal{A}\}}e^{- \Lambda T}\Big{(}\underline{\phi}^{\prime}e^{-\int_{0}^{T}\int_{\mathbb{R}} \tilde{b}\left(u,z\right)L^{X^{x}}(\mathrm{d}u,\mathrm{d}z)}e^{-\Lambda\int_{0}^ {T}|Z_{u}|^{2}\mathrm{d}u}\] \[+\underline{f}\int_{t}^{T}e^{-\int_{0}^{r}\int_{\mathbb{R}}\tilde{b} \left(u,z\right)L^{X^{x}}(\mathrm{d}u,\mathrm{d}z)}e^{-\Lambda\int_{0}^{T}|Z_{ u}|^{2}\mathrm{d}u}\Big{)}\big{|}\mathfrak{F}_{t}\Big{]}M_{t}.\] Provided that the law of \(X_{T}\) is absolutely continuous with respect to the Lebesgue measure, and that \(-\int_{0}^{T}\int_{\mathbb{R}}\tilde{b}\left(u,z\right)L^{X^{x}}(\mathrm{d}u, \mathrm{d}z)<\infty\), \(\mathbb{Q}\)-a.s., we deduce that \(\mathbb{E}\Big{(}\psi_{T}\phi^{\prime}(X_{T})\xi_{T}+\int_{t}^{T}\psi_{r}f^{ \prime}_{x}(r,\mathbf{X}_{r})\xi_{r}\mathrm{d}r\Big{|}\mathfrak{F}_{t}\Big{)}>0\). 
The absolute continuity of the law of with respect to the Lebesgue measure follows from assumption. It then remains to show that: \[-\int_{0}^{T}\int_{\mathbb{R}}\tilde{b}\left(u,z\right)L^{X^{x}}(\mathrm{d}u, \mathrm{d}z)<\infty,\ \mathbb{Q}\text{-a.s.}\] Thanks to Girsanov's theorem, the decomposition (A.2) and Holder inequality we have \[\mathbb{E}^{\mathbb{Q}}\Big{[}-\int_{0}^{T}\int_{\mathbb{R}}\tilde{b} \left(s,z\right)L^{X^{x}}(\mathrm{d}s,\mathrm{d}z)\Big{]}\] \[= \mathbb{E}\Big{[}-M_{T}\int_{0}^{T}\int_{\mathbb{R}}\tilde{b}\left( s,z\right)L^{X^{x}}(\mathrm{d}s,\mathrm{d}z)\Big{]}\leq\mathbb{E}\Big{[}M_{T}^{2} \Big{]}^{\frac{1}{2}}\mathbb{E}\Big{[}\Big{(}-\int_{0}^{T}\int_{\mathbb{R}} \tilde{b}\left(s,z\right)L^{X^{x}}(\mathrm{d}s,\mathrm{d}z)\Big{)}^{2}\Big{]}^{ \frac{1}{2}}\] \[\leq C\mathbb{E}\Big{[}e^{2\int_{0}^{T}\tilde{b}\left(s,W_{s}^{x} \right)\mathrm{d}W_{s}-\int_{0}^{T}(\tilde{b}\left(s,W_{s}^{x}\right))^{2} \mathrm{d}s}\Big{(}-\int_{0}^{T}\tilde{b}(s,W_{s}^{x})\mathrm{d}W_{s}\] \[-\int_{0}^{T}\tilde{b}(T-s,\widehat{W}_{s}^{x})\mathrm{d}B(s)+ \int_{0}^{T}\tilde{b}(T-s,\widehat{W}_{s}^{x})\frac{\widehat{W}_{s}}{T-s} \mathrm{d}s\Big{)}^{2}\Big{]}^{\frac{1}{2}}\] \[\leq C\mathbb{E}\Big{[}e^{4\int_{0}^{t}\tilde{b}\left(s,W_{s}^{x} \right)\mathrm{d}W_{s}-2\int_{0}^{t}(\tilde{b}\left(s,W_{s}^{x}\right))^{2} \mathrm{d}s}\Big{]}^{1/4}\Big{\{}\mathbb{E}\Big{[}\Big{(}\int_{0}^{T}\tilde{b }(s,W_{s}^{x})\mathrm{d}W_{s}\Big{)}^{4}\Big{]}^{1/4}\] \[+\mathbb{E}\Big{[}\Big{(}\int_{0}^{T}\tilde{b}(T-s,\widehat{W}_{ s}^{x})\mathrm{d}B_{s}\Big{)}^{4}\Big{]}^{1/4}+\mathbb{E}\Big{[}\Big{(}\int_{0}^{T} \tilde{b}(T-s,\widehat{W}_{s}^{x})\frac{\widehat{W}_{s}}{T-s}\mathrm{d}s\Big{)} ^{4}\Big{]}^{1/4}\Big{\}}\] \[\leq CI_{1}(I_{2}+I_{3}+I_{4}). \tag{5.52}\] Using Cauchy-Schwartz inequality, we have that \[I_{1}\leq\mathbb{E}\Big{[}e^{4\int_{0}^{t}\tilde{b}\left(s,W_{s}^{x}\right) \mathrm{d}W_{s}-8\int_{0}^{t}(\tilde{b}\left(s,W_{s}^{x}\right))^{2}\mathrm{d} s}\Big{]}^{1/8}\mathbb{E}\Big{[}e^{6\int_{0}^{t}(\tilde{b}\left(s,W_{s}^{x} \right))^{2}\mathrm{d}s}\Big{]}^{1/8}.\] Applying Girsanov theorem to the martingale \(4\int_{0}^{t}\tilde{b}(s,W_{s}^{x})\mathrm{d}W_{s}\), the first term in the bound above is equal to \(1\). The boundedness of \(\tilde{b}\) yields the boundedness of the second term. Using Ito isometry (or Burkholder-Davis-Gundy), the boundedness of \(\tilde{b}\) implies the boundedness of \(I_{2}\), \(I_{3}\) and \(I_{4}\) (see for e.g. [7]). The proof is completed. **Remark 5.29**.: _Another interpretation of the result in Theorem 5.28 is that the spatial derivative \(v^{\prime}\) of \(v\) (solution to quasi-linear PDE (5.28)) is sucht that \(v^{\prime}(t,x)>0\) a.e. (or \(v^{\prime}(t,x)<0\) a.e.). This follows from the fact that \(D_{t}Y_{t}\) is a continuous version of \(Z_{t}=v^{\prime}(t,x)\) for all \(t\in(0,T)\) (see [2, Remark 3.4]). We emphasize that the solution \(v\) here is understood in the weak sense. In other words, we obtain conditions that guarantee the strict monotonicity of weak solutions of parabolic PDE by means of purely probabilistic tools. To our knowledge, such investigation has always been performed only for classical solution of parabolic or elliptic PDEs._ ## 6. Application to smoothness of densities The main aim in this section is to apply the results obtained above to study the regularity of densities of solution to coupled FBSDEs with quadratic drivers under minimal conditions. 
Indeed, it is well known that the Malliavin calculus stands as a powerful tool to show hypoellipticity (existence of smooth density) under weaker conditions than Hormander's ones (see [41]). In particular, we exploit Theorem 5.13 to prove that the densities obtained for \(X\), \(Y\) and \(Z\) in the above sections, are indeed Holder continuous despite the fact that the drift coefficient is at most only Lipschitz continuous. We consider the following FBSDE \[\begin{cases}X_{t}=x+\int_{0}^{t}b(s,X_{s},Y_{s},Z_{s})\mathrm{d}s+W_{t},\\ Y_{t}=\phi(X_{T})+\int_{t}^{T}f(s,X_{s},Y_{s},Z_{s})\mathrm{d}s-\int_{t}^{T}Z _{s}\mathrm{d}W_{s}.\end{cases} \tag{6.1}\] Recall that under assumption **(A6)** and if \(\phi^{\prime}\geq 0\), \(\underline{f}\geq 0\) (resp. \(\phi^{\prime}\leq 0\) and \(\underline{f}\leq 0\)) then for all \(0<s\leq t<T\), \(D_{s}Y_{t}\geq 0\) (resp. \(D_{s}Y_{t}\leq 0\) ). Moreover, if there is a Borel set \(\mathcal{A}\) such that \(\mathbb{P}(X_{T}\in\mathcal{A}/\mathfrak{F}_{t})>0\) and \(\phi^{\prime}_{|\mathcal{A}}>0\) (resp. \(\phi^{\prime}_{|\mathcal{A}}<0\)) then \(D_{s}Y_{t}>0\) (resp. \(D_{s}Y_{t}<0\) ). In addition, if **(A7)** and **(A8)** are also valid, then the processes \(X,Y\) and \(Z\) solutions to (6.1) are twice Malliavin differentiable. Let \((G_{k})_{0\leq k\leq n}\) be a random variable such that \(G_{0}=1\) and for all \(0\leq s\leq t\leq T\) \[G_{k}=\delta\Big{(}G_{k-1}\Big{(}\int_{0}^{T}D_{s}X_{t}\mathrm{d}s\Big{)}^{-1} \Big{)},\text{ if }1\leq k\leq n+1,\] with \(\delta\) denoting the Skorohod integral introduced in (2.1). From [41, P.115], if \(X_{t}\in\mathbb{D}^{1,2}(\Omega)\), \(\int_{0}^{T}D_{s}X_{t}\mathrm{d}s\neq 0\)\(\mathbb{P}\)-a.s. and \(G_{k}\Big{(}\int_{0}^{T}D_{s}X_{t}\mathrm{d}s\Big{)}^{-1}\) belongs to \(\mathrm{Dom}(\delta)\) then the probability density \(\rho_{X_{t}}\) of the forward equation \(X_{t}\) is of class \(C^{n}(\mathbb{R})\) and \[\frac{d^{n}}{dx^{n}}\rho_{X_{t}}(x)=(-1)^{n}\mathbb{E}\left[1_{\{X_{t}>x\}}G_{ n+1}\right] \tag{6.2}\] In the above context, the Malliavin derivative of \(X_{t}\) is explicitly given by \[D_{s}X_{t}=\exp\Big{(}\int_{s}^{t}\tilde{b}^{\prime}(u,X_{u})\mathrm{d}u\Big{)},\text{ for all }0\leq s\leq t,\] where \(\tilde{b}(t,x)=b(t,x,v(t,x),v^{\prime}(t,x))\) and \(\tilde{b}^{\prime}\) denotes the weak derivative of \(\tilde{b}\) with respect to the variable \(x\). Hence for any \(t\in(0,T]\) there is \(\epsilon>0\) such that \(\int_{0}^{T}D_{s}X_{t}\mathrm{d}s\geq\epsilon>0.\) Using the smoothness of the function \(y\mapsto y^{-1}\) on \((\epsilon,\infty)\) and the fact that \(X_{t}\in\mathbb{D}^{2,2}(\Omega)\) we deduce that \(F:=\Big{(}\int_{0}^{T}D_{s}X_{t}\mathrm{d}s\Big{)}^{-1}\in\mathbb{D}^{1,2}(\Omega)\), so \(F\in\mathrm{Dom}(\delta)\) and \(G_{1}=FW(T)-\int_{0}^{T}D_{s}F\mathrm{d}s\) which implies that \(G_{1}\in\mathbb{D}^{1,2}(\Omega)\), then \(G_{1}F\in\mathbb{D}^{1,2}(\Omega)\) and \(G_{1}F\in\mathrm{Dom}(\delta)\). Similarly, let \((H_{k})_{0\leq k\leq n}\) be a random variable such that \(H_{0}=1\) and for all \(0\leq s\leq t\leq T\) \[H_{k}=\delta\Big{(}H_{k-1}\Big{(}\int_{0}^{T}D_{s}Y_{t}\mathrm{d}s\Big{)}^{-1 }\Big{)},\text{ if }1\leq k\leq n+1.\] Since for all \(0\leq s\leq t\), \(D_{s}Y_{t}>0\)\(\mathbb{P}\)-a.s., we deduce that, there is a constant \(\epsilon_{1}>0\) such that \(\int_{0}^{T}D_{s}Y_{t}\mathrm{d}s\geq\epsilon_{1}\). 
Using now the smoothness of the function \(y\mapsto y^{-1}\) on \((\epsilon_{1},\infty)\) and the fact that \(Y_{t}\in\mathbb{D}^{2,2}(\Omega)\) we deduce that \(F_{1}:=\Big{(}\int_{0}^{T}D_{s}Y_{t}\mathrm{d}s\Big{)}^{-1}\in\mathbb{D}^{1,2 }(\Omega)\) so \(H_{1}F_{1}\in\mathrm{Dom}(\delta)\). We have shown that **Proposition 6.1**.: _Under_ **(A6)**_,_**(A7)**_,_**(A8)** _and assuming further that either (A+) or (A-) is valid. Then, the densities \(\rho_{X_{t}}\) and \(\rho_{Y_{t}}\) of the processes \(X_{t}\) and \(Y_{t}\) solution to (6.1), respectively belong to \(C^{0}(\mathbb{R})\) such that_ \[\rho_{X_{t}}(x) =\mathbb{E}\left(1_{\{X_{t}>x\}}G_{1}\right)\] \[\rho_{Y_{t}}(x) =\mathbb{E}\left(1_{\{Y_{t}>x\}}H_{1}\right).\] _Moreover, if the assumptions of Theorem 5.19 are valid then for all \(0\leq s\leq t\), \(D_{s}Z_{t}>0\)\(\mathbb{P}\)-a.s. and the density \(\rho_{Z_{t}}\) of the solution process \(Z\) to (6.1) belongs to \(C^{0}(\mathbb{R})\) and_ \[\rho_{Z_{t}}(x)=\mathbb{E}\left(1_{\{Z_{t}>x\}}I_{1}\right),\] _where \(I_{1}=\delta\Big{(}\Big{(}\int_{0}^{T}D_{s}Z_{t}\mathrm{d}s\Big{)}^{-1}\Big{)}\)._ The following result improves the regularity of the densities of solutions \(X,Y\) and \(Z\) to the system (6.1) obtained above. The method consists to investigate the integrability property of the Malliavin covariance matrix of the different processes (see Lemma 6.2 below). We recall that for any random variable \(F\), the matrix \(\Gamma_{F}=(\Gamma_{F}^{ij})_{i,j=1,\ldots,d}\), given by \(\Gamma_{F}^{ij}:=\langle DF^{i},DF^{j}\rangle_{\mathcal{H}^{2}}\) is said to be non-degenerate if for all \(p\geq 1\) \[\mathbb{E}\Big{[}\left(\Gamma_{F}\right)^{-p}\Big{]}<\infty. \tag{6.3}\] **Lemma 6.2** (Proposition 23 in [5]).: _Let \(F=(F^{1},\cdots,F^{d})\) with \(F^{1},\cdots,F^{d}\in\bigcap_{p\geq 1}\mathbb{D}^{k+1,p}\), such that \(\Gamma_{F}\) satisfies (6.3). Then \(\rho_{F}\in C^{k-1,\beta}(\mathbb{R})\) for some \(k\geq 1\) and \(\beta\in(0,1)\) i.e. \(\rho_{F}\) is \(k-1\)-times differentiable with Holder continuous derivatives of exponent \(\beta<1\)._ We will first prove that the densities of \(X_{t}\) and \(Y_{t}\) are Holder continuous. **Proposition 6.3**.: _Let assumption_ **(A6)**_,_**(A7)** _and_ **(A8)** _be in force. Suppose in addition that there is \(\mathcal{A}\in\mathcal{B}(\mathbb{R})\) such that \(\mathbb{P}(X_{T}\in\mathcal{A}|\mathfrak{F}_{t})>0\) and one of the assumptions (A+) or (A-) holds. Then, \(\rho_{X_{t}}\in C^{0,\beta}(\mathbb{R})\) for all \(t\in[0,T]\) and for all \(t\in[0,T)\), \(Y_{t}\) has a \(\beta\)-Holder continuous density \(\rho_{Y_{t}}\) given by_ \[\rho_{Y_{t}}(x)=\rho_{Y_{t}}(x_{0})\exp\Big{(}-\int_{x_{0}}^{x}w_{Y_{t}}(z) \mathrm{d}z\Big{)},\,x\in\mathrm{supp}\,\rho_{Y_{t}},\] _where \(x_{0}\) is a point in the interior of \(\mathrm{supp}(\rho_{Y_{t}})\) and_ \[w_{Y_{t}}(z):=\mathbb{E}\Big{[}\delta\Big{(}\int_{0}^{T}D_{s}Y_{t}\mathrm{d}s \Big{)}^{-1}\Big{|}Y_{t}=z\Big{]},\] \(\delta\) _denotes the Skorohod integral introduced in (2.1)._ Proof.: From Lemma 5.9, we know that \(\tilde{b}\) is uniformly bounded and Lipschitz continuous in its spatial variable for all \(t\in[0,T)\). Then, using [8, Proposition 4.4] the following holds \[\Big{(}\det\Gamma_{X_{t}}\Big{)}^{-1}\in\bigcap_{p\geq 1}L^{p}(\Omega).\] This implies that \(\rho_{X_{t}}\in C^{0,\beta}(\mathbb{R})\) for all \(t\in[0,T)\) with \(\beta\in(0,1)\). 
On the other hand, for all \(t\in[0,T)\) we deduce \[\det\Gamma_{Y_{t}}=(v^{\prime}(t,X_{t}))^{2}\det\Gamma_{X_{t}}\geq\alpha^{2}( t)\det\Gamma_{X_{t}},\] where \(\alpha(t)\) is the positive function such that \(v^{\prime}(t,X_{t})\geq\alpha(t)\) or \(v^{\prime}(t,X_{t})\leq-\alpha(t)\)\(\mathbb{P}\)-a.s. (see proof of Theorem 5.11 ). Without loss of generality, we assume that there is a constant \(\epsilon_{1}>0\) such that \(\alpha(t)\geq\epsilon_{1}\). Hence, \(\Big{(}\det\Gamma_{Y_{t}}\Big{)}^{-1}\leq\alpha^{-2}(t)\Big{(}\det\Gamma_{X_ {t}}\Big{)}^{-1}\in\bigcap_{p\geq 1}L^{p}(\Omega)\). For the explicit representation of the density it follows by combining arguments of the proof of [41, Proposition 2.1.1] and [15, Proposition 2]. This concludes the proof. Let us turn now to the case of the control process \(Z\). **Proposition 6.4**.: _Assume that there is \(\mathcal{A}\in\mathcal{B}(\mathbb{R})\) such that \(\mathbb{P}(X_{T}\in\mathcal{A}/\mathfrak{F}_{t})>0\) and let assumptions of Theorem 5.19 be in force. Then the control process \(Z\) solution to (6.1) has a density \(\rho_{Z_{t}}\) which is Holder continuous and the following explicit representation holds_ \[\rho_{Z_{t}}(z)=\rho_{Z_{t}}(x_{0})\exp\Big{(}-\int_{z_{0}}^{z}w_{Z_{t}}(u) \mathrm{d}u\Big{)},\,z\in\mathrm{supp}\,\rho_{Z_{t}},\] _where \(z_{0}\) is a point in the interior of \(\mathrm{supp}(\rho_{Z_{t}})\) and_ \[w_{Z_{t}}(u):=\mathbb{E}\Big{[}\delta\Big{(}\int_{0}^{T}D_{s}Z_{t}\mathrm{d}s \Big{)}^{-1}\Big{|}Z_{t}=u\Big{]},\] \(\delta\) _denotes the Skorohod integral introduced in (2.1)._ Proof.: We recall that for all \(t\in[0,T)\)\(Z_{t}=v^{\prime}(t,X_{t})\)\(\mathbb{P}\)-a.s. Hence \[\det\Gamma_{Z_{t}}=(v^{\prime\prime}(t,X_{t}))^{2}\det\Gamma_{X_{t}}\geq\omega( t)^{2}\det\Gamma_{X_{t}},\] where \(\omega(t)\) is the positive function such that \(v^{\prime\prime}(t,X_{t})\geq\omega(t)\) or \(v^{\prime\prime}(t,X_{t})\leq-\omega(t)\)\(\mathbb{P}\)-a.s. (see proof of Corollary 5.20 ). Then, for all \(p\geq 1\) \[\mathbb{E}\left[\Big{(}\det\Gamma_{Z_{t}}\Big{)}^{-p}\right]\leq\omega^{-2p}( t)\mathbb{E}\left[\Big{(}\det\Gamma_{X_{t}}\Big{)}^{-p}\right]<\infty.\] **Conclusion and Possible extension.** In this paper, we derive some conditions under which the solution of a system of coupled forward-backward SDEs with non-smooth drift coefficient and quadratic driver admits a density which is absolutely continuous with respect to the Lebesgue measure. Two cases were considered: the Holder continuous drift case and the discontinuous drift one. In the latter, we only consider a Brownian motion with a drift coefficient which in fact depends on the solution \((Y,Z)\) of the backward SDE. It is worth noting that, there is less hope to obtaining such densities' results on coupled FBSDEs with non constant diffusive coefficient while the drift coefficient is allowed to be discontinuous. Indeed, via the decoupling field the forward equation reduces to a classical SDE with singular drifts and the solvability of such equations with non constant volatility remains a big challenge. Due to the quadratic behavior of the driver, most of the results derived in this paper are restricted to the one dimension setting. However, the result on the weak solvability of FBSDEs obtained in Section 3 (see Theorem 3.4) can be extended in the multidimensional setting if one considers that the driver \(f\) is at most linear growth in \(z\) (see Section 8 in [12]). 
In this case, the solution \((X,Y,Z)\) will be taken in the space \(\mathcal{S}^{2}(\mathbb{R}^{d})\times\mathcal{S}^{2}(\mathbb{R}^{q})\times \mathcal{H}^{2}(\mathbb{R}^{q\times d})\), where \(q\) represents the dimension of the backward component \(Y\). In this context, the strong solvability results of FBSDEs with the additional assumptions that the coefficients are Holder continuous derived in Section 5 remain valid as well as the density's result of the forward component \(X\). The case for the backward component \(Y\) requires more care, since the question relative to the strict positivity of solution to multidimensional quasi-linear PDE remains a non trivial problem to handle. However, by combining the techniques developed in this paper and the recent significant development in [11] we believe that one can successively handle this question. Many other interesting generalisation of this paper are conceivable, among them we point out a possible extension to the multidimensional quadratic setting of fully coupled FBSDEs by using the recent results from [9] and [25]. These questions are under investigation for future work. ## Appendix A Integration with local time Here, we provide some basic notions related to the integration with local time. We refer the reader to [17], [18] and [7] for more information. The integration with respect to local time-space is defined for integrands that taking value in the space \((\mathcal{H}_{x},\|\cdot\|_{x})\) (see e.g. [17]) representing the space of Borel measurable functions \(\phi:[0,T]\times\mathbb{R}\to\mathbb{R}\) with the norm \[\left\|\phi\right\|_{x}:=2\Big{(}\int_{0}^{1}\int_{\mathbb{R}}\phi^{2}(s,z) \exp(-\frac{|z-x|^{2}}{2s})\frac{\mathrm{d}s\,\mathrm{d}z}{\sqrt{2\pi s}} \Big{)}^{\frac{1}{2}}+\int_{0}^{1}\int_{\mathbb{R}}|z-x||\phi(s,x)|\exp(- \frac{|z-x|^{2}}{2s})\frac{\mathrm{d}s\,\mathrm{d}z}{s\sqrt{2\pi s}}.\] In addition, any bounded function \(\phi\) belongs to \(\mathcal{H}_{x}\) and the following representation holds for all \(t\in[0,T]\) (see for instance [7, Lemma 2.11]) : \[\int_{0}^{t}\partial_{x}\phi(s,X^{x}(s))\mathrm{d}s=-\int_{0}^{t}\int_{ \mathbb{R}}\phi(s,z)L^{X^{x}}(\mathrm{d}s,\mathrm{d}z).\] (A.1) Furthermore for \(\phi\in\mathcal{H}_{0}\), the subsequent decomposition is valid (see for example [18, Theorem 2.1]) \[\int_{0}^{t}\int_{\mathbb{R}}\phi(s,z)L^{W^{x}}(\mathrm{d}s, \mathrm{d}z) =\int_{0}^{t}\phi(s,W_{s}^{x})\mathrm{d}W_{s}+\int_{T-t}^{T}\phi( T-s,\widehat{W}^{x}(s))\mathrm{d}B(s)\] \[\quad-\int_{T-t}^{T}\phi(T-s,\widehat{W}^{x}(s))\frac{\widehat{W} (s)}{T-s}\mathrm{d}s,\quad 0\leq t\leq T\ \mathrm{a.s.},\] (A.2) where \(L^{W^{x}}(\mathrm{d}s,\mathrm{d}z)\) denotes integration with respect to the local time of the Brownian motion \(W^{x}\) in both time and space, \(W_{\cdot}^{x}:=x+W_{\cdot}\) is the Brownian motion started at \(x\) and \(\widehat{W}\) is the time-reversed Brownian motion defined by \[\widehat{W}_{t}:=W_{T-t},\ 0\leq t\leq T.\] (A.3) The process \(B=\{B(t),\ \ 0\leq t\leq T\}\) stands for an independent Brownian motion with respect to the filtration \(\mathcal{F}_{t}^{W}\) generated by \(\widehat{W}_{t}\), and satisfies: \[B_{t}=\widehat{W}_{t}-W_{T}+\int_{t}^{T}\frac{\widehat{W}_{s}}{T-s}\mathrm{d}s.\] (A.4) We borrow the following result in [7, Lemma A.2]. It gives us the exponential bound of the local time-space integral of any bounded function. **Lemma A.1**.: _Let \(b:[0,T]\times\mathbb{R}\to\mathbb{R}\) be a bounded and measurable function. 
Then for \(t\in[0,T],\,\lambda\in\mathbb{R}\) and compact subset \(K\subset\mathbb{R}\), we have_ \[\underset{x\in K}{\sup}E\Big{[}\exp\Big{(}\lambda\int_{0}^{t}\int_{\mathbb{R} }b(s,y)L^{W^{x}}(\mathrm{d}s,\mathrm{d}y)\Big{)}\Big{]}<C(\|b\|_{\infty}),\] _where \(C\) is an increasing function and \(L^{W^{x}}(\mathrm{d}s,\mathrm{d}y)\) denotes integration with respect to the local time of the Brownian motion \(W^{x}\) in both time and space._ ## Appendix B Comparison theorem for quadratic BSDE Consider a BSDE of the form \[Y_{t}=\xi+\int_{t}^{T}f(s,Y_{s},Z_{s})\mathrm{d}s-\int_{t}^{T}Z_{s}\mathrm{d} W_{s},\] (B.1) satisfying the following set of assumptions **Assumption B.1**.: _There exist nonegative constant \(\Lambda,K\) and a positive locally bounded function \(\ell:\mathbb{R}\mapsto\mathbb{R}_{+}\) such that \(\mathbb{P}\)-a.s._ * \(\xi\) _is an_ \(\mathfrak{F}_{T}\) _measurable uniformly bounded random variable i.e.,_\(\|\xi\|_{L^{\infty}}\leq\Lambda\)_._ * \(f\) _is an_ \(\mathfrak{F}\) _predictable continuous function and for all_ \((\omega,t,y,z)\in\Omega\times[0,T]\times\mathbb{R}\times\mathbb{R}^{d}\)_,_ \[|f(\omega,t,y,z)|\leq\Lambda(1+|y|+\ell(y)|z|^{2}).\] * _For all_ \((\omega,t,y,z),(\omega,t,y^{\prime},z^{\prime})\in\Omega\times[0,T]\times \mathbb{R}\times\mathbb{R}^{d}\)_,_ \[|f(\omega,t,y,z)-f(\omega,t,y^{\prime},z^{\prime})|\leq K\Big{(}1+\ell(|y-y^{ \prime}|^{2})(|z|+|z^{\prime}|)\Big{)}\Big{(}|y-y^{\prime}|+|z-z^{\prime}| \Big{)}.\] Here is the main Theorem developed in this section. We mimic the proof of Theorem 3.12 in [22]. **Theorem B.2**.: _Let \((Y^{i},Z^{i})\in\mathcal{S}^{\infty}\times\mathcal{H}_{BMO}\) be the solution to BSDE(B.1), with terminal value \(\xi^{i}\) and generator \(g^{i}\) for \(i\in\{1,2\}.\) Under Assumption(B.1) and that_ \[\xi^{1}\leq\xi^{2},\text{ and }f^{1}(t,Y_{t}^{2},Z_{t}^{2})\leq f^{2}(t,Y_{t}^{ 2},Z_{t}^{2})\quad\mathrm{d}t\otimes\mathrm{d}\mathbb{P}\text{-a.s.}.\] _Then for all \(t\in[0,T]\)\(Y_{t}^{1}\leq Y_{t}^{2}\quad\mathbb{P}\)-a.s.._ _Moreover, if either \(\xi^{1}<\xi^{2}\) or \(f^{1}(t,Y_{t}^{2},Z_{t}^{2})<f^{2}(t,Y_{t}^{2},Z_{t}^{2})\) in a set of positive \(\mathrm{d}t\otimes\mathrm{d}\mathbb{P}\)-measure then, \(Y_{0}^{1}<Y_{0}^{2}\)._ Proof.: Let us first recall that \(\mathcal{S}^{\infty}\)-norms of \(Y^{1},Y^{2}\) are bounded by \(\Lambda\) (see e.g. [3, Theorem 3.1]), and the \(\mathcal{H}_{\mathrm{BMO}}\)-norms of \(Z^{1},Z^{2}\) are bounded by a constant \(C\) only depending on \(\Lambda\), and \(T\) ( see [3, Corollary 3.2 ]). Se \[\begin{cases}\delta Y=Y^{1}-Y^{2},&\delta Z=Z^{1}-Z^{2}\\ \delta\xi=\xi^{1}-\xi^{2},&\delta f=f^{1}(\cdot,Y^{2},Z^{2})-f^{2}(\cdot,Y^{2},Z^{2}),\end{cases}\] and define the processes \[\Gamma_{t}: =\frac{f^{1}(t,Y_{t}^{1},Z_{t}^{1})-f^{1}(t,Y_{t}^{2},Z_{t}^{1})} {Y_{t}^{1}-Y_{t}^{2}}1_{\{Y_{t}^{1}-Y_{t}^{2}\neq 0\}},\quad e_{t}:=\exp\Big{(} \int_{0}^{t}\Gamma_{s}\mathrm{d}s\Big{)}\] (B.2) \[\Pi_{t}: =\frac{f^{1}(t,Y_{t}^{2},Z_{t}^{1})-f^{1}(t,Y_{t}^{2},Z_{t}^{2})} {|Z_{t}^{1}-Z_{t}^{2}|^{2}}(Z_{t}^{1}-Z_{t}^{2})1_{\{|Z_{t}^{1}-Z_{t}^{2}|\neq 0\}}.\] (B.3) Observe that \(|\Pi|\leq\Lambda\left(1+\ell(0)(|Z^{1}|+|Z^{2}|)\right)\), thus \(\int_{0}^{\cdot}|\Pi_{s}|\mathrm{d}B_{s}\) is a BMO martingale, since \(Z^{1},Z^{2}\in\mathcal{H}_{\mathrm{BMO}}\). 
Therefore, the probability measure \(\tilde{\mathbb{P}}\) given by \(\mathrm{d}\tilde{\mathbb{P}}/\mathrm{d}\mathbb{P}=\mathcal{E}(\int_{0}^{\cdot} \Pi_{s}\mathrm{d}W_{s})\) is well defined (see [28]) and the process defined by \(W^{\tilde{\mathbb{P}}}=W_{-}\int_{0}^{\cdot}\Pi_{s}\mathrm{d}s\) is a \(\tilde{\mathbb{P}}\)-Brownian motion. We deduce from [28], the existence of \(p^{*}>1\) such that \(\mathcal{E}(\int_{0}^{\cdot}\Pi_{s}\mathrm{d}W_{s})\in L^{p^{*}}\). Since \(Y^{1},Y^{2}\) are bounded and \(\ell\) is locally bounded, we have \(|\Gamma_{t}|\leq\Lambda(1+2\ell(|Y^{1}_{t}-Y^{2}_{t}|^{2})|Z^{1}_{t}|)\leq \Lambda(1+|Z^{1}_{t}|)\). Using the same arguments as before, we have \(\Gamma\in\mathcal{H}_{\mathrm{BMO}}\), thus \(\Gamma\in\mathcal{H}^{2p}\) for every \(p\geq 1\). The dynamics of \((\delta Y_{t})\) is given by: \[\delta Y_{t}=\delta\xi+\int_{t}^{T}\delta f_{s}\mathrm{d}s+\int_{t}^{T}\left( f^{1}(s,Y^{1}_{s},Z^{1}_{s})-f^{1}(s,Y^{2}_{s},Z^{2}_{s})\right)\mathrm{d}s-\int_{t}^ {T}\delta Z_{s}\mathrm{d}W_{s}.\] (B.4) The Ito's formula yields \[e_{t}\delta Y_{t}=e_{t}\delta\xi+\int_{t}^{T}e_{s}\delta f_{s}\mathrm{d}s- \int_{t}^{T}e_{s}\delta Z_{s}\mathrm{d}W^{\tilde{\mathbb{P}}}_{s},\] (B.5) where \(\int_{0}^{\cdot}e_{s}\delta Z_{s}\mathrm{d}W^{\tilde{\mathbb{P}}}_{s}\) is a true \(\tilde{\mathbb{P}}\)-martingale. Indeed, \[\mathbb{E}^{\tilde{\mathbb{P}}}\int_{0}^{T}|e_{s}|^{2}|\delta Z_{s}|^{2} \mathrm{d}s\leq\mathbb{E}\Big{[}\Big{(}\mathcal{E}(\int_{0}^{\cdot}\Pi_{s} \mathrm{d}W_{s})\Big{)}^{p^{*}}\Big{]}^{\frac{1}{p^{*}}}\mathbb{E}\Big{[}|e_{ T}|^{2q^{*}}\Big{(}\int_{0}^{T}|\delta Z_{s}|^{2}\mathrm{d}s\Big{)}q^{*} \Big{]}^{\frac{1}{p^{*}}},\] and this is finite thanks to the \(\mathcal{H}_{\mathrm{BMO}}\) property of \(\Pi,\Gamma,Z^{1}\) and \(Z^{2}\). Then, equation (B.5) becomes \[e_{t}\delta Y_{t}=\mathbb{E}^{\tilde{\mathbb{P}}}\Big{[}e_{t}\delta\xi+\int_{t }^{T}e_{s}\delta g_{s}\mathrm{d}s|\mathfrak{F}_{t}\Big{]}.\] (B.6) Thanks to the Theorem's assumption we conclude that \(e_{t}\delta Y_{t}\leq 0\) and hence for all \(t\in[0,T]\) we have \(Y^{1}_{t}\leq Y^{2}_{t}\quad\tilde{\mathbb{P}}\)-a.s. and \(\mathbb{P}\)-a.s.. In particular, for \(t=0\) and if \(\delta<0\) or if \(\delta g<0\) in a set of positive \(\mathrm{d}t\otimes\mathrm{d}\mathbb{P}\)-measure, then we conclude that \(Y^{1}_{0}<Y^{2}_{0}\), and this ends the proof. ## Appendix C Second order Malliavin differentiability In this appendix, we provide some conditions under which a BSDE admits a second order Malliavin differentiable solution. This result is taken from [24]. Let us consider the following BSDEs (which can be seen as the BSDE satisfied by the Malliavin derivative of the backward process ) \[U_{s,t}=0,\quad V_{s,t}=0,\quad t\in[0,s),\] \[U_{s,t}=D_{s}\xi+\int_{t}^{T}G(r,s,U_{s,r},V_{s,r})\mathrm{d}r- \int_{t}^{T}V_{s,r}\mathrm{d}W_{r},\quad t\in[s,T]\] (C.1) where \(h\) is a measurable function \(\Omega\times[0,T]\times[0,T]\times\mathbb{R}\times\mathbb{R}^{d}\to\mathbb{R}\) given by \[G(\cdot,t,s,y,z)=A_{s,t}(\cdot)+B_{t}(\cdot)y+\langle C_{t}(\cdot),z\rangle\] and \(\xi\) is an \(\mathfrak{F}_{T}\)-measurable random variable, satisfying the following assumptions: 1. \(\xi\) is Malliavin differentiable, \(\mathfrak{F}_{T}\)-measurable bounded random variable with Malliavin derivative given by \(D_{s}\xi\) satisfying \[\sup_{0\leq s\leq T}\|D_{s}\xi\|_{L^{2p}}<\infty,\text{ for all }p>1.\] 2. 
For all \(s\in[0,T]\), \(D_{s}\xi\) is Malliavin differentiable, with second order derivative given by \((D_{s^{\prime}}D_{s}\xi)_{s^{\prime},s\in[0,T]}\) and satisfying \(\sup_{0\leq s^{\prime},s\leq T}\|D_{s^{\prime}}D_{s}\xi\|_{L^{p}}<\infty\) for all \(p>1\). 3. \(B:\Omega\times[0,T]\to\mathbb{R}\) and \(C:\Omega\times[0,T]\to\mathbb{R}^{d}\) are \(\mathfrak{F}_{t}\)-adapted processes bounded by a constant \(\Lambda>0\). \(A:\Omega\times[0,T]\times[0,T]\times[0,T]\to\mathbb{R}\) satisfies \(\sup_{0\leq s\leq T}\|A_{s,\cdot}\|_{\mathfrak{F}^{2p}}<\infty\) for all \(p>1\). For each fixed \(s\in[0,T]\), the process \((A_{s,t})_{t\in[0,T]}\) is progressively measurable. * \(A_{s,t},B_{t},C_{t}\) are Malliavin differentiable for all \(s\in[0,T]\) and \(t\in[0,T]\). Measurable versions of their Malliavin derivatives are respectively given by \(D_{s^{\prime}}A_{s,t},D_{s}B_{t}\) and \(D_{s}C_{t}\) for \(s^{\prime},s\in[0,T]\) such that for all \(p>1\) \[\sup_{0\leq s^{\prime},s\leq T}\mathbb{E}\Big{[}\Big{(}\int_{0}^{T}(|D_{s^{ \prime}}A_{s,t}|^{2}+|D_{s}B_{t}|^{2}+|D_{s}C_{t}|^{2})\mathrm{d}t\Big{)}^{p} \Big{]}<\infty\] **Theorem C.1**.: _Under Assumptions (H1)-(H4), the BSDE (C.1) has a unique Malliavin differentiable solution \((U,V)\in\mathcal{S}^{2p}\times\mathcal{H}^{2p}\) for \(p>1\). A version of \((D_{s^{\prime}}U_{s,t},D_{s^{\prime}}V_{s,t})_{0\leq s^{\prime}\leq s\leq t\leq T}\) satisfies_ \[D_{s^{\prime}}U_{s,t} =D_{s^{\prime}}D_{s}\xi-\int_{t}^{T}D_{s^{\prime}}V_{s,r}\mathrm{ d}W_{r}\] \[+\int_{t}^{T}\Big{(}(D_{s^{\prime}}G)(r,s,U_{s,r},V_{s,r})+\langle \nabla G(r,s,U_{s,r},V_{s,r}),(D_{s^{\prime}}U_{s,r},D_{s^{\prime}}V_{s,r}) \rangle\Big{)}\mathrm{d}r.\] (C.2) _Moreover, a version of \((V_{s,t})_{0\leq s\leq t\leq T}\) is given by \((D_{t}U_{s,t})_{0\leq s\leq t\leq T}\)._ **Acknowledgments:** The authors thank the Editor in Chief and two anonymous referees for their remarks which helped to considerably improve the presentation of the paper. A part of this work was carried out while Rhoss Likibi Pellat visited the Department of Mathematics, Brandenburg Technology University of Cottbus, whose hospitality is greatly appreciated and he personally thank Prof Ralf Wunderlich.
2309.11524
Magnetic Fusion Plasma Drive
In the evolving realm of space exploration, efficient propulsion methods are paramount to achieve interplanetary and possibly interstellar voyages. Traditional propulsion systems, although proven, offer limited capabilities when considering longer-duration missions beyond our immediate cosmic vicinity. This paper introduces and thoroughly investigates the Magnetic Fusion Plasma Drive (MFPD) propulsion system, a novel fusion-powered propulsion mechanism. Through rigorous theoretical underpinnings and mathematical formulations, we elucidate the principles governing fusion reactions in the context of propulsion, plasma dynamics, and magnetic confinement in space. Comparative analyses indicate significant advantages of the MFPD system over existing technologies, particularly in fuel efficiency, thrust capabilities, and potential scalability. Example calculations further substantiate the immense energy potential and feasibility of the MFPD for long-duration missions. While challenges remain, the MFPD system embodies a promising avenue for a propulsion paradigm shift, potentially revolutionizing our approach to space exploration.
Florian Neukart
2023-09-20T03:18:03Z
http://arxiv.org/abs/2309.11524v3
# Magnetic Fusion Plasma Drive

Florian Neukart

###### Abstract

In the evolving realm of space exploration, efficient propulsion methods are paramount to achieve interplanetary and possibly interstellar voyages.
Traditional propulsion systems, although proven, offer limited capabilities when considering longer-duration missions beyond our immediate cosmic vicinity. This paper introduces and thoroughly investigates the Magnetic Fusion Plasma Drive (MFPD) propulsion system, a novel fusion-powered propulsion mechanism. Through rigorous theoretical underpinnings and mathematical formulations, we elucidate the principles governing fusion reactions in the context of propulsion, plasma dynamics, and magnetic confinement in space. Comparative analyses indicate significant advantages of the MFPD system over existing technologies, particularly in fuel efficiency, thrust capabilities, and potential scalability. Example calculations further substantiate the immense energy potential and feasibility of the MFPD for long-duration missions. While challenges remain, the MFPD system embodies a promising avenue for a propulsion paradigm shift, potentially revolutionizing our approach to space exploration. Magnetic Fusion Plasma Drive + Footnote †: ORCID(s): 0000-0002-2562-1618 (F. Neukart) ## 1 Introduction The challenge of deep-space exploration and transporting significant payloads across interplanetary and interstellar distances necessitates developing efficient and powerful propulsion systems. As humanity contemplates the establishment of colonies on distant planets and mining celestial bodies, our spacecraft's propulsion mechanisms become increasingly critical. ### Background on Space Propulsion Needs for Large Spacecraft Large spacecraft designed for transporting significant payloads or for long-duration missions often face challenges in terms of propulsion. The propulsion system must maintain efficiency over extended durations, be scalable, and offer a balance between thrust and fuel consumption. While instrumental in our space exploration endeavors, the current propulsion methods have clear limitations when scaling up for larger spacecraft or missions aiming at deep space destinations [16, 17]. ### Shortcomings of Current Propulsion Methods Chemical propulsion, the mainstay for current spacecraft, offers high thrust but has a limited specific impulse, making it unsuitable for prolonged deep-space missions [16]. Similarly, while ion and electric propulsion systems provide high efficiency, their low thrust capabilities pose challenges for the rapid transport of large spacecraft [15]. Solar sails and Nuclear Thermal Propulsion, while promising, have their challenges, which will be detailed in subsequent sections [17, 18]. ### Objective and Overview of the Proposed MFPD System This paper introduces the Magnetic Fusion Plasma Drive (MFPD), a propulsion concept that seeks to harness the immense energy potential of nuclear fusion combined with magnetically confined plasma to produce thrust. The MFPD aims to address the limitations of current propulsion systems by providing a balance between thrust and efficiency, all while ensuring scalability for larger spacecraft. To achieve this, we draw on research in nuclear fusion [17], plasma physics [16], and magnetohydrodynamics [19].
The ensuing sections will consider the intricacies of current propulsion systems, the theoretical basis for the MFPD, mathematical formulations describing its operation, and a comparative analysis of its potential advantages and challenges [16, 17, 18]. ## 2 Background on Existing Propulsion Systems The vastness of space demands propulsion systems that are both efficient and capable of delivering sustained thrust over extended durations. Historically, the domain of space propulsion has been characterized by a spectrum of technologies, each developed to address specific mission requirements and the inherent challenges of space travel. As we think about more ambitious missions, it becomes crucial to understand the underpinnings, advantages, and limitations of the current state-of-the-art propulsion technologies. This chapter provides a comprehensive overview of the primary propulsion systems that have been, and continue to be, pivotal in our space exploration endeavors. We begin with the conventional chemical propulsion systems, which, for decades, have been fundamental to our space ventures. The discourse then progresses to ion and electric propulsion systems, highlighting their role in providing fuel-efficient solutions for prolonged space missions. We touch upon the principles and prospects of nuclear thermal propulsion and its potential in bridging the gap between efficiency and high thrust. The narrative then delves into the innovative concepts of solar sails, fusion propulsion, and the theoretical domain of antimatter propulsion. In understanding each system's principles, merits, and challenges, we aim to provide foundational knowledge that sets the stage for introducing our proposed Magnetic Fusion Plasma Drive in subsequent chapters. ### Chemical Propulsion Chemical propulsion remains the primary means of propulsion for most contemporary spacecraft and has enabled a wide range of missions, from satellite launches to interplanetary explorations. The principle behind chemical propulsion is the combustion of chemical propellants to produce high-temperature and high-pressure gases that are expelled through a nozzle, resulting in thrust via Newton's third law [24]. #### Basic Principles and Mechanics The basic operation of a chemical rocket can be summarized by the following steps: * Combustion of propellants in a combustion chamber produces high-energy gases. * The rapid expansion of these gases is channeled through a nozzle. * As the gases exit the nozzle at high velocities, a force is exerted on the rocket in the opposite direction, propelling it forward. Eq. (1) gives the thrust \(F\) produced by a rocket: \[F=\dot{m}V_{e}+(P_{e}-P_{0})A_{e} \tag{1}\] Where: * \(\dot{m}\) is the propellant mass flow rate. * \(V_{e}\) is the exhaust velocity. * \(P_{e}\) is the exhaust pressure. * \(P_{0}\) is the ambient pressure. * \(A_{e}\) is the nozzle exit area [24]. The specific impulse \(I_{sp}\), a measure of rocket propellant efficiency, is defined as the thrust per unit weight flow rate of the propellant (Eq. (2)): \[I_{sp}=\frac{F}{g_{0}\dot{m}} \tag{2}\] Where \(g_{0}\) is the standard gravitational acceleration [23]. #### Limitations in terms of Specific Impulse and Fuel Mass While chemical propulsion offers significant thrust, allowing for rapid changes in velocity, its specific impulse values are inherently limited by the chemical propellants' energy content. Typically, chemical rockets have \(I_{sp}\) values in the range of 250 to 450 seconds [23].
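To make Eqs. (1)-(2) concrete, the short sketch below evaluates the thrust and specific impulse for an illustrative set of engine parameters; the mass flow rate, exhaust velocity, pressures, and nozzle area are assumed values for illustration only and do not correspond to any particular engine.

```python
# Illustrative evaluation of Eq. (1) and Eq. (2) for a chemical rocket engine.
# All engine parameters below are assumed values, chosen only for illustration.

g0 = 9.80665          # standard gravitational acceleration [m/s^2]

m_dot = 250.0         # propellant mass flow rate [kg/s] (assumed)
V_e = 4400.0          # exhaust velocity [m/s] (assumed)
P_e = 101325.0        # exhaust pressure [Pa] (assumed)
P_0 = 101325.0        # ambient pressure [Pa]; equal to P_e here, so the pressure term vanishes
A_e = 1.5             # nozzle exit area [m^2] (assumed)

# Eq. (1): thrust from the momentum term plus the pressure term
F = m_dot * V_e + (P_e - P_0) * A_e

# Eq. (2): specific impulse as thrust per unit weight flow rate of propellant
I_sp = F / (g0 * m_dot)

print(f"Thrust F             = {F:.3e} N")
print(f"Specific impulse Isp = {I_sp:.1f} s")  # ~449 s, within the 250-450 s range quoted above
```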
This limitation implies a substantial fuel mass is required for long-duration missions or missions requiring significant velocity changes. Moreover, the exponential nature of the rocket equation (Eq. (3)): \[\Delta V=V_{e}\ln\left(\frac{m_{0}}{m_{f}}\right) \tag{3}\] Where: * \(\Delta V\) is the change in velocity. * \(V_{e}\) is the effective exhaust velocity. * \(m_{0}\) is the initial total mass. * \(m_{f}\) is the final total mass [23]. emphasizes the challenges of achieving high \(\Delta V\) maneuvers. As the required \(\Delta V\) for a mission increases, the ratio of initial to final mass becomes increasingly larger, necessitating an even greater propellant mass. This inherent limitation restricts the feasible mission profiles for spacecraft reliant solely on chemical propulsion, especially for deep-space endeavors. The significant mass and volume of propellant needed can make some missions infeasible or require complex mission architectures involving multiple launches and in-space assembly or refueling [23]. ### Ion and Electric Propulsion Ion and electric propulsion systems have gained popularity for their high efficiency, especially in long-duration missions where their prolonged thrust capability can achieve substantial velocity changes over time [1]. Unlike chemical rockets, which rely on the combustion of propellants, ion and electric thrusters use electricity (often from solar panels) to ionize a propellant and accelerate it using electromagnetic fields. #### Mechanisms of Ion/Electric Propulsion Electric propulsion can be categorized based on the method of accelerating the propellant: * Electrothermal Thrusters: These thrusters heat the propellant using electrical power, which then expands and is expelled through a nozzle to produce thrust [23]. * Electrostatic Thrusters (e.g., Hall Effect Thrusters and Gridded Ion Thrusters): Propellant atoms are ionized and then accelerated using electric or magnetic fields. Electrons then neutralize the positively charged ions upon exit to produce a neutral exhaust [18]. * Electromagnetic Thrusters (e.g., Magnetoplasmadynamic Thrusters): These use both electric and magnetic fields to accelerate the ionized propellant [15]. The exhaust velocity of the ionized propellant predominantly determines the specific impulse of electric propulsion systems. The relationship is given by Eq. (4): \[I_{sp}=\frac{V_{e}}{g_{0}} \tag{4}\] Where \(V_{e}\) is the exhaust velocity and \(g_{0}\) is the standard gravitational acceleration [15]. The specific impulse, \(I_{sp}\), is a key parameter used to characterize the performance of a propulsion system. It's essentially the impulse (change in momentum) provided per unit of propellant mass expended. The impulse provided by a thruster is given by \(F\times dt\), where \(F\) is the thrust and \(dt\) is an infinitesimal duration. The propellant mass expended in this duration is \(\dot{m}dt\). Thus, the specific impulse, by definition, is given by Eq. (5): \[I_{sp}=\frac{Fdt}{g_{0}\dot{m}dt}=\frac{F}{g_{0}\dot{m}} \tag{5}\] which is as per Eq. (2). However, the thrust, \(F\), generated by a propulsion system is also related to the exhaust velocity of the expelled propellant, \(V_{e}\), by the equation Eq. (6): \[F=\dot{m}V_{e} \tag{6}\] Substituting the value of \(F\) from Eq. (6) into Eq. (5), we get Eq. (4). Hence, while Eq. (2) relates \(I_{sp}\) to the thrust and propellant mass flow rate, Eq. 
(4) provides a direct relation to the exhaust velocity, which is often more convenient when analyzing electric propulsion systems, where the exhaust velocities can be exceptionally high [18, 19]. #### 2.2.2 Limitations in Thrust Capabilities While electric propulsion systems excel in terms of specific impulse (often achieving \(I_{sp}\) values in the range of 1000 to 5000 seconds or higher), they typically have low thrust levels compared to chemical systems. This means they impart changes in velocity over longer durations, making them unsuitable for applications requiring rapid thrust maneuvers, such as ascent or landing on celestial bodies. Moreover, power generation and heat dissipation become critical issues. The efficiency of electric propulsion is closely tied to the available electrical power. As the power requirements increase, so does the need for large solar panels or nuclear power sources, potentially increasing the spacecraft's mass and complexity [15]. Another challenge is the erosion of thruster components due to the high-energy ionized propellant. Over extended missions, this can reduce thruster lifespan and performance. Additionally, the need for precise propellant ionization and acceleration mechanisms makes electric propulsion systems more complex and potentially susceptible to technical malfunctions [18]. Furthermore, spacecraft employing electric propulsion systems often follow spiral trajectories, especially when transitioning in and out of gravitational wells. During these trajectories, a portion of the thrust is continuously expended to counteract the gravitational pull of the celestial body, leading to gravity losses. Such losses can have a notable impact on mission duration and the efficiency of propellant usage. While high \(I_{sp}\) values do imply reduced propellant consumption, gravity losses can offset this advantage, especially during extended periods of thrusting in a gravitational field. This aspect can particularly affect missions that transition between different orbits around a planet or moon, where the spiraling trajectory is pronounced [18, 19]. ### Nuclear Thermal Propulsion (NTP) Nuclear Thermal Propulsion represents a distinct branch of propulsion technology that capitalizes on the energy released from nuclear reactions, specifically nuclear fission, to heat a propellant and generate thrust. Historical interest in NTP emerged during the Cold War era, with notable projects such as the U.S.'s Project Rover. Though not currently in widespread use, NTP holds potential for future deep space missions due to its promise of high thrust combined with relatively high specific impulse [19]. #### 2.3.1 Mechanism and Operation of NTP The fundamental operation of an NTP system can be described as: * A nuclear reactor, containing fissile material, initiates a controlled nuclear fission reaction, releasing a substantial amount of thermal energy. * A propellant (commonly hydrogen) is passed through the reactor core, where it gets heated to high temperatures by the thermal energy from the fission reactions. * The heated propellant expands and is expelled through a nozzle, producing thrust in a manner analogous to a chemical rocket. Unlike chemical propulsion, where the energy source and the propellant are the same, NTP decouples the energy source (nuclear reactor) from the propellant (hydrogen). This allows for a higher specific impulse because the exhaust velocities can be much greater than those achievable with chemical reactions alone [19]. 
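Since the payoff of a higher specific impulse is easiest to see through Eq. (3), the following sketch compares the propellant mass ratio required for the same maneuver at representative specific impulses for chemical, nuclear thermal, and electric propulsion. The chosen delta-v and the three \(I_{sp}\) values are illustrative assumptions, not mission or engine data.

```python
import math

# Illustrative comparison based on Eq. (3) (Tsiolkovsky rocket equation) and Eq. (4).
# The delta-v and specific impulse values below are assumed purely for illustration.

g0 = 9.80665                 # standard gravitational acceleration [m/s^2]
delta_v = 10_000.0           # required velocity change [m/s] (assumed)

cases = {
    "chemical (Isp ~ 450 s)": 450.0,
    "nuclear thermal (Isp ~ 900 s)": 900.0,
    "electric (Isp ~ 3000 s)": 3000.0,
}

for label, isp in cases.items():
    v_e = isp * g0                          # Eq. (4): effective exhaust velocity
    mass_ratio = math.exp(delta_v / v_e)    # Eq. (3) solved for m0/mf
    propellant_fraction = 1.0 - 1.0 / mass_ratio
    print(f"{label}: m0/mf = {mass_ratio:5.2f}, "
          f"propellant fraction = {propellant_fraction:.1%}")
```

As noted in Section 2.2.2, the electric case buys its much smaller propellant fraction at the cost of very low thrust and correspondingly long burn times.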
#### 2.3.2 Advantages and Potential NTP systems can achieve specific impulse values between 850 to 1000 seconds, nearly double that of the most efficient chemical rockets. This offers a balance between the high thrust of chemical rockets and the high efficiency (in terms of specific impulse) of electric propulsion. Such a combination is particularly valuable for crewed missions to distant planets, where minimizing travel time is crucial [1]. #### 2.3.3 Limitations and Challenges * Radiation Concerns: One of the primary challenges with NTP is managing the radiation produced by the nuclear reactor. This necessitates robust shielding, both to protect spacecraft systems and, in the case of crewed missions, to ensure the safety of astronauts [1]. * Technical Complexity: The need to handle fissile material, control nuclear reactions, and manage reactor temperatures presents considerable engineering challenges [1]. * Environmental and Safety Concerns: The potential consequences of an accident, either during launch or in operation, have made NTP a contentious choice. The release of radioactive materials could pose environmental risks [1]. * Political and Regulatory Hurdles: Deploying nuclear technology in space involves navigating a complex landscape of international treaties and regulations [1]. ### Solar Sails Solar sails, or photon sails, offer a radically different approach to propulsion in space. Instead of expelling mass to achieve thrust, solar sails harness the momentum of photons (light particles) emitted by the sun. As these photons reflect off the sail, they transfer momentum, generating a propulsive force. Though the force exerted by individual photons is minuscule, the cumulative effect over vast areas and extended durations can result in significant acceleration [15]. #### 2.4.1 Principle and Operation Solar sails operate on the principle of radiation pressure. When photons reflect off a surface, they transfer twice their momentum to that surface. For a perfectly reflecting sail oriented perpendicular to the sun, the force \(F\) due to radiation pressure can be given by Eq. (7): \[F=\frac{2IA}{c} \tag{7}\] Where: * \(I\) is the solar radiation intensity, typically around \(1361\,W/m^{2}\) near Earth. * \(A\) is the area of the sail. * \(c\) is the speed of light [15]. #### 2.4.2 Advantages and Potential * Fuel-less Propulsion: Since solar sails do not rely on onboard fuel or propellant, they can continue to accelerate as long as they remain exposed to solar radiation. This offers the potential for long-duration missions without the need for fuel resupply [16]. * Scalability: Larger sails capture more photons, resulting in greater thrust. Advances in materials science can lead to lightweight, yet large sails that can harness substantial radiation pressure [16]. * Interstellar Potential: While slow to start, over incredibly long distances and timeframes, solar sails could achieve a significant fraction of the speed of light, making them a contender for interstellar missions, especially when paired with powerful lasers that act as beamed propulsion sources [16]. #### 2.4.3 Limitations and Challenges * Initial Slow Acceleration: The thrust provided by solar radiation is gentle, making solar sails unsuitable for rapid maneuvers or missions requiring swift velocity changes. * Distance from Sun: As a spacecraft ventures farther from the sun, the solar radiation intensity diminishes, leading to decreased thrust [16]. 
* Material and Manufacturing: Crafting large, ultrathin, and durable sails that can endure the space environment is a significant engineering challenge [15]. * Control and Navigation: Steering a spacecraft using a solar sail requires precise control of the sail's orientation relative to the sun. Achieving desired trajectories involves continuously adjusting this angle [16]. ### Fusion Propulsion Fusion propulsion is a concept that envisions harnessing the immense energy released from nuclear fusion reactions to propel spacecraft. Unlike nuclear fission, which involves splitting atomic nuclei, fusion combines light atomic nuclei, typically isotopes of hydrogen, to form heavier nuclei. In the process, a tremendous amount of energy is released, surpassing that of any chemical reaction [17]. #### 2.5.1 Principle and Operation The fundamental operation of fusion propulsion can be described as: * Fusion reactions are initiated in a contained environment, often through the use of magnetic or inertial confinement methods. * The high-energy particles and radiation produced in the fusion reactions are directed out of the spacecraft through a magnetic nozzle or other mechanism, generating thrust. * Additional propellants, such as hydrogen, can be introduced and heated by the fusion reactions, producing additional thrust much as in nuclear thermal propulsion [3]. The energy release per fusion reaction is given by Eq. (8): \[E=\Delta m\times c^{2} \tag{8}\] Where: * \(\Delta m\) is the change in mass between the initial reactants and the final products. * \(c\) is the speed of light [3]. #### 2.5.2 Advantages and Potential * High Specific Impulse: Fusion propulsion can theoretically achieve specific impulse values exceeding those of both chemical rockets and nuclear thermal propulsion, making long-duration missions more feasible. * Abundant Fuel Sources: The primary fuels for fusion, the hydrogen isotopes deuterium and tritium, are relatively accessible: deuterium can be extracted from water, while tritium can be bred from lithium [3]. * Reduced Radiation Concerns: Unlike fission, fusion does not produce long-lived radioactive waste, mitigating some radiation concerns associated with nuclear propulsion. * Vast Energy Potential: A small mass of fusion fuel can produce tremendous energy, potentially allowing for rapid transits between distant celestial bodies. Notably, this refers to energy per unit mass; volumetrically, fission fuels release more energy per cubic meter of fuel [3]. #### 2.5.3 Limitations and Challenges * Technical Complexity: Achieving the conditions for controlled fusion reactions is an immense engineering challenge. Despite decades of research on Earth, we have yet to achieve sustained and net-energy-positive fusion reactions. * Heat Management: The temperatures associated with fusion reactions are extremely high, demanding advanced materials and systems to handle the generated heat. * Fuel Availability: While deuterium is relatively abundant, tritium is rare and has to be bred from lithium or other processes. Other fusion fuels, like helium-3, may be rare on Earth but could be mined from celestial bodies like the Moon. * Magnetic Confinement: Using magnetic fields to confine the hot plasma in fusion reactors poses power requirements and stability challenges. * Safety and Containment: Ensuring the safe containment of fusion reactions, especially when control might be lost, is critical [3].
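As a quick numerical illustration of Eq. (8), the sketch below evaluates the mass defect of the deuterium-tritium reaction and the resulting energy per kilogram of fuel. The atomic masses are standard tabulated values; the chemical-propellant figure used for comparison is an order-of-magnitude assumption included only for context.

```python
# Minimal sketch of Eq. (8): energy released per D-T fusion from the mass defect,
# and the corresponding energy per kilogram of fuel. The comparison figure for a
# chemical propellant is an order-of-magnitude assumption for illustration only.
U_TO_KG = 1.66053907e-27      # kg per atomic mass unit
C = 2.998e8                   # speed of light, m/s
MEV_PER_JOULE = 1.0 / 1.602e-13

m_D, m_T = 2.014102, 3.016049        # deuterium, tritium (u)
m_He4, m_n = 4.002602, 1.008665      # helium-4, neutron (u)

delta_m = (m_D + m_T) - (m_He4 + m_n)              # mass defect in u
E_per_reaction = delta_m * U_TO_KG * C**2          # joules (E = delta_m * c^2)
E_per_kg_fuel = E_per_reaction / ((m_D + m_T) * U_TO_KG)

print(f"Energy per D-T reaction  = {E_per_reaction * MEV_PER_JOULE:.1f} MeV")  # about 17.6 MeV
print(f"Energy per kg of D-T fuel = {E_per_kg_fuel:.2e} J/kg")                 # about 3.4e14 J/kg
print("Chemical propellant (assumed) ~ 1.3e7 J/kg for comparison")
```

The roughly seven orders of magnitude between the two energy densities is what underlies the statement above that fusion energy release surpasses that of any chemical reaction.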
While fusion propulsion remains in the realm of future possibilities, its promise of efficient, high-energy propulsion drives continued interest and research. If the challenges associated with controlled fusion are overcome, it could revolutionize space travel, reducing transit times and expanding our reach within our solar system and beyond. ## 3 Theoretical Basis for the Magnetic Fusion Plasma Drive (MFPD) Nuclear fusion is the process in which atomic nuclei come together to form a heavier nucleus. This process releases vast amounts of energy, primarily because the mass of the resulting nucleus is slightly less than the sum of its constituents. The difference in mass is released as energy according to Einstein's equation \(E=mc^{2}\). ### Nuclear Fusion in Space Propulsion In space propulsion, the most commonly considered fusion reactions involve isotopes of hydrogen: deuterium (D) and tritium (T). The primary fusion reaction can be represented as Eq. (9): \[D+T\rightarrow{}^{4}He+n+17.6\,\mathrm{MeV} \tag{9}\] Where: * \({}^{4}He\) is helium-4. * \(n\) is a neutron. This reaction releases 17.6 MeV (mega-electronvolts) of energy, predominantly carried away by the neutron. #### Advantages over fission and chemical reactions Fusion has several advantages as a propulsion mechanism: * Higher Energy Density (J/kg): Fusion reactions release more energy per unit mass than chemical reactions or nuclear fission. Specifically, while a uranium fission reaction releases approximately 197 MeV of energy per fission, fusion reactions release significantly more energy per kilogram. It's imperative to note that this advantage is specifically regarding energy per unit mass (J/kg). Regarding volumetric energy density \(J/m^{3}\), fission reactions can release more energy than fusion [Fusion Energy Sciences Program(2018)]. * Abundant Fuel: Deuterium can be extracted from seawater, and while tritium is rare, it can be bred from lithium, which is relatively abundant [10]. * Safety: Fusion doesn't suffer from the same meltdown risks as fission. Furthermore, while tritium is radioactive, it's less of a long-term contaminant than many fission by-products [12]. ### Plasma Dynamics Plasma is often termed the fourth state of matter. It consists of charged particles: ions and electrons. In fusion propulsion, the fuel (like a D-T mix) is heated to such high temperatures that it becomes plasma [10]. #### 3.2.1 Properties and behaviors of high-energy plasma * Conductivity: Plasma is a good conductor of electricity due to the free ions and electrons [11]. * Reactivity: High-energy plasma undergoes fusion reactions at sufficiently high temperatures and pressures [11]. * Responsiveness to Magnetic Fields: Being charged, plasma responds strongly to magnetic fields, allowing for magnetic confinement [13]. #### 3.2.2 Importance of magnetic confinement To achieve fusion, plasma must be confined at high temperatures and pressures for a sufficient duration. Magnetic confinement uses magnetic fields to contain the plasma, preventing it from coming into contact with (and being cooled by) the walls of the containment vessel [11]. ### Magnetic Confinement in Propulsion The magnetic confinement technique in propulsion systems serves as a revolutionary approach to harnessing the immense power and potential of plasma, the fourth state of matter. Utilizing magnetic fields, it's possible to effectively control, guide, and confine the hot plasma, which is inherently challenging due to its high-energy nature and erratic behavior.
These confinement strategies promise efficient plasma management and pave the way for innovative propulsion methods that could redefine space exploration. #### 3.3.1 How magnetic fields can be used to control and direct plasma The charged particles in plasma follow helical paths around magnetic field lines. By carefully designing the magnetic field topology, one can ensure that plasma remains confined in a desired region [13]. * Tokamak Configuration: This doughnut-shaped configuration combines external magnetic coils and a toroidal current within the plasma to create a strong confining magnetic field [14]. * Magnetic Nozzles: In the context of propulsion, magnetic fields can be shaped to form a "nozzle" that directs the high-energy plasma out of the thruster, generating thrust [15]. #### 3.3.2 The role of superconducting magnets Superconducting magnets are critical in advanced plasma confinement schemes because they produce strong magnetic fields with minimal energy consumption. These magnets can carry large currents without electrical resistance when cooled below their critical temperature. They're essential for large-scale, sustained magnetic confinement of plasma [12, Schober et al. (2008)]. #### 3.3.3 Magnetic confinement in the MFPD The nomenclature "Magnetic Fusion Plasma Drive" indeed suggests that magnetic confinement plays a pivotal role in the system. Magnetic confinement's primary purpose in the MFPD is to contain and stabilize the plasma, ensuring that the fusion reactions occur efficiently. At its core, MFPD uses magnetic fields to initiate and sustain the fusion process. The initial state of the plasma can be achieved through various methods such as electromagnetic induction, radio frequency heating, or neutral beam injection, all of which directly or indirectly leverage magnetic fields. Once the plasma is heated to the necessary conditions for fusion, maintaining those conditions and preventing the plasma from interacting with the walls (and hence cooling down) becomes crucial. This is where magnetic confinement plays its most significant role. The term "generalized magnetic confinement approach" in the manuscript is intended to convey that while specific configurations like Tokamak, Stellarator, or Field-Reversed Configuration are well-known and widely studied, MFPD might employ a combination or a variation of these configurations to achieve the desired plasma conditions and confinement. The unique design considerations for a propulsion system (as opposed to a terrestrial power generation setup) might necessitate such an approach. It's also important to note that while magnetic confinement is pivotal, additional stabilization mechanisms, both passive and active, might be employed to ensure the plasma's stability and longevity. Magnetic confinement is the principal means of plasma control in the MFPD system. A foundational principle behind this confinement is the Lorentz force experienced by charged particles moving in a magnetic field (Eq. (10)): \[\vec{F}=q(\vec{v}\times\vec{B}) \tag{10}\] Here, \(q\) is the charge of the particle, \(\vec{v}\) represents its velocity, and \(\vec{B}\) denotes the magnetic field [13, Jackson (1998)]. Charged particles will gyrate around the field lines upon interaction with a magnetic field. The gyroradius or the radius of this spiral is given by Eq.
(11): \[r_{\mathrm{g}}=\frac{mv_{\perp}}{|q|\,B} \tag{11}\] Where \(m\) is the particle's mass, and \(v_{\perp}\) is its velocity component perpendicular to \(\vec{B}\)[10]. One prominent configuration for achieving magnetic confinement is the tokamak. A toroidal (doughnut-shaped) magnetic field confines the plasma in this design. This magnetic field can be approximately described as Eq. (12): \[B_{\text{tokamak}}=B_{\text{external}}+\frac{\mu_{0}I_{p}}{2\pi r} \tag{12}\] \(B_{\text{external}}\) represents the magnetic field from external coils, \(\mu_{0}\) the permeability of free space, \(I_{p}\) the current passing through the plasma, and \(r\) the radial distance from the torus center [22]). A significant parameter for assessing the effectiveness of magnetic confinement is the plasma beta, \(\beta\), defined as Eq. (13): \[\beta=\frac{nkT}{B^{2}/2\mu_{0}} \tag{13}\] Here, \(n\) denotes the plasma density, \(k\) the Boltzmann constant, and \(T\) the plasma temperature [19]. ### Plasma Acceleration in MFPD In MFPDs, plasma is confined and accelerated to produce thrust. The fundamental principle behind this acceleration is the Lorentz force, as mentioned previously [18]. #### 3.4.1 Generation of Plasma in MFPD In the context of fusion propulsion, plasma generation is a prerequisite to initiate and sustain fusion reactions. For the MFPD, achieving the necessary conditions for nuclear fusion relies on effectively ionizing the fusion fuel. Here's how the MFPD system approaches plasma generation: * **Electrothermal Method:** Within the MFPD, an initial electrical discharge is passed through the fusion fuel, primarily deuterium and tritium, ionizing it. This method is employed to reach the preliminary ionization state before any confinement methods take over [23][24][25][26]. * **Magnetic Induction:** Post the initial ionization, changes in the magnetic fields within the MFPD can further induce currents in the plasma. This not only aids in maintaining a high level of ionization but also plays a crucial role in magnetic confinement, a core feature of the MFPD system [23][25][26]. It's imperative to note that while traditional electric propulsion (EP) systems and the MFPD both employ plasma generation techniques, their application and end goals differ. EP systems focus on propelling ionized propellant out of a thruster for propulsion, whereas the MFPD aims to achieve controlled fusion reactions by confining and sustaining a highly ionized plasma state [23][25][26]. #### 3.4.2 Acceleration Mechanism in MFPD The MFPD leverages the principle of the Lorentz force for plasma acceleration, but its specificity lies in how it configures the plasma and magnetic fields to achieve this effect. * **Plasma Geometry and Currents:** In the MFPD system, the plasma is shaped in a toroidal (donut-like) geometry. This configuration ensures a continuous and circulating plasma current, which is essential for sustaining fusion reactions. This circulating current, denoted as \(I_{plasma}\), interacts with the applied magnetic field, resulting in a Lorentz force that aids in both confinement and acceleration. * **Magnetic Field Profile:** The magnetic field in the MFPD is a composite of the external field produced by superconducting magnets and the self-induced field due to the plasma current. The strength and direction of these fields are meticulously controlled to maximize the Lorentz force's effect, driving the plasma towards the magnetic nozzle and producing thrust. 
* **Acceleration Using Lorentz Force:** The relationship between the plasma current density \(\mathbf{J}\), the magnetic field \(\mathbf{B}\), and the pressure gradient \(\nabla P\) can be articulated by Eq. (14). In the MFPD, the term \(\mathbf{J}\times\mathbf{B}\) is maximized by optimizing the plasma geometry and magnetic field profile, ensuring efficient acceleration. \[\mathbf{J}\times\mathbf{B}=\nabla P \tag{14}\] Where: * \(\mathbf{J}\) is the current density in the plasma. * \(\mathbf{B}\) is the magnetic field. * \(\nabla P\) represents the pressure gradient in the plasma. The cross product \(\mathbf{J}\times\mathbf{B}\) yields the Lorentz force per unit volume acting on the plasma, which drives its acceleration, while \(\nabla P\) indicates the change in pressure across a certain distance in the plasma. The balance between these two terms is essential for efficient plasma confinement and acceleration. The plasma's acceleration is thus a result of the strategic configuration and interplay between the plasma currents and the magnetic fields within the MFPD. This approach ensures not only effective propulsion but also aids in maintaining the required conditions for continuous fusion reactions. ### Thrust Generation Mechanism of MFPD The fundamental principle for any propulsion system in space relies on Newton's third law: to accelerate in one direction, a spacecraft must expel mass in the opposite direction. In the context of the MFPD, this expelled mass comes in the form of hot, charged particles - plasma. The mechanism that enables this expulsion is multi-faceted: #### 3.5.1 Creation of High-Energy Plasma The initial step in thrust generation is to create a high-energy plasma. Fusion reactions provide the energy source for this plasma. When light atomic nuclei, typically isotopes of hydrogen such as deuterium and tritium, are fused under extreme conditions, they form helium and release a neutron and a significant amount of energy (Eq. (9)). This reaction releases energy primarily in the form of kinetic energy of the produced helium and neutron [Glendinning(2004)]. The helium ions (or alpha particles) are fully ionized and are confined within the magnetic field, thereby increasing the plasma's energy. #### 3.5.2 Expelling the Plasma With the plasma heated to sufficient temperatures and pressures by fusion reactions, expelling it to produce thrust becomes essential. The plasma, being charged, responds strongly to magnetic and electric fields. By designing a suitable magnetic nozzle, the high-energy plasma can be directed and expelled from the spacecraft, producing thrust. The principle is analogous to a de Laval nozzle in a chemical rocket, where the shape of the nozzle accelerates the exhaust gases and directs them in a specific direction. However, the magnetic nozzle employs magnetic fields instead of physical walls to contain and direct the plasma. The thrust, \(T\), is given by: \[T=\dot{m}v_{e}+(p_{e}-p_{0})A_{e} \tag{15}\] Where \(\dot{m}\) is the mass flow rate of the plasma, \(v_{e}\) is the exhaust velocity, \(p_{e}\) and \(p_{0}\) are the exhaust and ambient pressures respectively, and \(A_{e}\) is the nozzle exit area [Humble et al. (2005)]. #### 3.5.3 Advantages of Fusion-Driven Propulsion The exhaust velocity \(v_{e}\) is a critical parameter as it determines the propulsion system's efficiency. Fusion-driven propulsion can achieve much higher exhaust velocities compared to chemical rockets.
Higher \(v_{e}\) means that for a given amount of thrust, the spacecraft needs to expel less mass, making the propulsion system much more mass-efficient. The specific impulse, \(I_{sp}\), a measure of propulsive efficiency, is related to the exhaust velocity (Eq. (16)): \[I_{sp}=\frac{v_{e}}{g_{0}} \tag{16}\] Where \(g_{0}\) is the standard acceleration due to gravity. Fusion-driven systems can achieve \(I_{sp}\) values several times greater than chemical propulsion systems [Frisbee(2003)]. ### Mass Budget and Neutron Shielding in MFPD #### 3.6.1 Importance of Neutron Shielding The primary fusion reaction in the MFPD involves deuterium and tritium (Eq. (9)). This reaction yields a high-energy neutron. These neutrons are not confined by the magnetic fields (as they are neutral) and can penetrate deep into materials, causing structural damage, inducing radioactivity, and posing threats to human health [Mitchell and Hayman(2013), Slater(2018)]. #### 3.6.2 Shielding Materials The ideal shielding material for neutrons would possess the following characteristics: * High cross-section for neutron absorption. * Capacity to slow down fast neutrons to thermal energies, where they can be easily absorbed. * Low secondary gamma-ray production. * Structural integrity under irradiation. * Lightweight for space applications. Hydrogenous materials, such as polyethylene, are effective at slowing down fast neutrons due to their similar mass to the neutron. Additionally, compounds like boron carbide (\(B_{4}C\)) can absorb neutrons with minimal secondary gamma production [Turner(2013)]. #### 3.6.3 Mass Budget Estimation The mass of the neutron shield is determined by the required attenuation factor and the material's properties. For instance, considering a desired attenuation factor \(AF\) (a reduction factor of the incoming neutron flux), the required shield thickness \(x\) can be estimated by Eq. (17): \[x=\frac{-\ln(AF)}{\Sigma} \tag{17}\] Where \(\Sigma\) is the macroscopic cross-section of the shielding material. However, neutron shielding isn't the only component contributing to the mass budget. The total mass budget \(M\) can be conceptualized as Eq. (18): \[M=M_{\text{MFPD core}}+M_{\text{shielding}}+M_{\text{support structures}}+\dots \tag{18}\] Where: * \(M_{\text{MFPD core}}\) is the mass of the core propulsion system. * \(M_{\text{shielding}}\) is the mass of the neutron and other radiation shields. * \(M_{\text{support structures}}\) encompasses the mass of structural elements, conduits, coolant systems, and other auxiliary systems. Note: The exact masses would depend on detailed design specifications, materials chosen, and engineering constraints. ### Thrust Generation Mechanism in MFPD #### 3.7.1 Fusion Reactions and High-Energy Particles The primary reaction driving the MFPD involves deuterium and tritium fusion (Eq. (9)). The fusion of these nuclei, as in Eq. (9), releases a neutron and a helium nucleus (or alpha particle) with significant kinetic energy. This energy, in the form of high-speed particles, is central to the thrust generation process. #### 3.7.2 Harnessing Plasma Thrust The high-speed helium nuclei (alpha particles) produced from the fusion reactions serve as the primary propellant in the MFPD. They are charged particles, which means they can be manipulated using magnetic fields. The principle behind MFPD's thrust generation is to extract the kinetic energy of these particles and expel them at high velocities, generating thrust via Newton's third law [3].
A magnetic nozzle is utilized for this purpose. This nozzle converts the thermal and kinetic energy of the plasma into directed kinetic energy, expelling the plasma at high velocities and producing thrust (Eq. (19)). \[F_{\text{thrust}}=\dot{m}v_{\text{exhaust}}+(P_{\text{exit}}-P_{\text{ambient}})A_{\text{exit}} \tag{19}\] Where: * \(F_{\text{thrust}}\) is the thrust. * \(\dot{m}\) is the mass flow rate of the expelled plasma. * \(v_{\text{exhaust}}\) is the exhaust velocity of the plasma. * \(P_{\text{exit}}\) and \(P_{\text{ambient}}\) are the pressures at the nozzle exit and ambient space, respectively. * \(A_{\text{exit}}\) is the area of the nozzle exit. The primary thrust component arises from the high exhaust velocity of the plasma, while the pressure differential term becomes significant only where the ambient pressure is non-negligible and is negligible in deep space. #### 3.7.3 Magnetic Nozzle and Plasma Expansion A magnetic nozzle doesn't have solid walls like traditional nozzles. Instead, it uses magnetic fields to guide and accelerate the plasma. As the plasma moves through the diverging magnetic field, it expands, and its particles are accelerated due to the conservation of magnetic moment, leading to a conversion of the particles' perpendicular kinetic energy into directed (parallel) flow energy [1]. This magnetic guidance ensures that the plasma is expelled, controlled, and directed, maximizing thrust and preventing plasma losses to the spacecraft's structure. ### Design Considerations for MFPD Thrusters Magnetic Fusion Plasma Drive (MFPD) thrusters promise a new frontier in propulsion capabilities, but their implementation and efficiency hinge on careful design and meticulous engineering. With the potential to revolutionize propulsion technology, the MFPD also brings with it challenges that demand innovative solutions. This section outlines key design considerations for MFPD thrusters, highlighting the challenges and prospective solutions in ensuring these propulsion systems' optimal performance, longevity, and energy efficiency. #### 3.8.1 Role of Electrodes in MFPD In MFPD systems, electrodes play a crucial role in initiating and maintaining the plasma. They serve as a medium to inject current into the plasma, which can either help in ionizing the propellant or in maintaining the fusion process, depending on the design specifics of the propulsion system. * **Location and Configuration**: Electrodes are strategically placed in the thruster chamber, often at the entrance or close to the propellant injection site. Their placement ensures optimal interaction with the incoming propellant and existing plasma. * **Operating Envelope**: These electrodes operate under extreme conditions with temperatures exceeding thousands of Kelvin and exposure to high-energy plasma particles. As a result, they are subject to wear and erosion over time. * **Purpose**: Their primary purpose is to introduce an electric current into the propellant. This current, in conjunction with applied or inherent magnetic fields, helps in the ionization of the propellant, sustenance of the fusion process, and acceleration of the plasma for thrust generation. Given their critical role, electrode erosion emerges as a significant challenge in MFPD systems. #### 3.8.2 Electrode Erosion Continuous exposure to high-energy plasma leads to the degradation of electrodes over time. This erosion not only affects the longevity of the MFPD system but can also introduce impurities into the plasma, impacting its performance.
Addressing this challenge requires: * **Cooling Systems**: Implementing active cooling mechanisms for the electrodes can significantly prolong their operational lifespan by reducing thermal stresses and sputtering effects [14]. * **Material Selection**: Opting for materials with higher melting points and lower sputtering yields can ensure that electrodes retain their structural integrity for longer periods. Materials such as tungsten or graphite, commonly used in other plasma-facing components, might be suitable choices [14]. * **Electrodeless Designs in Fusion Propulsion**: While the specific design of the MFPD we are discussing utilizes electrodes, it's worth noting the broader landscape of fusion propulsion technologies. In the domain of fusion propulsion, some approaches are being explored that do not rely on solid electrodes to initiate and sustain plasma. These designs aim to address the erosion issue inherent to systems with electrodes. While commonly associated with Electric Propulsion (EP) systems, methods such as radio-frequency or microwave ionization have also been researched for potential application in fusion propulsion. However, the implementation in fusion systems presents its own set of challenges and is outside the primary focus of our discussion on the MFPD system [12]. In this paper, MFPD refers to a specific novel fusion propulsion system that uses superconducting magnets for plasma confinement and magnetic nozzles for thrust generation rather than a generic label for any fusion propulsion system. #### 3.8.3 Power Supply MFPD thrusters require significant power. The source of this power can be: * Solar Arrays: Suitable for low to moderate power requirements [13]. * Nuclear Reactors: For high-power applications, especially in deep space where sunlight is scarce [13]. #### 3.8.4 Magnetic Field Generation and Fusion Power in the MFPD In the MFPD system, fusion reactions serve dual purposes: producing high-energy plasma for propulsion and generating electricity for onboard systems, including magnetic field generation. The fusion process yields high-energy plasma that can be expelled for thrust and produces neutrons which can be captured in a blanket, triggering further reactions that release heat. This heat can be converted to electricity using thermoelectric generators or other methods. Herein lies the distinction of the MFPD from typical EP systems. * **Self-Sustaining Power Generation**: Once the fusion reactions are initiated, the MFPD uses the energy produced by the fusion process itself to generate the electricity required to sustain the magnetic fields. This potential closed-loop system contrasts with traditional EP systems that rely on external power sources, such as solar panels or nuclear reactors. Fusion's inherent energy density enables this self-sustenance, a characteristic absent in traditional EP systems. * **High Thrust and Efficiency**: Fusion reactions, given their immense energy release, allow the MFPD to potentially provide both high specific impulse and high thrust, a combination challenging to achieve with existing propulsion systems. * **Dual Utility**: The MFPD serves as a propulsion system and power generator for spacecraft subsystems, reducing the need for additional onboard power generation methods.
* **Choice of Magnetic Field Generation**: Depending on mission requirements, the MFPD can leverage self-generated magnetic fields, a concept challenging to achieve but under active investigation in the fusion community. For more demanding operations, these fields can be augmented with externally applied fields using electricity derived from fusion reactions. It's crucial to emphasize that while the conceptual benefits of MFPD are significant, the technology is still nascent, especially when compared to more mature EP systems. However, its potential merits warrant further research and development. ### Future of MFPD Thrusters With advances in power generation and materials science, MFPD thrusters are poised to play a significant role in future space missions. Their ability to provide high thrust combined with good efficiency makes them attractive for various mission profiles, from satellite station-keeping to deep space exploration. The MFPD offers a promising avenue for electric propulsion, leveraging electromagnetic principles to accelerate plasma and produce thrust. While challenges remain in electrode erosion and power requirements, ongoing research and technological advancements hint at a bright future for MFPD propulsion in space exploration. ## 4 Mathematical Formulations of the MFPD System The MFPD system, with its potential to redefine space propulsion, operates on intricate principles grounded in physics and mathematics. As with any novel technology, a rigorous mathematical treatment is imperative to comprehend and fine-tune its performance. This section systematically breaks down the core mathematical relationships governing the MFPD system, ranging from the foundational equations of fusion reactions and plasma thrust to the intricate interplay of magnetic fields and plasma confinement. By dissecting these formulations, we aim to establish a theoretical framework that not only elucidates the mechanics of the MFPD system but also lays the groundwork for its optimization and further innovations. ### Fusion Reaction Rates Quantum mechanics dictates fusion reactions, but for macroscopic rates, we often use cross-sections averaged over thermal distributions of particle speeds. This leads to the concept of reactivity (Eq. (20)): \[\langle\sigma v\rangle=\int_{0}^{\infty}\sigma(v)vf(v)dv \tag{20}\] Where \(f(v)\) represents the Maxwellian velocity distribution (Eq. (21)) [3]: \[f(v)=\left(\frac{m}{2\pi k_{B}T}\right)^{3/2}4\pi v^{2}e^{-\frac{mv^{2}}{2k_{B}T }} \tag{21}\] Given this, the fusion power density \(P\) can be expressed as Eq. (22) [3]: \[P=\frac{1}{2}n^{2}\langle\sigma v\rangle E_{fusion} \tag{22}\] Where \(E_{fusion}\) is the energy released per fusion reaction. The efficiency of magnetic confinement directly impacts the fusion rate. Particle confinement time \(\tau\) measures how long, on average, a particle remains in the plasma before it's lost. It's a crucial parameter, and the product \(n\tau\) often serves as a benchmark for the viability of sustained fusion [12]. ### Plasma Thrust Equations The fundamental equations governing the behavior of a plasma in an electromagnetic field are the fluid equations coupled with Maxwell's equations. For a quasi-neutral plasma with inertial effects neglected, the fluid momentum equation becomes Eq. 
(23) [3]: \[m_{i}n\left(\frac{\partial\mathbf{V}}{\partial t}+\mathbf{V}\cdot\nabla \mathbf{V}\right)=-n\nabla U+qn(\mathbf{E}+\mathbf{V}\times\mathbf{B})- \nabla\cdot\mathbf{P} \tag{23}\] Here, \(m_{i}\) is ion mass, \(\mathbf{V}\) is plasma velocity, \(U\) is potential energy, \(q\) is the ion charge, \(n\) is the number density, \(\mathbf{E}\) and \(\mathbf{B}\) are electric and magnetic fields, and \(\mathbf{P}\) is the pressure tensor. Using this, the change in momentum \(\Delta p\) produced by the MFPD, as a function of the fusion energy and the propellant mass, is given by: \[\Delta p=\sqrt{2\times m_{\text{propellant}}\times E_{\text{fusion}}} \tag{24}\] This momentum change, in combination with the nozzle design and propellant flow rates, gives rise to the thrust \(T\) produced by the MFPD: \[T=\dot{m}V_{exit}=A_{p}P_{exit} \tag{25}\] Where \(A_{p}\) is the area of the propulsion nozzle and \(P_{exit}\) is the plasma pressure at the nozzle exit [3]. For optimal performance, the specific impulse \(I_{sp}\) is defined as Eq. (26) [3]: \[I_{sp}=\frac{V_{exit}}{g_{0}} \tag{26}\] Here, \(g_{0}\) is the gravitational acceleration constant. ### Magnetic Field Equations The confinement and manipulation of plasma in an MFPD system are intrinsically tied to the properties of the magnetic fields employed. Maxwell's equations form the foundation for the behavior of these magnetic fields in the presence of currents and charges (Eq. (27)): \[\nabla\cdot\mathbf{B}=0 \tag{27}\] This equation asserts no magnetic monopoles exist; the magnetic field lines are continuous and closed (Eq. (28)) [11]. \[\nabla\times\mathbf{B}=\mu_{0}\mathbf{J}+\mu_{0}\epsilon_{0}\frac{\partial \mathbf{E}}{\partial t} \tag{28}\] Here, \(\mu_{0}\) is the permeability of free space, \(\mathbf{J}\) is the current density, \(\epsilon_{0}\) is the permittivity of free space, and \(\mathbf{E}\) is the electric field. This equation, Ampere's law with Maxwell's addition, relates the magnetic field to currents and changing electric fields [11]. For an MFPD system, magnetic confinement can be described using the safety factor, \(q\), which is crucial for assessing plasma stability (Eq. (29)): \[q(a)=\frac{aB_{t}}{RB_{p}} \tag{29}\] Where \(a\) is the minor radius, \(R\) is the major radius of the torus, and \(B_{t}\) and \(B_{p}\) are the toroidal and poloidal magnetic field components, respectively. Understanding the value of \(q\) across the plasma profile helps predict potential instabilities, with specific values and profiles preferred for stability [12]. Magnetic confinement is often modeled using the Grad-Shafranov equation, which describes equilibria in magnetically confined plasmas (Eq. (30)) [10]: \[\Delta^{*}\Psi=-\mu_{0}R^{2}\frac{dp}{d\Psi}-F\frac{d\,F}{d\Psi} \tag{30}\] Here, \(\Psi\) is the poloidal magnetic flux, \(p\) is the plasma pressure, \(R\) is the major radius, and \(F\) is a function representing the toroidal current inside a magnetic surface. The challenge with magnetic confinement is not just to confine the plasma but to do so stably over long periods. This often requires superimposing additional magnetic fields, analyzing various MHD stability modes, and even using feedback control mechanisms [12]. This section provided a mathematical overview of the key aspects underlying the operation of a Magnetic Fusion Plasma Drive (MFPD) system. Understanding these equations is pivotal to model, predict, and control the behavior of such propulsion systems. A comprehensive grasp of these principles will be instrumental in advancing and optimizing MFPD technologies.
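To give the formulas of Sections 3 and 4 some numerical feel, the sketch below evaluates the gyroradius of Eq. (11), the plasma beta of Eq. (13), and the fusion power density of Eq. (22) for one illustrative operating point. The field strength, temperature, density, and D-T reactivity used here are assumed values chosen only for illustration; they are not MFPD design parameters.

```python
# Quick numerical sketch of quantities defined in Sections 3-4: gyroradius
# (Eq. (11)), plasma beta (Eq. (13)), and fusion power density (Eq. (22)).
# All input values are illustrative assumptions, not MFPD design data.
import math

E_CHARGE = 1.602e-19          # elementary charge, C
MU_0 = 4e-7 * math.pi         # vacuum permeability, H/m
M_DEUTERON = 3.344e-27        # deuteron mass, kg

B = 5.0                       # magnetic field, T (assumed)
T_keV = 15.0                  # plasma temperature, keV (assumed)
n = 1e20                      # fuel number density, m^-3 (assumed)
sigma_v = 2.6e-22             # approximate D-T reactivity <sigma v> near 15 keV, m^3/s
E_fusion = 17.6e6 * E_CHARGE  # energy per D-T reaction, J

kT = T_keV * 1e3 * E_CHARGE                      # thermal energy, J
v_perp = math.sqrt(2.0 * kT / M_DEUTERON)        # thermal perpendicular speed, m/s
r_g = M_DEUTERON * v_perp / (E_CHARGE * B)       # Eq. (11): ion gyroradius
beta = (n * kT) / (B**2 / (2.0 * MU_0))          # Eq. (13): plasma beta
P_fusion = 0.5 * n**2 * sigma_v * E_fusion       # Eq. (22): fusion power density

print(f"ion gyroradius       = {r_g*1e3:.1f} mm")        # about 5 mm
print(f"plasma beta          = {beta:.3f}")               # about 0.024
print(f"fusion power density = {P_fusion/1e6:.1f} MW/m^3")  # about 3.7 MW/m^3
```

Even for these modest assumptions the ion gyroradius is a few millimetres, far smaller than any plausible chamber dimension, which is precisely what makes magnetic confinement of the fusion plasma viable.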
## 5 Comparison of MFPD with Existing Fusion Propulsion Concepts Several key concepts have been proposed over the years in the realm of fusion-based propulsion. Our Magnetic Fusion Plasma Drive (MFPD) shares some similarities but also presents unique characteristics. Here, we delineate the primary distinctions between MFPD and other notable fusion propulsion concepts. ### Bussard Ramjet * Concept: Proposed by Robert W. Bussard in 1960, this interstellar fusion rocket collects hydrogen from space with a magnetic ramscoop to use as fusion fuel [Bussard(1960)]. * Comparison: Unlike the MFPD, the Bussard Ramjet relies on collecting its fuel from the interstellar medium. While the Ramjet is designed for interstellar distances, MFPD is envisioned primarily for interplanetary missions. ### Direct Fusion Drive (DFD) * Concept: Developed primarily by Princeton Satellite Systems, the Direct Fusion Drive (DFD) is a propulsion system that simultaneously provides thrust and electric power. It uses a fusion reaction involving deuterium and other isotopes [Cohen(2013)]. Field-Reversed Configuration (FRC) is a method to confine plasma without needing an external magnetic field, using the plasma's self-induced magnetic field. In DFD, a field-reversed configuration is employed to achieve compact and efficient confinement of the fusion fuel. The thrust in DFD is primarily generated by expelling deuterium that has been heated by the hot plasma surrounding the fusion reaction. Due to its high temperature, this plasma is expelled at high velocities, generating a thrust as per Newton's third law. * Comparison with MFPD: DFD and MFPD employ magnetic confinement as central components. However, they differ in their fusion reactions and confinement strategies. While DFD relies on deuterium and adopts a field-reversed configuration for confinement, the MFPD focuses on D-T (deuterium-tritium) fusion and utilizes a generalized magnetic confinement approach. ### Magnetized Target Fusion (MTF) * Concept: MTF employs magnetic fields to compress fusion fuel, followed by ignition through lasers or other external agents [Lindemuth and Kirkpatrick(2000)]. * Comparison: MFPD primarily focuses on harnessing magnetic fields for controlling plasma for propulsion, not necessarily for fusion ignition, marking a departure from the MTF methodology. ### VASIMR * Concept: A creation of the Ad Astra Rocket Company, VASIMR utilizes radio waves to ionize propellant, with magnetic fields subsequently accelerating the plasma to generate thrust [Chang-Diaz(2000)]. * Comparison: While both models employ plasma and magnetic fields, VASIMR's thrust generation does not hinge on fusion reactions, setting it apart from the MFPD. ### Project Daedalus * Concept: A brainchild of the British Interplanetary Society from the 1970s, Daedalus was a proposed uncrewed interstellar probe utilizing fusion for propulsion, specifically deuterium/helium-3 fusion [Bond(1978)]. * Comparison: While both concepts leverage fusion for propulsion, Daedalus had interstellar travel in its crosshairs and employed D/He-3 fusion. In contrast, MFPD targets interplanetary missions with a broader D-T fusion perspective. Based on the theoretical foundations and the principles underlying the MFPD system, certain potential advantages emerge in the context of fusion propulsion. As described, one distinguishing feature of the MFPD system is the proposed use of superconducting magnets for plasma confinement and magnetic nozzles for thrust generation.
While these features might lead to enhanced thrust capabilities and potentially reduced fusion fuel requirements, these are based on initial analyses and require further in-depth study and simulation for validation. Like other fusion propulsion methods, the MFPD system is expected to face challenges, especially in areas such as controlled fusion in the space environment and materials science concerns. However, if realized, the potential for higher specific impulses could position the MFPD system and fusion propulsion at large as attractive candidates for prolonged space missions. It's crucial to note that the advantages and challenges highlighted here are based on preliminary assessments, and comprehensive simulations and analyses are needed to ascertain these claims conclusively. ## 6 Example Calculations and Descriptions ### MFPD Plasma Dynamics The behavior of plasma within the Magnetic Fusion Plasma Drive (MFPD) propulsion system can be primarily understood through magnetohydrodynamics (MHD). In the MFPD, the presence of an externally applied magnetic field and electric currents through the plasma results in a Lorentz force which accelerates the plasma out of the thruster. Using the MHD momentum equation: \[\rho\left(\frac{\partial\mathbf{V}}{\partial t}+\mathbf{V}\cdot\nabla\mathbf{V}\right)=-\nabla P+\mathbf{J}\times\mathbf{B}-\nabla\cdot\boldsymbol{\Pi} \tag{31}\] where \(\rho\) is the plasma density, \(\mathbf{V}\) is the plasma velocity, \(P\) is the pressure, \(\mathbf{J}\) is the current density, \(\mathbf{B}\) is the magnetic field, and \(\boldsymbol{\Pi}\) is the viscous stress tensor. In the context of the MFPD, the dominant term is the Lorentz force, \(\mathbf{J}\times\mathbf{B}\). This force accelerates the plasma, providing thrust to the spacecraft. The electric currents that give rise to \(\mathbf{J}\) are induced by the fusion reactions taking place within the MFPD chamber, making it imperative to maintain conditions conducive to fusion. To maintain a quasi-neutral plasma and to aid in achieving conditions for fusion, the magnetic confinement becomes crucial. It not only ensures that the charged plasma particles remain within the thruster chamber long enough for fusion reactions but also drives the particles out at high velocities due to the Lorentz force. The magnetic confinement time and the particle densities dictate the efficiency of fusion reactions and, subsequently, the efficiency of the MFPD. It's worth noting that the efficiency of the MFPD system is intrinsically linked to the balance between the magnetic confinement, ensuring sufficient fusion reactions, and the Lorentz force, which provides the thrust. Any variations in the plasma parameters, such as density, temperature, or electric conductivity, will directly influence the MFPD's thrust and efficiency. In summary, the plasma dynamics within the MFPD thruster revolve around harnessing the Lorentz force to achieve efficient propulsion. This force is the culmination of fusion-driven electric currents and the externally applied magnetic field. Understanding and optimizing these dynamics are pivotal for realizing the potential advantages of the MFPD system for space missions. ### Fusion Processes in MFPD The Magnetic Fusion Plasma Drive (MFPD) propulsion system capitalizes on the energy released from fusion reactions to generate thrust. Here, we delve into the specifics of these fusion reactions and the conditions required for their occurrence within the MFPD.
#### 6.2.1 Deuterium-Tritium Fusion As noted, our focus is on deuterium-tritium (D-T) fusion, which is one of the most energetically favorable fusion reactions. The reaction is represented as: \[D+T\rightarrow\alpha+n+17.6\,\mathrm{MeV} \tag{32}\] Here, \(\alpha\) is a helium nucleus, and \(n\) is a neutron. The energy of 17.6 MeV is distributed among the products, with the majority carried away by the neutron. #### 6.2.2 Conditions for Fusion For fusion to occur, the plasma within the MFPD must attain conditions referred to as the Lawson criterion. This involves achieving a critical product of plasma density and confinement time. The equation representing the Lawson criterion is: \[n\tau\geq\frac{12kT}{\langle\sigma v\rangle E_{\mathrm{fusion}}} \tag{33}\] Where: * \(n\) is the number density of the fusion fuel. * \(\tau\) is the energy confinement time. * \(k\) is the Boltzmann constant. * \(T\) is the plasma temperature. * \(\langle\sigma v\rangle\) is the fusion reactivity (cf. Eq. (20)). * \(E_{\mathrm{fusion}}\) is the energy released per fusion reaction. Given the high temperatures required for D-T fusion, the fusion fuels are fully ionized, forming a plasma of positively charged nuclei and free electrons. At these temperatures, the coulombic repulsion between the positively charged D and T nuclei becomes significant. Thus, the plasma needs to be sufficiently hot, dense, and confined for a long enough time to overcome this repulsion and allow fusion to take place. #### 6.2.3 Challenges and Constraints Fusion within the MFPD faces challenges that include: * Magnetic Confinement: Achieving a strong and stable magnetic field that can confine the high-temperature plasma efficiently. * Radiative Losses: High-temperature plasmas emit radiation, leading to energy losses that can inhibit fusion reactions. * Fuel Recycling: Capturing and recycling unburnt fusion fuel to maximize fuel efficiency. * Neutron Management: The neutrons produced in D-T fusion are not confined by magnetic fields and can escape, leading to radiation hazards and potential structural damage to the MFPD. ### Performance Metrics We use standardized performance metrics to objectively compare the MFPD propulsion system against conventional chemical rockets. These metrics give insights into the efficiency, capability, and potential advantages of the MFPD system for long-duration space missions. #### 6.3.1 Specific Impulse (\(I_{sp}\)) Specific Impulse is a measure of the efficiency of a propulsion system. It is defined as the total impulse delivered per unit weight of propellant consumed: \[I_{sp}=\frac{\Delta v}{g_{0}\cdot\ln\left(\frac{m_{0}}{m_{f}}\right)} \tag{34}\] Where: * \(\Delta v\) is the change in velocity. * \(g_{0}\) is the standard gravitational acceleration (approximately 9.81 m/s\({}^{2}\)). * \(m_{0}\) is the initial mass of the spacecraft. * \(m_{f}\) is the final mass of the spacecraft after propellant consumption. Higher \(I_{sp}\) values represent better fuel efficiency. The MFPD system's ability to attain significantly higher \(I_{sp}\) values than chemical rockets makes it particularly suitable for long-duration missions. #### 6.3.2 **Thrust-to-Weight Ratio** The thrust-to-weight ratio is a dimensionless parameter indicating the propulsion system's performance concerning its weight.
For spacecraft propulsion, especially for interstellar missions, a higher thrust-to-weight ratio can be crucial: \[\frac{T}{W}=\frac{\text{Thrust produced by the engine}}{\text{Weight of the propulsion system}} \tag{35}\] #### 6.3.3 **Fuel Efficiency** While \(I_{sp}\) provides insights into the efficiency concerning propellant consumption, fuel efficiency looks at the energy extracted from the fuel relative to the total energy available in that fuel. For fusion reactions, this metric becomes critical given the high-energy yields of fusion fuels. #### 6.3.4 **Endurance** Endurance, in this context, refers to the ability of the propulsion system to sustain thrust over extended periods. Given that space missions can last months to years, the longevity of the propulsion system without significant degradation is pivotal. #### 6.3.5 **Operational Flexibility** This metric considers the system's ability to adapt to different mission profiles. It examines aspects like throttleability, start-stop cycles, and adaptability to different power levels. #### 6.3.6 **Safety and Reliability** Especially crucial for manned missions, this metric assesses the risks associated with the propulsion system, including radiation hazards, potential system failures, and challenges in emergency shutdowns. #### 6.3.7 **Payload Fraction** Given by the ratio of the payload mass to the total spacecraft mass, a higher payload fraction indicates a greater proportion of the spacecraft's mass is dedicated to the actual mission (instruments, crew, supplies), rather than propulsion or support systems. \[\text{Payload Fraction}=\frac{m_{\text{payload}}}{m_{0}} \tag{36}\] Where \(m_{\text{payload}}\) is the mass of the payload. In conclusion, by evaluating the MFPD propulsion system through these standardized metrics, we can understand its advantages, potential limitations, and areas for optimization. This comprehensive analysis facilitates informed decisions about the suitability of the MFPD system for specific mission profiles and objectives. #### 6.3.8 **MFPD's Fusion Strategy** To optimize fusion reactions, the MFPD system uses a combination of magnetic confinement and inertial confinement. Magnetic fields confine the plasma and reduce losses due to transport phenomena, while the inertia of the fuel itself (aided by rapid heating and compression) ensures a high local density, promoting fusion reactions. The combination aims to create a "sweet spot" where fusion conditions are achieved and maintained for efficient propulsion. In conclusion, harnessing fusion energy in the MFPD requires a careful balance between plasma confinement, fuel density, and temperature. Achieving and maintaining this balance is pivotal for the propulsion system's efficient operation and realizing its potential benefits for long-duration space missions. ### **Implications for Mars Mission** Utilizing an MFPD propulsion system for a mission to Mars brings forth several significant advantages over traditional chemical rockets, primarily due to the increased efficiency and higher thrust capability. To understand these advantages better, we can perform some example calculations using a spacecraft with a total mass of \(m_{0}=100\) metric tonnes, of which \(m_{\text{payload}}=20\) metric tonnes is payload. #### 6.4.1 **Delta-V Requirements** The Delta-V (\(\Delta v\)) requirements for a Mars mission can vary based on the mission profile, orbital dynamics, and propulsion system. 
For a typical transfer orbit from Earth to Mars, a \(\Delta v\) of approximately 4.3 km/s is required. #### 6.4.2 **Propellant Mass Fraction (PMF)** Using the rocket equation, we can determine the propellant mass fraction (PMF) required to achieve the necessary \(\Delta v\): \[\Delta v=I_{sp}\cdot g_{0}\cdot\ln\left(\frac{m_{0}}{m_{f}}\right) \tag{37}\] Given that MFPD has a significantly higher \(I_{sp}\) than chemical rockets, for the sake of illustration, let's assume an \(I_{sp}\) of 5000 s for MFPD. Rearranging the above equation, we find: \[m_{f}=m_{0}\cdot\exp\left(-\frac{\Delta v}{I_{sp}\cdot g_{0}}\right) \tag{38}\] Inserting the values: \[m_{f}=100\times\exp\left(-\frac{4.3\times 10^{3}}{5000\times 9.81}\right)\approx 91.6\text{ tonnes} \tag{39}\] Thus, the mass of propellant required \(m_{p}\) is: \[m_{p}=m_{0}-m_{f}\approx 8.4\text{ tonnes} \tag{40}\] #### 6.4.3 **Duration to Reach Mars** For continuous thrust propulsion systems like MFPD, the time to reach Mars can be estimated using the formula: \[t=\frac{\Delta v}{a} \tag{41}\] Where \(a\) is the average acceleration of the spacecraft. If we assume an average acceleration of \(0.001\,\mathrm{g}\) (or \(0.00981\,\mathrm{m}\mathrm{/}\mathrm{s}^{2}\)) for MFPD, we can compute: \[t=\frac{4.3\times 10^{3}}{0.00981}\approx 438,000\,\mathrm{s}\approx 5\,\mathrm{days} \tag{42}\] Strictly, this is the time spent thrusting to accumulate the required \(\Delta v\); the full Earth-Mars transfer under continuous low acceleration is longer (see Section 6.6). Even so, the reduction in transit time compared to conventional chemical rockets, which typically take months, is one of the critical advantages of using MFPD propulsion for interplanetary travel. #### 6.4.4 Advantages and Challenges * Shorter Transit Times: As demonstrated, MFPD offers significantly reduced travel times, minimizing crew exposure to space radiation and reducing mission risks. * Higher Payload Capacities: Given the efficiency of MFPD, a more significant fraction of the spacecraft's mass can be dedicated to the payload, enhancing mission capabilities. * Flexibility: MFPD systems can be throttled, allowing for adaptive mission profiles and even potential abort scenarios. * Challenges: High energy requirements and the necessity for robust radiative cooling systems present engineering challenges. Additionally, ensuring the reliability and longevity of the propulsion system for the entire mission duration is critical. While challenges exist, the potential benefits of using MFPD for Mars missions are compelling. The shortened transit times, increased payload capacities, and operational flexibility position MFPD as a promising propulsion technology for future interplanetary exploration. ### Implications for Proxima Centauri Mission Using MFPD for an interstellar mission to Proxima Centauri offers the tantalizing potential of making such a vast journey feasible within human lifetimes. For context, let's continue with the spacecraft parameters: a total mass of \(m_{0}=100\) metric tonnes, of which \(m_{\mathrm{payload}}=20\) metric tonnes is payload. #### 6.5.1 Distance to Proxima Centauri The distance to Proxima Centauri is approximately \(4.24\) light-years, which translates to about \(4.017\times 10^{13}\) kilometers. #### 6.5.2 Delta-V Requirements For an interstellar journey, achieving a substantial fraction of the speed of light is desirable. For this calculation, we'll aim for \(10\%\) of the speed of light, \(c\), which is \(3\times 10^{7}\) m/s.
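Before proceeding, the short sketch below re-evaluates the Tsiolkovsky rocket equation (Eqs. (37)-(38)) for both the Mars case above and the interstellar case treated next. The \(I_{sp}\) of 5000 s and the initial mass of 100 tonnes are the assumed values used in the text, not a proposed vehicle specification.

```python
# Sanity check of the rocket-equation figures in this section (Eqs. (37)-(40))
# and of the interstellar case that follows. Isp = 5000 s and m0 = 100 t are the
# assumed values used in the text; the function is just Tsiolkovsky's equation.
import math

G0 = 9.81  # standard gravitational acceleration, m/s^2

def final_mass(m0_tonnes: float, delta_v: float, isp: float) -> float:
    """Final mass (tonnes) after expending delta_v (m/s) at specific impulse isp (s)."""
    return m0_tonnes * math.exp(-delta_v / (isp * G0))

m0, isp = 100.0, 5000.0

# Mars transfer: delta-v of about 4.3 km/s
mf_mars = final_mass(m0, 4.3e3, isp)
print(f"Mars:    m_f = {mf_mars:.1f} t, propellant = {m0 - mf_mars:.1f} t")  # about 91.6 / 8.4 t

# Proxima Centauri: delta-v of 0.1 c
mf_prox = final_mass(m0, 3.0e7, isp)
print(f"Proxima: m_f = {mf_prox:.3e} t (effectively zero final mass)")
```

The Mars case reproduces the roughly 91.6 and 8.4 tonne figures quoted above, while the Proxima Centauri case collapses to an effectively zero final mass, anticipating the discussion in the next subsection.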
#### 6.5.3 Propellant Mass Fraction (PMF) Using the rocket equation: \[\Delta v=I_{sp}\cdot g_{0}\cdot\ln\left(\frac{m_{0}}{m_{f}}\right) \tag{43}\] Let's continue assuming an \(I_{sp}\) of \(5000\,\mathrm{s}\) for the MFPD. Rearranging the above equation, we get: \[m_{f}=m_{0}\cdot\exp\left(-\frac{\Delta v}{I_{sp}\cdot g_{0}}\right) \tag{44}\] Plugging in the values: \[m_{f}=100\times\exp\left(-\frac{3\times 10^{7}}{5000\times 9.81}\right)=100\times e^{-611.6}\approx 10^{-264}\,\mathrm{tonnes} \tag{45}\] Thus, the propellant mass required \(m_{p}\) is: \[m_{p}=m_{0}-m_{f}\approx 100\,\mathrm{tonnes} \tag{46}\] This indicates that, at an \(I_{sp}\) of 5000 s, the rocket equation leaves essentially no final mass: no realistic single-stage propellant load could deliver \(10\%\) of \(c\), illustrating the immense challenges of interstellar travel. #### 6.5.4 Duration to Reach Proxima Centauri Given the average acceleration of \(a=0.001\,\mathrm{g}\) and the desired final velocity: \[t_{\mathrm{accel}}=\frac{\Delta v}{a} \tag{47}\] \[t_{\mathrm{accel}}=\frac{3\times 10^{7}}{0.00981}\approx 3.06\times 10^{9}\, \mathrm{s}\approx 97\,\mathrm{years} \tag{48}\] The spacecraft will spend a similar amount of time decelerating upon approach. Thus, the total journey time is roughly \(194\) years, not including the time spent coasting at maximum velocity. #### 6.5.5 Relativistic Effects As the spacecraft approaches a significant fraction of the speed of light, relativistic effects come into play. However, these effects are minimal at \(10\%\) of \(c\), leading to a time dilation factor of about \(1.005\). This is a minimal difference, but for speeds closer to \(c\), this would become more prominent. #### 6.5.6 Challenges and Conclusions * Enormous Energy Requirements: Achieving a significant fraction of \(c\) requires vast amounts of energy, and the propellant mass fraction becomes incredibly high. * Long Journey Times: Even at \(10\%\) of \(c\), the journey would take nearly two centuries. * Hazard Protection: Protecting the spacecraft from interstellar dust and particles becomes essential at these speeds. Even a grain-sized object could be catastrophic. * Communication Delays: Signals to and from Earth would take over four years. ### Comparative Propulsion Journey Times #### 6.6.1 Chemical Propulsion Mars Mission: For chemical propulsion systems, a Hohmann transfer orbit is often used for interplanetary missions. The time \(\Delta t_{\text{Mars, chemical}}\) for a one-way trip to Mars using this method is approximately: \[\Delta t_{\text{Mars, chemical}}\approx 259\,\text{days} \tag{49}\] Proxima Centauri Mission: Considering the vast distance to Proxima Centauri (4.24 light-years), chemical propulsion is impractical for such journeys due to enormous travel times. If we attempted to use chemical propulsion, the travel time \(\Delta t_{\text{Proxima, chemical}}\) would be: \[\Delta t_{\text{Proxima, chemical}}\approx 73,000\,\text{years} \tag{50}\] #### 6.6.2 Nuclear Thermal Rocket (NTR) Mars Mission: Nuclear thermal rockets (NTRs) can potentially reduce the travel time to Mars compared to chemical propulsion.
The journey time \(\Delta t_{\text{Mars, NTR}}\) is roughly: \[\Delta t_{\text{Mars, NTR}}\approx 90\,\text{days} \tag{51}\] Proxima Centauri MissionFor a mission to Proxima Centauri using NTRs, the duration \(\Delta t_{\text{Proxima, NTR}}\) would still be prohibitively long: \[\Delta t_{\text{Proxima, NTR}}\approx 40,000\,\text{years} \tag{52}\] #### 6.6.3 Comparison with MFPD Comparing the MFPD system with other propulsion methods: For the Mars mission: * Chemical propulsion: \(\Delta t_{\text{Mars, chemical}}\) is about 259 days as given in Eq. (49). * NTR: \(\Delta t_{\text{Mars, NTR}}\) is around 90 days from Eq. (51). * MFPD: \(\Delta t_{\text{Mars, MFPD}}\) is roughly 40 days. For the Proxima Centauri mission: * Chemical propulsion: \(\Delta t_{\text{Proxima, chemical}}\) would be around 73,000 years as per Eq. (50). * NTR: \(\Delta t_{\text{Proxima, NTR}}\) would be about 40,000 years according to Eq. (52). * MFPD: \(\Delta t_{\text{Proxima, MFPD}}\) would be approximately 194 years. However, if the MFPD could be scaled to achieve speeds of 0.1c, the journey would take only 42.4 years. These comparative times emphasize the potential advantages of the MFPD system, especially for long-duration missions. ## 7 Advantages and Potential of the MFPD System The pursuit of advancing space exploration and technology has often been intertwined with the development of efficient propulsion systems. Among the myriad of propulsion options, the MFPD system stands out as a beacon of promise for the future of space travel. With its mechanisms and operational characteristics, the MFPD system heralds a paradigm shift in propulsion science. While traditional propulsion methods have long dominated the space industry, the MFPD system presents a compelling case for rethinking how we navigate the cosmos. This section outlines the advantages and the vast potential of the MFPD system, illustrating how it could redefine interstellar travel and push the boundaries of human exploration. ### Comparative Analysis with Existing Propulsion Methods The unique nature of the MFPD system allows for several key advantages over existing propulsion systems, particularly when one evaluates them over long-duration space missions. * Higher Specific Impulse (Isp): One of the standout advantages of MFPD thrusters is the potential for a much higher specific impulse (Isp) when compared to chemical rockets. The Isp represents the amount of thrust per unit of propellant flow. While chemical rockets typically have an Isp in the 200-450 seconds range, MFPD systems can theoretically achieve Isp values exceeding 10,000 seconds [1]. * Continuous Thrust: Unlike pulsed propulsion systems, like the pulsed inductive thruster, MFPD systems can offer continuous thrust. This ensures smoother trajectory adjustments and potentially quicker transits [17]. * Fuel Flexibility: MFPD thrusters can use a variety of propellants, including noble gases like xenon or argon, as well as more abundant resources like hydrogen. This provides flexibility in mission planning and potential for in-situ resource utilization [1]. ### Potential Scalability and Fuel Efficiency Benefits The scalability of the MFPD system is another significant advantage: * Adaptability to Various Mission Profiles: MFPD systems can be designed for both small-scale missions (like satellite station-keeping) or large-scale interplanetary missions [3]. * Fuel Efficiency: MFPD thrusters have the potential to be much more fuel-efficient than their chemical counterparts. 
Their high lsp ensures that a greater proportion of onboard propellant is converted into kinetic energy. As a result, for long-duration missions, spacecraft can either carry less fuel or allocate more space for payloads [11]. ### Thrust Capabilities and Range Predictions While MFPD systems offer higher specific impulses, their thrust is typically lower than that of chemical rockets. However, this trade-off is acceptable for many missions since the continuous thrust over extended periods can result in higher final velocities: * Thrust-to-Weight Ratio: While MFPD systems might not match the high thrust-to-weight ratios seen in chemical rockets, their continuous operation can result in higher delta-v over extended missions. For deep-space missions, achieving high delta-v is often more critical than immediate high thrust [3]. * Predicted Range: Given a specific propellant mass and power source, the operational range of an MFPD-propelled spacecraft can significantly surpass that of a chemically propelled one. For missions beyond Mars, or even for asteroid mining, MFPD systems provide a compelling propulsion alternative [3]. The MFPD system offers a host of advantages that make it a compelling choice for future space missions. Its higher specific impulse, fuel flexibility, and scalability make it adaptable to various mission profiles, from satellite adjustments to interplanetary exploration. While it might not replace chemical rockets for short-duration missions or those requiring immediate high thrust, its potential for long-duration, deep-space missions is undeniable. ## 8 Challenges and Limitations of the MFPD System While the MFPD system offers notable advantages, it also presents significant challenges. These challenges span the gamut from technical intricacies to materials and safety concerns. ### Technical Challenges in Achieving Controlled Fusion in Space Achieving controlled fusion in the expanse of space presents a set of unique challenges: * Sustained Magnetic Confinement: While magnetic confinement is a cornerstone of the MFPD system, maintaining stable confinement for extended durations without external interferences is nontrivial. The dynamic nature of plasma and its interactions with magnetic fields can lead to instabilities like kink or drift modes [12]. * Breakeven Point: For fusion to be a viable energy source for propulsion, the fusion reactions must release more energy than is input into the system. Achieving this breakeven point remains one of the principal challenges of fusion-based propulsion [3]. * Propellant Feed and Ignition: Ensuring a steady supply of fusion fuel and achieving consistent ignition in the variable conditions of space require intricate control systems and reliable fuel feed mechanisms [12]. ### Materials and Safety Considerations The intense conditions within the MFPD system place substantial demands on the materials used: * Radiation and Heat Resistance: The fusion process emits copious amounts of radiation and heat. Materials used in constructing the thruster must not only withstand these conditions but also maintain integrity over prolonged durations [3]. * Neutron Damage: Fusion reactions, especially those involving deuterium and tritium, emit high-energy neutrons. These neutrons can cause damage to materials, leading to potential system failures over time [12]. * Safety Protocols: In the unlikely event of a containment failure, there need to be mechanisms in place to ensure the safety of the spacecraft and its occupants [12]. 
### Power and Control System Requirements The operation of an MFPD system necessitates robust power and control systems: * High Power Demands: Achieving and maintaining the conditions for fusion requires significant amounts of energy. A reliable and high-output power source is imperative for the MFPD system's operation [3]. * Fine-Tuned Control Mechanisms: The dynamic nature of plasma and the need for precise magnetic field adjustments call for intricate control systems. These systems must be capable of real-time adjustments to ensure optimal and safe operation [12]. * Redundancy and Fail-Safes: Given the critical nature of propulsion, especially on long-duration missions, having redundant systems and fail-safe mechanisms are essential to mitigate potential system failures [12]. The MFPD system, while promising, is not without its challenges. From the technical intricacies of achieving controlled fusion in space to the demanding material requirements, understanding these challenges is vital for the system's advancement. However, with continued research and development, many of these challenges can be addressed, paving the way for a new era of space propulsion. ## 9 Conclusion The endeavor to identify efficient and sustainable propulsion techniques is at the forefront of challenges faced by aerospace engineering. As humanity's aspirations soar, aiming at the distant realms of our solar system and potentially at neighboring stars, the confines of traditional propulsion systems become starkly evident. This paper delves into the potential of the Magnetic Fusion Propulsion Drive (MFPD), driven by the immense power of fusion reactions, to overcome these constraints. Harnessing the remarkable energy densities that fusion offers, the MFPD dramatically reduces the required propellant mass and extends the durations for which thrust can be maintained, allowing for continuous thrust over prolonged intervals. This breakthrough has profound implications: substantially accelerated travel times, augmented mission adaptability, and enhanced payload capabilities. Through the lens of this research, we elucidated the underlying principles of the MFPD and explored both its promising advantages and the challenges it might pose. Our comparative analysis underscored the MFPD's superior efficiency. The example calculations vividly demonstrated its potential, especially when benchmarked against other contemporary propulsion methods. For missions as close as Mars or as distant as Proxima Centauri, fusion-driven propulsion outperforms, implying its suitability for a spectrum of space ventures. However, the path to realizing this vision is not devoid of hurdles. Technical uncertainties persist, and a multitude of engineering challenges await solutions. But with the synergy of fusion science, intricate plasma dynamics, and avant-garde magnetic confinement techniques, we stand on the cusp of a propulsion renaissance that might redefine our engagement with the vast expanse of space. On the brink of a transformative era in space exploration, catalyzed by groundbreaking innovations such as the MFPD, we're reminded of the limitess vistas human innovation can unveil and the continually broadening scope of our shared dreams.
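As a closing illustration, the short script below is a minimal sketch, not part of the original study, that reproduces the order-of-magnitude figures behind the example calculations of Sections 6.5 and 6.6 and the specific-impulse comparison of Section 7.1: the propellant mass fraction implied by the rocket equation for several assumed \(I_{sp}\) values, the duration of the 0.001 g acceleration phase, the time-dilation factor at 10% of \(c\), and the coasting time to Proxima Centauri. The chemical and MFPD \(I_{sp}\) values are those quoted in the text; the NTR value of 900 s is a typical figure assumed here only for comparison.

```python
import math

G0 = 9.81         # standard gravity, m/s^2
C = 3.0e8         # speed of light, m/s
YEAR = 3.156e7    # seconds per year (approx.)

def propellant_mass_fraction(delta_v, isp):
    """Fraction of initial mass that must be propellant, from the rocket equation (Eqs. 43-46)."""
    return 1.0 - math.exp(-delta_v / (isp * G0))

delta_v = 0.1 * C  # target of 10% of c, as in Section 6.5

# Assumed representative specific impulses (seconds)
for label, isp in [("Chemical, 450 s", 450.0),
                   ("NTR, 900 s (assumed typical value)", 900.0),
                   ("MFPD, 5,000 s", 5000.0),
                   ("MFPD theoretical, 10,000 s", 10000.0)]:
    pmf = propellant_mass_fraction(delta_v, isp)
    print(f"{label:38s} propellant mass fraction = {pmf:.9f}")

# Duration of the acceleration phase at a = 0.001 g (Eqs. 47-48)
a = 0.001 * G0
t_accel = delta_v / a
print(f"Acceleration phase: {t_accel:.2e} s (~{t_accel / YEAR:.0f} years)")

# Time dilation factor at 0.1 c (Section 6.5.5)
gamma = 1.0 / math.sqrt(1.0 - 0.1 ** 2)
print(f"Time dilation factor at 0.1 c: {gamma:.3f}")

# Coasting time over 4.24 light-years at 0.1 c (Section 6.6.3)
print(f"Coasting time to Proxima Centauri at 0.1 c: {4.24 / 0.1:.1f} years")
```

Running the sketch shows that even at \(I_{sp}=10{,}000\,\mathrm{s}\) the propellant fraction required for 0.1 c is effectively 1, which is the point made in Section 6.5.3.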
2309.17155
Immersed figure-8 annuli and anyons
Immersion (i.e., local embedding) is relevant to the physics of topologically ordered phases through entanglement bootstrap. An annulus can immerse in a disk or a sphere as a ``figure-8", which cannot be smoothly deformed to an embedded annulus. We investigate a simple problem: is there an Abelian state on the immersed figure-8 annulus, locally indistinguishable from the ground state of the background physical system? We show that if the answer is affirmative, a strong sense of isomorphism must hold: two homeomorphic immersed regions must have isomorphic information convex sets, even if they cannot smoothly deform to each other on the background physical system. We explain why to care about strong isomorphism in physical systems with anyons and give proof in the context of Abelian anyon theory. We further discuss a connection between immersed annuli and anyon transportation in the presence of topological defects. In appendices, we discuss related problems in broader contexts.
Bowen Shi
2023-09-29T11:43:48Z
http://arxiv.org/abs/2309.17155v2
# Immersed figure-8 annuli and a strong isomorphism conjecture ###### Abstract Immersion (i.e., local embedding) is relevant to the physics of topologically ordered phases through entanglement bootstrap. An annulus can immerse in a disk or a sphere as a "figure-8", which cannot be smoothly deformed to an embedded annulus. We investigate a simple problem: is there an Abelian state on the figure-8 annulus? We show that if the answer is affirmative, a strong sense of isomorphism must hold: two homeomorphic immersed regions must have isomorphic information convex sets, even if they cannot smoothly deform to each other on the background physical system. We explain why to care about strong isomorphism and give proof in the context of Abelian anyon theory. We further discuss a connection between immersed annuli and anyon transportation in the presence of topological defects. In appendices, we discuss related problems in broader contexts. ## I Introduction It is fascinating that many universal properties of a many-body system, such as topologically ordered phases, symmetry-protected phases, and quantum critical points, can be extracted from a single wave function [1; 2; 3; 4; 5; 6; 7; 8; 9; 10; 11]. A further surprise is that in recent years, we have seen a hope to derive [12; 13] these rules from conditions on a state: entanglement bootstrap [14; 15; 16; 17] is a framework that aims to do this. In entanglement bootstrap, we start with a single wave function (or reference state) on a topologically trivial region, e.g., a ball or a sphere. We impose some conditions on the wave function as the starting point and derive (bootstrap) laws of the emergent theory from there. On the way to deriving the emergent physical laws, we identify information-theoretic concepts (forms of many-body entanglement), such as information convex sets and the modular commutator. The goal of entanglement bootstrap has similarities and distinctions with quantum field theory, bootstrap in other physical contexts, renormalization group, categorical theory, and solvable models. Some of these aspects have been discussed in recent works on this subject. In this work, we investigate a simple aspect of entanglement bootstrap, the role of immersion. As we shall explain, immersion, i.e., local embedding of a topological space to another, is natural from the internal theoretical structure of the entanglement bootstrap. (Here, we are mainly interested in the case that a topological space is immersed in a background of the same space dimension. See the immersed "figure-8" annulus in Fig. 1 for an example.) The importance of immersion is noticed only gradually: see [17] for a state-of-the-art discussion and [18] for an early application with the terminology "immersion" unnoticed. We suspect that the full extent of the benefits we can reap from immersion remains largely hidden beneath the surface. To motivate the usage of immersed regions, let us first recall why embedded regions are of interest. For systems with anyons, it is natural to consider states locally indistinguishable from the ground state. On closed manifolds, there can be multiple locally indistinguishable ground states. Similarly, it is meaningful to think of such states reduced to subsystems. In entanglement bootstrap, this intuition is made precise with the concept "information convex
2301.13511
Privacy-Preserving Online Sharing Charging Pile Scheme with Different Needs Matching
With the development of electric vehicles, more and more electric vehicles have difficulties in parking and charging. One of the reasons is that the number of charging piles is difficult to support the energy supply of electric vehicles, and a large number of private charging piles have a long idle time, so the energy supply problem of electric vehicles can be solved by sharing charging piles. The shared charging pile scheme uses Paillier encryption scheme and improved scheme to effectively protect user data. The scheme has homomorphism of addition and subtraction, and can process information without decryption. However, considering that different users have different needs, the matching is carried out after calculating the needs put forward by users. This scheme can effectively protect users' privacy and provide matching mechanisms with different requirements, so that users can better match the appropriate charging piles. The final result shows that its efficiency is better than the original Paillier scheme, and it can also meet the security requirements.
Zhiyu Huang
2023-01-31T09:58:06Z
http://arxiv.org/abs/2301.13511v1
# Privacy-Preserving Online Sharing Charging Pile Scheme with Different Needs Matching ###### Abstract With the development of electric vehicles, more and more electric vehicles have difficulties in parking and charging. One of the reasons is that the number of charging piles is difficult to support the energy supply of electric vehicles, and a large number of private charging piles have a long idle time, so the energy supply problem of electric vehicles can be solved by sharing charging piles. The shared charging pile scheme uses Paillier encryption scheme and improved scheme to effectively protect user data. The scheme has homomorphism of addition and subtraction, and can process information without decryption. However, considering that different users have different needs, the matching is carried out after calculating the needs put forward by users. This scheme can effectively protect users' privacy and provide matching mechanisms with different requirements, so that users can better match the appropriate charging piles. The final result shows that its efficiency is better than the original Paillier scheme, and it can also meet the security requirements. Private charging pile sharing service, Privacy protection, Demand analysis, Homomorphic encryption, Internet of things ## 1 Introduction With the advancement of "carbon peak and carbon neutral" goals and the development of electric vehicles (EVs), EVs have the potential to effectively reduce air pollution caused by daily transportation [1]. As the number of EVs on the road increases, the demand for charging infrastructure also increases. However, the current prevalence and coverage of charging stations are insufficient to meet this demand[2-3]. Surveys have shown that between 2015 and 2020 in Table 1, the number of EVs and charging stations has been steadily increasing, with a particularly notable increase in the number of private charging stations, from 8,000 in 2015 to 874,000 in 2020[16]. However, compared to the growth rate of EVs, the number of charging stations is still far from adequate. As a result, for those EV users who are unable to install charging stations, the problem of charging difficulties is becoming increasingly apparent. In 2020, the Chinese government proposed to include charging stations as one of the fields of the nation's "new infrastructure", with an estimated investment of approximately 10 billion to build charging stations. According to international data surveys, it is expected that by 2030, there will be 5 million EVs on the road in California alone, and 12-24 million private charging stations and 10-20 million public charging stations globally. Charging facilities have become an indispensable infrastructure in new energy development planning [4-5]. Considering the high installation cost of charging piles [6], other technologies are needed to make up for the shortcomings of charging piles. With the rapid development of Internet of Things technology, Internet of Things devices connect everything and have gradually entered the mode of Internet of Everything [7-9]. As one of the applications of the Internet of Things, the Internet of Vehicles can realize the information exchange between vehicles and provide certain research value and commercial value. The application of V2X technology of the Internet of Vehicles in cloud (edge) computing is the cornerstone of building a smart city and smart transportation [10-12]. At present, the research on shared charging pile is still in the initial stage. 
The traditional charging pile sharing scheme generally consists of three entities, including charging pile provider, electric vehicle and matching server, in which both buyers and sellers upload their own information to the server for matching calculation, and the server returns the matching results to both buyers and sellers, as shown in Fig 1. However, in the traditional charging pile sharing scheme, the user's information is published or uploaded to the server through simple encryption, and the server needs to decrypt the participants' information to get their plaintext information. Therefore, in the traditional scheme, the user's privacy may be attacked and leaked. In the traditional charging pile sharing system, all information will be published directly on the Internet. One of the biggest problems faced by the system is that users will expose their private information to the public platform when they apply for it. For example, a malicious user has used a certain charging station. The charging pile is marked and recorded, and he may not use it directly through the platform when he knows that the charging pile is unmanaged for a period of time. At the same time, there is also the possibility that the shared service platform exposes the privacy of customers. Because the location information of electric vehicles may include workplaces, home addresses, special hospitals or frequently visited entertainment venues, buyers' hobbies and health status information are leaked, and the privacy of charging pile sellers will also be greatly affected threaten. On the other hand, once the information of buyers and sellers is obtained by malicious attackers, not only will there be profitable and targeted advertisements, but also related work and home addresses will be threatened, and may even lead to personal safety. Therefore, in order not to disclose the private information of customers, it is necessary to design a secure service platform. This paper proposes the use of homomorphic encryption technology to protect the privacy of users. In order to meet the above challenges, the main contributions of this paper are summarized as follows: 1. We use homomorphic encryption technology to encrypt user information, and at the same time use homomorphic characteristics to process ciphertext, and match the obtained results in the cloud server. In the public service platform, users' effective information will not be exposed, and matching can be completed efficiently. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Year & 2015 & 2016 & 2017 & 2018 & 2019 & 2020 \\ \hline Number of electric vehicle & 570 & 1280 & 1840 & 2740 & 3890 & 4840 \\ \hline Number of public charging piles & 58 & 149 & 240 & 387 & 516 & 807 \\ \hline Number of private charging piles & 8 & 63 & 232 & 477 & 703 & 874 \\ \hline \end{tabular} \end{table} Table 1: Approximate number of electric vehicles and charging piles in the world 2) For users with different needs, we designed the demand parameter $\(\backslash\)omega$. Through matching calculation, we can get the matching index parameter W. By comparing W, we can get the most suitable buyers and sellers. This requirement parameter can better match users with different requirements. 3) We use chinese remainder theorem (CRT) to speed up the modular exponentiation in the decryption process of cloud server, CRT is used to convert a\({}^{\text{b}}\) from Z\({}_{\text{n}^{\text{o}}2}\) to Z\({}_{\text{p}^{\text{o}}2}\) and Z\({}_{\text{q}^{\text{o}}2}\) for calculation. 
We use Paillier scheme with optimized parameters, which can speed up the encryption calculation although it loses homomorphism. The remainder of this paper is organized as follows: In Section 2, we introduce the homomorphic encryption, parameter optimization of Paillier scheme and china remainder theorem. In Section 3, we introduce the system model and present the proposed scheme. In Section 4, we describe the performance evaluation results. Finally, we conclude the paper in Section 5. ## 2 Related work In this section, homomorphic encryption technology, parameter optimization of Paillier scheme and Chinese remainder theorem are introduced. ### Homomorphic Encryption Encryption technology is often used to protect privacy, among which homomorphic encryption is a special encryption method, which has the characteristics of directly calculating encrypted data, such as addition and multiplication, and will not reveal any information of the original text during the calculation process. And the calculated result is also encrypted, and the result obtained after decrypting the processed ciphertext with the key is exactly the result obtained after processing the original text. Paillier scheme has the homomorphism of addition/subtraction. For plaintext m1 and m2, there is a function E () that makes E(m1+m2)=E(m1)\(\backslash\)dot E(m2). Paillier scheme satisfies the standard semantic security of encryption scheme[13], that is, the ciphertext is indistinguishable (IND-CPA) under the attack of selected plaintext, that is, the information about plaintext will not be leaked in ciphertext. Its security is proved by the hypothesis of deterministic composite residue. So far, no algorithm can be cracked in polynomial time, so Paillier encryption scheme is considered to be safe. The detailed process includes the following steps. \(\bullet\)**KeyGen() :** Pick two prime numbers p and q compute n = p * q and \(\lambda=\)lcm(p-1,q-1). Choose a random number g, and gcd(L(g\({}^{\text{\tiny A}}\)\(\lambda\) mod n\({}^{\text{\tiny A}}\)2),n) = 1, computer \(\mu=\)(L(g\({}^{\text{\tiny A}}\)\(\lambda\) mod n\({}^{\text{\tiny A}}\)2))\({}^{\text{\tiny A}}\)(-1) mod n, where L(x) = (x-1)/n the public and private keys are pk = (n,g) and sk =(\(\lambda\), \(\mu\)), respectively. \(\bullet\)**Encrypt() :** Enter the plaintext message m and select the random number r. Encrypt plaintext: \[\text{c}=\text{g}^{\text{m}}\cdot\text{r}^{\text{n}}\text{ mod n}^{2}\text{,} \tag{1}\] \(\bullet\)**Decrypt() :** Enter ciphertext C. Calculate plaintext message: \[\mathrm{m}=\mathrm{L}(c^{\lambda}\,\mathrm{mod}\,\mathrm{n}^{2})\cdot\mu\, \mathrm{mod}\,\mathrm{n}. \tag{2}\] ### Parameter Optimization of Paillier scheme In order to simplify computation without affecting the algorithm's correctness, the algorithm may take g=n+1 during the key generation phase[14]. This allows for the simplification of the calculation of \(\mathrm{Sg}^{\alpha}\mathrm{m}\mathrm{S}\) during the encryption process. For \(\mathrm{g}^{\alpha}\mathrm{m}\mathrm{=}\mathrm{(n+1)}^{\alpha}\mathrm{m}\), using the binomial theorem, we can express \(\mathrm{g}^{\alpha}\mathrm{m}\) as the sum of the product of the binomial coefficients and the corresponding powers of \(\mathrm{n}\) and \(1\), where each term of the sum can be calculated efficiently. 
\[(n+1)^{m}\bmod n^{2}=\binom{m}{0}n^{m}+\binom{m}{1}n^{m-1}+\cdots+\binom{m}{m-2}n^{2}+mn+1\bmod n^{2}, \tag{3}\] Since the first \(m-1\) terms are all multiples of \(n^{2}\), they vanish under the modulo-\(n^{2}\) operation, so this modular exponentiation ultimately reduces to a single modular multiplication, which accelerates the encryption process. \[c=(1+mn)\cdot r^{n}\bmod n^{2}, \tag{4}\] Decrypt ciphertext c (with \(\mu=\lambda^{-1}\bmod n\), consistent with Eq. (2) when \(g=n+1\)): \[m=\frac{(c^{\lambda}-1)\bmod n^{2}}{n}\cdot\mu\bmod n. \tag{5}\]
### Chinese remainder theorem
The Chinese Remainder Theorem (CRT), also known as the Sunzi Theorem, originates from the ancient Chinese mathematical treatise "Sunzi Suanjing" and describes an isomorphism between two algebraic spaces: a space can be decomposed into several mutually independent subspaces, and the original space corresponds one-to-one to the decomposed one. Specifically, when \(n=pq\) with \(p\) and \(q\) relatively prime, there is an isomorphism \(Z_{n}\cong Z_{p}\times Z_{q}\), i.e., \(a\bmod n\) corresponds to the pair \((a\bmod p,\ a\bmod q)\), so operations modulo \(n\) can be transformed into operations modulo \(p\) and modulo \(q\). Computing in this form is more efficient, and the property can therefore be used to accelerate modular exponentiation modulo \(n\).
## 3 The Proposed Scheme
### System Model
The matching scheme for shared charging piles consists of multiple electric vehicle buyers, multiple charging pile sellers, multiple edge proxy servers, a cloud server and a certificate certification center (CA). All entities communicate through the mobile network. Figure 1 describes our system model. **Electric vehicles** (EVs): As the users of shared charging piles, EVs send out a charging request when a charging pile is needed; the EV set is denoted \(\{1,\cdots,i,\cdots,I\}\). After receiving a response, an EV obtains the public key used for information encryption, and its IoT terminal equipment encrypts the information to be sent with the public key and sends it to the nearest proxy server. **Private charging piles** (PCPs): As the providers of shared charging piles, there are J private charging piles in a given area, and the set of charging piles is denoted \(\{1,\cdots,j,\cdots,J\}\).
### Requests and Information Encryption
In the shared charging pile matching system, the certificate certification center is responsible for managing and issuing public keys and maintaining public key information. Electric vehicle buyers and charging pile sellers provide information including location, price and demands. A buyer i needs to provide its own location \((x_{i},y_{i})\), proposed price \(r_{i}\), farthest acceptable distance \(d_{i\max}\), demands \(\alpha_{i=1,2,\cdots,n}\in\{0,1\}\) and the prices \(\alpha^{\prime}_{i=1,2,\cdots,n}\) proposed for those demands.
The seller of charging piles needs to provide the location (\(\mathrm{x_{j}}\),\(\mathrm{y_{j}}\)), the price (\(\mathrm{r_{j}}\)), the demand \(\mathrm{\alpha_{j=1,2},\cdots,n}\)={0, 1} of charging piles and the price \(\mathrm{\alpha^{\prime}}_{\mathrm{i=1,2},\cdots,n}\). After the user sends a request, the certificate certification center will send the public key pk to the user, and the electric vehicle buyer and the charging pile seller will select the random numbers \(\mathrm{r_{i}}\) and \(\mathrm{r_{j}}\) respectively, and use the public keys pk, \(\mathrm{r_{i}}\) and \(\mathrm{r_{j}}\) to encrypt the provided information. \[C_{m_{i}}=\mathrm{Enc}(m_{i})=g^{m_{i}}\cdot r_{1}^{n}\bmod n^{2}, \tag{6}\] \[C_{m_{i}}=\mathrm{Enc}(m_{j})=g^{m_{j}}\cdot r_{j}^{n}\bmod n^{2}, \tag{7}\] After encryption, \(C_{m}\) are obtained. The buyers of electric vehicles package \(C_{m}\) and \(w\) and send them to the proxy server. The sellers of charging piles send \(C_{m}\) to the proxy server. Except at the user end, the private information of users is all the information obtained after encryption and processing. ### Ciphertext Processing The proxy server has a certain computing power, and can use the homomorphic addition/subtraction characteristics of pailiier encryption scheme to process the ciphertext information as well as the plaintext. The process is as follows. \[C_{a_{m_{ij}}}=C_{m_{i}}\cdot C_{m_{j}}=g^{m_{i}+m_{j}}\cdot(r_{i}\cdot r_{j})^ {n}\bmod n^{2}, \tag{8}\] \[C_{d_{m_{ij}}}=C_{m_{i}}/C_{m_{j}}=g^{m_{i}-m_{j}}\cdot(r_{i}/r_{j})^{n}\bmod n ^{2}, \tag{9}\] The demand of buyers and sellers is encrypted by homomorphic addition, and the information such as location, price and demand price is encrypted by homomorphic subtraction to get the sum of the processed information and the difference of the information. Among them, the main function of (8) is to judge whether the demand of electric vehicle buyers can be met. Information difference is a difference comparison between the information provided by buyers and sellers, which can reflect the information similarity of both parties. The smaller the information difference, the closer the information provided by the charging pile seller is to the preference of the electric vehicle buyer, which is more suitable for matching and has a higher matching probability. On the contrary, the larger the information difference between the two parties means that the user's matching probability will be smaller. ### Information Decryption The cloud server owns the private key sk issued by CA, including p and q. In Paillier cryptosystem, the main cost of decryption is modular exponentiation under Z\({}_{n^{*}2}\). With the private key (decomposition p, q of n), the modular exponentiation under Z\({}_{n^{*}2}\) can be converted into Z\({}_{p^{*}2}\) and Z\({}_{q^{*}2}\) by CRT. The optimization function using CRT is expressed as L\({}_{p(x)}\)=(x-1)/p and L\({}_{q(x)}\)=(x-1)/q respectively, and the decryption process needs to be divided by using the following mathematical principles. \[h_{p}=L_{p}(g^{p-1}\ mod\ p^{2})^{-1}\ mod\ p, \tag{10}\] \[h_{q}=L_{q}(g^{q-1}\ mod\ q^{2})^{-1}\ mod\ q, \tag{11}\] \[m_{p}=L_{p}(c^{p-1}\ mod\ p^{2})h_{p}\ mod\ p, \tag{12}\] \[m_{q}=L_{q}(c^{q-1}\ mod\ q^{2})h_{q}\ mod\ q, \tag{13}\] \[\mathrm{m}=\mathrm{CRT}(m_{p},m_{q})\ \mathrm{mod\ pq}, \tag{14}\] CRT(m\({}_{p}\),m\({}_{q}\) mod pq) is to use CRT to calculate the modulus index, and the detailed process is as follows. 
For a modular exponentiation \(a^{b}\bmod n\) with \(n=pq\), CRT is used to move the computation from \(Z_{n}\) to \(Z_{p}\) and \(Z_{q}\). Compute the image of \(a^{b}\) in \(Z_{p}\) as \(m_{p}=a_{p}^{\,b\bmod(p-1)}\bmod p\), where \(a_{p}=a\bmod p\); the exponent may be reduced by Euler's theorem, since \(\varphi(p)=p-1\). The image \(m_{q}=a_{q}^{\,b\bmod(q-1)}\bmod q\), with \(a_{q}=a\bmod q\), is computed in the same way. The two results \(m_{p}\) and \(m_{q}\) are computed separately and then aggregated back: \[m=m_{p}\cdot q^{-1}(\bmod p)\cdot q+m_{q}\cdot p^{-1}(\bmod q)\cdot p, \tag{15}\] Because p and q are coprime, \(q^{-1}(\bmod p)\cdot q+p^{-1}(\bmod q)\cdot p\equiv 1\ (\bmod n)\). Substituting into formula (15) gives: \[m=m_{p}+(m_{q}-m_{p})\cdot p^{-1}(\bmod q)\cdot p, \tag{16}\] In the Paillier scheme, CRT is therefore used to speed up decryption and recover the plaintext m. After receiving the processed information, the cloud server uses the private key sk together with the Chinese remainder theorem to convert the modular operations under \(Z_{n^{2}}\) into operations under \(Z_{p^{2}}\) and \(Z_{q^{2}}\), and then decrypts: \[a_{m_{ij}}=\mathrm{Dec}(C_{a_{m_{ij}}})=m_{i}+m_{j}, \tag{17}\] \[d_{m_{ij}}=\mathrm{Dec}(C_{d_{m_{ij}}})=m_{i}-m_{j}. \tag{18}\]
### System Matching
After decryption with the CRT-optimized scheme, the sums and differences of the information are obtained. First, the distance between buyer i and seller j is calculated: \[d_{d_{ij}}=\sqrt{d_{x_{ij}}^{2}+d_{y_{ij}}^{2}}, \tag{19}\] The maximum acceptable distance of EV buyer i is also encrypted and then decrypted; because it carries no specific information such as location or price, it is not regarded as important private information, and the cloud server obtains the same plaintext \(d_{i\max}\) as the user. To meet the matching conditions of buyer i, the direct distance between buyer and seller is compared first. If \(d_{d_{ij}}<d_{i\max}\), the distance between buyer i and seller j is less than the maximum distance accepted by buyer i, and the matching condition is met. If \(d_{d_{ij}}\geq d_{i\max}\), the distance between i and j does not meet the condition and seller j cannot be matched with this buyer. Unqualified sellers are removed by this comparison before proceeding to the next step. Demand analysis is an interesting part of this paper: different electric vehicle buyers may have different needs. Distance and price are information that buyer i and seller j must provide; in addition, other demands \(\alpha_{i}\) and \(\alpha_{j}\) can be set, and their sum \(a_{ij}=\alpha_{i}+\alpha_{j}\) is obtained through the homomorphic addition above. There are three situations. Case 1: \(a_{ij}=0\) indicates that neither user has this requirement. Case 2: \(a_{ij}=1\) means that only one of buyer i and seller j has the demand; in Cases 1 and 2 the corresponding pair (i, j) is removed because the demand cannot be satisfied. Case 3: \(a_{ij}=2\) means that i has the demand and j can also provide it. In this case, whether the pair is usable is judged by the demand price difference \(d_{\alpha_{ij}}\): i) when \(d_{\alpha_{ij}}<0\), the price proposed by i is less than that proposed by j, and i and j cannot match; ii) when \(a_{ij}=2\) and \(d_{\alpha_{ij}}>0\) hold at the same time, i and j meet the matching conditions, and the corresponding pair is added to the matching set. After obtaining the matching set that satisfies distance and demand, the cloud server performs the final price matching. For buyer i, the price difference \(d_{r_{ij}}\) between the prices provided by i and each seller j in the set is obtained through formula (9). Because the buyer's preference weights contain no information that could leak the location, the weights w are likewise treated as non-sensitive information that the cloud server obtains only after decryption with the private key sk. At this point, the information held by the cloud server includes the location difference \(d_{d_{ij}}\), the demand sums \(a_{ij}\), the price difference \(d_{r_{ij}}\), the demand price differences \(d_{\alpha_{ij}}\), the buyer's preference weights w, and the number k of sellers j in the matching set. When matching buyer i, each seller j in the matching set is evaluated to obtain \(W_{ij}\): \[W_{ij}=d_{d_{ij}}\cdot w_{d_{i}}+d_{r_{ij}}\cdot w_{r_{i}}+\sum_{n=1}^{k}d_{\alpha_{ijn}}\cdot w_{\alpha_{ijn}}, \tag{20}\] For buyer i, \(W_{ij}\) serves as the matching evaluation index for judging how suitable seller j is. The \(W_{ij}\) values are sorted and the smallest one, \(W_{ij\min}\), is selected; the corresponding seller j is the most suitable matching object in terms of price and demand, so i is matched with j.
### Matching Result Return
After the matching of buyer i is completed, the cloud server sends a request to the successfully matched i and j. Users i and j use the Paillier scheme with optimized parameters to generate public keys \(pk_{i}\) and \(pk_{j}\) and send them to the cloud server. The cloud server generates a random number r and encrypts the private key sk with \(pk_{i}\) and \(pk_{j}\): \[C=\mathrm{Enc}_{pk}(sk)=(1+sk\cdot n)\cdot r^{n}\bmod n^{2}, \tag{21}\] The matching result is then packaged and sent to the proxy server. The proxy server stores the encrypted address information uploaded by the users. After receiving the packaged result and the encrypted private key, the proxy server finds, through the matching result (i, j), the corresponding encrypted address information \(C_{Loc_{i}}\) and \(C_{Loc_{j}}\) and the encrypted price information \(C_{r_{j}}\) of seller j. The proxy server packages the ciphertexts \(C_{Loc_{j}}\) and \(C_{r_{j}}\) together with the private key sk encrypted under \(pk_{i}\) and sends them to buyer i, and packages the ciphertext \(C_{Loc_{i}}\) together with the private key sk encrypted under \(pk_{j}\) and sends them to seller j. Buyer i and seller j use their own private keys to decrypt and obtain sk: \[sk=\mathrm{Dec}(C)=\frac{(c^{\lambda}-1)\bmod n^{2}}{n}\cdot\mu\bmod n, \tag{22}\] Then sk is used to decrypt the received address and price ciphertexts \(C_{Loc_{j}}\) and \(C_{r_{j}}\) (for the buyer) and \(C_{Loc_{i}}\) (for the seller):
\[\text{m}=\text{L}(\mathsf{c}^{\lambda}\,\text{mod}\,\text{n}^{2})\cdot\mu\, \text{mod}\,\text{n}, \tag{23}\] At this time, the buyer I is matched, and the I+1th buyer is matched in the next round, and the matched seller J is eliminated from the matching set until the last seller set is empty, indicating that the current round of matching has ended. Re-apply to CA, get a new public and private key pair, and start the next round of matching. ## 4 Performance Evaluation Results ### Number Analysis In this section, we consider a 3km*3km scene with different numbers of I buyers and J sellers. All the simulations are done on python and on 2.5 GHz Inter Core i5-7300HQ CPU and 32G RAM. Finally, all the simulation results are averaged in 50 simulations, and the consistent results are finally obtained. Paillier encryption algorithm is a public key encryption algorithm based on number theory, which has high performance in security and time cost. The following is the time cost of operating on a piece of data of the original Paillier encryption algorithm: Randomly generate public key and private key: O(1) Encryption operation: O(log n) Decryption operation: O(log n) ### Addition/subtraction operation (adding/subtracting two ciphertext numbers): O(1) Where n refers to the length of the public key (the number of digits of the modulus). In our simulation experiment, for I buyers and J buyers, the time cost from issuing an application to obtaining the corresponding public key is O(1). For all users, because the encryption operation uses the original Paillier encryption scheme, its time cost corresponds to O(log n). Our setting is that all users encrypt on the terminal equipment of the Internet of Things, and each user does it independently, so the time cost is fixed regardless of the number of matching users. When a user encrypts n data, the time cost is n*O(log n). When the terminal equipment of the Internet of Things encrypts the information to be sent, the proxy server will add/subtract the ciphertext data, and the corresponding operation is O(1). Every buyer I in the proxy server needs to add and subtract with the seller J. For J sellers and K pieces of information, the time cost is J*k*O(1). For I buyers, the total time cost required for calculation in the proxy server is I*J*k*O(1). Decrypt each calculated result in the cloud server. Under the condition that all sellers meet the requirements, the time cost of each decryption operation is O(log n) corresponding to I*J*k calculation results. After decryption, we execute the matching algorithm, calculate the matching index wij for M users who satisfy user I, and then get the minimum matching index wijmin for user I after sorting it, with the time cost of j*O(1). At this time, the buyer I and the seller J are successfully matched. The time cost corresponding to the above process is shown in Table 2. In the process of decryption, we use CRT to speed up the calculation process. The original Paillier encryption algorithm needs to do modular exponential operation under Z\({}_{\mathfrak{n}^{\times}2}\). However, when the cloud server knows the private key sk and the corresponding coefficients p and q, the modular exponentiation under Z\({}_{\mathfrak{n}^{\times}2}\) is transformed to Z\({}_{\mathfrak{p}^{\times}2}\) and Z\({}_{\mathfrak{q}^{\times}2}\), thus improving the encryption and decryption efficiency. 
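To make the CRT-accelerated decryption described above concrete, the following self-contained sketch implements the parameter-optimized encryption of Eq. (4), the corresponding decryption of Eqs. (2)/(5), and the CRT decryption of Eqs. (10)-(16), and checks the additive homomorphism of Eqs. (8)-(9). The toy primes and messages are illustrative only and far too small for real security; this is a sketch of the textbook scheme, not the evaluated implementation.

```python
from math import gcd, lcm
import random

# Toy key generation (illustrative primes only -- real keys use primes of ~1024 bits)
p, q = 104729, 104723
n = p * q
n2 = n * n
lam = lcm(p - 1, q - 1)      # lambda = lcm(p-1, q-1)
g = n + 1                    # parameter-optimized choice g = n + 1
mu = pow(lam, -1, n)         # mu = lambda^{-1} mod n, valid because g = n + 1

def encrypt(m):
    """Parameter-optimized encryption: c = (1 + m*n) * r^n mod n^2 (Eq. 4)."""
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (1 + m * n) % n2 * pow(r, n, n2) % n2

def decrypt(c):
    """Decryption m = L(c^lambda mod n^2) * mu mod n (Eqs. 2 and 5)."""
    return (pow(c, lam, n2) - 1) // n * mu % n

def decrypt_crt(c):
    """CRT-accelerated decryption working modulo p^2 and q^2 (Eqs. 10-16)."""
    p2, q2 = p * p, q * q
    hp = pow((pow(g, p - 1, p2) - 1) // p, -1, p)        # Eq. (10)
    hq = pow((pow(g, q - 1, q2) - 1) // q, -1, q)        # Eq. (11)
    mp = (pow(c, p - 1, p2) - 1) // p * hp % p           # Eq. (12)
    mq = (pow(c, q - 1, q2) - 1) // q * hq % q           # Eq. (13)
    return (mp + (mq - mp) * pow(p, -1, q) % q * p) % n  # Eq. (16)

m1, m2 = 1234, 567
c1, c2 = encrypt(m1), encrypt(m2)

# Additive homomorphism: multiply / divide ciphertexts (Eqs. 8-9)
c_sum = c1 * c2 % n2
c_diff = c1 * pow(c2, -1, n2) % n2

assert decrypt(c_sum) == m1 + m2
assert decrypt_crt(c_sum) == m1 + m2
assert decrypt_crt(c_diff) == m1 - m2
print("sum =", decrypt_crt(c_sum), " difference =", decrypt_crt(c_diff))
```

The speed-up measured in Section 4 comes from exactly this substitution: one exponentiation modulo \(n^{2}\) is replaced by two exponentiations with half-size exponents and the smaller moduli \(p^{2}\) and \(q^{2}\).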
The time required to decrypt ciphertexts with the original Paillier scheme and the time required when the calculation is accelerated with CRT are shown in Figure 2. As can be seen from Figure 2, after accelerating the calculation with CRT the decryption time is about 1/3 of that of the original Paillier scheme, while the decryption time of the DJN scheme [15] is essentially the same as that of the original Paillier scheme. Therefore, using CRT to speed up the decryption process effectively improves efficiency.
Figure 2: Computational overhead of data encryption and decryption with different schemes.
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline & I buyers & J sellers & n data & Buyer & Index \\ & & & & matching & ranking \\ \hline Encrypt & O(log n) & O(log n) & k*O(log n) & / & / \\ \hline Process & / & / & k*O(1) & I*J*k*O(1) & j*O(1) \\ \hline Decrypt & O(log n) & O(log n) & k*O(log n) & / & / \\ \hline \end{tabular} \end{table} Table 2: Paillier scheme time cost in this paper.
After the cloud server gets the matching result, it sends out a successful-matching notification, and buyer i and seller j call the Paillier encryption algorithm with optimized parameters to generate a public/private key pair; the public key n is sent to the cloud server and the private key is stored at the user end. Compared with the original Paillier algorithm, the parameter-optimized scheme simplifies the modular exponentiation in encryption into a single modular multiplication, which speeds up the encryption process. Its time efficiency is shown in Figure 3.
Figure 3: Comparison of encryption time cost between the Paillier scheme and the parameter-optimized scheme.
### Correctness Analysis
For a ciphertext C in the Paillier scheme with optimized parameters, with \(\mu=\lambda^{-1}\bmod n\) (which holds because \(g=n+1\)), correctness is expressed as: \[\mathrm{Dec}(C)=\frac{(c^{\lambda}-1)\bmod n^{2}}{n}\cdot\mu\bmod n\] \[=\frac{\big(((1+n)^{m}r^{n})^{\lambda}-1\big)\bmod n^{2}}{n}\cdot\mu\bmod n\] \[=\frac{\big((1+n)^{m\lambda}-1\big)\bmod n^{2}}{n}\cdot\mu\bmod n\] \[=\frac{(m\lambda n)\bmod n^{2}}{n}\cdot\mu\bmod n=m\lambda\cdot\lambda^{-1}\bmod n=m, \tag{24}\] For a ciphertext C in the Paillier scheme that uses CRT optimization for decryption, correctness is expressed as: \[m=m_{p}\cdot q^{-1}(\bmod p)\cdot q+m_{q}\cdot p^{-1}(\bmod q)\cdot p\] \[=m_{p}\cdot(1-p^{-1}(\bmod q)\cdot p)+m_{q}\cdot p^{-1}(\bmod q)\cdot p\] \[=m_{p}+(m_{q}-m_{p})\cdot p^{-1}(\bmod q)\cdot p, \tag{25}\] where \(h_{p}\), \(h_{q}\), \(m_{p}\) and \(m_{q}\) are obtained from formulas (10), (11), (12) and (13), respectively.
### Security Analysis
First, we assume that there are curious buyers and sellers, denoted B, who want to obtain other users' private information from information available on the network. In this paper, the original Paillier encryption scheme is adopted before matching is completed. This scheme has been studied extensively, and so far no polynomial-time algorithm can break it, so the security of the Paillier scheme is considered reliable. When B obtains, through an attack, the ciphertext messages and the processed ciphertext messages in the proxy server, it cannot extract useful information because it does not hold the corresponding private key sk; in the proxy server, the information is therefore considered safe. When the processed information is sent to the cloud server, the cloud server needs to use the private key sk to decrypt the information.
Suppose that B obtains the sum and difference of the decrypted information in the cloud server through special means attack, and these B have their own information, they will infer other useful information through the difference between the existing information and the information obtained by the attack in the cloud server. When inferring the position of other sellers through the difference between their own position information and the obtained position information, because there is only a straight distance, the inferred information cannot locate the specific position of the seller. From the above, even if the attack obtains the encrypted information in the proxy server or the decrypted information in the cloud server, B cannot infer the valid information. Therefore, for curious buyers and sellers, our scheme is safe and effective. Secondly, for the premeditated attacker C, in our scheme, in order to prevent C from eavesdropping, all the communication between entities is encrypted. In our scheme, the random number r generated each time is different, and the encrypted result is also different. After attacking the information obtained by the proxy server and the cloud server, the user's location information and price information cannot be obtained through calculation. And we will refresh the key after each round of user matching. In this case, we think the scheme is also safe and effective. ## 5 Conclusions In this paper, we solve the security problem of shared charging pile scheme through homomorphic encryption technology. In order to protect the privacy of users' location and provide matching strategies for users with different needs, we have formulated a privacy protection shared charging pile scheme based on users with different needs. First of all, we use the public key to encrypt the information in the terminal equipment of the Internet of Things, which effectively protects the privacy information such as location. Through homomorphism, the ciphertext matching the user is calculated in the proxy server, and CRT is used in the cloud server to accelerate the encryption process. We design the matching rules, calculate the matching index W and compare them to get the most suitable matching result. When we return the results, we use Paillier scheme with optimized parameters to effectively speed up the encryption process. Finally, our numerical analysis results show that the decryption time after CRT optimization is about 1/3 of the original Paillier scheme and DJN scheme. The encryption time after parameter optimization is 1/3 faster than that of the original Paillier scheme. At the same time, we also analyzed the security of the scheme, and the attacks of both curious users and malicious attackers are safe and reliable in the scheme on the public platform.
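As a complement to the cryptographic sketch above, the toy example below mimics the plaintext-side matching step that the cloud server performs after decryption, as summarized in the conclusion: candidate sellers are filtered by the distance bound of Eq. (19) and the demand cases, and the remaining ones are ranked by the matching index of Eq. (20), with the smallest \(W_{ij}\) winning. All field names, weights and numbers are hypothetical illustrations, not values from the evaluation.

```python
import math

def matching_index(d_d, d_r, d_alpha, w_d, w_r, w_alpha):
    """Matching index W_ij of Eq. (20)."""
    return d_d * w_d + d_r * w_r + sum(d * w for d, w in zip(d_alpha, w_alpha))

# Hypothetical decrypted differences for one buyer i against three candidate sellers j
buyer = {"d_max": 2.0, "w_d": 0.5, "w_r": 0.3, "w_alpha": [0.2]}
sellers = {
    "j1": {"dx": 0.6, "dy": 0.8, "d_r": 1.0, "a": [2], "d_alpha": [0.5]},
    "j2": {"dx": 1.5, "dy": 2.0, "d_r": 0.5, "a": [2], "d_alpha": [0.2]},  # too far away
    "j3": {"dx": 0.3, "dy": 0.4, "d_r": 2.0, "a": [1], "d_alpha": [0.1]},  # demand unmet
}

candidates = {}
for j, s in sellers.items():
    dist = math.hypot(s["dx"], s["dy"])                    # Eq. (19)
    if dist >= buyer["d_max"]:
        continue                                           # distance filter
    if any(a != 2 or dp < 0 for a, dp in zip(s["a"], s["d_alpha"])):
        continue                                           # keep only Case 3 with d_alpha >= 0
    candidates[j] = matching_index(dist, s["d_r"], s["d_alpha"],
                                   buyer["w_d"], buyer["w_r"], buyer["w_alpha"])

best = min(candidates, key=candidates.get)                 # smallest W_ij is the best match
print(candidates, "-> matched seller:", best)
```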
2309.08075
Social media polarization reflects shifting political alliances in Pakistan
The rise of ideological divides in public discourse has received considerable attention in recent years. However, much of this research has been concentrated on Western democratic nations, leaving other regions largely unexplored. Here, we delve into the political landscape of Pakistan, a nation marked by intricate political dynamics and persistent turbulence. Spanning from 2018 to 2022, our analysis of Twitter data allows us to capture pivotal shifts and developments in Pakistan's political arena. By examining interactions and content generated by politicians affiliated with major political parties, we reveal a consistent and active presence of politicians on Twitter, with opposition parties exhibiting particularly robust engagement. We explore the alignment of party audiences, highlighting a notable convergence among opposition factions over time. Our analysis also uncovers significant shifts in political affiliations, including the transition of politicians to the opposition alliance. Quantitatively, we assess evolving interaction patterns, showcasing the prevalence of homophilic connections while identifying a growing interconnection among audiences of opposition parties. Our study, by accurately reflecting shifts in the political landscape, underscores the reliability of our methodology and social media data as a valuable tool for monitoring political polarization and providing a nuanced understanding of macro-level trends and individual-level transformations.
Anees Baqir, Alessandro Galeazzi, Andrea Drocco, Fabiana Zollo
2023-09-15T00:07:48Z
http://arxiv.org/abs/2309.08075v1
# Social media polarization reflects shifting political alliances in Pakistan ###### Abstract The rise of ideological divides in public discourse has received considerable attention in recent years. However, much of this research has been concentrated on Western democratic nations, leaving other regions largely unexplored. Here, we delve into the political landscape of Pakistan, a nation marked by intricate political dynamics and persistent turbulence. Spanning from 2018 to 2022, our analysis of Twitter data allows us to capture pivotal shifts and developments in Pakistan's political arena. By examining interactions and content generated by politicians affiliated with major political parties, we reveal a consistent and active presence of politicians on Twitter, with opposition parties exhibiting particularly robust engagement. We explore the alignment of party audiences, highlighting a notable convergence among opposition factions over time. Our analysis also uncovers significant shifts in political affiliations, including the transition of politicians to the opposition alliance. Quantitatively, we assess evolving interaction patterns, showcasing the prevalence of homophilic connections while identifying a growing interconnection among audiences of opposition parties. Our study, by accurately reflecting shifts in the political landscape, underscores the reliability of our methodology and social media data as a valuable tool for monitoring political polarization and providing a nuanced understanding of macro-level trends and individual-level transformations. **Keywords:** polarization, social media, politics, South Asia ## 1 Introduction In recent years, there has been a growing concern over the ideological divides that have emerged within our societies in public debates. Researchers, politicians, and public figures have raised alarms about the widening ideological disagreements between opposing factions on various fronts and the potential consequences for public discourse [1, 2, 3, 4, 5, 6, 7]. One notable example is the Capitol Hill assault, which occurred during a period when U.S. politics had reached its highest level of polarization since the Civil War [8]. Polarization, in its various forms, has garnered significant attention from scholars across multiple disciplines, resulting in a vast body of literature on the subject [2, 9]. In particular, political polarization has been extensively studied as a key aspect of societal ideological divides [4, 10, 11, 12]. Social and political scientists have investigated political divisions for decades, employing various methodologies such as roll call voting records, surveys, or combining these data with other sources [5, 13]. The emergence of the Internet and social media has enabled researchers to investigate polarization through digital data, quantifying the divides and tracking their evolution within online communities [1, 11, 7, 12, 14]. These studies have examined various aspects of social media polarization, including the roles of politicians, news outlets, public figures, and community structures [4, 14]. However, most studies have primarily focused on a limited number of countries, primarily Europe and the United States, with little attention given to other geographical areas [15, 16, 17, 18, 19]. Notably, South Asia and the Middle East remain understudied in this context. Existing works have mainly revolved around the "Arab Spring" [20, 21, 22, 23] and have largely overlooked other relevant issues. 
Here, we aim to address this gap by conducting an analysis of political polarization in South Asia, with a specific focus on Pakistan. Since its inception in 1947, Pakistan's political landscape has been characterized by persistent turbulence. Despite being a federal parliamentary democratic republic, military rule has consistently exerted influence over the nation, fostering skepticism towards democratic institutions [24, 25]. This military involvement stands as a pivotal driver of political polarization in Pakistan, yet it is not the sole contributing factor. In addition, ethno-linguistic groups within the country actively seek recognition, resulting in regional political conflicts that pose substantial challenges to national unity efforts [26]. Moreover, the presence of religious extremism, exemplified by groups like the Taliban, exacerbates polarization. Extremist actors conduct attacks not only within Pakistan but also across its borders with India and Afghanistan [27]. In this charged environment, the Pakistani media, instead of promoting critical news assessment, often exhibit pronounced political biases [24]. Pakistan was ranked 107th in the Democracy Index 2022, with an overall score of 4.13, notably lower than scores exceeding 8, which are typical of fully democratic countries [28]. It is important to note that the concept of political polarization is closely linked to the political system in which it is assessed. Indeed, polarization in democratic countries can differ significantly from that in flawed democracies, hybrid regimes, or authoritarian states. Therefore, gaining a comprehensive understanding of how political polarization has emerged and evolved over time in Pakistan holds significant importance from multiple perspectives. In this work, we examine the evolution of political polarization in Pakistan using Twitter data. Between 2018 and 2022, Pakistani politics witnessed significant shifts, marked by the rise of Pakistan Tehreek-e-Insaf (PTI) - in English, the "Pakistan Movement for Justice"- to power after securing a majority in the 2018 General Elections. Imran Khan led a coalition government with various allies. The formation of the Pakistan Democratic Movement (PDM) in September 2020, an opposition alliance featuring major players like Pakistan Muslim League - Nawaz (PML-N) and the Pakistan People's Party (PPP), added a new dimension to opposition politics. As PTI's allies and some members defected to the PDM, PTI lost its parliamentary majority. Ultimately, this culminated in the ousting of PTI's government in 2022 following a vote of no confidence. This led to the formation of a coalition government under Shehbaz Sharif of PML-N in April 2022. To explore these developments, we collected the timelines of members of parliament belonging to the three major political parties from 2018 to 2022. Additionally, we gathered data on the retweets they received to reconstruct interaction networks and measure political polarization. Our study includes a temporal analysis to track the evolution of polarization and the volume of interaction across parties. Our results highlight a close correspondence between the parties' distances in the latent ideological space and the evolution of Pakistani politics. This correspondence reveals a convergence over time among accounts and audiences of opposition parties that eventually formed a coalition in 2022. 
Thus, we prove the parallelism between our ideology estimation in the latent space obtained by social media data and the evolution of parties' political views. Our research provides valuable insights into the state and evolution of political polarization within a South Asian nation, an area that has received limited attention in the literature. Furthermore, it offers further evidence of the reliability of social media data for tracking the evolution of political polarization, demonstrating a close alignment between temporal analysis and unfolding political events. Finally, this study underscores the adaptability of our methodology, making it applicable to a wide range of scenarios. ## 2 Results We begin our analysis of the Pakistani political landscape on Twitter by depicting the weekly post volume and the number of active accounts per week for members of parliament from the three major political parties: PML-N (yellow), PPP (brown), and PTI (blue), as shown in Figure 1. Traditionally, the PML-N platform has leaned towards conservatism, advocating principles such as free markets, deregulation, lower taxes, and private ownership. However, in recent years, the party's political ideology and platform have shifted noticeably, becoming more liberal on social and cultural issues. The PPP, on the other hand, has its roots in socialist principles and aims to transform Pakistan into a social-democratic nation. It remains dedicated to promoting equality, pursuing social justice, and maintaining a strong military presence. PTI has declared its primary focus on shaping Pakistan into a model welfare state with principles of Islamic socialism. It is committed to dismantling religious discrimination against Pakistan minorities and identifies itself as an anti-status quo movement advocating Islamic democracy rooted in egalitarianism. However, while in power, PTI has faced criticism for its repression of opposition parties, attempts to curb freedom of speech, and efforts to increase control over the media. We observe that both PML-N and PTI maintained consistently high, stable, and comparable volumes of tweets over time. In contrast, PPP's tweet activity exhibited a gradual increase over time, with notable spikes occurring during June-July 2020 and again in March-April 2022. During these spikes, PPP's tweet volume reached levels comparable to the other two parties. Furthermore, when examining the number of weekly active accounts, we notice a slight upward trend for all three parties. Despite having a significantly smaller number of accounts, PML-N managed to produce a volume of content on par with that of PTI, indicating more intensive social media usage. Similarly, while PPP had fewer accounts, their weekly post count occasionally matched that of the other parties. This suggests that the two opposition parties, PML-N and PPP, made more extensive use of Twitter than PTI, which was the ruling party at the time. Crucially, Figure 1 provides an insightful overview of politicians' Twitter usage, highlighting their consistent presence on the platform. This underscores Twitter's reliability as a valuable data source for studying the evolution of online political discourse. Figure 1: **Politicians’ activity over time.** Evolution of weekly content volume and active accounts. The upper panel shows the weekly number of posts (tweets and retweets) made by politicians of PML-N (yellow), PPP (brown), and PTI (blue). The lower panel reports the number of active accounts belonging to politicians from 2018-2022. 
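A minimal sketch of how the weekly series in Figure 1 can be computed from a flat table of politicians' posts is shown below; the column names and the tiny example table are illustrative assumptions rather than the authors' actual data schema.

```python
import pandas as pd

# Hypothetical input: one row per tweet/retweet by a member of parliament
tweets = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3, 3],
    "party":   ["PTI", "PTI", "PML-N", "PPP", "PPP", "PPP"],
    "created_at": pd.to_datetime([
        "2018-08-01", "2018-08-03", "2018-08-02",
        "2018-08-09", "2018-08-10", "2018-08-11",
    ]),
})

weekly = (
    tweets
    .groupby(["party", pd.Grouper(key="created_at", freq="W")])
    .agg(posts=("user_id", "size"),               # weekly number of posts per party
         active_accounts=("user_id", "nunique"))  # weekly number of active accounts
    .reset_index()
)
print(weekly)
```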
### Interactions among parties over time A commonly employed approach to assess the state of online discourse involves analyzing users' consumption patterns. Thus, we visualize the divisions within the Pakistani political landscape and how they have evolved over time by leveraging the similarities among the retweeters of members of parliament (MPs). We built undirected networks of MP accounts based on their retweeters to unveil similarities in their audiences and then compared these patterns with their political affiliations (for details, see Section 4). We generated one network for each year spanning from 2018 to 2022, as shown in Figure 2. This allowed us to examine the evolution of political divisions through users' consumption patterns. Figure 2: **Politicians' Audiences Similarity Network.** Networks of politicians based on the similarity of their retweeters from 2018 to 2022. Each node is color-coded according to the political party to which it belongs. The thickness of the edges represents the strength of the cosine similarity between the retweeter sets of the nodes. To emphasize the most significant connections, we have excluded edges with weights below the median of network weights (see SI for the complete networks). The convergence between the PML-N and PPP clusters reflects an increase in links between the two parties from 2019 onwards. By 2022, these clusters have become very close. Notably, some nodes (in green), representing defectors from PTI who left the party in 2022 to join the opposition parties, have increased their proximity to the PML-N and PPP clusters while distancing themselves from PTI. The results reveal the presence of three distinct clusters, each corresponding to one of the top three parties, indicating a high degree of audience similarity among politicians within the same affiliation. However, it is noteworthy that in 2018, which marked the year of the general elections, the PML-N (yellow dots) and PPP (red triangles) clusters had limited mutual connections. Yet, over time, the connectivity between these two clusters has steadily increased, reaching its peak in 2022. Conversely, the PTI cluster (blue squares) exhibits a lower degree of connectivity with the other two parties, and this connectivity does not appear to increase over time. These trends are visually evident in the decreasing distance between the PPP and PML-N clusters from 2018 to 2022, while the PTI cluster consistently maintains its distance from the others. These findings imply that an increasing proportion of supporters of both the PML-N and PPP parties have progressively started consuming political content from both parties. However, this pattern does not hold for nodes affiliated with the PTI party, which appear to have a group of supporters primarily or exclusively engaging with the content they produce. It is noteworthy that some nodes from the PTI cluster, highlighted in green, defected from the party in 2022 and joined the other two parties. This shift in affiliations is reflected in their decreased distance from the PML-N and PPP clusters while substantially increasing their distance from the PTI cluster. The fact that our results accurately reflect the defection of these two politicians confirms the validity of our methodology for analyzing the political discussion. 
Overall, Figure 2 provides an overview of the initial conditions, evolving relationships, and changes in user consumption patterns in response to shifts in political affiliations within the Pakistani political landscape. To quantify the shifts in audience similarity among nodes affiliated with different parties, we conducted an analysis of the changing proportion of connections between opposing factions within the networks depicted in Figure 2. Figure 3 illustrates the progression of connections between political parties from 2018 to 2022. Throughout all years, interactions are predominantly characterized by homophilic connections. Notably, links between nodes affiliated with PTI constitute the largest share of total network interactions in all years, ranging from 64.2% in 2018 to 49.5% in 2022. These connections also account for the highest proportion of PTI's own links (over 79%). Similarly, interactions within the PML-N party primarily consist of connections among members of the same party, accounting for over 47% of the party's interactions. Conversely, starting in 2019, PPP accounts have a higher share of links with PML-N nodes (over 44%) than among themselves (less than 39%). Notably, the mutual connections between PPP and PML-N nodes have increased over time, rising from 5.39% in 2018 to 12.54% in 2022. Additionally, when considering the share of links relative to each party, the fraction of PPP links connecting to a PML-N node was approximately 32% in 2018 and increased to nearly 51% in 2022. Similarly, for PML-N links to PPP, the share rose from 22% in 2018 to 34% in 2022. On the contrary, connections between PTI and other parties exhibit a declining trend, decreasing from 4.60% to 2.98% with PPP and from 6.46% to 4.14% with PML-N. Despite defections from PTI, defector nodes still share approximately 39% of their links with PTI, about 24% with PPP, and roughly 37% with PML-N (for detailed values, see SI). To validate our findings, we also analyzed networks constructed from direct retweets among politicians. While the number of nodes and edges is considerably lower in these networks, the analysis produced qualitatively identical results (see SI). By quantifying the increasing intertwining of PPP and PML-N audiences, our results highlight the convergence of these two parties in the online political discourse, which corresponds with unfolding political events. ### Quantifying Polarization To assess the evolution of the political divisions in the online debate over time, we employed the latent ideology technique, which has proven to be a robust methodology for quantifying users' opinions on various topics [1, 3, 4, 11, 29]. The latent ideology method is designed to infer users' opinions through correspondence analysis. It operates by transforming a weighted bipartite matrix representing interactions between a set of prominent accounts, known as influencers, and other users, projecting them into a one-dimensional real space. Users who interact with similar sets of influencers will Figure 3: **Evolution of the shares of connections among political parties.** The flows represent the number of network edges in each network, grouped by nodes' political affiliation for each year. While connections within the same party (homophilic connections) are prevalent, there is a noticeable increase in the fraction of connections between the PPP and PML-N parties. This trend aligns with their eventual formation of a coalition. 
be placed close to each other, while those interacting with disjoint sets of influencers will be placed in opposite positions (further details can be found in Section 4). This approach enabled us to infer the political ideology of users based on the politicians they retweeted. We considered accounts belonging to politicians and users who had retweeted at least three different politicians. We applied the latent ideology algorithm to the weighted bipartite matrix of retweets between users and politicians, obtaining the estimated users' ideologies. Furthermore, we calculated the ideological positions of politicians as the median of their retweeters' positions. To capture the dynamics of the political discourse, we conducted the ideology analysis for each year within the observation period. The results of latent ideology estimation are presented in Figure 4. On the left side, we present the ideological positions of politicians, while on the right side, we display the distribution of ideologies among users. This distinction arises from the fact that the numbers of politicians affiliated with the three parties are of similar magnitudes, but there is a significant disparity in user counts among the parties. Our focus regarding politicians is to monitor their positions, even at the individual level, especially in cases of defectors. Conversely, when it comes to users, our primary concern is tracking changes in their distribution across the ideological spectrum. Therefore, for users, the priority is unveiling where the audience is most concentrated for each party, rather than merely comparing absolute values among the parties. To aid in this distinction, we color-coded the politicians' affiliations and the ideology distribution of their retweeters. This allows us to track the evolution of both politicians and users' positions within each party. We notice that polarization dominates the Pakistani political landscape. However, the temporal analysis also reveals an interesting pattern. In 2018, politicians' ideologies display a distinct separation between the three parties, with PTI politicians on one side and two closer yet distinct peaks for PPP and PML-N representatives on the other. Users' ideology distribution also exhibits three distinct peaks, each corresponding to one of the three parties. Notably, users tend to occupy a broader range of the ideological spectrum compared to politicians. Parties' audiences extend into the territory of other parties' peaks, with PPP even having a smaller secondary peak in the same position as the PML-N peak. This observation underscores the close relationship between PPP and PML-N and the presence of a shared audience between these two parties. Since we assigned users to parties based on the politicians they retweeted, users who retweeted accounts from different parties are counted multiple times. Thus, individuals who primarily retweeted content from one party but occasionally engaged with others contribute to the density of each retweeted party. The presence of PPP users under the PML-N peak indicates shared retweeters and helps explain why PPP and PML-N are closer in ideological space than PTI across all the observation periods. When we examine the temporal evolution of polarization, we observe that both politicians and users from PPP and PML-N moved closer to each other over time, to the extent that they merged into a single peak in 2022. Meanwhile, the gap between PTI and PML-N remained consistent throughout the observation period. 
This is also reflected in the level of polarization, which we measured using Hartigan's dip test (as described in Section 4). The polarization level among politicians increased from 0.14 in 2018 to 0.23 in 2022 (p-values \(<0.001\)). Notably, while PPP and PML-N increased their audience similarity and reduced their distance in the latent ideological space over time, in 2022 PTI experienced a dramatic increase in its audience, reflected in a pronounced difference between its users' density and that of the other parties. Figure 4: **Evolution of Politicians' and Users' Latent Ideology.** Latent ideology estimation for politicians (left column) and users (right column) of the three political parties, based on retweet data from 2018 to 2022. Bars (left panel) represent the count of politicians belonging to one of the three political parties, while curves (right panel) represent the distribution density of their retweeters. Notably, the PML-N (yellow) and PPP (brown) bars are positioned on the right side, indicating greater audience similarity with each other than with PTI. On the left side, the PTI bar (blue) represents a distinct political ideology. Notice the presence of blue points on the right part of the spectrum in 2022, corresponding to the PTI defectors. To shed light on users' growth over time, we examined the number of users participating in the debate for each political party. We assigned a political affiliation to each user based on the dominant retweeted party. Hence, if a user retweeted accounts from different parties, they were assigned to the party they retweeted the most. We repeated this procedure for each year to uncover any changes in users' affiliations. The results are shown in Figure 5, where for each party the height of each bar corresponds to the number of retweeters, and the flow widths represent shifts in users' political affiliations over time. Our analysis reveals that users' affiliations have remained relatively consistent over the years, with only minimal shifts from one party to another. Moreover, the number of retweeters for PML-N and PPP experienced a gradual increase over time, particularly in 2022 when they saw more pronounced growth (+92.62% and +35.78%, respectively). The PTI audience witnessed a notable increase in users in 2019 (+55.71%), a slight decrease in 2020 (-12.41%) and 2021 (-10.51%), but a dramatic surge in 2022 (+525.36%). The reasons behind this substantial growth can be diverse, ranging from a more heated political debate to the increased presence of PTI supporters and automated accounts. While identifying the exact causes is outside the scope of this study, future research in this direction is certainly warranted. When comparing these results with the evolution of polarization shown in Figure 4, it becomes evident that, while users, in general, have not significantly changed their primary party affiliation, the volume of accounts retweeting content from both PML-N and PPP has steadily increased over time. Conversely, the PTI audience has not shown a significant increase in cross-ideological interactions but has experienced substantial growth in accounts retweeting its content exclusively. Figure 5: **Users' political affiliation over time.** Flow widths are proportional to audience changes across political parties over time. The size of each bar represents the proportion of unique accounts associated with a specific political party. The shifts indicate the percentage of users who have changed their political affiliation. 
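As a concrete illustration of the affiliation-assignment procedure just described (each user is assigned, year by year, to the party they retweeted most), the short Python sketch below tabulates dominant parties and year-over-year shifts. It is not the authors' code; the function and variable names are illustrative, and ties between parties are broken arbitrarily.

```python
from collections import Counter, defaultdict

def assign_affiliations(retweets):
    """retweets: iterable of (year, user_id, party) tuples, one per retweet.
    Returns {year: {user_id: dominant_party}}."""
    counts = defaultdict(Counter)                          # (year, user) -> party retweet counts
    for year, user, party in retweets:
        counts[(year, user)][party] += 1
    affiliation = defaultdict(dict)
    for (year, user), c in counts.items():
        affiliation[year][user] = c.most_common(1)[0][0]   # dominant retweeted party
    return dict(affiliation)

def affiliation_shifts(affiliation, year_a, year_b):
    """Count users moving between parties from year_a to year_b (the flows in Figure 5)."""
    shifts = Counter()
    for user, party_a in affiliation[year_a].items():
        party_b = affiliation[year_b].get(user)
        if party_b is not None:
            shifts[(party_a, party_b)] += 1
    return shifts
```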
## 3 Discussion In this work, we studied the evolution of polarization within the Pakistani political debate on Twitter. We focused on the content generated by accounts affiliated with politicians from the three major Pakistani parties and the interactions they fostered. Our findings highlight the consistent use of Twitter as a communication medium by Pakistani politicians throughout the entire observation period, with their presence on the platform growing over time. Notably, our comparison of account and content production volumes revealed a more active presence from opposition parties (PML-N and PPP) compared to the party in government (PTI). Moreover, our analysis delved into the similarities among the audiences of different parties and their evolution over time, revealing a convergence between the two opposition parties (PML-N and PPP). Our techniques proved sensitive enough to detect the shift of two PTI representatives who left the party to join the opposition alliance. Furthermore, we quantified the evolution of interactions between parties, showing that homophilic connections dominated the entire observation period. However, we observed an increasing entanglement among audiences of opposition parties, marked by a growing share of common consumers over time. Consequently, the distance between these two opposition parties, as measured by latent ideology estimation, decreased over time. In contrast, opposition parties maintained a consistent gap in relation to the government party (PTI). Remarkably, the shift of defectors was clearly discerned through latent ideology estimation, reinforcing its reliability as a tool for measuring polarization in online debates. Lastly, we examined shifts in audiences among parties over time, revealing the stability of most users' political affiliations while also emphasizing a surge in PTI retweeters in 2022. Our results indicate that, although the majority of users remained loyal to their chosen political party over time, opposition parties experienced an increase in the number of common retweeters. Thus, while users predominantly consumed content from one party, the number of PML-N/PPP users consuming content from both opposition parties increased over time. The significance of our work extends across multiple fronts, which we summarize in the following three key aspects. First, we study the intricate evolution of polarization within the Pakistani political discourse on Twitter. While polarization on social media, particularly Twitter, has garnered extensive attention, certain regions like South Asia have remained conspicuously under-explored. Our study contributes to bridging this critical gap, providing valuable insights into the state and dynamic evolution of online polarization in response to unfolding political events. This is especially pertinent for understanding the unique dynamics of polarization in the South Asian context and within a distinctive political environment. Secondly, our research reinforces the credibility of Twitter data as a robust tool for studying political polarization. It underscores the possibility of not only detecting macro-level shifts in political alliances but also capturing nuanced individual-level changes in accounts' political affiliations. This adds another layer of validation to the use of Twitter data in political polarization analysis. Lastly, our work demonstrates the scalability of polarization analysis using social media data across diverse environments. 
Recent studies have applied similar techniques to various debates, spanning from politics [3, 4] to climate change [1]. Despite the cultural and social disparities inherent in these debates, our results affirm the adaptability of such analytical approaches to different contexts. This highlights the utility of the applied methodology in understanding multifaceted issues across various domains. Naturally, we recognize certain limitations. First, it is important to acknowledge that Twitter may not perfectly mirror real-life circumstances and may not provide a fully representative cross-section of the Pakistani population. Furthermore, our methodology may inadvertently exclude users and politicians who did not generate a substantial volume of data. Despite these limitations, we maintain that our results offer a robust and reliable portrayal of the state and evolution of online political polarization in Pakistan, as evidenced by their alignment with real-life events. Moreover, the significance of our work may extend beyond the specific context of Pakistani politics. High levels of polarization, whether in politics, climate change discussions, or vaccine debates, can pose significant threats to society, potentially resulting in inaction against critical issues or even public unrest. The capability to analyze polarization trends in near real-time using social media data holds immense value and can become a tool for informing decision-making to prevent critical situations. Hence, we believe that the availability of social media data for such analysis is of paramount importance, serving both academic inquiries and societal concerns. ## 4 Materials and Methods ### Twitter Data In this study, we used a dataset consisting of tweets from the Twitter timelines of members of the Pakistani Parliament. We created a list of active Twitter accounts belonging to members of the Pakistani Parliament by manually gathering their Twitter usernames. Using the Twitter API for academic research purposes, we collected data from a total of 160 politicians representing the three major political parties in Pakistan, as determined by the number of elected members in the 2018 General Elections 1. The data collection spanned from January 2018 to December 2022, and we exclusively gathered publicly available content from public accounts. To allow the study of polarization and interaction structure, we also collected all retweets of the tweets whose retweet count is greater than 3. The detailed breakdown of the dataset is provided in Table 1. Footnote 1: [https://ecp.gov.pk/storage/files/1/National%20Assembly1.pdf](https://ecp.gov.pk/storage/files/1/National%20Assembly1.pdf) ### Network Construction We constructed interaction networks among politicians based on retweet information. We focused solely on retweets, since, unlike quote tweets or comments, which may not express support for the original tweet's content, retweets are generally considered a form of endorsement [1, 4, 11, 30]. For each year, we built an undirected weighted graph \(G\), in which nodes represent politicians belonging to one of the three parties, and edges represent the similarity of their retweeter sets. Using one year of data at a time, we started by creating a matrix \(R_{y}\) for each year, with retweeters as rows and politicians as columns, where \(y\in\{2018, 2019, 2020, 2021, 2022\}\). The entry \(r_{i,j}\) of \(R_{y}\) is the number of times user \(i\) retweeted content posted by politician \(j\) in the year \(y\). 
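To make the construction concrete, the following minimal Python sketch builds the retweeter-by-politician count matrix \(R_{y}\) for one year and then, as formalized by the cosine-similarity formula given next, weights each pair of politicians by the similarity of their retweeter columns and prunes weak edges at the median weight. This is an illustrative re-implementation, not the authors' code, and the names used here are hypothetical.

```python
import numpy as np

def audience_similarity_network(retweets, politicians):
    """retweets: list of (user_id, politician_id) pairs for a single year.
    Returns W, where W[a, b] is the cosine similarity of the retweeter columns of a and b."""
    retweets = list(retweets)
    users = sorted({u for u, _ in retweets})
    u_idx = {u: i for i, u in enumerate(users)}
    p_idx = {p: j for j, p in enumerate(politicians)}
    R = np.zeros((len(users), len(politicians)))
    for u, p in retweets:
        R[u_idx[u], p_idx[p]] += 1               # r_{i,j}: retweets of politician j by user i
    norms = np.linalg.norm(R, axis=0)
    norms[norms == 0] = 1.0                      # politicians with no retweets get zero similarity
    W = (R / norms).T @ (R / norms)              # all pairwise cosine similarities
    np.fill_diagonal(W, 0.0)
    if np.any(W > 0):
        W[W < np.median(W[W > 0])] = 0.0         # drop edges below the median edge weight
    return W
```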
We then computed the cosine similarity for each pair of columns to quantify the retweeters' similarity for each pair of politicians. Thus, the weight \(w_{a,b}\) of the edge between nodes \(a\) and \(b\) in the graph \(G\) is equal to: \[w_{a,b}=\frac{r_{a}\cdot r_{b}}{\|r_{a}\|\|r_{b}\|}\] where \(r_{a}\) and \(r_{b}\) are the column vectors of politicians \(a\) and \(b\), respectively. It should be noted that \(w_{a,b}\in[0,\,1]\) since all the matrix entries are non-negative. Finally, we excluded all the 0-degree nodes and deleted all the edges with a weight below the median of all edge weights. This approach enabled us to capture the strongest similarities among politicians' audiences across the years.

| **Party** | | **2018** | **2019** | **2020** | **2021** | **2022** | **Total (unique)** |
|---|---|---|---|---|---|---|---|
| PML-N | Retweeters | 118,357 | 112,174 | 120,096 | 142,725 | 279,312 | 542,352 |
| | Politicians | 25 | 31 | 34 | 38 | 42 | 44 |
| | Tweets | 10,456 | 14,046 | 15,393 | 19,739 | 23,767 | 83,401 |
| | Retweets | 2,883,688 | 3,691,028 | 3,782,702 | 4,241,579 | 6,882,825 | 21,481,822 |
| PPP | Retweeters | 39,253 | 47,645 | 46,385 | 37,832 | 73,916 | 170,737 |
| | Politicians | 20 | 16 | 21 | 26 | 28 | 34 |
| | Tweets | 3,353 | 3,680 | 7,094 | 6,295 | 8,444 | 28,866 |
| | Retweets | 433,930 | 456,010 | 507,036 | 421,237 | 970,812 | 2,789,025 |
| PTI | Retweeters | 300,263 | 391,068 | 371,595 | 375,356 | 939,669 | 1,691,124 |
| | Politicians | 66 | 68 | 71 | 70 | 74 | 82 |
| | Tweets | 18,831 | 19,431 | 25,536 | 22,959 | 43,943 | 130,700 |
| | Retweets | 4,469,750 | 6,302,333 | 6,181,935 | 5,499,317 | 48,576,191 | 71,029,526 |

Table 1: Breakdown of the dataset by Party and year.

### Latent Ideology Estimation To estimate the ideological stance of users in the debate, we start from the latent ideology algorithm proposed in [3, 14]. Following the studies already conducted in this field [1, 4], we consider retweets instead of follower/following relationships as interactions, since retweets have been found to be good indicators of content endorsement [1, 4]. The latent ideology algorithm requires the selection of a set of accounts, called influencers, which critically affects the ideology estimation results. Since we aim at quantifying the political stance of users, we chose members of parliament accounts as the influencer set. On these accounts and their retweeters, we applied the Correspondence Analysis algorithm [31], which follows three steps: (i) construction of the interaction matrix \(A\), (ii) normalization of the matrix, and (iii) singular value decomposition. We constructed a matrix \(A\), whose elements \(A_{ij}\) represent the number of retweets user \(i\) directs toward influencer \(j\). Once \(A\) is known, we normalize it as follows. First, we divide by the total number of retweets, obtaining: \[P=\frac{A}{\sum_{ij}A_{ij}}. \tag{1}\]
Then, we define the following quantities: \[\begin{cases}\mathbf{r}=P\mathbf{1},\\ \mathbf{c}=\mathbf{1}^{T}P,\\ D_{r}=\text{diag}(\mathbf{r}),\\ D_{c}=\text{diag}(\mathbf{c}),\end{cases} \tag{2}\] and we perform the following normalization operation: \[S=D_{r}^{-1/2}(P-\mathbf{r}\mathbf{c})D_{c}^{-1/2}. \tag{3}\] For the third step, we perform a singular value decomposition of the form \(S=U\Sigma V^{T}\), where \(U,V\) are orthogonal matrices and \(\Sigma\) is a diagonal matrix containing the singular values of \(S\). Finally, we take the latent ideology of user \(i\) to be the \(i\)-th entry of the first column of the orthogonal matrix \(U\), while the retweeters' median ideology represents the latent ideology of an influencer. ### Hartigan's Dip Test Hartigan's dip test serves as a nonparametric examination for assessing the presence of multiple modes in a distribution drawn from a sample [32]. It computes the maximum difference, over all sample points, between the empirical distribution function and the unimodal distribution function that minimizes this difference. The outcome of the test provides a value denoted as \(D\), which measures the extent of multimodality, along with a statistical significance value represented as \(P\). Supplementary information. This article has an accompanying supplementary file. Acknowledgments. A.G. and F.Z. acknowledge support from the IRIS Research Coalition (UK government, grant no. SCH-00001-3391).
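For readers who want to reproduce the latent-ideology estimation, the correspondence-analysis steps above translate almost line by line into the following Python sketch (an illustrative re-implementation of Eqs. (1)-(3) and the SVD step, not the authors' code). It assumes every user and every influencer in \(A\) has at least one retweet, which holds here because of the activity thresholds described in the Results.

```python
import numpy as np

def latent_ideology(A):
    """A: (n_users, n_influencers) array of retweet counts.
    Returns (user_scores, influencer_scores) on the one-dimensional ideology axis."""
    A = np.asarray(A, dtype=float)
    P = A / A.sum()                                        # Eq. (1)
    r = P.sum(axis=1)                                      # row masses,    r = P 1
    c = P.sum(axis=0)                                      # column masses, c = 1^T P
    S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))     # Eq. (3): D_r^{-1/2} (P - r c) D_c^{-1/2}
    U, _, _ = np.linalg.svd(S, full_matrices=False)
    user_scores = U[:, 0]                                  # first column of U
    influencer_scores = np.array([
        np.median(user_scores[A[:, j] > 0])                # median position of each influencer's retweeters
        for j in range(A.shape[1])
    ])
    return user_scores, influencer_scores
```

Hartigan's dip statistic for the resulting score distributions can then be obtained from any off-the-shelf implementation (for example, the `diptest` package on PyPI); the paper does not state which implementation was used.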
2309.03120
Quantifying the limits of controllability for the nitrogen-vacancy electron spin defect
Solid-state electron spin qubits, like the nitrogen-vacancy center in diamond, rely on control sequences of population inversion to enhance sensitivity and improve device coherence. But even for this paradigmatic system, the fundamental limits of population inversion and potential impacts on applications like quantum sensing have not been assessed quantitatively. Here, we perform high accuracy simulations beyond the rotating wave approximation, including explicit unitary simulation of neighboring nuclear spins. Using quantum optimal control, we identify analytical pulses for the control of a qubit subspace within the spin-1 ground state and quantify the relationship between pulse complexity, control duration, and fidelity. We find exponentially increasing amplitude and bandwidth requirements with reduced control duration and further quantify the emergence of non-Markovian effects for multipulse sequences using sub-nanosecond population inversion. From this, we determine that the reduced fidelity and non-Markovianity is due to coherent interactions of the electron spin with the nuclear spin environment. Ultimately, we identify a potentially realizable regime of nanosecond control duration for high-fidelity multipulse sequences. These results provide key insights into the fundamental limits of quantum information processing using electron spin defects in diamond.
Paul Kairys, Jonathan C. Marcks, Nazar Delegan, Jiefei Zhang, David D. Awschalom, F. Joseph Heremans
2023-09-06T15:55:21Z
http://arxiv.org/abs/2309.03120v2
# Quantifying the limits of controllability for the nitrogen-vacancy electron spin defect ###### Abstract Solid-state electron spin qubits, like the nitrogen-vacancy center in diamond, rely on control sequences of population inversion to enhance sensitivity and improve device coherence. But even for this paradigmatic system, the fundamental limits of population inversion and potential impacts on applications like quantum sensing have not been assessed quantitatively. Here, we perform high accuracy simulations beyond the rotating wave approximation, including explicit unitary simulation of neighboring nuclear spins. Using quantum optimal control, we identify analytical pulses for the control of a qubit subspace within the spin-1 ground state and quantify the relationship between pulse complexity, control duration, and fidelity. We find exponentially increasing amplitude and bandwidth requirements with reduced control duration and further quantify the emergence of non-Markovian effects for multipulse sequences using sub-nanosecond population inversion. From this, we determine that the reduced fidelity and non-Markovianity is due to coherent interactions of the electron spin with the nuclear spin environment. Ultimately, we identify a potentially realizable regime of nanosecond control duration for high-fidelity multipulse sequences. These results provide key insights into the fundamental limits of quantum information processing using electron spin defects in diamond. ## I Introduction Defects in solid-state systems that host isolated electron spins are an extremely promising platform for quantum technologies [1; 2; 3]. Arguably the most widely studied and technologically mature system is the nitrogen-vacancy (NV\({}^{-}\)) center in diamond, owing to a number of convenient properties like room temperature coherent operation, magnetically driven dynamics, and optical readout. These properties have driven a number of applications for NV\({}^{-}\) center technologies in the areas of quantum information, including sensing and computing [4; 5; 6]. In all applications, maximizing coherence and operation fidelity is paramount. The standard technique to extend qubit coherence relies on multipulse refocusing sequences to dynamically decouple the system from its environment [7; 8]. Defined for a two-level system, a multipulse sequence is composed of many population-inversions, commonly referred to as \(\pi\)-pulses because they generate a rotation of angle \(\pi\) around an axis of the Bloch sphere. For applications in sensing or computing, it is important that these \(\pi\)-pulses are simultaneously fast, high-fidelity, and Markovian [9; 4]. If the \(\pi\)-pulses are instantaneous and have perfect fidelity this leads to a signal filtering formalism useful for many applications [10]. Unfortunately, in practical application, these requirements are contradictory because every non-trivial unitary evolution that can be generated in a quantum system has a finite control duration and finite fidelity loss [11; 12]. Therefore it is crucial to understand the fundamental limits of population inversion in NV\({}^{-}\) center systems and the impact that these limits can have on application performance. Assessing the limits of population inversion at short control duration can be formalized as a quantum optimal control (QOC) task. 
As pulse duration decreases, a larger drive field is necessary to generate the desired rotation, breaking common experimental control approximations like resonance conditions and the rotating wave approximation (RWA) [13; 14]. Therefore, simulation accuracy is paramount to capture the relevant physics in this regime and accurately guide experimental effort. In this work, we build an accurate model of a NV\({}^{-}\) center and neighboring nuclear spins system using parameters derived from detailed experiments [5; 15]. We include significant non-secular terms in the spin Hamiltonian and model global driving conditions, while avoiding the RWA. This enables us to accurately and quantitatively probe the fundamental limits of information dynamics in the NV\({}^{-}\) center system. We perform an ensemble of QOC simulations using our NV\({}^{-}\) center model. We examine how the fidelity of optimized \(\pi\)-pulses depends on pulse duration and identify the mechanism of control and the loss of fidelity at short pulse times. Then, we analyze the optimal pulses and quantify the growth in amplitude and bandwidth required to achieve population inversion. Using these results, we consider these pulses for a model multipulse quantum sensing application, enabling us to quantify the non-Markovian effects that emerge in multipulse sequences of short-time \(\pi\)-pulses. We conclude by discussing the feasibility of our identified controls and future research directions. The rest of this paper is structured as follows: in Section II we define the model NV\({}^{-}\) center system and discuss the experimental context, thereafter we present our numerical results in Section III and connect these results to an example quantum sensing application in Section IV. Finally, we state and discuss our conclusions in Section V. ## II System model The negatively charged NV\({}^{-}\) center in diamond hosts a number of coherent quantum degrees of freedom. The most commonly used is the ground state electronic spin-1 degree of freedom. This electron spin couples to nearby nuclear spins in diamond via the hyperfine interaction. The neighboring nuclei consist of the spin-1 nitrogen nucleus in the NV\({}^{-}\) center itself and randomly distributed spin-1/2 \({}^{13}\)C nuclei. There is an additional diffuse bath of electron and nuclear spin-containing Nitrogen defects that we do not consider in this work. All of these spins couple to magnetic fields and can therefore be controlled by manipulating the external magnetic field. Ignoring interactions due to the electric field, the Hamiltonian of this NV\({}^{-}\) center system including a single \({}^{14}\)N nuclear spin and multiple \({}^{13}\)C spins is written as [16]: \[H(t) = DS_{z}^{2}+QI_{N,z}^{2}+\sum_{j}\omega_{j}I_{C_{j},z}\] \[+\sum_{j}\vec{S}\mathcal{N}_{j}\vec{I}_{j}+\vec{B}(t)\cdot\biggl{[} \gamma_{e}\vec{S}+\sum_{j}\gamma_{j}\vec{I}_{j}\biggr{]},\] where \(\vec{S}=(S_{x},S_{y},S_{z})\) is the vector of spin-1 operators that act on the electron spin of the NV\({}^{-}\) center and \(\vec{I}_{j}=(I_{(j,x)},I_{(j,y)},I_{(j,z)})\) is the vector of spin operators acting on nuclei \(j\). The parameters \(D,Q,\omega_{j}\) are the zero-field splitting, nuclear quadrupole, and resonance frequencies of the electron, nitrogen nucleus, and carbon nuclei, respectively. The spin interactions are specified by the hyperfine tensor \(\mathcal{N}\) and \(\gamma_{j}\) are the nuclear gyromagnetic ratios that specify the interaction with the time-dependent magnetic field \(\vec{B}(t)\). 
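To give a sense of how such a Hamiltonian is assembled numerically, the sketch below builds a reduced version containing only the electron spin and a single \({}^{13}\)C nucleus, keeping just the secular part of the hyperfine coupling. It is not the authors' code: the parameter values are representative (the zero-field splitting is the commonly quoted 2.87 GHz), the hyperfine constant is a placeholder, and the full model in this work additionally includes the \({}^{14}\)N nucleus, a second \({}^{13}\)C, and non-secular terms.

```python
import numpy as np

# Spin operators: spin-1 (electron) and spin-1/2 (13C), with hbar = 1
Sz = np.diag([1.0, 0.0, -1.0])
Sx = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)
Iz = np.diag([0.5, -0.5])
Ix = np.array([[0.0, 0.5], [0.5, 0.0]])
id3, id2 = np.eye(3), np.eye(2)

# Representative parameters (frequencies in MHz, fields in Gauss)
D       = 2870.0    # NV ground-state zero-field splitting
gamma_e = 2.8       # electron gyromagnetic ratio, MHz/G
gamma_c = 1.07e-3   # 13C gyromagnetic ratio, MHz/G
A_par   = 0.1       # placeholder secular hyperfine coupling, MHz (illustrative only)
Bz      = 850.0     # static bias field along the NV axis, G

def hamiltonian(Bx=0.0):
    """Secular electron + one-13C Hamiltonian with a transverse drive field Bx (in G)."""
    H  = D * np.kron(Sz @ Sz, id2)                                  # D S_z^2
    H += gamma_e * Bz * np.kron(Sz, id2) + gamma_c * Bz * np.kron(id3, Iz)
    H += A_par * np.kron(Sz, Iz)                                    # secular hyperfine term
    H += Bx * (gamma_e * np.kron(Sx, id2) + gamma_c * np.kron(id3, Ix))
    return H
```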
Recently, experiments have been conducted to map the neighboring nuclear spin bath of an NV\({}^{-}\) center in great detail and extract the relevant intra- and inter-spin interaction strengths between nuclear and electronic spins [15]. It was found in Ref. [15] that at least 27 \({}^{13}\)C nuclei can be individually identified near the NV\({}^{-}\) center in a diamond sample with natural \({}^{13}\)C abundance but it is anticipated that coupling to many more is possible [17]. In our simulations we consider only two such nuclei but use the parameters found in Ref. [5; 15] to build an accurate model of an NV\({}^{-}\) center and its immediate environment. While a larger model would enable more accuracy, two nuclei were chosen to mitigate the memory requirements and enable a large number of quantum optimal control simulations to be run. We label basis states for this system in the eigenbasis of local spin-\(z\) operators with a labeling \(\left|s_{z,e^{-}},s_{z,{}^{14}N},s_{z,{}^{13}C},s_{z,{}^{13}C}\right\rangle\) for the electron spin, nuclear spin, and the two carbon spins, respectively. For spin states with eigenvalue \(+1\) and \(-1\) we use \(\left|\uparrow\right\rangle\) and \(\left|\downarrow\right\rangle\). For example, for a spin-1 \(S_{z}\) operator: \(S_{z}\left|\uparrow\right\rangle=+1\left|\uparrow\right\rangle\), \(S_{z}\left|\downarrow\right\rangle=-1\left|\downarrow\right\rangle\), \(S_{z}\left|0\right\rangle=0\left|0\right\rangle\). The electron spin is used to define a qubit by applying a static external magnetic field in the \(z\) direction, along the NV quantization axis. This lifts the degeneracy of the \(\left|m_{s}=\pm 1\right\rangle\) states and allows one to define a magnetically addressable two-level subspace spanned by \(\left|m_{s}=0\right\rangle\) and \(\left|m_{s}=-1\right\rangle\). This is the standard qubit subspace used in Figure 1: The numerical results for an ensemble of optimizations with a varying number of pulse basis functions, \(N\), yielding a total number of \(3N\) optimized parameters. **a)** shows the optimal final-time infidelity found for decreasing control duration. We observe an exponential increase in infidelity with decreasing control duration. In **b)** the maximum pulse amplitude is plotted as a function of decreasing control duration. For reference, the maximum amplitude required by a Gaussian envelope function with \(\pi\) area for the same duration is plotted. This Gaussian reference is what is anticipated analytically in the long-time, low-amplitude regime where the RWA is valid and the NV\({}^{-}\) center electron system can be well described as a two-level system. most applications of NV\({}^{-}\) centers and the one we consider in this work. For our simulations, we assume that a static magnetic field of \(B_{z}=850\) G is constantly applied along the NV quantization axis defined by the zero-field splitting. This static bias ensures that the qubit transition frequency of about 0.49 GHz is separated from the \(\left|m_{s}=0\right\rangle\rightarrow\left|m_{s}=+1\right\rangle\) transition by about 5 GHz. It has been shown in experiments that this enables precise addressing of the qubit subspace even during strong, fast driving [13]. We assume \(B_{y}(t)=0\) and control the qubit by tuning the drive \(B_{x}(t)\). We decompose this drive as the sum of parametrized sinusoidal functions and optimize the respective amplitude, frequency, and phase of each component, see Appendix A for additional details and methods. 
Typical \(\pi\)-pulse controls for NV\({}^{-}\) center systems occur on the length of tens to hundreds of nanoseconds [18; 13]. In this work we consider control duration less than 10 ns. In this time regime, decoherence from both amplitude damping and spin-bath induced dephasing effects are negligible and we can consider only a unitary simulation without significant loss of accuracy [16]. ## III Numerical results We implement an ensemble of quantum optimal control tasks to find population-inverting pulse waveforms for decreasing control duration. Our first set of numerical results is the infidelity of each pulse optimized for population inversion, shown in Fig. 1a for various number \(N\) of pulse basis functions. We observe that there is an exponentially increasing infidelity with decreased control duration. Specifically, we note that control duration greater than 1 ns can achieve almost arbitrarily good control over the qubit subspace. However, at intermediate times, \(10^{-1}\text{ ns}\leq t\leq 1\text{ ns}\), the optimal controls consistently obtain infidelities around \(10^{-5}\). While not arbitrarily good control, this sub-nanosecond timescale is an interesting regime, because these fidelities are still relatively high from the standpoint of noisy intermediate-scale quantum systems and may be useful for several applications [5]. We also vary the total number of basis functions, \(N\), used to describe the pulse in the sub-ns regime. We observe that this only negligibly affects the attainable infidelity, suggesting that this infidelity limit is a fundamental property of the quantum system, not the assumed control decomposition. In addition to the achievable fidelity, it is critical to examine the properties of the controls used to achieve these fidelities. In Fig. 1b we observe an exponentially increasing maximum pulse amplitude with decreasing time. While this is overall expected from quantum theory, the growth in pulse amplitude observed is more pronounced than what one would expect from purely analytical results. We plot for reference the required pulse amplitude for a \(\pi\)-pulse using a Gaussian envelope function and observe that the optimal pulses begin deviating significantly from the expected required amplitude around 1 ns. Within the two-level and rotating wave approximations, one expects from theory that the integrated area of the optimal pulse envelope will have integer multiples of \(\pi\) area in order to induce population inversion. We observe this effect near \(7-10\text{ ns}\) where a hierarchy of optimal pulse amplitudes are found, separated roughly by a integer multiple. However, even our results for 10 ns deviate from the ideal Gaussian \(\pi\)-pulse because we include all effects beyond the two-level and rotating wave approximations. The required pulse amplitude is only one limitation on the feasibility of a control pulse. In addition, the frequency components needed to generate the pulse must be quantified. This is visualized in Fig. 2 where all amplitude and frequency components (the \(a_{i},\omega_{i}\) in Eq. (A3)) for each optimization run are plotted on the left and only the most optimal pulse components are shown on the Figure 2: The optimal amplitude and frequency components of each sinusoidal basis function for all optimizations at various control duration. **a)** the optimal amplitude and frequency components of each optimal control pulse for a set of decreasing control duration. 
**b)** a subset of the data presented on the left, plotting only the frequency and amplitude components of the lowest infidelity control pulse found for each control duration. right. We observe that in addition to the exponential increase in amplitude, an exponential increase in frequency is required to represent the optimal pulses with decreasing control duration. Interestingly, the optimal controls for pulses within 1-10 ns lie within an experimentally achievable regime. For example, the required amplitudes for each frequency component are below 1000 Gauss and require frequencies below 10 GHz. This suggests that realizing true nanosecond control over NV\({}^{-}\) center system dynamics, while experimentally challenging, is not impossible. We observe qualitatively a transition in infidelity in Fig. 1a below 1 ns control times. To elucidate the mechanism of fidelity loss with decreasing control duration, we examine the population dynamics generated by the most optimal control pulses for pulse times below 1 ns in Fig. 3. First, we observe in Fig. 3a that for control approximately 0.5 ns in length, the \(\left|m_{s}=+1\right\rangle\) state is strongly populated at intermediate control duration. This indicates that the third level of the electron spin is actually being utilized coherently as a resource to generate the unitary evolution in the qubit subspace. Critically, at the final control duration all the population leaves the \(\left|m_{s}=+1\right\rangle\) state, achieving the desired population inversion in the qubit subspace. Next, we observe in Fig. 3b, with \(T=0.3\) ns, that not all of the initial population returns to the qubit subspace at the final time. In fact, the two states \(\left|\downarrow\downarrow\uparrow\downarrow\right\rangle\) and \(\left|\downarrow\circ\downarrow\downarrow\right\rangle\) have final population \(\mathcal{O}(10^{-6})\) and \(\mathcal{O}(10^{-8})\), respectively. These two states represent partial flips of the nuclear spins of \({}^{13}C_{1}\) and \({}^{14}N\) induced by the control pulse. Importantly, however, the final-time electron spin populations of \(\left|m_{s}=0\right\rangle\) and \(\left|m_{s}=+1\right\rangle=\left|\uparrow\right\rangle\) are almost completely suppressed. This suggests that the loss in fidelity that occurs in Fig. 1a below 1 ns is primarily due to population loss to the surrounding nuclear spin environment. We now examine even shorter control duration. In Fig. 3c we observe that in addition to the final-time population of the nuclear spin environment, there is also comparable final-time population in the electron spin states, \(\left|m_{s}=0\right\rangle\) and \(\left|m_{s}=+1\right\rangle\). However, both final-time populations are still below the population loss into the nuclear spin environment, which have increased slightly for the shorter duration pulse. In Fig. 3d, at a control duration of 0.05 ns, we observe an evolution with infidelity of 0.35. This infidelity is remarkably poor and indicates that the system is likely not controllable at such short times. Examining the population dynamics, we observe that nearly all of the initial population has become mixed into the other electron spin states \(\left|m_{s}=0\right\rangle\) and \(\left|m_{s}=+1\right\rangle\) with only a relatively small increase in population lost to the nuclear spin bath. These observed population dynamics enable a simple explanation for the mechanism of short-time control and losses of fidelity seen in Fig. 1a. 
First, control pulses around 1 ns coherently utilize the \(\left|m_{s}=+1\right\rangle\) to induce the desired evolution in the qubit subspace. This is not surprising, as Fig. 2(b) shows that the pulse has frequency components on the order of the splitting between the \(\left|m_{s}=-1\right\rangle\) and \(\left|m_{s}=+1\right\rangle\) transitions. However, between approximately 0.3 ns and 1.0 ns control duration, population begins to transfer to the nuclear spin bath leading to fidelity loss. Below 0.3 ns, the residual population of \(\left|m_{s}=0\right\rangle\) and \(\left|m_{s}=+1\right\rangle\) induces further loss of fidelity until it becomes the primary source of fidelity loss below around 0.05 ns. Next, we examine the properties of the most experimentally relevant optimal pulses. Specifically, shown in Fig. 4 are the time and frequency representations for the most optimal pulses in the laboratory frame of reference. We observe that the optimal solutions begin to slowly diverge from a Gaussian-like envelope under 10 ns by adding more weight to higher frequency components. This happens slowly at first, but, for the optimal pulse at 1.0 ns, the spectral density has broadened significantly. According to our calculations, population inversion occurring \(<5\) ns would require 5-15 GHz of bandwidth to Figure 3: A set of plots visualizing the population dynamics generated by the most optimal control pulses. The initial state for all plots is \(\left|0\downarrow\downarrow\downarrow\right\rangle\). A log scale is used on the y-axis in order to highlight the intermediate and final-time leakage populations. The final-time infidelities found via a high-accuracy simulation is approximately \(3.07\times 10^{-10}\), \(5.84\times 10^{-7}\), \(3.14\times 10^{-5}\), and 0.35 for control durations of 1.0 ns, 0.3 ns, and 0.2 ns, and 0.05 ns, respectively. The Hilbert space for the system under study has 36 dimensions. However, for clarity, the only state populations plotted are those larger than \(10^{-9}\). generate the control pulse. It is important to note the overall complexity of these optimal control pulses. Each is identified using a basis of only 10 sinusoidal functions, totalling 30 parameters. In principle, this means each would require optimizing the same number of experimental parameters for calibration, suggesting that even with larger amplitudes, and larger bandwidths, the cost to calibrate these pulses should be similar [19]. ## IV Application in Multipulse Sequences NV\({}^{-}\) centers and the neighboring nuclear spin environment can be used for a variety of applications such as quantum computing and sensing. The effect of repeated \(\pi\)-pulses can be described in the language of multipulse sequences where ideal \(\pi\)-pulse sequences induce a filtering effect on the dynamical phase acquired by the qubit interacting with its environment [10]. For quantum sensing, this filtering effect couples or decouples the qubit's phase evolution to specific frequencies in an external oscillating magnetic or electric field, thus providing a way to detect and characterize external signals [4; 10]. The same mechanism is useful for quantum computing, where multipulse sequences can enhance entanglement rates between NV\({}^{-}\) center qubits and nuclear spins, allowing selective and robust quantum information processing [5; 20]. Additionally, multipulse sequences induce dynamical decoupling of the qubit from its environment, yielding longer coherence times in general [21; 22]. 
In all of these applications the population inversion must be simultaneously high-fidelity, fast, and free from non-Markovian effects [11]. Non-Markovian dynamics arise from coherent exchange of quantum information between the system and its environment on timescales comparable to the dynamics of the system [23]. These effects can lead to complex, time-correlated errors in long multipulse sequences [4]. Thus it is critical to quantify any non-Markovian effects that may arise from the optimal \(\pi\)-pulses found in this work. We probe the non-Markovianity of our optimal control pulses by simulating a multipulse sequence of repeated population inversion, i.e. repeatedly applying an \(X\) gate to the qubit subspace. This is a realization of the Carr-Purcell sequence [4; 9]. Ideally, the evolution induced in the qubit subspace during each control pulse should be unitary and involutory, thereby inducing no evolution after two \(\pi\)-pulses. In Fig. 5 we show the population of the quantum state \(\left|+\downarrow\downarrow\downarrow\right\rangle\) as a function of number of pulses applied, focusing on sub-nanosecond optimal controls, where \(\left|+\right\rangle=\frac{1}{\sqrt{2}}(\left|m_{s}=0\right\rangle+\left|m_{s} =-1\right\rangle)\) is a typical sensing state [9]. Critically, in Fig. 5a we observe that, for pulses with duration above 0.3 ns, population begins to decrease monotonically with increasing pulse number, suggesting that population leaving the qubit subspace does not return within about 500 control pulses. However, when the number of pulses increases beyond this or when the pulse duration is shorter, we observe in Fig. 5b that there can be a coherent return of population into the qubit subspace, yielding non-Markovian effects. This population collapse and revival would dramatically erode the effectiveness of these pulses for applications such as dynamical decoupling or long-time multipulse sensing where it is common to use sequences of thousands of pulses [4]. We showed for the \(T=0.05\) ns control duration in Fig. 3d that this pulse has significant residual coupling outside of the qubit subspace to the \(\left|m_{s}=+1\right\rangle\) electron state. In a multipulse sequence, this will yield fast oscillations of population with number of pulses. However, pulses of slightly longer duration were shown in Fig. 3 to primarily exchange population with the nuclear spins at a significantly lower rate. Therefore, in Fig. 5, we attribute the high frequency dynamics with population dynamics of the electron spin and low frequency population dynamics is attributed to the relatively slower population exchange with the nuclear spin environment. ## V Conclusion In this work we have explored the sub-nanosecond controllability of an NV\({}^{-}\) center electron spin qubit using high-accuracy numerical simulations and studied the feasibility of these controls for relevant applications. We have found an exponential increase in infidelity with decreasing control duration along with comparable exponential increases in the pulse amplitude and bandwidth required for control. Importantly, these amplitude and bandwidth requirements grow even faster than what we would expect analytically. We identify a regime of arbitrarily good control with pulse times near and above 1 ns in duration. 
In these control pulses, the third level of the NV\({}^{-}\) center electron spin \(\left|m_{s}=+1\right\rangle\) is coherently used as a resource in order to generate fast, nearly unitary population inversion in the qubit subspace. We also identify an intermediate regime of control duration yielding infidelity \(\mathcal{O}(10^{-5})\) just below 1 ns. We were able to determine that the loss of fidelity is due primarily to the coupling of the qubit to the neighboring nuclear spin environment. This begins to change with controls less than 0.2 ns where the electron spin qubit cannot be controlled well and infidelities become unreasonably large. Finally, we show that the optimal sub-nanosecond control pulses will actually yield non-Markovian effects when applied to multipulse sequences larger than about 500 pulses. This makes these short-time pulses potentially inadequate for tasks in quantum sensing and quantum information, even though their fidelity is relatively high. We conclude that it is possible to reduce the population inversion time to nearly 1 ns and still achieve arbitrarily good population inversion. If realized in the laboratory, these ultrashort controls could be used to substantially increase the accuracy, precision, and range of quantum sensing and information processing in NV\({}^{-}\) diamond systems. However, these controls seem to require maximum pulse amplitudes near 1000 G and bandwidth near to 10 GHz making their implementation challenging. Going further below this threshold reduces fidelity and increases control requirements exponentially. The simulations performed in this work assumed a static magnetic field of 850 G aligned with the NV\({}^{-}\) center principle axis. This is a large magnetic field and was used in previous work to create a well-isolated qubit system [13]. However, our simulations have shown that coherent use of the third electron spin level \(\left|m_{s}=+1\right\rangle\) is critical to fast control, indicating that a perfectly well-isolated qubit is not necessary and therefore large magnetic fields may not be required for ultrafast control. One straightforward route for continued work is to explore the impact of level splitting from the bias field, or even the impact of non-static bias fields on the controllability of NV\({}^{-}\) center qubits. This may be useful to further reduce the amplitude and bandwidth requirements for optimal Figure 4: A set of the most optimal identified control pulses (**a,b,c**) and their respective power spectral densities (**d,e,f**) for various control duration. In the lower row, the power spectrum of a Gaussian envelope function with \(\pi\) area for the same control duration is plotted for reference. The final-time infidelities found via a high-accuracy simulation is approximately \(3.55\times 10^{-9}\), \(4.28\times 10^{-12}\), and \(5.07\times 10^{-11}\) for control duration \(T=9.0\) ns, \(T=5.0\) ns, and \(T=1.0\) ns, respectively. Figure 5: The population dynamics of \(\left|+\downarrow\downarrow\downarrow\right\rangle\) under a typical multipulse sequence of population inversions for various optimal controls. In an ideal case, the evolution induced in the qubit subspace during each control pulse should be unitary and involutory, thereby inducing no evolution after two \(\pi\)-pulses. Therefore, only the population at an even number of pulses is shown. 
Both plots are identical, but **a)** has an x-axis truncated at 500 pulses to highlight the short-multipulse dynamics of the quantum state whereas **b)** shows the dynamics for a sequence up to 5000 pulses. The legend labels the corresponding pulse duration. pulses and enable near-term realizations in the laboratory. Looking forward, we found that below 1 ns the qubit begins to interact with the neighboring nuclear spins with the largest coupling due to the interactions with \({}^{13}C\) nuclei. Therefore, material synthesis methods using isotopic purification of the \({}^{13}\)C spin bath may provide a way to mitigate these deleterious couplings and achieve faster control. In future work, understanding the impact of material properties and synthesis strategies will be critical to push spin defect technologies towards their ultimate limits. ## VI Acknowledgements This material is based upon work supported by the U.S. Department of Energy Office of Science National Quantum Information Science Research Centers as part of the Q-NEXT center. ## Appendix A Methods ### Optimal control of NV center In this section we explain our definition of the control parametrization, constraints, and quantum optimal control task. First, we use the standard assumption that information processing is done in the reference frame rotating at the frequencies of the individual spins. We decompose the driving field as a sum of parametrized basis functions: \[b(t)=\sum_{i=1}^{N}a_{i}\sin(\omega_{i}t+\phi_{i}) \tag{1}\] where \(a_{i},\omega_{i},\phi_{i}\) are the amplitude, frequency, and phase of the sinusoidal functions and \(N\) determines the number of basis functions used. In this work we vary the total number of basis functions in order to understand optimal pulse complexity. We require that the pulses have finite time, and therefore set boundary conditions that the pulse begins and ends at zero amplitude. This is enforced by modulating the function \(b(t)\) with a flat-topped cosine function: \[\Omega(t)=\begin{cases}\frac{1-\cos(\pi t/\tau_{r})}{2}\Omega_{m}&0\leq t\leq \tau_{r}\\ \Omega_{m}&\tau_{r}\leq t\leq(\tau_{c}-\tau_{r})\\ \frac{1-\cos(\pi(\tau_{c}-t)/\tau_{r})}{2}\Omega_{m}&(\tau_{c}-\tau_{r})\leq t \leq\tau_{c},\end{cases} \tag{2}\] where \(\tau_{c}=T\) is the total control duration, \(\Omega_{m}=1\) scales the magnitude of the control pulse, and \(\tau_{r}\) is the ramp time, which was chosen to be \(0.3\tau_{c}\) to reduce spectral leakage [24]. These two functions combine to yield our parametrization of the dynamical magnetic field: \[B_{x}(\vec{\alpha},t)=\Omega(t)b(\vec{\alpha},t) \tag{3}\] where \(\vec{\alpha}=(a_{1},\omega_{1},\phi_{1},\dots,a_{N},\omega_{N},\phi_{N})\) is the vector of parameters with total length \(3N\). The global time evolution of the system for control duration \(T\) and control parameters \(\vec{\alpha}\) is formally written as \[U(\vec{\alpha},T)=\mathcal{T}\exp\left[-\frac{i}{\hbar}\int_{0}^{T}d\tau H( \vec{\alpha},\tau)\right] \tag{4}\] While the global evolution is always unitary, the quantum dynamics within a subspace will generally not be. 
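Before turning to the fidelity measure, it may help to see the control parametrization written out explicitly. The following Python sketch is a direct transcription of the drive \(b(t)\), the flat-topped cosine envelope \(\Omega(t)\), and their product \(B_{x}(\vec{\alpha},t)\); it is illustrative only (the authors' released implementation is in Julia), and the parameter layout is an assumption.

```python
import numpy as np

def pulse(t, params, tau_c, tau_r=None):
    """B_x(t) = Omega(t) * b(t) with a flat-topped cosine ramp.
    params: flat array (a_1, w_1, phi_1, ..., a_N, w_N, phi_N)."""
    if tau_r is None:
        tau_r = 0.3 * tau_c                                  # ramp time used in this work
    t = np.atleast_1d(np.asarray(t, dtype=float))
    a, w, phi = np.asarray(params).reshape(-1, 3).T
    b = np.sum(a[:, None] * np.sin(np.outer(w, t) + phi[:, None]), axis=0)
    omega = np.ones_like(t)                                  # flat top, Omega_m = 1
    rise = t < tau_r
    fall = t > (tau_c - tau_r)
    omega[rise] = 0.5 * (1.0 - np.cos(np.pi * t[rise] / tau_r))
    omega[fall] = 0.5 * (1.0 - np.cos(np.pi * (tau_c - t[fall]) / tau_r))
    return omega * b
```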
In order to identify controls that are unitary in the computational subspace, we measure the fidelity between the target population inversion in the qubit subspace, \[X_{e}=\left|m_{s}=0\right\rangle\left\langle m_{s}=-1\right|+\left|m_{s}=-1\right\rangle\left\langle m_{s}=0\right|, \tag{5}\] and the projection of the global final-time unitary evolution: \[u(\vec{\alpha},T)=P_{q}U(\vec{\alpha},T)P_{q} \tag{6}\] where \(P_{q}\) is the projector onto the qubit subspace. We measure the infidelity between the two evolutions using the following function derived from the Hilbert-Schmidt norm: \[g(\vec{\alpha},T)=1-\frac{|\operatorname{Tr}\bigl{(}u(\vec{\alpha},T)X_{e}^{\dagger}\bigr{)}|^{2}}{4}. \tag{7}\] The infidelity function ranges from 0 to 1 and obtains its minimum when the subspace evolution \(u(\vec{\alpha},T)\) is unitary and equivalent to \(X_{e}\), up to a global phase. We identify optimal controls by minimizing the infidelity function. Finally, it is important to note that the infidelity function is defined only at the final control duration \(T\). When performing optimization, this function will not penalize population leakage outside of the qubit subspace during the control duration so long as the population returns to the qubit subspace at the conclusion of the pulse.

### Numerical methods

We solve the optimal control problem using the Gradient Optimization of Analytic conTrols (GOAT) method [25]. We use the programming language Julia and various open-source packages [26]. A public release of the software can be found in a GitHub repository [27]. Our implementation uses the Julia package DifferentialEquations.jl to solve the coupled GOAT equations of motion using an order 5/4 Runge-Kutta method with adaptive time stepping [28]. For the gradient-based control optimization of \(\vec{\alpha}\), we use a limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) algorithm with a backtracking line-search method, which are implemented in the Optim.jl and LineSearches.jl packages, respectively [29]. We limit each optimization to 1000 iterations of L-BFGS and define a stopping criterion when the infinity-norm of the gradient falls below 1e-9 or the relative change in the objective function is below 1e-8. For further details on the derivations of gradients via the GOAT algorithm we refer the reader to the original manuscript introducing GOAT, our previous work, and the package documentation [25, 27, 30]. All numerical data and associated codes are available upon request.
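For reference, here is a minimal Julia sketch of the infidelity of Eq. (7) applied to a projected \(2\times 2\) subspace evolution; the example matrix standing in for \(u(\vec{\alpha},T)\) is arbitrary and is not an output of the GOAT optimization.

```julia
# Sketch of the infidelity of Eq. (7): compare a projected subspace evolution u
# against the target inversion X_e; the comparison ignores a global phase.
using LinearAlgebra

const X_e = ComplexF64[0.0 1.0; 1.0 0.0]     # target population inversion, Eq. (5)

infidelity(u) = 1 - abs2(tr(u * X_e')) / 4   # normalized so a perfect inversion gives 0

# Arbitrary stand-in for u(alpha, T): a pi-pulse with a small global phase and a
# slightly reduced norm (mimicking leakage out of the qubit subspace).
u_example = 0.999 * exp(0.01im) * ComplexF64[0.0 1.0; 1.0 0.0]

println(infidelity(X_e))               # 0.0 for a perfect inversion
println(infidelity(exp(0.3im) * X_e))  # still 0.0: a global phase does not matter
println(infidelity(u_example))         # small but nonzero
```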
2309.04897
Stationary measures for higher spin vertex models on a strip
We introduce a higher spin vertex model on a strip with fused vertex weights. This model can be regarded as a generalization of both the unfused six-vertex model on a strip [Yan22] and an 'integrable two-step Floquet dynamics' model introduced in [Van18]. We solve for the stationary measure using a fused version of the matrix product ansatz and then characterize it in terms of the Askey-Wilson process. Using this characterization, we obtain the limits of the mean density along an arbitrary down-right path. It turns out that all these models share a common phase diagram, which, after an appropriate mapping, matches the phase diagram of open ASEP, thereby establishing a universality result for this phase diagram.
Zongrui Yang
2023-09-09T23:27:59Z
http://arxiv.org/abs/2309.04897v1
# Stationary measures for higher spin vertex models on a strip ###### Abstract. We introduce a higher spin vertex model on a strip with fused vertex weights. This model can be regarded as a generalization of both the unfused six-vertex model on a strip [15] and an 'integrable two-step Floquet dynamics' model introduced in [16]. We solve for the stationary measure using a fused version of the matrix product ansatz and then characterize it in terms of the Askey-Wilson process. Using this characterization, we obtain the limits of the mean density along an arbitrary down-right path. It turns out that all these models share a common phase diagram, which, after an appropriate mapping, matches the phase diagram of open ASEP, thereby establishing a universality result for this phase diagram. ## 1. Introduction and main results ### Preface The higher spin vertex model plays a central role among probabilistic systems in the Kardar-Parisi-Zhang (KPZ) universality class, since it can be degenerated into many other systems in this class, including interacting particle systems and polymer models. For summaries of its degenerations, see [11, Figure 1], [12, Figure 1 and Figure 2]. While most studies focus on vertex models in full space, recent progress has been made towards such models with open boundary, see for example [1, 10, 11]. On a separate note, the matrix product ansatz method, introduced by [1], has been extensively adopted to study stationary measures for Markov chains, particularly for interacting particle systems. This method involves expressing the stationary measure as a product of matrices, one for each occupation number. These matrices need to satisfy certain consistency relations. In the case of open asymmetric simple exclusion process (ASEP), the matrix ansatz is related to Askey-Wilson polynomials [13] and processes [14], enabling a rigorous derivation of the phase diagram, density profile and fluctuations [15, 16, 17]. A physics paper [16] introduced a class of higher spin interacting particle systems called the 'two-step Floquet dynamics'. We refer to the spin of an interacting particle system as \(\frac{f}{2}\) if up to \(I\) many particles are allowed to occupy a single site. The stationary measures of the spin-\(\frac{I}{2}\) version of 'two-step Floquet dynamics' can be solved by a fused version of matrix product ansatz. The matrices that are involved are obtained in [16] through developing a fusion procedure for the so-called Zamolodchikov-Faddeev (ZF) and Ghoshal-Zamolodchikov (GZ) relations. It is known in the physics literature [18, 19, 12, 13] that, for integrable systems with two open boundaries (in the sense of [10]), the ZF and GZ relations are connected to the consistency relations of the matrix ansatz. [16] constructed such systems and their stationary measures for \(I\in\{1,2\}\) cases, and algebraic formulas for certain physical quantities were obtained. A recent work [16] studied the stationary measure of the unfused stochastic six-vertex model on a strip (when \(I=1\)). In this paper we study a higher spin generalization of this model and its stationary measure. In the spin-\(\frac{I}{2}\) version of such model, up to \(I\) many arrows are allowed to occupy a single edge. The higher spin vertex model on a strip has vertex weights given by the fused \(R\) and \(K\) matrices, which are constructed from the (standard) fusion procedure that goes back to [10]. The stationary measure of such a model can be solved using the matrix product ansatz. 
The matrices involved in this matrix ansatz can be obtained from the fused solutions of ZF and GZ relations that are generalized from [16]. We then utilize with modifications the techniques from [13, 17] to characterize the stationary measure in terms of the Askey-Wilson processes. Using this description, we investigate the limits of a basic (macroscopic) physical quantity of the system known as the mean density, as the size of the system going to infinity. The limits are given by different formulas within different regions, from which we obtain the phase diagram of the system. We remark that the Markov chains defined by the fused vertex model on a strip are indexed by down-right paths on the strip. In this paper we are able to study the stationary measure corresponding to an arbitrary path. It is interesting that the systems indexed by different down-right paths share the same phase diagram (but with different limits of mean density). This phenomenon has not been observed previously in [16], since the asymptotics was only obtained therein for the horizontal path. Moreover, we observe that the system corresponding to a specific down-right path (the zig-zag path) coincides with one of the 'two-step Floquet dynamics' models in [16]. Therefore our fused vertex model on a strip can be considered as a generalization of this 'two-step Floquet dynamics' system. Our results, in particular, answer an open question raised in [16, Section 4] of the mean density and the phase diagram of such (a class of) systems. The family of Markov chains studied in this paper are parameterized by the following: the system size \(N\), the number \(I\) (which controls the spin), bulk and boundary parameters \(q,\kappa,a,\theta,\epsilon,d\), and the shape of a down-right path \(\mathcal{P}\). As size \(N\) approaches infinity, this class of models share a common phase diagram, which, after an appropriate mapping, matches the phase diagram of open ASEP. We remark that the open ASEP models are parameterized by the system size \(N\) and \(q,\alpha,\beta,\gamma,\delta\). This demonstrates the 'universality' of the open ASEP phase diagram, in the sense that a family of systems share this common phase diagram. It is possible that, as mentioned in the first paragraph, the higher spin vertex model on a strip studied in this paper could also be degenerated and analytically continued into particle systems and polymer models (see [10] for an example in half space). If such procedure could also be done for the fused matrix ansatz, then one may get a description of its stationary measure. We plan to explore this direction in future research. ### Outline of the introduction In Section 1.2 we introduce the higher spin vertex model on a strip with general (unspecified) vertex weights and define a Markov chain corresponding to an arbitrary down-right path. We solve the stationary measure of this Markov chain using the matrix product ansatz in Section 1.3, assuming certain consistency relations. In Section 1.4 we introduce the fusion procedure for the \(R\) and \(K\) matrices and define the fused vertex model on a strip. We state the fusion for the ZF and GZ relations in Section 1.5, which gives a concrete matrix ansatz for the fused vertex model on a strip. In Section 1.6 we offer an alternative expression of the stationary measure as the Askey-Wilson Markov processes. We state the limits of the mean density on any down-right path (which exhibits the phase diagram) in Section 1.7. 
In Section 1.8 we demonstrate that one of the 'two-step Floquet dynamics' models in [12] can be regarded as a special case of our fused vertex model on a strip corresponding to the down-right zig-zag path. Therefore our result in particular implies the limits of mean density and phase diagram for this model in [12]. ### Higher spin vertex model on a strip We introduce the stochastic spin-\(\frac{I}{2}\) vertex model on a strip with general vertex weights and define its stationary measure on any down-right path. Suppose \(I\in\mathbb{Z}_{+}\). We consider certain configurations of arrows on the edges of the strip \[\left\{(x,y)\in\mathbb{Z}^{2}:0\leq y\leq x\leq y+N\right\}, \tag{1.1}\] where each edge can contain \(0\) up to \(I\) arrows. For all \(y\in\mathbb{Z}_{\geq 0}\), we refer vertices \((y,y)\) as left boundary vertices and \((y+N,y)\) as right boundary vertices. Other vertices on the strip are referred to as bulk vertices. For each vertex of the strip, its left and/or bottom edges are called its incoming edges, and its right and/or top edges are called its outgoing edges. We will use the word 'down-right path' to refer to a path \(\mathcal{P}\) that goes from a left boundary vertex of the strip to a right boundary vertex of the strip, with each step going downwards or rightwards by \(1\). Every down-right path on the strip has length \(N\), and there are \(N\) outgoing up/right edges emanating from the path. In the configurations that we will be interested in, each of the outgoing edges of \(\mathcal{P}\) can be occupied by \(0\) up to \(I\) arrows, which gives \((I+1)^{N}\) possible 'outgoing configurations' of \(\mathcal{P}\). We label the \(N\) outgoing edges of \(\mathcal{P}\) from the up-left start of the path to the down-right end of the path: \(p_{1},\ldots,p_{N}\in\{\uparrow,\rightarrow\}\), where \(\uparrow\) denotes a vertical edge and \(\rightarrow\) denotes a horizontal edge. The \((I+1)^{N}\) outgoing configurations of \(\mathcal{P}\) can be encoded as occupation variables \(\tau=\tau_{\mathcal{P}}=(\tau_{1},\ldots,\tau_{N})\in[[0,I]]^{N}\), where \(0\leq\tau_{i}\leq I\) denote the number of arrows occupying edge \(p_{i}\), for \(1\leq i\leq N\). Let \(\mathcal{Q}\) be any down-right path sitting above \(\mathcal{P}\), which may contain edges coinciding with edges of \(\mathcal{P}\). We denote by \(\mathbb{U}(\mathcal{P},\mathcal{Q})\) the set of vertices between \(\mathcal{P}\) and \(\mathcal{Q}\), including those on \(\mathcal{Q}\) but excluding those on \(\mathcal{P}\). Figure 1 illustrates these definitions. We write \([[0,x]]:=\mathbb{Z}\cap[0,x]\). 
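As a purely illustrative piece of bookkeeping, one possible encoding of a down-right path and an outgoing configuration in Julia is sketched below, using the labels of Figure 1 and Figure 2; the structure and field names here are assumptions of this sketch, not notation from the model itself.

```julia
# One possible encoding (illustrative only): a down-right path is recorded by its
# outgoing edges p_1, ..., p_N read from up-left to down-right, and an outgoing
# configuration by the occupation numbers tau_i in 0:I on those edges.
struct PathConfiguration
    p::Vector{Symbol}     # each entry :up or :right
    tau::Vector{Int}      # occupation numbers, 0 <= tau_i <= I
end

I = 2                                   # spin-I/2 model: up to I arrows per edge
p = [:right, :up, :right, :up, :up]     # the path P of Figure 1 (N = 5)
tau = [2, 1, 0, 1, 1]                   # the outgoing configuration of Figure 2

conf = PathConfiguration(p, tau)
N = length(conf.p)
println("number of outgoing configurations of P: ", (I + 1)^N)   # (I+1)^N = 243
println("total number of arrows on the path:     ", sum(conf.tau))
```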
Suppose that there are three matrices \[\mathrm{R}=\left(\mathrm{R}^{c,d}_{a,b}\right)_{a,b,c,d\in[[0,I]]},\qquad{}_{\ell}\mathrm{K}=\left({}_{\ell}\mathrm{K}^{d}_{a}\right)_{a,d\in[[0,I]]},\qquad{}_{r}\mathrm{K}=\left({}_{r}\mathrm{K}^{c}_{b}\right)_{b,c\in[[0,I]]},\] with nonnegative entries, which serve as the bulk, left boundary and right boundary vertex weights. We assume the following conservation and stochasticity condition: \[\mathrm{R}^{c,d}_{a,b}=0\ \text{unless}\ a+b=c+d,\qquad\sum_{c,d=0}^{I}\mathrm{R}^{c,d}_{a,b}=1,\qquad\sum_{d=0}^{I}{}_{\ell}\mathrm{K}^{d}_{a}=1,\qquad\sum_{c=0}^{I}{}_{r}\mathrm{K}^{c}_{b}=1,\quad\text{for all}\ a,b\in[[0,I]]. \tag{1.2}\]

The stochastic spin-\(\frac{I}{2}\) vertex model is a Markovian sampling procedure that generates configurations. An 'initial condition' is given as a down-right path \(\mathcal{P}\) and an outgoing configuration on it. At each vertex of the strip, we inductively sample through the following probabilities given by \(\mathrm{R}\), \({}_{\ell}\mathrm{K}\) and \({}_{r}\mathrm{K}\): \[\mathbb{P}\left(\begin{array}{c}c\\ b\end{array}\right)=\mathrm{R}_{a,b}^{c,d},\qquad\mathbb{P}\left(\begin{array}{c}d\\ a\end{array}\right)={}_{\ell}\mathrm{K}_{a}^{d},\qquad\mathbb{P}\left(\begin{array}{c}c\\ b\end{array}\right)={}_{r}\mathrm{K}_{b}^{c}, \tag{1.3}\] where \(0\leq a,b,c,d\leq I\) indicate the number of arrows contained in those edges (the arguments of \(\mathbb{P}\) are the corresponding vertex pictures, which are not reproduced here). More precisely, suppose \(\mathcal{P}\) and \(\mathcal{Q}\) are down-right paths such that \(\mathcal{Q}\) sits above \(\mathcal{P}\). An 'initial condition' is given by an outgoing configuration of \(\mathcal{P}\), i.e. initially there are arrows assigned to the outgoing edges of \(\mathcal{P}\).
Suppose we have arrived at a vertex \((x,y)\in\mathbb{U}(\mathcal{P},\mathcal{Q})\) and have sampled through all the vertices \((x^{\prime},y^{\prime})\in\mathbb{U}(\mathcal{P},\mathcal{Q})\) such that either \(y^{\prime}<y\), or \(y^{\prime}=y\) and \(x^{\prime}<x\). Then we have already assigned arrows to the incoming edges of vertex \((x,y)\). We then sample the outgoing edges of \((x,y)\) according to the three probabilities given in (1.3), in the respective cases when \((x,y)\) is a bulk/left boundary/right boundary vertex. After we sample through all the vertices in \(\mathbb{U}(\mathcal{P},\mathcal{Q})\), we get a probability measure on the set of all outgoing configurations of \(\mathcal{Q}\), whose randomness comes from the sampling procedure. See Figure 2 for an example of a configuration generated by this sampling. We will encode the above sampling procedure as a transition probability matrix \(P_{\mathcal{P},\mathcal{Q}}(\tau,\tau^{\prime})\), where \(\tau,\tau^{\prime}\in[[0,I]]^{N}\) are respectively the occupation variables of outgoing edges of \(\mathcal{P}\) and \(\mathcal{Q}\).

Figure 1. Outgoing edges of \(\mathcal{P}\) and \(\mathcal{Q}\) and set of vertices \(\mathbb{U}(\mathcal{P},\mathcal{Q})\), for \(N=5\) and for the down-right (thick) paths \(\mathcal{P}\) and \(\mathcal{Q}\) as depicted. The gray edges are outgoing edges of \(\mathcal{P}\) and \(\mathcal{Q}\). Outgoing edges of \(\mathcal{P}\) are labelled from the up-left to the down-right: \(p_{1}=\rightarrow\), \(p_{2}=\uparrow\), \(p_{3}=\rightarrow\), \(p_{4}=\uparrow\), \(p_{5}=\uparrow\). The thick nodes are vertices in \(\mathbb{U}(\mathcal{P},\mathcal{Q})\). Initially we have a (deterministic) outgoing configuration of \(\mathcal{P}\). We inductively sample through all vertices in \(\mathbb{U}(\mathcal{P},\mathcal{Q})\) and get a probability measure on the set of all outgoing configurations of \(\mathcal{Q}\). This figure is the same as [10, Figure 1].

Figure 2. An example of sampling the spin-\(\frac{I}{2}\) vertex model for \(N=5\) and \(I=2\). Down-right paths \(\mathcal{P}\) and \(\mathcal{Q}\) are the same as in Figure 1 and are omitted. The edges occupied by one arrow are depicted as thin edges and the edges occupied by two arrows are depicted as thick edges. The outgoing edges of \(\mathcal{P}\) and \(\mathcal{Q}\) are respectively \(\tau_{\mathcal{P}}=(2,1,0,1,1)\) and \(\tau_{\mathcal{Q}}=(0,2,0,2,2)\).

**Definition 1.1**.: Assume the condition (1.2) on the vertex weights. Suppose \(\mathcal{P}\) is a down-right path on the strip. Denote by \(\Upsilon_{k}\mathcal{P}\) the up-right translation of \(\mathcal{P}\) by \((k,k)\), for all \(k\in\mathbb{Z}_{\geq 0}\). We look at the outgoing configurations of the down-right paths \(\Upsilon_{k}\mathcal{P}\) and regard \(k\in\mathbb{Z}_{\geq 0}\) as time, which gives us a time-homogeneous Markov chain \((\tau(k))_{k\geq 0}\) on the state space \([[0,I]]^{N}\). This Markov chain has initial condition given by an outgoing configuration \(\tau(0)\in[[0,I]]^{N}\) of \(\mathcal{P}\), and the same transition probability matrix \(P_{\Upsilon_{k}\mathcal{P},\Upsilon_{k+1}\mathcal{P}}(\tau,\tau^{\prime})=P_{\mathcal{P},\Upsilon_{1}\mathcal{P}}(\tau,\tau^{\prime})\) in each step \(k\mapsto k+1\).
When the vertex weights (additionally) satisfy: \[\begin{split}&\mathrm{R}^{c,d}_{a,b}\in(0,1),\quad\text{for all }a,b,c,d\in[[0,I]]\text{ such that }a+b=c+d,\\ &\epsilon K^{d}_{a}\text{ and }_{r}\mathrm{K}^{c}_{b}\in(0,1),\quad \text{for all }a,b,c,d\in[[0,I]],\end{split} \tag{1.4}\] one can observe that this Markov chain is irreducible. We will be interested in the (unique) stationary measure of this system, which we refer to as the stationary measure of spin-\(\frac{I}{2}\) vertex model on a strip on \(\mathcal{P}\). **Remark 1.2**.: The conditions (1.2) and (1.4) can be understood as follows: (1) Each vertex in the system is probabilistic. (2) At the bulk, anything happens with positive probability as long as the 'conservation of arrows' property holds, i.e. the total number of arrows exiting a bulk vertex equals the total number of arrows entering this vertex. (3) At the left and right boundaries, anything happens with positive probability. In summary, arrows are conserved in the bulk, but can enter or exit the system at two open boundaries. ### Matrix product ansatz of stationary measure We will develop a matrix product ansatz based on the so-called local moves of down-right paths (which will be defined in (1.6) below), in order to solve for the stationary measure of the spin-\(\frac{I}{2}\) vertex model on a strip. This matrix product ansatz directly generalizes the recent work [10] in the spin-\(\frac{1}{2}\) (\(I=1\)) case to the higher spin cases. Assume \(\{\mu_{\mathcal{P}}\}\) is a collection of probability measures indexed by down-right paths \(\mathcal{P}\) on the strip, where each \(\mu_{\mathcal{P}}\) is supported on the set (with cardinality \((I+1)^{N}\)) of all outgoing configurations of \(\mathcal{P}\). We will make the assumption that, for any pair of down-right paths \(\mathcal{P}\) and \(\mathcal{Q}\) such that \(\mathcal{Q}\) sits above \(\mathcal{P}\), the measure \(\mu_{\mathcal{P}}\) is updated to \(\mu_{\mathcal{Q}}\) under the evolution of the vertex model on a strip: For all \(\tau^{\prime}\in[[0,I]]^{N}\), \[\sum_{\tau\in[[0,I]]^{N}}P_{\mathcal{P},\mathcal{Q}}\left(\tau,\tau^{\prime} \right)\mu_{\mathcal{P}}(\tau)=\mu_{\mathcal{Q}}(\tau^{\prime}). \tag{1.5}\] By taking \(\mathcal{Q}=\Upsilon_{1}\mathcal{P}\), we get that \(\mu_{\mathcal{P}}\) is the stationary measure of the spin-\(\frac{I}{2}\) vertex model on \(\mathcal{P}\). We introduce three types of local moves of down-right paths: (1.6) where the thick lines denote locally the down-right paths. As will be observed in Section 2.4, condition (1.5) can be guaranteed by its special case where \(\mathcal{Q}\) is taken to be a local move of \(\mathcal{P}\). We propose an ansatz of the form that \(\mu_{\mathcal{P}}\) could take: Suppose \(M_{0}^{\uparrow},\dots,M_{1}^{\uparrow}\), \(M_{0}^{\rightarrow},\dots,M_{1^{\rightarrow}}\) are elements in a (possibly noncommutative) abstract algebra \(\mathcal{A}\) and \(\langle W|\in H^{*}\) and \(|V\rangle\in H\) are two boundary vectors, where \(H\) is a linear representation space of \(\mathcal{A}\). 
We define \(\mu_{\mathcal{P}}\) by the following matrix product states: \[\mu_{\mathcal{P}}(\tau_{1},\dots,\tau_{N})=\frac{\langle W|M_{\tau_{1}}^{p_{1} }\times\cdots\times M_{\tau_{N}}^{p_{N}}|V\rangle}{\langle W|(\sum_{j=0}^{I}M _{j}^{p_{1}})\times\cdots\times(\sum_{j=0}^{I}M_{j}^{p_{N}})|V\rangle}, \tag{1.7}\] where \(p_{i}\in\{\uparrow,\rightarrow\}\), \(1\leq i\leq N\) are outgoing edges of \(\mathcal{P}\) labeled from the up-left of \(\mathcal{P}\) to the down-right of \(\mathcal{P}\), and \(0\leq\tau_{1},\dots,\tau_{N}\leq I\) are occupation variables indicating the number of arrows occupying these edges. Three types of local moves (1.6) provide the following consistency relations: For all \(0\leq c,d\leq I\), we have \[M_{c}^{\uparrow}M_{d}^{\rightarrow}=\sum_{a,b=0}^{I}\mathrm{R}^{c,d}_{a,b}M_{b} ^{\rightarrow}M_{a}^{\uparrow},\quad\langle W|M_{d}^{\rightarrow}=\sum_{a=0}^{I }\left(\epsilon K^{d}_{a}\right)\langle W|M_{a}^{\uparrow},\quad M_{c}^{ \uparrow}|V\rangle=\sum_{b=0}^{I}\left(\epsilon.K^{c}_{b}\right)M_{b}^{ \rightarrow}|V\rangle. \tag{1.8}\] We summarize this matrix product ansatz as the following theorem: **Theorem 1.3** (Matrix product ansatz).: _In this paper we will always use \(\mathcal{A}\) to denote a (possibly noncommutative) algebra over \(\mathbb{C}\), which admits a linear representation on a vector space \(H\) over \(\mathbb{C}\) with a finite or countable basis (the elements in \(H\) are finite linear combinations of basis vectors). We use \(H^{*}\) to denote the dual of \(H\), i.e. the space of linear functions from \(H\) to \(\mathbb{C}\). We will implicitly identify elements of \(\mathcal{A}\) with elements in \(\mathrm{End}(H)\), which is the space of linear transformations from \(H\) to itself._ _Assume the vertex weights \(\mathrm{R}\), \(\iota\mathrm{K}\) and \(\iota_{r}\mathrm{K}\) satisfy (1.2) and (1.4). Suppose there are elements \(M_{j}^{\uparrow}\) and \(M_{j}^{\rightarrow}\) for \(0\leq j\leq I\) in \(\mathcal{A}\) and two boundary vectors \(\langle W|\in H^{*}\) and \(|V\rangle\in H\), satisfying consistency relations (1.8). Assume that the denominator of (1.7) is nonzero._ _Consider the spin-\(\frac{I}{2}\) stochastic vertex model on a strip with width \(N\) and with sampling probabilities given by (1.3). Then for any down-right path \(\mathcal{P}\) on the strip with outgoing edges \(p_{1},\dots,p_{N}\in\{\uparrow,\rightarrow\}\), the matrix product ansatz (1.7) gives the stationary measure of the spin-\(\frac{I}{2}\) stochastic vertex model on a strip on the down-right path \(\mathcal{P}\), where \(0\leq\tau_{1},\dots,\tau_{N}\leq I\) are occupation variables._ The above theorem will be proved in Section 2.4. **Remark 1.4**.: In the numerator of (1.7), \(M_{\tau_{1}}^{p_{1}}\times\cdots\times M_{\tau_{N}}^{p_{N}}\in\mathcal{A}\) is implicitly identified to an element in \(\operatorname{End}(H)\), which gives a scalar after paired with \(\langle W|\) and \(|V\rangle\). The denominator of (1.7) is also a scalar. ### The fused vertex model on a strip For full and half space higher spin vertex models, one particular choice of vertex weights is the most extensively studied. They are referred to as the 'fused' vertex weights and are constructed through the 'fusion procedure'. The fusion procedure allows for the construction of spin-\(\frac{I}{2}\) solutions of the Yang-Baxter equation (referred to as fused \(R\) matrices) and the reflection equation (referred to as fused \(K\) matrices) from their spin-\(\frac{1}{2}\) (i.e. unfused) counterparts. 
The spin-\(\frac{I}{2}\) vertex models with fused vertex weights (along with their 'colored' or 'multi-species' versions) generally enjoy a great amount of exactly solvable or integrable structure. As mentioned in the Preface 1.1, these models can also be degenerated to many other models in the KPZ class, including particle systems and polymer models. The fusion procedure was introduced in the representation-theoretic context for \(R\) matrices in [11, 12, 13, 14, 15], and for \(K\) matrices in [16, 17, 18]. In more recent years, explicit formulas for fused matrices have appeared in the physics and probability literature, see for example [1, 1, 1, 1, 1, 1, 1, 1, 1, 10, 11, 12] for fused \(R\) matrices and [19, 10] for fused \(K\) matrices.

To define the fused vertex model on a strip, in Section 2.1 we will introduce the fusion procedure in detail and define the fused \(R\) and \(K\) matrices; we only provide a brief introduction in this subsection. We consider the fundamental solution of the Yang-Baxter equation, the stochastic six-vertex matrix \(\mathsf{R}(u)\), whose table of vertex configurations and probabilities (with entries such as \(1\) and \(\frac{q(1-u)}{1-qu}\)) is given, together with \(\mathsf{K}(u)\) and \(\overline{\mathsf{K}}(u)\), in Definition 2.2 and is not reproduced here. In the graphical notation, the spectral parameter carried by each path is indicated by the number at the starting point of the path. The operator \(\mathsf{R}_{ij}(u)\) means \(\mathsf{R}(u)\in\operatorname{End}\left(\mathbb{C}^{2}\otimes\mathbb{C}^{2}\right)\) acting on the \(i\)-th and \(j\)-th spaces and \(\mathsf{K}_{i}(u)\) means \(\mathsf{K}(u)\in\operatorname{End}\left(\mathbb{C}^{2}\right)\) acting on the \(i\)-th space.

The fusion procedure for \(R\) and \(K\) matrices will be defined in detail in Section 2.1; it involves taking collections of \(I\) columns and \(I\) rows and viewing them as one column and one row. One considers the composition of operators as shown in the graphs in part (a) and part (b) below in the \(I=3\) case, which respectively correspond to fusion for a bulk and a boundary vertex. Each intersection of two paths is assigned an \(\mathsf{R}(u)\), and the diagonals (in the boundary case (b)) are assigned a \(\mathsf{K}(u)\). The spectral parameters are indicated in the graphs and are chosen as \(q\)-geometric series on each row and column, from the right to the left and from top to bottom, except in case (b), where one needs to take square roots at the diagonal.

(a) Fusion of bulk vertex (b) Fusion of boundary vertex

The state space of the combination of \(I\) edges (where each edge can contain one arrow or no arrow) is \(\left(\mathbb{C}^{2}\right)^{\otimes I}\). It turns out that this state space is too big for our purposes. There is a subspace of this state space (referred to as the \(q\)-exchangeable subspace) which can be identified with \(\mathbb{C}^{I+1}\). We will prove that the above composed operators preserve this subspace, in the sense that if the incoming distributions from the left and/or below are \(q\)-exchangeable, then so are the outgoing distributions to the right and/or above. By restricting the composed operators to the \(q\)-exchangeable subspaces, we are able to define the fused \(R\) matrix \(\mathsf{R}^{I}(u)\in\operatorname{End}\left(\mathbb{C}^{I+1}\otimes\mathbb{C}^{I+1}\right)\) and fused \(K\) matrix \(\mathsf{K}^{I}(u)\in\operatorname{End}\left(\mathbb{C}^{I+1}\right)\).
We will also make use of another fused operator \(\overline{\mathsf{K}}^{I}(u)\in\operatorname{End}\left(\mathbb{C}^{I+1}\right)\), which is essentially a change of parameters of \(\mathsf{K}^{I}(u)\). The next theorem gives explicit expressions for our fused \(R\) matrices. **Theorem 1.6**.: _The fused \(R\) matrices \(\mathsf{R}^{I}(u)\in\operatorname{End}\left(\mathbb{C}^{I+1}\otimes\mathbb{C} ^{I+1}\right)\) (that will be defined in Definition 2.5 in Section 2.1) has the following explicit formula: For all \(0\leq a,b,c,d\leq I\), we have:_ \[\mathsf{R}^{I}(u)_{a,b}^{c,d}=\mathds{1}_{a+b=c+d}u^{d-b}q^{(d-a)I}\sum_{p=0} ^{\min(b,c)}\Phi_{q^{-1}}\left(c-p,c+d-p;u,q^{I}u\right)\Phi_{q^{-1}}\left(p, b;q^{I}/u,q^{I}\right),\] _where_ \[\Phi_{q^{-1}}(i,j;x,y):=\left(\frac{y}{x}\right)^{i}\frac{\left(x;q^{-1} \right)_{i}\left(\frac{x}{x};q^{-1}\right)_{j-i}}{\left(y;q^{-1}\right)_{j}} \frac{\left(q^{-1};q^{-1}\right)_{j}}{\left(q^{-1};q^{-1}\right)_{i}\left(q^{ -1};q^{-1}\right)_{j-i}},\] _and we use \(q\)-Pochhammer symbol \((x;s)_{n}:=(1-x)(1-sx)\dots(1-s^{n-1}x)\) for \(n\in\mathbb{N}_{0}\) (where \(\mathbb{N}_{0}:=\{0\}\cup\mathbb{Z}_{+}\))._ Proof.: Observing from the fusion procedure in Section 2.1, we notice that our \(\mathsf{R}^{I}(u)\) corresponds to the fused \(R\) matrices in [1] by \(q\mapsto 1/q\). The result follows from a specialization (for \(N=2\) and \(L=M=I\)) of formula (6.2) in [1] (this formula is originally due to [1, 10]). **Remark 1.7**.: There are other explicit formulas for \(\mathsf{R}^{I}(u)\). An explicit formula is given by the \({}_{4}\overline{\phi}_{3}\) hypergeometric functions in [13] and [1, Proposition 3.15]. Another explicit formula appears in [11]. **Remark 1.8**.: An explicit formula for the fused \(K\) matrices was obtained in the physics work [12]. Under a special parameter condition (when the \(K\) matrices are upper-triangular), this formula was rigorously proved by induction in [10]. Our fusion procedure for the \(K\) matrices is the essentially the same as in these works, modulo some change of parameters. We choose not to pursue the general formula for the fused \(K\) matrices, since this is rather technically involved and orthogonal to the focus of this paper. **Remark 1.9**.: In the \(I=2\) case, the fused matrices \(\mathsf{R}^{2}(u)\), \(\mathsf{K}^{2}(u)\) and \(\overline{\mathsf{K}}^{2}(u)\) are given by equations (3.27), (3.31) and (3.32) in [13], where \(z,t^{2},a,b,c,d\) therein correspond to \(u,q,\mathfrak{a},\mathfrak{b},\mathfrak{c},\mathfrak{d}\) in this paper. **Remark 1.10**.: In our fusion of \(K\) matrices, the spectral parameters involve \(u^{2}\) in the bulk and are taken square roots on the diagonal. This is because our reflection equation is slightly different from [10, Proposition 2.5]. Our notation is consistent with the literature on ZF and GZ relations as mentioned in the Preface 1.1. Next we define the fused vertex model on a strip, whose bulk and boundary vertex weights are defined by specializing the spectral parameters in a particular way in the fused operators \(\mathsf{R}^{I}(u)\), \(\mathsf{K}^{I}(u)\) and \(\overline{\mathsf{K}}^{I}(u)\). **Definition 1.11** (Fused vertex model on a strip).: Assume \(I\in\mathbb{Z}_{+}\). 
Suppose we have a bulk parameter \(q\), boundary parameters \(\mathfrak{a},\mathfrak{b},\mathfrak{c},\mathfrak{d}\) and a 'spectral parameter' \(\kappa\) satisfying: \[0<q<1,\quad 0<\kappa<q^{\frac{I-1}{2}},\quad\mathfrak{a},\mathfrak{b},\mathfrak{c},\mathfrak{d}>0,\quad\mathfrak{a}-\mathfrak{c}>q^{\frac{1-I}{2}}/\kappa,\quad\mathfrak{b}-\mathfrak{d}>q^{\frac{1-I}{2}}/\kappa. \tag{1.9}\] Consider the fused \(R\) matrix \(\mathsf{R}^{I}(u)\in\operatorname{End}\big{(}\mathbb{C}^{I+1}\otimes\mathbb{C}^{I+1}\big{)}\) and \(K\) matrices \(\mathsf{K}^{I}(u),\overline{\mathsf{K}}^{I}(u)\in\operatorname{End}\big{(}\mathbb{C}^{I+1}\big{)}\) (which are matrix-valued functions of \(u\in\mathbb{C}\) depending on parameters \(q,\mathfrak{a},\mathfrak{b},\mathfrak{c},\mathfrak{d}\)) that will be defined in Definition 2.5 in Section 2.1. We define, for all \(0\leq a,b,c,d\leq I\): \[\mathrm{R}^{c,d}_{a,b}:=\mathsf{R}^{I}\left(\kappa^{2}\right)_{b,a}^{d,c},\quad{}_{\ell}\mathrm{K}^{d}_{a}:=\mathsf{K}^{I}\left(\kappa\right)_{a}^{d},\quad{}_{r}\mathrm{K}^{c}_{b}:=\overline{\mathsf{K}}^{I}(1/\kappa)_{b}^{c}. \tag{1.10}\] The spin-\(\frac{I}{2}\) vertex model on a strip with these vertex weights will be referred to in this paper as the fused vertex model on a strip. Proposition 1.14 below guarantees that this model is stochastic and irreducible.

**Remark 1.12**.: In our definition (1.10) of vertex weights in the fused vertex model, we are specializing the spectral parameters in a special way, and we are also swapping the indices in the bulk vertex weights. As will become clear later on, this choice is made to ensure that our model is solvable by the fused matrix ansatz.

**Remark 1.13**.: It is possible that the fused vertex model on a strip could be obtained directly from some version of the inhomogeneous six-vertex model on a strip by treating every \(I\) columns and \(I\) rows in the model as one column and one row. However, we choose not to adopt this approach in this paper.

**Proposition 1.14**.: _When the parameters \(q,\kappa,\mathfrak{a},\mathfrak{b},\mathfrak{c},\mathfrak{d}\) satisfy (1.9), the vertex weights \(\mathrm{R}\), \({}_{\ell}\mathrm{K}\) and \({}_{r}\mathrm{K}\) defined by (1.10) satisfy conditions (1.2) and (1.4) (which guarantee stochasticity and irreducibility of the model)._

The above proposition will be proved in Section 2.2.

**Remark 1.15**.: Besides (1.9), it is possible that there are other regions of the parameters on which the fused vertex weights are stochastic. We choose not to pursue this point and only study the model under (1.9).
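To make Theorem 1.6 and the stochasticity guaranteed by Proposition 1.14 more tangible, the following Julia sketch transcribes the explicit formula for \(\mathsf{R}^{I}(u)^{c,d}_{a,b}\) and numerically sums the outgoing weights at the specialization \(u=\kappa^{2}\) used in (1.10). It is a direct transcription evaluated at illustrative parameter values, not an independently verified implementation; by Proposition 1.14 the printed sums are expected to be numerically close to 1.

```julia
# Transcription of the explicit formula for the fused R matrix in Theorem 1.6.
# qpoch(x, s, n) is the q-Pochhammer symbol (x; s)_n = (1-x)(1-sx)...(1-s^(n-1)x).
qpoch(x, s, n) = prod((1 - x * s^k for k in 0:n-1); init = 1.0)

# Phi_{q^{-1}}(i, j; x, y) as defined in Theorem 1.6, with qi = q^{-1}.
function Phi(qi, i, j, x, y)
    (y / x)^i * qpoch(x, qi, i) * qpoch(y / x, qi, j - i) / qpoch(y, qi, j) *
        qpoch(qi, qi, j) / (qpoch(qi, qi, i) * qpoch(qi, qi, j - i))
end

# R^I(u)^{c,d}_{a,b}: lower indices are incoming, upper indices are outgoing.
function fusedR(I, q, u, a, b, c, d)
    a + b == c + d || return 0.0
    qi = 1 / q
    s = sum(Phi(qi, c - p, c + d - p, u, q^I * u) * Phi(qi, p, b, q^I / u, q^I)
            for p in 0:min(b, c))
    return u^(d - b) * q^((d - a) * I) * s
end

# Numerical check of the stochasticity condition (1.2) at u = kappa^2, for
# illustrative parameters satisfying (1.9).
I, q, kappa = 2, 0.4, 0.3
for a in 0:I, b in 0:I
    total = sum(fusedR(I, q, kappa^2, a, b, c, d) for c in 0:I, d in 0:I)
    println("sum over (c, d) of R^I(kappa^2)^{c,d}_{", a, ",", b, "} = ", round(total; digits = 12))
end
```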
### Fusion of ZF and GZ relations and stationary measure In order to study the stationary measure of the fused vertex model on a strip using the matrix product ansatz in Theorem 1.3, one needs to find concrete examples of \(\{M_{j}^{\uparrow},M_{j}^{\rightarrow}\}_{j=0}^{I}\), \(\langle W|\) and \(|V\rangle\) that satisfy the consistency relations: \[M_{c}^{\uparrow}M_{d}^{\rightarrow}=\sum_{a,b=0}^{I}\operatorname{R}^{c,d}_{a, b}M_{b}^{\rightarrow}M_{a}^{\uparrow},\quad\langle W|M_{d}^{\rightarrow}=\sum_{a=0} ^{I}\left(\mathsf{\epsilon}\!\!\operatorname{K}^{d}_{a}\right)\langle W|M_{a} ^{\uparrow},\quad M_{c}^{\uparrow}|V\rangle=\sum_{b=0}^{I}\left(\mathsf{ \epsilon}\!\!\operatorname{K}^{c}_{b}\right)M_{b}^{\rightarrow}|V\rangle, \tag{1.11}\] for the vertex weights \(\operatorname{R}\), \(\operatorname{\epsilon}\!\!\operatorname{K}\) and \({}_{r}\!\!\operatorname{K}\) given by (1.10). These relations look overwhelming to solve at first, however, since the vertex weights are obtained from the fusion procedure, our insight is to obtain the elements \(\{M_{j}^{\uparrow},M_{j}^{\rightarrow}\}_{j=0}^{I}\subset\mathcal{A}\) by defining a corresponding fusion procedure of the matrix product ansatz. Such fusion was developed in the physics work [11], which was explicitly written only in the spin-\(1\) (\(I=2\)) case but it actually works well in general spin. We will rigorously develop this fusion procedure in arbitrary spin. More precisely, the fusion of matrix ansatz involves two steps. We first realize that the consistency relations (1.11) can be seen as a specialization (of the spectral parameters) in the so-called Zamolodchikov-Faddeev (ZF) and Ghoshal-Zamolodchikov (GZ) relations. It is known in the physics literature [10, 12, 13, 14, 15] that these relations respectively govern the matrix ansatz of integrable systems in the bulk and at two open boundaries. We then develop the fusion procedure of ZF and GZ relations generalizing [11]. The specialization of spectral parameters in the fused ZF and GZ relations give us (1.11). We first give the definition of the ZF and GZ relations: **Definition 1.16**.: Assume \(\mathfrak{V}\) is a vector space over \(\mathbb{C}\). Assume that \(R(u)\in\operatorname{End}\big{(}\mathfrak{V}\otimes\mathfrak{V}\big{)}\) and \(K(u),\overline{K}(u)\in\operatorname{End}\big{(}\mathfrak{V}\big{)}\) are analytic functions of \(u\), which respectively have singularities at finite subsets \(\mathcal{P}_{R}\), \(\mathcal{P}_{K}\) and \(\mathcal{P}_{\overline{K}}\) of \(\mathbb{C}\). Suppose \(\mathcal{A}\) is an abstract algebra over \(\mathbb{C}\) which admits a linear representation on a vector space \(H\). A function \(\mathbf{M}(u)\in\mathfrak{V}\otimes\mathcal{A}\) on \(u\in\mathbb{C}^{*}:=\mathbb{C}\setminus\{0\}\) satisfies ZF relation with the \(R\) matrix \(R(u)\) if for all \(x,y\in\mathbb{C}^{*}\) such that \(x/y\notin\mathcal{P}_{R}\), we have: \[\mathbf{M}(y)\otimes\mathbf{M}(x)=\widecheck{R}\left(\frac{x}{y}\right)\mathbf{ M}(x)\otimes\mathbf{M}(y), \tag{1.12}\] where \(\widecheck{R}(u)=PR(u)\) and \(P\in\operatorname{End}(\mathfrak{V}\otimes\mathfrak{V})\) swapps the two factors of the space \(\mathfrak{V}\). We remark that in the above ZF relation, the \(R\) matrix only acts on \(\mathfrak{V}\otimes\mathfrak{V}\) component but leaves the \(\mathcal{A}\) factors alone, and the multiplication in the algebra \(\mathcal{A}\) is done implicitly so that both sides of (1.12) are elements in \(\mathfrak{V}\otimes\mathfrak{V}\otimes\mathcal{A}\). 
Assume that \(\langle W|\in H^{*}\) and \(|V\rangle\in H\), which we refer to as boundary vectors. We say that \(\mathbf{M}(u)\) satisfies the first GZ relation with the \(K\) matrix \(K(u)\) if for all \(u\in\mathbb{C}^{*}\setminus\mathcal{P}_{K}\), \[\left\langle W\right|\left(K(u)\mathbf{M}\left(\frac{1}{u}\right)-\mathbf{M}( u)\right)=0. \tag{1.13}\] We say that \(\mathbf{M}(u)\) satisfies the second GZ relation with the \(K\) matrix \(\overline{K}(u)\) if for all \(u\in\mathbb{C}^{*}\setminus\mathcal{P}_{\overline{K}}\), \[\left(\overline{K}(u)\mathbf{M}\left(\frac{1}{u}\right)-\mathbf{M}(u)\right) \left|V\right\rangle=0. \tag{1.14}\] We remark that in the above GZ relations (1.13) and (1.14), the \(K\) matrices only act on the \(\mathfrak{V}\) component but leaves \(\mathcal{A}\) alone, and both boundary vectors \(\langle W|\) and \(|V\rangle\) are paired with the \(\mathcal{A}\) factor and leaves \(\mathfrak{V}\) alone. **Remark 1.17**.: We will be interested in the case \(\mathfrak{V}=\mathbb{C}^{I+1}\). Under its natural basis, the (vector-valued) function \(\mathbf{M}(u)\in\mathfrak{V}\otimes\mathcal{A}\) can be written explicitly as: \[\mathbf{M}(u)=[M_{0}(u),\ldots,M_{I}(u)]^{T},\] where \(M_{j}(u)\), \(0\leq j\leq I\) are functions of \(u\in\mathbb{C}^{*}\) with values in \(\mathcal{A}\). The ZF relation (1.12) can be written explicitly: For all \(0\leq c,d\leq I\) and all \(x,y\in\mathbb{C}^{*}\) such that \(x/y\notin\mathcal{P}_{R}\), \[M_{c}(y)M_{d}(x)=\sum_{a,b=0}^{I}R_{b,a}^{d,c}\left(\frac{x}{y}\right)M_{b}(x) M_{a}(y). \tag{1.15}\] The first GZ relation (1.13) can be written explicitly: For all \(0\leq d\leq I\) and all \(u\in\mathbb{C}^{*}\setminus\mathcal{P}_{K}\), \[\langle W|M_{d}(u)=\sum_{a=0}^{I}K_{a}^{d}(u)\bigg{\langle}W\bigg{|}M_{a} \left(\frac{1}{u}\right). \tag{1.16}\] The second GZ relation (1.14) can be written explicitly: For all \(0\leq c\leq I\) and all \(u\in\mathbb{C}^{*}\setminus\mathcal{P}_{\overline{K}}\), \[M_{c}(u)|V\rangle=\sum_{b=0}^{I}\overline{K}_{b}^{c}(u)M_{b}\left(\frac{1}{u} \right)\bigg{|}V\bigg{\rangle}. \tag{1.17}\] The next result realizes the consistency relations (1.11) as specializations of ZF and GZ relations. **Proposition 1.18**.: _Suppose \(\mathfrak{V}=\mathbb{C}^{I+1}\). Suppose \(\mathbf{M}(u)=[M_{0}(u),\ldots,M_{I}(u)]^{T}\in\mathfrak{V}\otimes\mathcal{A}\) satisfies the ZF and GZ relations with the \(R\) and \(K\) matrices \(R(u)\), \(K(u)\) and \(\overline{K}(u)\). Then for any \(\kappa\in\mathbb{C}^{*}\) such that \(\kappa^{2}\notin\mathcal{P}_{R}\), \(\kappa\notin\mathcal{P}_{K}\) and \(1/\kappa\notin\mathcal{P}_{\overline{K}}\), the following elements:_ \[M_{j}^{\uparrow}=M_{j}\left(1/\kappa\right),\quad M_{j}^{\rightarrow}=M_{j}( \kappa),\quad 0\leq j\leq I\] _satisfy the consistency relations (1.11) with respect to the vertex weights:_ \[\mathrm{R}_{a,b}^{c,d}=R_{b,a}^{d,c}\left(\kappa^{2}\right),\quad_{\ell} \mathrm{K}_{a}^{d}=K_{a}^{d}(\kappa),\quad_{r}\mathrm{K}_{b}^{c}=\overline{K} _{b}^{c}\left(1/\kappa\right),\quad 0\leq a,b,c,d\leq I.\] Proof.: This can be seen by taking \(x=\kappa\) and \(y=1/\kappa\) in the ZF relation (1.15), taking \(u=\kappa\) in the first GZ relation (1.16) and taking \(u=1/\kappa\) in the second GZ relation (1.17). We want to find the solutions of ZF and GZ relations for the fused matrices \(\mathsf{R}^{I}(u)\), \(\mathsf{K}^{I}(u)\) and \(\overline{\mathsf{K}}^{I}(u)\). We start with a solution for the unfused matrices (i.e. 
when \(I=1\)) known in the physics literature, then we perform fusion and obtain such solutions for the fused matrices. **Proposition 1.19** (Section 2.4 in [11], for \(t^{2}\mapsto q\)).: _Assume \(\mathbf{d},\mathbf{e}\in\mathcal{A}\), \(\langle W|\in H^{*}\) and \(|V\rangle\in H\) satisfy:_ \[\mathbf{d}e-q\mathbf{e}\mathbf{d}=1-q,\quad\langle W|\left(a\mathbf{e}-c \mathbf{d}+1\right)=0,\quad\left(\mathbf{\ell}\mathbf{d}-d\mathbf{e}+1\right) |V\rangle=0. \tag{1.18}\] _Define \(\mathsf{M}_{0}(u)=u+\mathbf{e}\) and \(\mathsf{M}_{1}(u)=\frac{1}{u}+\mathbf{d}\). Then the vector-valued function \(\mathbf{M}(u)=[\mathsf{M}_{0}(u),\mathsf{M}_{1}(u)]^{T}\in\mathbb{C}^{2} \otimes\mathcal{A}\) satisfies the ZF and GZ relations with the unfused matrices \(\mathsf{R}(u)\), \(\mathsf{K}(u)\) and \(\overline{\mathsf{K}}(u)\) given in Definition 2.2._ **Remark 1.20**.: We will later make use of a concrete example of \(\mathbf{d}\), \(\mathbf{e}\), \(\langle W|\) and \(|V\rangle\) satisfying (1.18), which is given by the USW representation introduced in [11], see Proposition 3.3 and Proposition 3.6. **Theorem 1.21** (Fusion of ZF and GZ relations).: _Suppose that \(\mathsf{M}_{0}(u)\) and \(\mathsf{M}_{1}(u)\) are given by Proposition 1.19 above. For any \(I\in\mathbb{Z}_{+}\), we define the functions \(\mathsf{M}_{\zeta}^{I}(u)\), \(0\leq\zeta\leq I\) on \(u\in\mathbb{C}\) with values in \(\mathcal{A}\):_ \[\mathsf{M}_{\zeta}^{I}(u):=\sum_{\begin{subarray}{c}\zeta_{1}+\ldots+\zeta_{I} =\zeta\\ \zeta_{1},\ldots,\zeta_{I}\in\{0,1\}\end{subarray}}\prod_{\begin{subarray}{c} \varepsilon\in[1,I]\end{subarray}}^{\longrightarrow}\mathsf{M}_{\zeta_{a}} \left(uq^{-\frac{I+1}{2}+a}\right)\in\mathcal{A}, \tag{1.19}\] _where the right arrow means that the product is taken from left to right in increasing order of index \(a\in[[1,I]]\). Then the vector \(\mathbf{M}^{I}(u):=\left[\mathsf{M}_{0}^{I}(u),\ldots\mathsf{M}_{I}^{I}(u) \right]^{I}\in\mathbb{C}^{I+1}\otimes\mathcal{A}\) satisfies the \(\mathrm{ZF}\) and \(\mathrm{GZ}\) relations with the fused matrices \(\mathsf{R}^{I}(u)\in\mathrm{End}\left(\mathbb{C}^{I+1}\otimes\mathbb{C}^{I+1}\right)\) and \(\mathsf{K}^{I}(u),\overline{\mathsf{K}}^{I}(u)\in\mathrm{End}\left(\mathbb{C} ^{I+1}\right)\) defined in Definition 2.5._ **Remark 1.22**.: The above theorem will be proved in Section 2.3. The key technical ingredient in this proof are alternative expressions for the fused \(R\) and \(K\) matrices in Theorem 2.8 referred to as the 'braided' forms. We believe that braid forms are interesting on their own and potentially have other uses (see Remark 2.9). **Remark 1.23**.: The unfused matrices \(\mathsf{R}(u)\), \(\mathsf{K}(u)\) and \(\overline{\mathsf{K}}(u)\) each have one or two singularities, as can be observed from (2.2). The singularities of the fused matrices \(\mathsf{R}^{I}(u)\), \(\mathsf{K}^{I}(u)\), and \(\overline{\mathsf{K}}^{I}(u)\) are precisely the finite set of points \(u\in\mathbb{C}\) where certain unfused matrices in their definitions (2.3), (2.4), and (2.5) become singular. 
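Before stating the concrete ansatz, it may help to unpack definition (1.19) in the smallest fused case. For \(I=2\), the prescribed spectral parameters are \(uq^{-1/2}\) and \(uq^{1/2}\), so

\[\mathsf{M}^{2}_{0}(u)=\mathsf{M}_{0}\left(uq^{-1/2}\right)\mathsf{M}_{0}\left(uq^{1/2}\right),\qquad\mathsf{M}^{2}_{2}(u)=\mathsf{M}_{1}\left(uq^{-1/2}\right)\mathsf{M}_{1}\left(uq^{1/2}\right),\]
\[\mathsf{M}^{2}_{1}(u)=\mathsf{M}_{1}\left(uq^{-1/2}\right)\mathsf{M}_{0}\left(uq^{1/2}\right)+\mathsf{M}_{0}\left(uq^{-1/2}\right)\mathsf{M}_{1}\left(uq^{1/2}\right),\]

with \(\mathsf{M}_{0}(u)=u+\mathbf{e}\) and \(\mathsf{M}_{1}(u)=\frac{1}{u}+\mathbf{d}\) from Proposition 1.19. Each fused element is thus a degree-\(2\) polynomial-like expression in the noncommuting generators \(\mathbf{d}\) and \(\mathbf{e}\) (compare Remark 1.25 below).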
We are now able to obtain a concrete matrix product ansatz for the fused vertex model on a strip: **Theorem 1.24** (Stationary measure for the fused vertex model on a strip).: _Consider the fused vertex model on a strip with width \(N\) in Definition 1.11, depending on parameters \(q,\kappa,a,\mathfrak{b},\mathfrak{c},\mathfrak{d}\) satisfying:_ \[0<q<1,\quad 0<\kappa<q^{\frac{I-1}{2}},\quad a,\mathfrak{b},\mathfrak{c}, \mathfrak{d}>0,\quad a-\epsilon>q^{\frac{I-1}{2}}/\kappa,\quad\mathfrak{b}- \mathfrak{d}>q^{\frac{1-I}{2}}/\kappa. \tag{1.20}\] _On any down-right path \(\mathcal{P}\) with outgoing edges \(p_{i}\in\{\uparrow,\rightarrow\}\), \(1\leq i\leq N\), the stationary measure is given by_ \[\mu_{\mathcal{P}}(\tau_{1},\ldots,\tau_{N})=\frac{\langle W|M_{\tau_{1}}^{p_{ 1}}\times\cdots\times M_{\tau_{N}}^{p_{N}}|V\rangle}{\langle W|(\sum_{j=0}^{I }M_{j}^{p_{1}})\times\cdots\times(\sum_{j=0}^{I}M_{j}^{p_{N}})|V\rangle}, \tag{1.21}\] _where \(0\leq\tau_{1},\ldots,\tau_{N}\leq I\) are occupation variables, \(M_{j}^{\uparrow}=\mathsf{M}_{j}^{I}\left(1/\kappa\right)\) and \(M_{j}^{\rightarrow}=\mathsf{M}_{j}^{I}\left(\kappa\right)\) for \(0\leq j\leq I\), where the functions \(\mathsf{M}_{j}^{I}(u)\in\mathcal{A}\) of \(u\in\mathbb{C}\) are given in Theorem 1.21 above._ **Remark 1.25**.: We remark that the elements \(M_{j}^{\uparrow}\) and \(M_{j}^{\rightarrow}\) used in the matrix product ansatz above are degree \(I\) polynomial-like expressions of \(\mathbf{d}\) and \(\mathbf{e}\) from (1.18) (where \(\mathbf{d}\) and \(\mathbf{e}\) may not be commutative). Proof.: By Theorem 1.21, \(\mathbf{M}^{I}(u):=\left[\mathsf{M}_{0}^{I}(u),\ldots\mathsf{M}_{I}^{I}(u) \right]^{T}\) satisfies the \(\mathrm{ZF}\) and \(\mathrm{GZ}\) relations with the fused matrices \(\mathsf{R}^{I}(u)\), \(\mathsf{K}^{I}(u)\) and \(\overline{\mathsf{K}}^{I}(u)\). By Proposition 1.18, the elements \(M_{j}^{\uparrow}\) and \(M_{j}^{\rightarrow}\) appearing in the matrix product ansatz satisfy the consistency relations (1.11) with the vertex weights \(\mathrm{R}\), \({}_{\mathfrak{c}}\mathrm{K}\) and \({}_{r}\mathrm{K}\) defined in Definition 1.11. By Theorem 1.3, the unique stationary measure of the system is given by (1.21). ### Stationary measure in terms of the Askey-Wilson process To study the asymptotic behavior of the stationary measure of the fused vertex model using the matrix product ansatz provided in Theorem 1.24 above, we will largely adopt a particular method of handling the matrix ansatz which was developed within a line of research [1, 10, 11] concerning the stationary measure of another model known as the open asymmetric simple exclusion process (ASEP). This method involves expressing the joint generating function of the stationary measure in terms of expectations of an auxiliary Markov process, commonly known as the Askey-Wilson process due to its connections with the so-called Askey-Wilson orthogonal polynomials. We will provide an introduction to the background of the Askey-Wilson process in Section 3.1 and to certain aspects of this method in Section 3.2, however we will not introduce the open ASEP model itself since it is not explicitly needed for our purposes. We refer the interested reader to a nice survey paper [13]. 
The next theorem expresses the joint generating function of the stationary measure for the fused vertex model on a strip in terms of expectations of the Askey-Wilson process: **Theorem 1.26**.: _Consider the fused vertex model on a strip defined in Definition 1.11, with parameters \(q,\kappa,a,\mathfrak{b},\mathfrak{c},\mathfrak{d}\) satisfying (1.20), where we recall that \(q,a,\mathfrak{b},\mathfrak{c},\mathfrak{d}\) are parameters in the fused \(R\) and \(K\) matrices, and \(\kappa\) plays the role of spectral parameters. We will use an alternative parametrization of the model by \(q,\kappa,A,B,C,D\) defined in Definition 3.7. We assume that \(AC<1\). We use \(\left(Y_{t}\right)_{t\geq 0}\) to denote the Askey-Wilson Markov process with parameters \(\left(A,B,C,D,q\right)\), which will be defined in Section 3.1._ _For any down-right path \(\mathcal{P}\) on the strip, recall that the outgoing edges of \(\mathcal{P}\) are labelled by \(p_{1},\ldots,p_{N}\in\{\uparrow,\rightarrow\}\) from the up-left of \(\mathcal{P}\) to the down-right of \(\mathcal{P}\). We define a set of numbers \(\upsilon_{1},\ldots,\upsilon_{N}\in\{-1,1\}\) as \(\upsilon_{i}=1\) if \(p_{i}=\uparrow\) and \(\upsilon_{i}=-1\) if \(p_{i}=\rightarrow\) for \(1\leq i\leq N\)._ _The joint generating function of stationary measure \(\mu_{\mathcal{P}}\) of the fused vertex model on a strip on the down-right path \(\mathcal{P}\) has the following expression in terms of the Askey-Wilson process: For any \(0<t_{1}\leq\cdots\leq t_{N}\):_ \[\mathbb{E}_{\mu_{\mathcal{P}}}\left[\prod_{i=1}^{N}t_{i}^{\tau_{i}}\right]= \frac{\mathbb{E}\left[\prod_{i=1}^{N}\prod_{a=1}^{I}\left(2\sqrt{t_{i}}Y_{t_{i} }+t_{i}q^{\frac{I+1}{2}-a}\kappa^{\upsilon_{i}}+q^{-\frac{I+1}{2}+a}\kappa^{ -\upsilon_{i}}\right)\right]}{\mathbb{E}\left[\prod_{a=1}^{I}\left(2Y_{1}+q^{ \frac{I+1}{2}-a}\kappa+q^{-\frac{I+1}{2}+a}\kappa^{-1}\right)^{N}\right]}, \tag{1.22}\] _where \(0\leq\tau_{1},\ldots,\tau_{N}\leq I\) are occupation variables indicating the number of arrows occupying edges \(p_{1},\ldots,p_{N}\), and the expectations on the RHS of (1.22) are expectations under the Askey-Wilson process \(\left(Y_{t}\right)_{t\geq 0}\)._ The above theorem will be proved in Section 3.3. ### Mean arrow density and phase diagram We study the asymptotic behavior of a basic (macroscopic) statistical quantity of the stationary measure as the system size \(N\to\infty\). This statistical quantity is known as the'mean arrow density' defined as the total number of arrows within the system divided by the system size \(N\). This quantity is parallel to the'mean particle density' in the context of open ASEP. We will show that, similar to the case of open ASEP (and of the six-vertex model on a strip in [10]), the limit of the mean arrow density of fused vertex model on a strip exhibits a phase diagram involving the high density phase, the low density phase and the maximal current phase. We observe a novel and interesting phenomenon: corresponding to the (sequences of) down-right paths with different limit shapes, the stationary measures share the same phase diagram but have different limits for the mean arrow density. **Theorem 1.27**.: _Consider the fused vertex model on a strip in Definition 1.11. We fix all the parameters of the system but vary the system size (width of the strip) \(N\). 
Assume \(AC<1\) and \(A,C\notin\{q^{-l}:l\in\mathbb{N}_{0}\}\)._ _Consider a sequence of down-right paths \(\{\mathcal{P}^{N}\}_{N=1}^{\infty}\), where each \(\mathcal{P}^{N}\) lies on the strip with width \(N\). We denote by \(0\leq\phi_{N}\leq N\) the total number of horizontal edges within the down-right path \(\mathcal{P}^{N}\). We assume that as \(N\to\infty\), the sequence \(\phi_{N}/N\) converges to some number \(\lambda\in[0,1]\). Then as \(N\to\infty\), the limit of the mean arrow density of the stationary measure \(\mu_{\mathcal{P}^{N}}\) of fused vertex model on a strip on \(\mathcal{P}^{N}\) is given by:_ \[\lim_{N\to\infty}\mathbb{E}_{\mu_{\mathcal{P}^{N}}}\left[\frac{1}{N}\sum_{i=1} ^{N}\tau_{i}\right]=\lambda G(\kappa)+(1-\lambda)G\left(\frac{1}{\kappa} \right), \tag{1.23}\] _where \(G\) is the following function:_ \[G(x)=\begin{cases}\frac{\sum_{a=1}^{I}\frac{x}{x+q^{-\frac{L+1}{2}+\epsilon}} }{\sum_{a=1}^{I}\frac{Ax}{Ax+q^{-\frac{L+1}{2}+\epsilon}}},&A<1\text{ (high density phase)},\\ \sum_{a=1}^{I}\frac{x}{x+Cq^{-\frac{L+1}{2}+\epsilon}},&C>1\text{ (low density phase)}.\end{cases} \tag{1.24}\] The above theorem will be proved in Section 3.4. **Remark 1.28**.: We note that our phase diagram (Figure 3) corresponds to the phase diagram in [10] for the six-vertex model on a strip (i.e. the \(I=1\) case) but with a different parameterization. **Remark 1.29**.: We have made the assumption \(AC<1\) in Theorem 1.26 and Theorem 1.27. This corresponds to the shaded area in the phase diagram (Figure 3), which we refer to as the 'fan region' of the fused vertex model on a strip. This constraint is needed to guarantee the existence of the Askey-Wilson Markov process. In a recent paper [14], an extension of this method was provided for open ASEP. Instead of expectations of Askey-Wilson processes, the stationary measure of open ASEP was expressed therein in the'shock region' \(AC>1\) as integrations with respect to the so-called Askey-Wilson signed measures. Using such an expression, the density profiles and fluctuations of open ASEP were obtained therein in the shock region. It is possible that, by similar methods, the stationary measure for the fused vertex model on a strip could be characterized in the shock region \(AC>1\) by Askey-Wilson signed measures, from which the limits of mean arrow density could be studied. We choose not to pursue this direction and leave it for future works. Figure 3. Phase diagram of the fused vertex model on a strip. LD, HD, MC respectively stand for low density, high density and maximal current (phases). ### Integrable discrete-time two-step Floquet dynamics We demonstrate that one of the 'two-step Floquet dynamics' models introduced in [21] can be considered as a special case of the fused vertex model on a strip defined in this paper (Definition 1.1 and Definition 1.11 above). Suppose that the width \(N\) of the strip is an odd number, and that the down-right path \(\mathcal{P}\) is the down-right zig-zag path starting from a vertical edge, see (a) in Figure 4 above for the \(N=5\) case. The transition at each time \(k\mapsto k+1\) in the fused vertex model on a strip updates the down-right path from (a) to (c). This can be understood as a two-step update: first from (a) to (b), and then from (b) to (c). 
Therefore the transition matrix of this Markov chain can be written as (where \(\mathcal{V}=\mathbb{C}^{I+1}\) is the state space of each edge): \[\mathbb{U}^{e}\mathbb{U}^{o}=B_{1}U_{23}U_{45}\ldots U_{N-1,N}\,U_{12}U_{34} \ldots U_{N-2,N-1}\overline{B}_{N}\in\operatorname{End}\left(\mathcal{V}^{ \otimes N}\right), \tag{1.25}\] where \[\mathbb{U}^{o}=\prod_{k=1}^{(N-1)/2}U_{2k-1,2k}\overline{B}_{N},\quad\mathbb{ U}^{e}=B_{1}\prod_{k=1}^{(N-1)/2}U_{2k,2k+1}, \tag{1.26}\] correspond to the two steps of the updates discussed above, \[U=\operatorname{R}P\in\operatorname{End}(\mathcal{V}\otimes\mathcal{V}),\quad B =_{\ell}\operatorname{K}\in\operatorname{End}(\mathcal{V}),\quad\overline{B} =_{r}\operatorname{K}\in\operatorname{End}(\mathcal{V}), \tag{1.27}\] are the local operators (where \(\operatorname{R}\), \(\ell\)K and \({}_{r}\)K are the fused vertex weights and \(P\) swaps the two factors of \(\mathcal{V}\)), and \(U_{i,j}\) (resp. \(B_{i}\) or \(\overline{B}_{i}\)) in (1.25) and (1.26) above denote the operators \(U\) (resp. \(B\) or \(\overline{B}\)) acting on the \((i,j)\)-th (resp. \(i\)-th) copies of \(\mathcal{V}\) in \(\mathcal{V}^{\otimes N}\). By (1.27) and Definition 1.11, we have \[U=\operatorname{R}P=P\mathsf{R}^{I}(\kappa^{2}),\quad B=_{\ell}\operatorname{ K}=\mathsf{K}^{I}(\kappa),\quad\overline{B}=_{r}\operatorname{K}=\overline{ \mathsf{K}}^{I}\left(1/\kappa\right). \tag{1.28}\] To summarize, on the down-right zig-zag path, the Markov chain of the fused vertex model on a strip can be alternatively defined by transfer matrices by (1.25) and (1.28). This Markov chain has been previously studied by [21] for \(I\in\{1,2\}\) and referred to therein as the 'two-step Floquet dynamics' (in the open boundary and asymmetric case). See Section 2.2 and Section 3.1 of [21] for the definition, in particular (3.35) and (3.36) therein correspond to (1.28) and (1.25). The fusion procedure of vertex weights and of the matrix product ansatz was explicitly given therein in the \(I=2\) case. The partition function and the mean current of the stationary measure was calculated by the matrix ansatz, see (3.62) and (3.63) of [21]. As a special case of Theorem 1.27 for the down-right zig-zag path \(\mathcal{P}\), we are able to obtain the asymptotics of mean density and the phase diagram of the two-step Floquet dynamics with open boundaries (in the asymmetric case). This in particular answers an open question in [21, Section 4]. **Theorem 1.30**.: _Consider the integrable two-step Floquet dynamics with open boundaries (in the asymmetric case) introduced by [21], which is an interacting particle system on the lattice \(\{1,\ldots,L\}\), where up to \(I\in\{1,2\}\) many particles can occupy the same site. The definition appears in [21] in pages 306-307, together with pages 312-313 for \(I=1\) case and pages 320-323 for \(I=2\) case. The parameters \(t^{2},\kappa,a,b,c,d\) in [21] correspond to our parameters \(q,\kappa,a,\mathfrak{b},\mathfrak{c},\mathfrak{d}\)._ _As the size of the system \(L\to\infty\), the limit of the mean particle density is given by (1.23) for \(\lambda=1/2\), and the phase diagram is given by Figure 3, where we recall the definition of \(A\) and \(C\) in Definition 3.7._ **Outline of the paper.** In Section 2 we define the fusion for \(R\) and \(K\) matrices and fusion for ZF and GZ relations. 
The fused \(R\) and \(K\) matrices allow the definition of fused vertex model on a strip in Definition 1.11, and the fused ZF and GZ relations allow the construction of its stationary measure by matrix product ansatz in Theorem 1.24. In Section 3 we first prove Theorem 1.26 expressing the stationary measure in terms of Askey-Wilson processes. We then use this expression to obtain limits of mean density in Theorem 1.27. Figure 4. Two-step update for a down-right zig-zag path, when \(N=5\). The thick paths are the down-right paths, and the gray arrows denote the outgoing edges on those paths. **Acknowledgements.** The author thanks his advisor, Ivan Corwin, for suggesting this problem and for helpful discussions. The author thanks Amol Aggarwal, Ivan Corwin, Zoe Himwich, and Alisa Knizel for carefully reading the draft and providing valuable suggestions. The author was supported by Ivan Corwin's NSF grant DMS-1811143 as well as the Fernholz Foundation's "Summer Minerva Fellows" program. ## 2. Higher spin vertex model on a strip and matrix ansatz In Section 2.1 we introduce the fusion procedure and define the fused \(R\) and \(K\) matrices. In Section 2.2 we prove Proposition 1.14 that guarantee the stochasticity and irreducibility of the fused vertex model on a strip. In Section 2.3 we develop the fusion procedure for ZF and GZ relations, proving Theorem 1.21. In Section 2.4 we prove Theorem 1.3 on the matrix product ansatz for higher spin vertex models on a strip. ### Fusion of vertex weights In this subsection, we introduce the fusion procedure and define the spin-\(\frac{I}{2}\) fused \(R\) and \(K\) matrices from their spin-\(\frac{1}{2}\) (unfused) counterparts. **Definition 2.1**.: We first define some notions that will be useful in fusion. 1. We denote the natural basis of \(\mathbb{C}^{I+1}\) by \(\{\mathrm{e}_{0},\ldots,\mathrm{e}_{I}\}\). We identify \(\mathbb{C}^{I+1}\) with the state space of an edge of spin-\(\frac{I}{2}\) vertex model, with \(\mathrm{e}_{j}\) corresponding to the state with \(j\) arrows occupying that edge. 2. We define a surjection \(\Pi:\big{(}\mathbb{C}^{2}\big{)}^{\otimes I}\to\mathbb{C}^{I+1}\) by \[\Pi(\mathrm{e}_{a_{1}}\otimes\ldots\otimes\mathrm{e}_{a_{I}}):=\mathrm{e}_{ \sum a_{j}}\] for all \(a_{1},\ldots,a_{I}\in\{0,1\}\). 3. A vector \[v=\sum_{a_{1},\ldots,a_{I}\in\{0,1\}}v(a_{1},\ldots,a_{I})\mathrm{e}_{a_{1}} \otimes\ldots\otimes\mathrm{e}_{a_{I}}\in\big{(}\mathbb{C}^{2}\big{)}^{\otimes I}\] (2.1) is called \(q\)-exchangeable if its coefficients satisfy \[qv(\ldots,0,1,\ldots)=v(\ldots,1,0,\ldots)\] on any two nearby positions. The subspace of \(q\)-exchangeable vectors in \(\big{(}\mathbb{C}^{2}\big{)}^{\otimes I}\) is called the \(q\)-exchangeable subspace, denoted by \(\mathrm{Sym}_{q}^{I}\). We can regard the coefficients \(v(a_{1},\ldots,a_{I})\) in vector (2.1) as a \(\mathbb{C}\)-valued measure on \(\{0,1\}^{I}\). This measure is \(q\)-exchangeable if \(v\) is \(q\)-exchangeable. 4. For any \(I\)-tuple \((a_{1},\ldots,a_{I})\in\{0,1\}^{I}\), define \(\mathrm{inv}(a_{1},\ldots,a_{I})=\#\{1\leq i<j\leq I:a_{i}>a_{j}\}\), \(\overline{\mathrm{inv}}(a_{1},\ldots,a_{I})=\#\{1\leq i<j\leq I:a_{i}<a_{j}\}\). It is well-known that, for \(0\leq a\leq I\), \[Z_{q}(I;a)=\sum_{\begin{subarray}{c}a_{1},\ldots,a_{I}\in\{0,1\}\\ \sum a_{j}=a\end{subarray}}q^{\mathrm{inv}(a_{1},\ldots,a_{I})}=\sum_{ \begin{subarray}{c}a_{1},\ldots,a_{I}\in\{0,1\}\\ \sum a_{j}=a\end{subarray}}q^{\mathrm{inv}(a_{1},\ldots,a_{I})}=\frac{(q;q)_{I }}{(q;q)_{a}(q;q)_{I-a}}.\] 5. 
We define an injection \(\widehat{\Pi}:\mathbb{C}^{I+1}\to\big{(}\mathbb{C}^{2}\big{)}^{\otimes I}\) by \[\widehat{\Pi}(\mathrm{e}_{a}):=\frac{1}{Z_{q}(I;a)}\sum_{\begin{subarray}{c}a_ {1},\ldots,a_{I}\in\{0,1\}\\ \sum a_{j}=a\end{subarray}}q^{\mathrm{inv}(a_{1},\ldots,a_{I})}\mathrm{e}_{a_{1 }}\otimes\ldots\otimes\mathrm{e}_{a_{I}},\] for all \(0\leq a\leq I\). This injection maps \(\mathbb{C}^{I+1}\) onto the \(q\)-exchangeable subspace \(\mathrm{Sym}_{q}^{I}\) of \(\big{(}\mathbb{C}^{2}\big{)}^{\otimes I}\). We can observe that \(\Pi\circ\widehat{\Pi}=\mathrm{id}_{\mathbb{C}^{I+1}}\), and that \(\widehat{\Pi}\circ\Pi=F\in\mathrm{End}\left(\big{(}\mathbb{C}^{2}\big{)}^{ \otimes I}\right)\) is the projector (i.e. \(F^{2}=F\)) onto the subspace \(\mathrm{Sym}_{q}^{I}\). **Definition 2.2**.: We will use the following unfused (spin-\(\frac{1}{2}\)) \(R\) and \(K\) matrices \(\mathsf{R}(u)\in\mathrm{End}\left(\mathbb{C}^{2}\otimes\mathbb{C}^{2}\right)\) and \(\mathsf{K}(u),\overline{\mathsf{K}}(u)\in\mathrm{End}\left(\mathbb{C}^{2}\right)\) with parameters \(q,a,\mathpzc{b},\mathpzc{c},\mathpzc{d}\) : \[\mathsf{R}(u)=\begin{bmatrix}1&0&0&0\\ 0&\frac{q(1-u)}{1-qu}&\frac{u(1-q)}{1-qu}&0\\ 0&\frac{1-q}{1-qu}&\frac{1-qu}{1-qu}&0\\ 0&0&0&1\end{bmatrix},\quad\mathsf{K}(u)=\begin{bmatrix}\frac{(t-a)u^{2}+u}{ \omega u^{2}+u-a}&\frac{\ell(u^{2}-1)}{\omega u^{2}+u-a}\\ \frac{\ell(u^{2}-1)}{\omega u^{2}+u-a}&\frac{\ell-qu}{\omega u^{2}+u-a}\end{bmatrix}, \quad\overline{\mathsf{K}}(u)=\begin{bmatrix}\frac{(\mathpzc{b}-\mathpzc{d})u^{2}-u }{\omega u^{2}-u-\mathpzc{d}}&\frac{\ell(u^{2}-1)}{\omega u^{2}-u-\mathpzc{d}} \\ \frac{\ell(u^{2}-1)}{\omega u^{2}-u-\mathpzc{d}}&\frac{\ell-\mathpzc{d}-u}{\omega u ^{2}-u-\mathpzc{d}}\end{bmatrix}. \tag{2.2}\] The \(R\) matrix \(\mathsf{R}(u)\) above is written under the basis \(\{\mathrm{e}_{0}\otimes\mathrm{e}_{0},\mathrm{e}_{0}\otimes\mathrm{e}_{1}, \mathrm{e}_{1}\otimes\mathrm{e}_{0},\mathrm{e}_{1}\otimes\mathrm{e}_{1}\}\) of \(\mathbb{C}^{2}\otimes\mathbb{C}^{2}\). The \(K\) matrices \(\mathsf{K}(u)\) and \(\overline{\mathsf{K}}(u)\) above are written under the basis \(\{\mathrm{e}_{0},\mathrm{e}_{1}\}\) of \(\mathbb{C}^{2}\). From the unfused operators \(\mathsf{R}(u)\), \(\mathsf{K}(u)\) and \(\overline{\mathsf{K}}(u)\) above, we will next define the fused operators \(\hat{\mathsf{R}}^{I}(u)\in\operatorname{End}\left(\left(\mathbb{C}^{2}\right)^{ \otimes I}\otimes\left(\mathbb{C}^{2}\right)^{\otimes I}\right)\) and \(\hat{\mathsf{K}}^{I}(u),\hat{\overline{\mathsf{K}}}^{I}(u)\in\operatorname{ End}\left(\left(\mathbb{C}^{2}\right)^{\otimes I}\right)\). These are not yet the spin-\(\frac{I}{2}\) fused \(R\) and \(K\) matrices that we will make use of, because they are not yet projected to \(q\)-exchangeable subspaces. **Definition 2.3**.: We write left/right arrows on product signs to mean that the products are taken from left to right in decreasing/increasing orders of the index. When there are two product signs next to each other we always perform the inner one first. We write \(\mathsf{R}_{ij}(u)\) to mean \(\mathsf{R}(u)\in\operatorname{End}\left(\mathbb{C}^{2}\otimes\mathbb{C}^{2}\right)\) acting on the \(i\)-th and \(j\)-th factors of \(\left(\mathbb{C}^{2}\right)^{\otimes I}\). We write \(\mathsf{K}_{i}(u)\) (resp. \(\overline{\mathsf{K}}_{i}(u)\)) to mean \(\mathsf{K}(u)\in\operatorname{End}\left(\mathbb{C}^{2}\right)\) (resp. 
\(\overline{\mathsf{K}}(u)\in\operatorname{End}\left(\mathbb{C}^{2}\right)\)) acting on the \(i\)-th factor of \(\left(\mathbb{C}^{2}\right)^{\otimes I}\). We define the permutation operator \(P_{\omega}\in\operatorname{End}\left(\left(\mathbb{C}^{2}\right)^{\otimes I}\right)\) corresponding to any \(\omega\) in the symmetric group of \(\{1,\ldots,I\}\). We define the fused \(R\) operator \(\hat{\mathsf{R}}^{I}(u)\in\operatorname{End}\left(\left(\mathbb{C}^{2}\right)^ {\otimes I}\otimes\left(\mathbb{C}^{2}\right)^{\otimes I}\right)\) as: (2.3) In the above graph the vertical (or horizontal) path labelled \(i\) represent the \(i\)-th (or \((I+i)\)-th) factor in \(\left(\mathbb{C}^{2}\right)^{\otimes I}\otimes\left(\mathbb{C}^{2}\right)^{ \otimes I}\) respectively, for \(1\leq i\leq I\). The vertex at the intersection of a vertical path labelled \(i\) and a horizontal path labelled \(j\) represents an operator \(\mathsf{R}_{i,j+I}\), with spectral parameter given in the graph. We define the fused \(K\) operator \(\hat{\mathsf{K}}^{I}(u)\in\operatorname{End}\left(\left(\mathbb{C}^{2}\right)^ {\otimes I}\right)\) as: (2.4) In the above graph the path labelled \(i\) (which first goes rightwards, hit the diagonal and then goes upwards) represent the \(i\)-th factor in \(\left(\mathbb{C}^{2}\right)^{\otimes I}\), for \(1\leq i\leq I\). The vertex at the intersection of the paths labelled \(i\) and \(j\) represents an operator \(\mathsf{R}_{i,j}\), for \(1\leq i<j\leq I\). The vertex at the turning point of the path labelled by \(i\) represents an operator \(\mathsf{K}_{i}\). The spectral parameters of these operators are given in the graph. We define the fused \(K\) operator \(\hat{\mathsf{K}}^{I}(u)\in\operatorname{End}\left(\left(\mathbb{C}^{2}\right)^ {\otimes I}\right)\) as: (2.5) In the above graph the path labelled \(i\) represent the \(i\)-th factor in \(\left(\mathbb{C}^{2}\right)^{\otimes I}\), for \(1\leq i\leq I\). The vertex at the intersection of paths labelled \(i\) and \(j\) represents an operator \(\mathsf{R}_{i,j}^{-1}\), for \(1\leq j<i\leq I\). The vertex at the turning point of path labelled by \(i\) represents an operator \(\overline{\mathsf{K}}_{i}\). Spectral parameters are given in the graph. **Lemma 2.4**.: _We have commutativity relations:_ \[\left[\hat{\mathsf{R}}^{I}(u),F_{1}\otimes F_{2}\right]=0,\quad\left[\hat{ \mathsf{K}}^{I}(u),F\right]=0,\quad\left[\stackrel{{\longrightarrow }}{{\overline{\mathsf{K}}}}^{I}(u),F\right]=0\] _where \(F=\widehat{\Pi}\circ\Pi\in\operatorname{End}\left(\left(\mathbb{C}^{2}\right)^ {\otimes I}\right)\) is the projector to the \(q\)-exchangeable subspace \(\operatorname{Sym}_{q}^{I}\). In other words, \(\hat{\mathsf{R}}^{I}(u)\) has invariant subspace \(\operatorname{Sym}_{q}^{I}\otimes\operatorname{Sym}_{q}^{I}\), and \(\hat{\mathsf{K}}^{I}(u)\) and \(\hat{\mathsf{K}}^{I}(u)\) have invariant subspace \(\operatorname{Sym}_{q}^{I}\)._ Proof.: This is a standard result that was proved for the \(R\) matrix for example in [1, Appendix B] and for the \(K\) matrix in [1, Section 6.2], although they appear in slightly different notations. Observe from (2.9) and (2.10) that \(\mathsf{R}^{-1}(u)=\mathsf{R}(u)_{\sigma\mapsto 1/q}\) and \(\overline{\mathsf{K}}(u)=\mathsf{K}(u)_{\sigma\mapsto-\ell,\mu\mapsto-\epsilon}\), hence the result for the \(\overline{K}\) matrix follows from the corresponding result for the \(K\) matrix by a change of variables. 
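As a concrete illustration of the maps in Definition 2.1, take \(I=2\). The \(q\)-exchangeable subspace \(\operatorname{Sym}_{q}^{2}\) is spanned by \(\mathrm{e}_{0}\otimes\mathrm{e}_{0}\), \(\mathrm{e}_{1}\otimes\mathrm{e}_{1}\) and \(\widehat{\Pi}(\mathrm{e}_{1})=\frac{1}{1+q}\left(\mathrm{e}_{0}\otimes\mathrm{e}_{1}+q\,\mathrm{e}_{1}\otimes\mathrm{e}_{0}\right)\) (here \(Z_{q}(2;1)=1+q\)), and in the basis \(\{\mathrm{e}_{0}\otimes\mathrm{e}_{0},\mathrm{e}_{0}\otimes\mathrm{e}_{1},\mathrm{e}_{1}\otimes\mathrm{e}_{0},\mathrm{e}_{1}\otimes\mathrm{e}_{1}\}\) the projector \(F=\widehat{\Pi}\circ\Pi\) reads
\[F=\begin{bmatrix}1&0&0&0\\ 0&\frac{1}{1+q}&\frac{1}{1+q}&0\\ 0&\frac{q}{1+q}&\frac{q}{1+q}&0\\ 0&0&0&1\end{bmatrix},\]
and one checks directly that \(F^{2}=F\).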
**Definition 2.5**.: Define the fused matrices \(\mathsf{R}^{I}(u)\in\operatorname{End}\left(\mathbb{C}^{I+1}\otimes\mathbb{C} ^{I+1}\right)\) and \(\mathsf{K}^{I}(u),\overline{\mathsf{K}}^{I}(u)\in\operatorname{End}\left( \mathbb{C}^{I+1}\right)\): \[\mathsf{R}^{I}(u)=(\Pi_{1}\Pi_{2})\circ\hat{\mathsf{R}}^{I}(u)\circ(\widehat {\Pi}_{1}\widehat{\Pi}_{2}),\quad\mathsf{K}^{I}(u)=\Pi\circ\hat{\mathsf{K}}^{ I}(u)\circ\widehat{\Pi},\quad\overline{\mathsf{K}}^{I}(u)=\Pi\circ\hat{ \mathsf{K}}^{I}(u)\circ\widehat{\Pi},\] i.e. \[\mathsf{R}^{I}(u)=(\Pi_{1}\Pi_{2})\circ\left(\stackrel{{ \longleftarrow}}{{\prod}}\stackrel{{\longrightarrow}}{{\prod}} \stackrel{{\longrightarrow}}{{\prod}}\mathsf{R}_{b,a+I}\left(uq^{b- a}\right)\right)\circ(\widehat{\Pi}_{1}\widehat{\Pi}_{2}), \tag{2.6}\] \[\mathsf{K}^{I}(u)=\Pi\circ P_{\{1,\ldots,1\}}\circ\left(\stackrel{{ \longleftarrow}}{{\prod}}\stackrel{{\longleftarrow}}{{\prod}} \mathsf{K}_{a}\left(uq^{\frac{I+1}{2}-a}\right)\stackrel{{ \longleftarrow}}{{\prod}}\mathsf{R}_{b,a}\left(u^{2}q^{I+1-a-b}\right) \right)\circ\widehat{\Pi}, \tag{2.7}\] \[\overline{\mathsf{K}}^{I}(u)=\Pi\circ P_{\{1,\ldots,1\}}\circ\left(\stackrel{{ \longleftarrow}}{{\prod}}\stackrel{{\longrightarrow}}{{\prod}} \overline{\mathsf{K}}_{a}\left(uq^{\frac{I+1}{2}-a}\right)\stackrel{{ \longrightarrow}}{{\prod}}\stackrel{{\longrightarrow}}{{\prod}} \mathsf{R}_{b,a}^{-1}\left(u^{2}q^{I+1-a-b}\right)\right)\circ\widehat{\Pi}. \tag{2.8}\] **Remark 2.6**.: One can observe from the fusion procedure that \(\overline{\mathsf{K}}^{I}(u)=\mathsf{K}^{I}(u)_{q\mapsto 1/q,\sigma\mapsto-\ell,\mu \mapsto-\epsilon}\). ### Stochasticity and irreducibility of the model: Proof of Proposition 1.14 In this subsection we prove Proposition 1.14 which states that under the following condition on parameters: \[0<q<1,\quad 0<\kappa<q^{\frac{l-1}{2}},\quad\mathsf{a},\mathfrak{b},\mathfrak{c}, d>0,\quad\mathsf{a}-\epsilon>q^{\frac{1-l}{2}}/\kappa,\quad\mathfrak{b}- \mathfrak{d}>q^{\frac{1-l}{2}}/\kappa,\] the vertex weights for the fused vertex model: \[\mathrm{R}_{a,b}^{c,d}:=\mathsf{R}^{I}\left(\kappa^{2}\right)_{b,a}^{d,c}, \quad\iota\mathsf{K}_{a}^{d}:=\mathsf{K}^{I}\left(\kappa\right)_{a}^{d},\quad \iota\mathsf{K}_{b}^{c}:=\overline{\mathsf{K}}^{I}(1/\kappa)_{b}^{c},\quad \text{ for all }0\leq a,b,c,d\leq I,\] satisfy the conditions (1.2) and (1.4), i.e. that the matrices \(\mathrm{R}\), \(\iota\mathsf{K}\) and \({}_{r}\mathsf{K}\) are stochastic, and that all of their entries are strictly positive unless \(a+b\neq c+d\) in the matrix \(\mathrm{R}\), in which case the entry equals \(0\). We recall that the matrices \(\mathsf{R}^{I}\left(\kappa^{2}\right)\), \(\mathsf{K}^{I}\left(\kappa\right)\) and \(\overline{\mathsf{K}}^{I}(1/\kappa)\) have been defined by fusion. One can observe that we only need to prove these properties for all of the (unfused) \(R\) and \(K\) matrices that appear as vertices in the graphs (2.3) (2.4) and (2.5) in Definition 2.3. 
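Note that in the unfused case \(I=1\) the products in (2.6), (2.7) and (2.8) consist of a single factor each and \(\Pi\), \(\widehat{\Pi}\) are identities, so the fused matrices reduce to the unfused ones:
\[\mathsf{R}^{1}(u)=\mathsf{R}(u),\qquad\mathsf{K}^{1}(u)=\mathsf{K}(u),\qquad\overline{\mathsf{K}}^{1}(u)=\overline{\mathsf{K}}(u),\]
while the parameter conditions above become \(0<\kappa<1\), \(\mathpzc{a}-\mathpzc{c}>1/\kappa\) and \(\mathpzc{b}-\mathpzc{d}>1/\kappa\).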
We begin by recalling the unfused matrices:
\[\mathsf{R}(u)=\begin{bmatrix}1&0&0&0\\ 0&\frac{q(1-u)}{1-qu}&\frac{u(1-q)}{1-qu}&0\\ 0&\frac{1-q}{1-qu}&\frac{1-u}{1-qu}&0\\ 0&0&0&1\end{bmatrix},\quad\mathsf{R}^{-1}(u)=\begin{bmatrix}1&0&0&0\\ 0&\frac{1-u}{q-u}&\frac{u(q-1)}{q-u}&0\\ 0&\frac{q-1}{q-u}&\frac{q(1-u)}{q-u}&0\\ 0&0&0&1\end{bmatrix}, \tag{2.9}\]
\[\mathsf{K}(u)=\begin{bmatrix}\frac{(\mathpzc{c}-\mathpzc{a})u^{2}+u}{\mathpzc{c}u^{2}+u-\mathpzc{a}}&\frac{\mathpzc{c}(u^{2}-1)}{\mathpzc{c}u^{2}+u-\mathpzc{a}}\\ \frac{\mathpzc{a}(u^{2}-1)}{\mathpzc{c}u^{2}+u-\mathpzc{a}}&\frac{\mathpzc{c}-\mathpzc{a}+u}{\mathpzc{c}u^{2}+u-\mathpzc{a}}\end{bmatrix},\quad\overline{\mathsf{K}}(u)=\begin{bmatrix}\frac{(\mathpzc{b}-\mathpzc{d})u^{2}-u}{\mathpzc{b}u^{2}-u-\mathpzc{d}}&\frac{\mathpzc{b}(u^{2}-1)}{\mathpzc{b}u^{2}-u-\mathpzc{d}}\\ \frac{\mathpzc{d}(u^{2}-1)}{\mathpzc{b}u^{2}-u-\mathpzc{d}}&\frac{\mathpzc{b}-\mathpzc{d}-u}{\mathpzc{b}u^{2}-u-\mathpzc{d}}\end{bmatrix}. \tag{2.10}\]
We first prove the following claim for the unfused \(R\) and \(K\) matrices:

**Claim 2.7**.: Suppose \(q\in(0,1)\) and \(\mathpzc{a},\mathpzc{b},\mathpzc{c},\mathpzc{d}>0\). We have the following:

* The middle \(4\) entries of \(\mathsf{R}(u)\) are positive if \(u\in(0,1)\).
* The middle \(4\) entries of \(\mathsf{R}^{-1}(u)\) are positive if \(u>1\).
* The entries of \(\mathsf{K}(u)\) are positive if \(u\in(0,1)\) and \(\mathpzc{a}-\mathpzc{c}>1/u\).
* The entries of \(\overline{\mathsf{K}}(u)\) are positive if \(u>1\) and \(\mathpzc{b}-\mathpzc{d}>u\).

Proof of Claim 2.7.: The first and second can be directly observed. When \(\mathpzc{a}-\mathpzc{c}>1/u\) and \(u\in(0,1)\) we have \((\mathpzc{c}-\mathpzc{a})u+1<0\). Hence \((\mathpzc{c}-\mathpzc{a})u^{2}+u<0\), \(\mathpzc{c}u^{2}+u-\mathpzc{a}<\mathpzc{c}u^{2}+u-\mathpzc{a}u^{2}<0\) and \(\mathpzc{c}-\mathpzc{a}+u<\mathpzc{c}-\mathpzc{a}+1/u<0\). Moreover \(\mathpzc{a}(u^{2}-1)<0\) and \(\mathpzc{c}(u^{2}-1)<0\), therefore the entries of \(\mathsf{K}(u)\) are positive. When \(\mathpzc{b}-\mathpzc{d}>u\) and \(u>1\) we have \(\mathpzc{b}-\mathpzc{d}-u>0\), \(\mathpzc{b}u^{2}-u-\mathpzc{d}>\mathpzc{b}-\mathpzc{d}-u>0\) and \((\mathpzc{b}-\mathpzc{d})u^{2}-u>u^{3}-u>0\). Moreover \(\mathpzc{b}(u^{2}-1)>0\) and \(\mathpzc{d}(u^{2}-1)>0\), therefore the entries of \(\overline{\mathsf{K}}(u)\) are positive. We conclude the proof.

We record the matrices appearing as vertices of \(\mathsf{R}^{I}(\kappa^{2})\), \(\mathsf{K}^{I}(\kappa)\) and \(\overline{\mathsf{K}}^{I}(1/\kappa)\) as follows:

* In \(\mathsf{R}^{I}(\kappa^{2})\) there are \(\mathsf{R}(u)\) for \(u=\kappa^{2}q^{i}\), \(1-I\leq i\leq I-1\).
* In \(\mathsf{K}^{I}(\kappa)\) there are \(\mathsf{R}(u)\) for \(u=\kappa^{2}q^{i}\), \(2-I\leq i\leq I-2\) and \(\mathsf{K}(u)\) for \(u=\kappa q^{i}\), \(\frac{1-I}{2}\leq i\leq\frac{I-1}{2}\).
* In \(\overline{\mathsf{K}}^{I}(1/\kappa)\) there are \(\mathsf{R}^{-1}(u)\) for \(u=q^{i}/\kappa^{2}\), \(2-I\leq i\leq I-2\) and \(\overline{\mathsf{K}}(u)\) for \(u=q^{i}/\kappa\), \(\frac{1-I}{2}\leq i\leq\frac{I-1}{2}\).

Since \(\kappa\in(0,q^{\frac{I-1}{2}})\), those \(\mathsf{R}(u)\) that appear have \(u\in(0,1)\) and those \(\mathsf{R}^{-1}(u)\) that appear have \(u>1\). Since \(\mathpzc{a}-\mathpzc{c}>q^{\frac{1-I}{2}}/\kappa\) and \(\mathpzc{b}-\mathpzc{d}>q^{\frac{1-I}{2}}/\kappa\), those \(\mathsf{K}(u)\) that appear have \(u\in(0,1)\), \(\mathpzc{a}-\mathpzc{c}>1/u\) and those \(\overline{\mathsf{K}}(u)\) that appear have \(u>1\), \(\mathpzc{b}-\mathpzc{d}>u\). By Claim 2.7 we conclude the proof.

### Fusion of ZF and GZ relations: Proof of Theorem 1.21

In this subsection we prove Theorem 1.21 on the fusion of ZF and GZ relations. In the proof we will need the following alternative form of the fused \(R\) and \(K\) matrices.
**Theorem 2.8**.: _We denote by \(P_{I,I}\) the operator that swaps the first and second factors in \(\mathbb{C}^{I+1}\otimes\mathbb{C}^{I+1}\) or in \(\left(\mathbb{C}^{2}\right)^{\otimes I}\otimes\left(\mathbb{C}^{2}\right)^{\otimes I}\). Let \(\bar{\mathsf{K}}^{I}(u)=P_{I,I}\mathsf{R}^{I}(u)\) and \(\mathring{\bar{\mathsf{R}}}^{I}(u)=P_{I,I}\bar{\mathsf{R}}^{I}(u)\). Recall that \(\bar{\mathsf{R}}(u)=P\mathsf{R}(u)\in\mathrm{End}\left(\mathbb{C}^{2}\otimes \mathbb{C}^{2}\right)\). We write \(\bar{\mathsf{K}}_{u}(u)\) to mean operator \(\bar{\mathsf{R}}(u)\) acting on the \((a,a+1)\) factors of \(\left(\mathbb{C}^{2}\right)^{\otimes I}\otimes\left(\mathbb{C}^{2}\right)^{\otimes I}\), for \(1\leq a\leq 2I-1\). We write \(\mathsf{K}_{a}(u)\) or \(\overline{\mathsf{K}}_{a}(u)\) to mean operator \(\mathsf{K}(u)\) or \(\overline{\mathsf{K}}(u)\) acting on \(a\)-th factor of \(\left(\mathbb{C}^{2}\right)^{\otimes I}\), for \(1\leq a\leq I\). Then we have the following expressions of the fused operators introduced in Definition 2.3:_ \[\mathring{\bar{\mathsf{R}}}^{I}(u) =\prod_{a\in[[1,I]]}^{\longleftarrow}\prod_{b\in[[a,a+I-1]]}^{ \longrightarrow}\bar{\mathsf{K}}_{b}\left(uq^{b+1-2a}\right), \tag{2.11}\] \[\mathring{\bar{\mathsf{K}}}^{I}(u) =\prod_{a\in[[1,I]]}^{\longleftarrow}\mathsf{K}_{1}\left(uq^{ \frac{I+1}{2}-a}\right)\prod_{b\in[[a,a-1]]}^{\longleftarrow}\bar{\mathsf{K} }_{a-b}\left(u^{2}q^{I+1-a-b}\right),\] (2.12) \[\mathring{\overline{\mathsf{K}}}^{I}(u) =\prod_{a\in[[1,I]]}^{\longrightarrow}\overline{\mathsf{K}}_{I} \left(uq^{\frac{I+1}{2}-a}\right)\prod_{b\in[[a+1,I]]}^{\longrightarrow}\bar{ \mathsf{K}}_{I+a-b}^{-1}\left(u^{2}q^{I+1-a-b}\right). \tag{2.13}\] _As a result, the fused \(R\) and \(K\) matrices in Definition 2.5 admit the following alternative expressions:_ \[\widetilde{\mathsf{K}}^{I}(u) =(\Pi_{1}\Pi_{2})\circ\left(\prod_{a\in[[1,I]]}^{\longleftarrow} \prod_{b\in[[a,a+I-1]]}^{\longrightarrow}\bar{\mathsf{K}}_{b}\left(uq^{b+1-2a} \right)\right)\circ(\widehat{\Pi}_{1}\widehat{\Pi}_{2}), \tag{2.14}\] \[\mathsf{K}^{I}(u) =\Pi\circ\left(\prod_{a\in[[1,I]]}^{\longleftarrow}\mathsf{K} _{1}\left(uq^{\frac{I+1}{2}-a}\right)\prod_{b\in[[1,a-1]]}^{\longleftarrow} \bar{\mathsf{K}}_{a-b}\left(u^{2}q^{I+1-a-b}\right)\right)\circ\widehat{\Pi},\] (2.15) \[\overline{\mathsf{K}}^{I}(u) =\Pi\circ\left(\prod_{a\in[[1,I]]}^{\longrightarrow}\overline{ \mathsf{K}}_{I}\left(uq^{\frac{I+1}{2}-a}\right)\prod_{b\in[[a+1,I]]}^{ \longrightarrow}\bar{\mathsf{K}}_{I+a-b}^{-1}\left(u^{2}q^{I+1-a-b}\right) \right)\circ\widehat{\Pi}. \tag{2.16}\] **Remark 2.9**.: The alternative forms for fused \(R\) and \(K\) matrices provided in (2.14), (2.15) and (2.16) above are known as the 'braided' forms. The braid form for fused \(R\) matrix can be found in [1, Section 5.2] in different notations. We are unable to find the braid form for the fused \(K\) matrix in the literature. An explicit formula was obtained in [13] for the fused \(R\) matrix using the braided form combined with techniques of Hecke algebras. We expect that the braid form for fused \(K\) matrix could also yield such explicit formula. Proof.: The proof is essentially a multiple (inductive) use of two basic identities: \[P_{\omega}\mathsf{R}_{ab}(u)P_{\omega^{-1}}=\mathsf{R}_{\omega(a)\omega(b)}(u), \quad P_{\omega}\mathsf{K}_{a}(u)P_{\omega^{-1}}=\mathsf{K}_{\omega(a)}(u),\] for any \(\omega\) in the symmetric group and indices \(a,b\). 
We can see the spectral parameters are always unchanged, so they do not play a role as long as they are in the correct order. We omit spectral parameters in the proof. We begin with proof of braided form (2.11) for the fused \(R\) matrix. We compare it with (2.3) and we only need to prove: \[\overset{\longleftarrow}{\prod}\prod_{a\in[[1,I]]}\overset{\longrightarrow}{ \prod}\overset{\longrightarrow}{\underset{l+1,\ldots,2,I,1,\ldots,I}{\overset{ \longrightarrow}{\prod}}}\overset{\longleftarrow}{\prod}\overset{ \longrightarrow}{\underset{l+1,\ldots,2,I,1,\ldots,I}{\overset{ \longrightarrow}{\prod}}}\overset{\longrightarrow}{\underset{k,a+I}{\overset{ \longrightarrow}{\prod}}}\overset{\longrightarrow}{\underset{k, This can be seen as starting with \(\widetilde{\mathsf{K}}_{1}=\mathsf{R}_{21}P_{12}\) and multiplying the following identity through \(2\leq i\leq a-1\): \[P_{(1,i-1|1)}\widetilde{\mathsf{K}}_{i}=\mathsf{R}_{i+1,1}P_{(1,i|1)}.\] The above identity reduces to \(\widetilde{\mathsf{K}}_{i}=P_{i,i+1}\mathsf{R}_{i,i+1}\). Hence we conclude the proof of (2.12). To prove the braided form of fused \(\overline{K}\) matrix (2.13) we note that \[\widetilde{\mathsf{K}}_{a}^{-1}=\left(P_{a,a+1}\mathsf{R}_{a,a+1}\right)^{-1} =\mathsf{R}_{a,a+1}^{-1}P_{a,a+1}=P_{a,a+1}\mathsf{R}_{a+1,a}^{-1}\] the proof is then word-by-word parallel with the proof of braided form of fused \(K\) matrix (2.12) if we replace each index \(a\) by \(I+1-a\) for \(1\leq a\leq I\) and each operator \(\mathsf{R}\) by \(\mathsf{R}^{-1}\). We now begin the proof of Theorem 1.21. Proof of Theorem 1.21.: We want to show that \(\boldsymbol{\mathsf{M}}^{I}(u):=\left[\mathsf{M}_{0}^{I}(u),\ldots\mathsf{M}_ {I}^{I}(u)\right]^{I}\in\mathbb{C}^{I+1}\otimes\mathcal{A}\) defined as: \[\mathsf{M}_{\zeta}^{I}(u):=\sum_{\begin{subarray}{c}\zeta_{1}+\cdots+\zeta_{ I}=\zeta\\ \zeta_{1},\ldots,\zeta_{I}\in\{0,1\}\end{subarray}}\prod_{a\in[[1,I]]}^{ \longrightarrow}\mathsf{M}_{\zeta_{a}}\left(uq^{-\frac{I+1}{2}+a}\right) \in\mathcal{A},\quad 0\leq\zeta\leq I, \tag{2.20}\] satisfies the ZF and GZ relations with the fused matrices \(\mathsf{R}^{I}(u)\in\operatorname{End}\left(\mathbb{C}^{I+1}\otimes\mathbb{C }^{I+1}\right)\) and \(\mathsf{K}^{I}(u),\overline{\mathsf{K}}^{I}(u)\in\operatorname{End}\left( \mathbb{C}^{I+1}\right)\) defined in Definition 2.5. We are able to write \(\boldsymbol{\mathsf{M}}^{I}(u)\) in an alternate form: \[\boldsymbol{\mathsf{M}}^{I}(u)=\Pi\circ\bigotimes_{a\in[[1,I]]}^{ \longrightarrow}\boldsymbol{\mathsf{M}}\left(uq^{-\frac{I+1}{2}+a}\right) \in\mathbb{C}^{I+1}\otimes\mathcal{A}, \tag{2.21}\] where \(\boldsymbol{\mathsf{M}}(u)=[\mathsf{M}_{0}(u),\mathsf{M}_{1}(u)]^{T}\in \mathbb{C}^{2}\otimes\mathcal{A}\). We remark that in the above equation, the factor \(\mathcal{A}^{\otimes I}\) get (implicitly) contracted to \(\mathcal{A}\) by the multiplication in the algebra \(\mathcal{A}\), and then \(\Pi\) projects \(\left(\mathbb{C}^{2}\right)^{\otimes I}\) to \(\mathbb{C}^{I+1}\). We recall from Proposition 1.19 that \(\boldsymbol{\mathsf{M}}(u)\) satisfies the ZF and GZ relations with the unfused matrices \(\mathsf{R}(u)\in\operatorname{End}\left(\mathbb{C}^{2}\otimes\mathbb{C}^{2}\right)\) and \(\mathsf{K}(u),\overline{\mathsf{K}}(u)\in\operatorname{End}\left(\mathbb{C}^{ 2}\right)\). We make use of the braided forms of the fused \(R\) and \(K\) matrices given in Theorem 2.8. By the commutativity relations in Lemma 2.4 we can get rid of the various \(\Pi\) and \(\tilde{\Pi}\). 
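For instance, for \(I=2\) the definition (2.20) expands as
\[\mathsf{M}_{0}^{2}(u)=\mathsf{M}_{0}\left(uq^{-\frac{1}{2}}\right)\mathsf{M}_{0}\left(uq^{\frac{1}{2}}\right),\quad\mathsf{M}_{1}^{2}(u)=\mathsf{M}_{0}\left(uq^{-\frac{1}{2}}\right)\mathsf{M}_{1}\left(uq^{\frac{1}{2}}\right)+\mathsf{M}_{1}\left(uq^{-\frac{1}{2}}\right)\mathsf{M}_{0}\left(uq^{\frac{1}{2}}\right),\quad\mathsf{M}_{2}^{2}(u)=\mathsf{M}_{1}\left(uq^{-\frac{1}{2}}\right)\mathsf{M}_{1}\left(uq^{\frac{1}{2}}\right),\]
so each fused component is an ordered sum of products of the unfused components with shifted spectral parameters.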
To prove the fused ZF relation \[\widetilde{\mathsf{R}}^{I}\left(\frac{x}{y}\right)\boldsymbol{\mathsf{M}}^{I} (x)\otimes\boldsymbol{\mathsf{M}}^{I}(y)=\boldsymbol{\mathsf{M}}^{I}(y) \otimes\boldsymbol{\mathsf{M}}^{I}(x), \tag{2.22}\] we only need to prove \[\prod_{i\in[[1,I]]}^{\longleftarrow}\widetilde{\mathsf{K}}_{i} \left(\frac{x}{y}q^{1-i}\right)\ldots\widetilde{\mathsf{K}}_{i+I-1}\left(\frac {x}{y}q^{I-i}\right)\bigotimes_{a\in[[1,I]]}^{\longrightarrow}\boldsymbol{ \mathsf{M}}\left(xq^{-\frac{I+1}{2}+a}\right)\otimes\bigotimes_{b\in[[1,I]]}^{ \longrightarrow}\boldsymbol{\mathsf{M}}\left(yq^{-\frac{I+1}{2}+b}\right)\] \[=\bigotimes_{b\in[[1,I]]}^{\longrightarrow}\boldsymbol{\mathsf{ M}}\left(yq^{-\frac{I+1}{2}+b}\right)\otimes\bigotimes_{a\in[[1,I]]}^{ \longrightarrow}\boldsymbol{\mathsf{M}}\left(xq^{-\frac{I+1}{2}+a}\right).\] This can be seen as an induction through the following identity for \(1\leq i\leq I\): \[\widetilde{\mathsf{K}}_{i}\left(\frac{x}{y}q^{1-i}\right)\ldots \widetilde{\mathsf{K}}_{i+I-1}\left(\frac{x}{y}q^{I-i}\right)\bigotimes_{b\in[[1,i-1]]}^{\longrightarrow}\boldsymbol{\mathsf{M}}\left(yq^{-\frac{I+1}{2}+b} \right)\otimes\bigotimes_{a\in[[1,I]]}^{\longrightarrow}\boldsymbol{\mathsf{M}} \left(xq^{-\frac{I+1}{2}+a}\right)\otimes\bigotimes_{b\in[[i,I]]}^{ \longrightarrow}\boldsymbol{\mathsf{M}}\left(yq^{-\frac{I+1}{2}+b}\right)\] \[=\bigotimes_{b\in[[1,i]]}^{\longrightarrow}\boldsymbol{\mathsf{ M}}\left(yq^{-\frac{I+1}{2}+b}\right)\otimes\bigotimes_{a\in[[1,I]]}^{ \longrightarrow}\boldsymbol{\mathsf{M}}\left(xq^{-\frac{I+1}{2}+a}\right) \otimes\bigotimes_{b\in[[i+1,I]]}^{\longrightarrow}\boldsymbol{\mathsf{M}} \left(yq^{-\frac{I+1}{2}+b}\right),\] which is itself a result of using of the unfused ZF relation: \[\widetilde{\mathsf{K}}\left(\frac{x}{y}\right)\boldsymbol{\mathsf{M}}(x) \otimes\boldsymbol{\mathsf{M}}(y)=\boldsymbol{\mathsf{M}}(y)\otimes \boldsymbol{\mathsf{M}}(x) \tag{2.23}\] \(I\) times, where each step transfers the \((I+i)\)-th factor \(\boldsymbol{\mathsf{M}}\left(yq^{-\frac{I+1}{2}+i}\right)\) in the tensor product leftwards by one. We conclude the proof of the identity and hence the proof of fused ZF relation (2.22). To prove the fused GZ relation \[\left\langle W\middle|\mathsf{K}^{I}(u)\boldsymbol{\mathsf{M}}^{I}\left(\frac{1} {u}\right)=\left\langle W\middle|\boldsymbol{\mathsf{M}}^{I}(u),\right. 
\tag{2.24}\] we only need to prove \[\left\langle W\right|\prod_{i\in[[1,I]]}^{\longleftarrow}\mathsf{K} _{1}\left(uq^{\frac{I+1}{2}-i}\right)\breve{\mathsf{R}}_{1}\left(u^{2}q^{I+2-2i }\right)\ldots\breve{\mathsf{R}}_{i-1}\left(u^{2}q^{I-i}\right)\bigotimes_{a \in[[1,I]]}^{\longrightarrow}\mathsf{M}\left(\frac{1}{u}q^{-\frac{I+1}{2}+a}\right)\] \[=\left\langle W\right|\bigotimes_{a\in[[1,I]]}^{\longrightarrow} \mathsf{M}\left(uq^{-\frac{I+1}{2}+a}\right).\] This can be seen as an induction through the following identity for \(1\leq i\leq I\): \[\left\langle W\right|\mathsf{K}_{1}\left(uq^{\frac{I+1}{2}-i} \right)\breve{\mathsf{R}}_{1}\left(u^{2}q^{I+2-2i}\right)\ldots\breve{\mathsf{ R}}_{i-1}\left(u^{2}q^{I-i}\right)\bigotimes_{a\in[[1,i-1]]}^{\longleftarrow} \mathsf{M}\left(uq^{\frac{I+1}{2}-a}\right)\otimes\bigotimes_{a\in[[i,I]]}^{ \longrightarrow}\mathsf{M}\left(\frac{1}{u}q^{-\frac{I+1}{2}+a}\right) \tag{2.25}\] \[=\left\langle W\right|\bigotimes_{a\in[[1,i]]}^{\longleftarrow} \mathsf{M}\left(uq^{\frac{I+1}{2}-a}\right)\otimes\bigotimes_{a\in[[i+1,I]]}^{ \longleftarrow}\mathsf{M}\left(\frac{1}{u}q^{-\frac{I+1}{2}+a}\right)\] The above equation can be seen as follows. Firstly the \(R\) matrices transfer the \(i\)-th factor \(\mathsf{M}\left(\frac{1}{u}q^{-\frac{I+1}{2}+i}\right)\) in the tensor product all the way to the left, where each step uses the unfused ZF relation (2.23), i.e. \[\breve{\mathsf{R}}_{1}\left(u^{2}q^{I+2-2i}\right)\ldots\breve{ \mathsf{R}}_{i-1}\left(u^{2}q^{I-i}\right)\bigotimes_{a\in[[1,i-1]]}^{ \longleftarrow}\mathsf{M}\left(uq^{\frac{I+1}{2}-a}\right)\otimes\bigotimes _{a\in[[i,I]]}^{\longrightarrow}\mathsf{M}\left(\frac{1}{u}q^{-\frac{I+1}{2}+ a}\right)\] \[=\mathsf{M}\left(\frac{1}{u}q^{-\frac{I+1}{2}+i}\right)\otimes \bigotimes_{a\in[[1,i-1]]}^{\longleftarrow}\mathsf{M}\left(uq^{\frac{I+1}{2 }-a}\right)\otimes\bigotimes_{a\in[[i+1,I]]}^{\longrightarrow}\mathsf{M}\left( \frac{1}{u}q^{-\frac{I+1}{2}+a}\right)\] Then the \(K\) matrix \(\mathsf{K}_{1}\left(uq^{\frac{I+1}{2}-i}\right)\) acts on the first factor \(\mathsf{M}\left(\frac{1}{u}q^{-\frac{I+1}{2}+i}\right)\) of the tensor product and converts the spectral parameter to its inverse \(\mathsf{M}\left(uq^{\frac{I+1}{2}-i}\right)\) by the unfused GZ relation: \[\left\langle W\right|\mathsf{K}(u)\mathsf{M}\left(\frac{1}{u}\right)=\left\langle W \right|\mathsf{M}(u). \tag{2.26}\] i.e. \[\left\langle W\right|\mathsf{K}_{1}\left(uq^{\frac{I+1}{2}-i} \right)\mathsf{M}\left(\frac{1}{u}q^{-\frac{I+1}{2}+i}\right)\otimes\bigotimes _{a\in[[1,i-1]]}^{\longleftarrow}\mathsf{M}\left(uq^{\frac{I+1}{2}-a}\right) \otimes\bigotimes_{a\in[[i+1,I]]}^{\longrightarrow}\mathsf{M}\left(\frac{1 }{u}q^{-\frac{I+1}{2}+a}\right)\] \[=\left\langle W\right|\bigotimes_{a\in[[1,i]]}^{\longleftarrow} \mathsf{M}\left(uq^{\frac{I+1}{2}-a}\right)\otimes\bigotimes_{a\in[[i+1,I]]}^{ \longrightarrow}\mathsf{M}\left(\frac{1}{u}q^{-\frac{I+1}{2}+a}\right).\] We conclude the proof of the identity (2.25) and hence the proof of the fused GZ relation (2.24). We remark that in the above proof we have implicitly used two basic facts: * If \(m_{1},m_{2}\in\mathcal{V}\otimes\mathcal{A}\) and \(\left\langle W\right|m_{1}=\left\langle W\right|m_{2}\) then \(\left\langle W\right|\Phi m_{1}=\left\langle W\right|\Phi m_{2}\) for any \(\Phi\in\operatorname{End}(\mathcal{V})\). 
* If \(m_{1},m_{2}\in\mathcal{V}\otimes\mathcal{A}\), \(m\in\mathcal{V}^{\prime}\otimes\mathcal{A}\) and \(\left\langle W\right|m_{1}=\left\langle W\right|m_{2}\) then \(\left\langle W\right|m_{1}\otimes m=\left\langle W\right|m_{2}\otimes m\). The first fact allows us to induct on the identity (2.25) and the second fact allows us to apply the unfused GZ equation (2.26) only on the first factor of a tensor product. The proof of the second fused GZ relation \[\overline{\mathsf{K}}^{I}(u)\mathsf{M}^{I}\left(\frac{1}{u}\right)\left|V \right\rangle=\mathsf{M}^{I}(u)|V\rangle \tag{2.27}\] follows from an induction that is parallel with the above proof of the first fused GZ relation. ### Proof of Theorem 1.3 In this subsection we offer the deferred proof of Theorem 1.3. This proof is very similar with the proof of [10, Theorem 2.6] in the spin-\(\frac{1}{2}\)\((I=1)\) case. Assume there are elements \(M_{j}^{\uparrow}\) and \(M_{j}^{-\uparrow}\), \(0\leq j\leq I\) in the \(\mathbb{C}\)-algebra \(\mathcal{A}\) and boundary vectors \(\left\langle W\right|\in H^{*}\) and \(\left|V\right\rangle\in H\) (recall that \(H\) is a linear representation space of \(\mathcal{A}\)) satisfying consistency relations: \[M_{c}^{\uparrow}M_{d}^{\rightarrow}=\sum_{a,b=0}^{I}\mathsf{R}_{a,b}^{c,d}M_{b}^ {\rightarrow}M_{a}^{\uparrow},\quad\left\langle W\right|M_{d}^{\rightarrow}=\sum_ {a=0}^{I}\left(\epsilon\mathsf{K}_{a}^{\rm d}\right)\left\langle W\right|M_{a }^{\uparrow},\quad M_{c}^{\uparrow}\left|V\right\rangle=\sum_{b=0}^{I}\left( \epsilon\mathsf{K}_{b}^{\rm c}\right)M_{b}^{\rightarrow}\left|V\right\rangle. \tag{2.28}\] We consider the following collection of \(\mathbb{C}\)-valued measures \(\{\mu_{\mathcal{P}}\}\) (each measure has total mass \(1\)) on the state space \([[0,r]]^{N}\), indexed by down-right paths \(\mathcal{P}\) on the strip, given by the matrix product state: \[\mu_{\mathcal{P}}(\tau_{1},\ldots,\tau_{N})=\frac{\langle W|M_{\tau_{1}}^{p_{1} }\times\cdots\times M_{\tau_{N}}^{p_{N}}|V\rangle}{\langle W|(\sum_{j=0}^{I}M_ {j}^{p_{1}})\times\cdots\times(\sum_{j=0}^{I}M_{j}^{p_{N}})|V\rangle}, \tag{2.29}\] where \(p_{i}\in\{\uparrow,\rightarrow\}\), \(1\leq i\leq N\) are outgoing edges of \(\mathcal{P}\) labelled from the up-left of \(\mathcal{P}\) to the down-right of \(\mathcal{P}\). We prove the following stationarity of collection \(\mu_{\mathcal{P}}\) under the evolution of spin-\(\frac{I}{2}\) vertex model on a strip: **Claim 2.12**.: For any down-right paths \(\mathcal{P}\) and \(\mathcal{Q}\) such that \(\mathcal{Q}\) sits above \(\mathcal{P}\), we have for all \(\tau^{\prime}\in[[0,I]]^{N}\), \[\sum_{\tau\in[[0,I]]^{N}}P_{\mathcal{P},\mathcal{Q}}(\tau,\tau^{\prime})\mu_ {\mathcal{P}}(\tau)=\mu_{\mathcal{Q}}(\tau^{\prime}). \tag{2.30}\] Observe that for the translated path \(\Upsilon_{1}\mathcal{P}=\mathcal{P}+(1,1)\), the measure \(\mu_{\Upsilon_{1}\mathcal{P}}\) is the same as \(\mu_{\mathcal{P}}\) (as \(\mathbb{C}\)-valued measures on \([[0,I]]^{N}\)), since the outgoing edges of \(\Upsilon_{1}\) are also \(p_{1},\ldots,p_{N}\in\{\uparrow,\rightarrow\}\) (so that the elements \(M_{\tau_{i}}^{p_{i}}\) in (2.29) are the same). Assume that the Claim 2.12 holds, we can take \(\mathcal{Q}=\Upsilon_{1}\mathcal{P}\) and hence \(\mu_{\mathcal{P}}\) is an eigenvector with eigenvalue \(1\) of the transition matrix \(P_{\mathcal{P},\Upsilon_{1}\mathcal{P}}(\tau,\tau^{\prime})\) of the (irreducible) Markov chain indexed by \(\mathcal{P}\). 
By the Perron-Frobenius theorem, \(\mu_{\mathcal{P}}\) is the unique stationary probability measure of this system.

We introduce three types of 'local moves' of a down-right path (diagrams omitted), where the thick lines denote locally the down-right path. By sequentially performing these local moves, a path \(\mathcal{P}\) can be updated to any other path \(\mathcal{Q}\) above it. Hence (2.30) can be guaranteed by its special case when \(\mathcal{Q}=\widetilde{\mathcal{P}}\) is a local move of \(\mathcal{P}\): For all \(\tau^{\prime}\in[[0,I]]^{N}\),
\[\sum_{\tau\in[[0,I]]^{N}}P_{\mathcal{P},\widetilde{\mathcal{P}}}(\tau,\tau^{\prime})\mu_{\mathcal{P}}(\tau)=\mu_{\widetilde{\mathcal{P}}}(\tau^{\prime}). \tag{2.31}\]
When we insert the matrix product form (2.29) of \(\mu_{\mathcal{P}}\) and \(\mu_{\widetilde{\mathcal{P}}}\) into the above equation, all of the factors coincide except for the two that go through a local move in the bulk, or the one that goes through a local move at the left/right boundary. We only need to keep track of the factors that have been updated. In the corresponding diagrams (omitted) the thick paths denote locally the down-right paths \(\mathcal{P}\) and \(\widetilde{\mathcal{P}}\), and gray arrows denote locally the outgoing configurations.

* For the bulk local move, the two updated factors are matched using the first relation in (2.28).
* For the local moves at the left and right boundaries, the single updated factor is matched using the second and third relations in (2.28), respectively.

In each case (2.31) follows, which proves Claim 2.12.

## 3. Stationary measure in terms of Askey-Wilson process and phase diagram

In Section 3.1 we introduce the background on Askey-Wilson measures and processes. It is known that certain types of matrix product states can be written as expectations under the Askey-Wilson process, which will be introduced in Section 3.2. In Section 3.3 we prove Theorem 1.26 expressing the joint generating function of the stationary measure of the fused vertex model on a strip in terms of the Askey-Wilson process. In Section 3.4 we use this expression to prove Theorem 1.27 on the asymptotic behavior of the 'mean arrow density' of the stationary measure as the system size \(N\to\infty\), thereby recovering the phase diagram.

### Background on the Askey-Wilson process

The Askey-Wilson measures (originally introduced by [1]) depend on five parameters \((\mathtt{a},\mathtt{b},\mathtt{c},\mathtt{d},q)\), where \(q\in(-1,1)\) and \(\mathtt{a},\mathtt{b},\mathtt{c},\mathtt{d}\) are either real, or two of them form a complex conjugate pair, or they form two complex conjugate pairs. In addition we require:
\[\mathtt{ac},\mathtt{ad},\mathtt{bc},\mathtt{bd},q\mathtt{ac},q\mathtt{ad},q\mathtt{bc},q\mathtt{bd},\mathtt{abcd},q\mathtt{abcd}\in\mathbb{C}\setminus[1,\infty).\]
The Askey-Wilson measure is a probability measure of mixed type on \(\mathbb{R}\):
\[\nu(dy;\mathtt{a},\mathtt{b},\mathtt{c},\mathtt{d},q)=f(y,\mathtt{a},\mathtt{b},\mathtt{c},\mathtt{d},q)dy+\sum_{z\in F(\mathtt{a},\mathtt{b},\mathtt{c},\mathtt{d},q)}p(z)\delta_{z}(dy). \tag{3.1}\]
The absolutely continuous part of (3.1) is supported on \([-1,1]\) with density
\[f(y,\mathtt{a},\mathtt{b},\mathtt{c},\mathtt{d},q)=\frac{\left(q,\mathtt{ab},\mathtt{ac},\mathtt{ad},\mathtt{bc},\mathtt{bd},\mathtt{cd};q\right)_{\infty}}{2\pi(\mathtt{abcd};q)_{\infty}\sqrt{1-y^{2}}}\left|\frac{\left(e^{2i\theta_{y}};q\right)_{\infty}}{(\mathtt{a}e^{i\theta_{y}},\mathtt{b}e^{i\theta_{y}},\mathtt{c}e^{i\theta_{y}},\mathtt{d}e^{i\theta_{y}};q)_{\infty}}\right|^{2}, \tag{3.2}\]
where \(y=\cos\theta_{y}\) and we set \(f(y,\mathtt{a},\mathtt{b},\mathtt{c},\mathtt{d},q)=0\) for \(|y|\geq 1\).
Here and below, for complex \(z\) and \(n\in\mathbb{Z}_{+}\cup\{\infty\}\), we use the \(q\)-Pochhammer symbol: \[(z;q)_{n}=\prod_{j=0}^{n-1}\left(1-zq^{j}\right),\quad(z_{1},\cdots,z_{k};q)_ {n}=\prod_{i=1}^{k}(z_{i};q)_{n}.\] The atomic part of (3.1) is supported on a finite or empty set \(F(\mathtt{a},\mathtt{b},\mathtt{c},\mathtt{d},q)\) of atoms generated by numbers \(\chi\in\{\mathtt{a},\mathtt{b},\mathtt{c},\mathtt{d}\}\) such that \(|\chi|>1\). In this case \(\chi\) must be real and generates its own set of atoms: \[y_{j}=\frac{1}{2}\left(\chi q^{j}+\frac{1}{\chi q^{j}}\right)\text{ for }j=0,1\dots\text{ such that }|\chi q^{j}|>1. \tag{3.3}\] When \(\chi=\mathtt{a}\), the corresponding masses are \[p(y_{0};\mathtt{a},\mathtt{b},\mathtt{c},\mathtt{d},q) =\frac{\left(\mathtt{a}^{-2},\mathtt{bc},\mathtt{bd},\mathtt{cd} ;q\right)_{\infty}}{\left(\mathtt{b}/\mathtt{a},\mathtt{c}/\mathtt{a}, \mathtt{d}/\mathtt{a},\mathtt{abcd};q\right)_{\infty}},\] \[p(y_{j};\mathtt{a},\mathtt{b},\mathtt{c},\mathtt{d},q) =p(y_{0};\mathtt{a},\mathtt{b},\mathtt{c},\mathtt{d},q)\frac{ \left(\mathtt{a}^{2},\mathtt{ab},\mathtt{ac},\mathtt{ad};q\right)_{j}\left(1- \mathtt{a}^{2}q^{2j}\right)}{\left(q,q\mathtt{a}/\mathtt{b},q\mathtt{a}/ \mathtt{c},q\mathtt{a}/\mathtt{d};q\right)_{j}\left(1-\mathtt{a}^{2}\right)} \left(\frac{q}{\mathtt{abcd}}\right)^{j},\quad j\geq 1.\] The Askey-Wilson processes introduced by [1, 16] depend on five parameters \((\mathtt{A},\mathtt{B},\mathtt{C},\mathtt{D},q)\), where \(q\in(-1,1)\) and \(\mathtt{A},\mathtt{B},\mathtt{C},\mathtt{D}\) are either real or \((\mathtt{A},\mathtt{B})\) or \((\mathtt{C},\mathtt{D})\) form complex conjugate pairs, and in addition \[\mathtt{AC},\mathtt{AD},\mathtt{BC},\mathtt{BD},q\mathtt{AC},q\mathtt{AD},q \mathtt{BC},q\mathtt{BD},\mathtt{ABCD},q\mathtt{ABCD}\in\mathbb{C}\setminus[1, \infty).\] The Askey-Wilson process \(\left(Y_{t}\right)_{t\in I}\) is a time-inhomogeneous Markov process defined on the interval \(I=\left[\max\{0,\mathtt{CD},q\mathtt{CD}\},\frac{1}{\max\{0,\mathtt{AB},q \mathtt{AB}\}}\right),\) with marginal distributions \[\pi_{t}(dx)=\nu\left(dx;\mathtt{A}\sqrt{t},\mathtt{B}\sqrt{t},\mathtt{C}/ \sqrt{t},\mathtt{D}/\sqrt{t},q\right),\quad\forall t\in I,\] and transition probabilities \[\mathsf{P}_{s,t}(x,dy)=\nu\left(dy;\mathtt{A}\sqrt{t},\mathtt{B}\sqrt{t}, \sqrt{s/t}\left(x+\sqrt{x^{2}-1}\right),\sqrt{s/t}\left(x-\sqrt{x^{2}-1} \right)\right),\quad\forall s<t,\quad s,t\in I.\] The marginal distribution \(\pi_{t}(dx)\) may have atoms at \[\frac{1}{2}\left(\mathtt{A}\sqrt{t}q^{j}+\frac{1}{\mathtt{A}\sqrt{t}q^{j}} \right),\quad\frac{1}{2}\left(\mathtt{B}\sqrt{t}q^{j}+\frac{1}{\mathtt{B} \sqrt{t}q^{j}}\right),\quad\frac{1}{2}\left(\frac{\mathtt{C}q^{j}}{\sqrt{t}}+ \frac{\sqrt{t}}{\mathtt{C}q^{j}}\right),\quad\frac{1}{2}\left(\frac{\mathtt{D }q^{j}}{\sqrt{t}}+\frac{\sqrt{t}}{\mathtt{D}q^{j}}\right), \tag{3.4}\] and the transition probabilities \(\mathsf{P}_{s,t}(x,dy)\) may also have atoms. ### Matrix product states in terms of Askey-Wilson processes We introduce the result in [10] which expresses a certain type of matrix product states in terms of expectations of Askey-Wilson process. Suppose that there are real numbers \(q,\alpha,\beta,\gamma,\delta\) satisfying: \[\alpha,\beta>0,\quad\gamma,\delta\geq 0,\quad 0\leq q<1. 
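As a simple illustration, when \(\mathtt{a}=\mathtt{b}=\mathtt{c}=\mathtt{d}=0\) there are no atoms and (3.1) reduces to the purely absolutely continuous measure
\[\nu(dy;0,0,0,0,q)=\frac{(q;q)_{\infty}}{2\pi\sqrt{1-y^{2}}}\left|\left(e^{2i\theta_{y}};q\right)_{\infty}\right|^{2}dy,\qquad y=\cos\theta_{y}\in[-1,1],\]
which is the orthogonality measure of the continuous \(q\)-Hermite polynomials. On the other hand, a real parameter such as \(\mathtt{a}=2\) with \(q=\frac{1}{2}\) generates in (3.3) the single atom \(y_{0}=\frac{1}{2}\left(2+\frac{1}{2}\right)=\frac{5}{4}\), since \(|\mathtt{a}q^{j}|\leq 1\) for all \(j\geq 1\).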
\tag{3.5}\] We consider the following relations involving matrices \(\mathbf{D}\) and \(\mathbf{E}\), row vector \(\langle W|\) and column vector \(|V\rangle\): \[\mathbf{D}\mathbf{E}-q\mathbf{E}\mathbf{D}=\mathbf{D}+\mathbf{E},\quad\langle W |(\alpha\mathbf{E}-\gamma\mathbf{D})=\langle W|,\quad(\beta\mathbf{D}-\delta \mathbf{E})|V\rangle=|V\rangle, \tag{3.6}\] We require that these matrices and vectors have the same (possibly infinite) dimension. (3.6) is commonly referred to as the DEHP algebra and was introduced in the seminal work [1]. **Remark 3.1**.: In the context of open ASEP, parameters \(q,\alpha,\beta,\gamma,\delta\) play the role of particle jump rates, and the DEHP algebra (3.6) plays the role of consistency relations of the matrix product ansatz. **Definition 3.2**.: We will use the following (alternative) parameterization: \[A=\kappa_{+}(\beta,\delta),\quad B=\kappa_{-}(\beta,\delta),\quad C=\kappa_{+} (\alpha,\gamma),\quad D=\kappa_{-}(\alpha,\gamma), \tag{3.7}\] where \[\kappa_{\pm}(u,v)=\frac{1}{2u}\left(1-q-u+v\pm\sqrt{(1-q-u+v)^{2}+4uv}\right).\] One can check that for any given \(q\in[0,1)\), (3.7) gives a bijection \[\left\{(\alpha,\beta,\gamma,\delta):\alpha,\beta>0,\gamma,\delta\geq 0\right\} \stackrel{{\sim}}{{\longleftrightarrow}}\left\{(A,B,C,D):A,C>0,B, D\in(-1,0]\right\}.\] The following result gives a concrete example of \(\mathbf{D}\), \(\mathbf{E}\), \(\langle W|\) and \(|V\rangle\) satisfying the DEHP algebra (3.6), which is commonly referred to as the USW representation and was first introduced in [11]. **Proposition 3.3** (USW representation of the DEHP algebra, see [11, 10]).: _Suppose that the parameters \(q,\alpha,\beta,\gamma,\delta\) satisfy (3.5). Suppose that \(\alpha_{n},\beta_{n},\gamma_{n},\delta_{n},\varepsilon_{n},\varphi_{n}\) are given in terms of \(q,A,B,C,D\) by the formulas in [10, page 1243], for \(n\in\mathbb{N}_{0}\). Consider the following tridiagonal matrices:_ \[\mathbf{x}=\begin{bmatrix}\gamma_{0}&\varepsilon_{1}&0&\ldots\\ \alpha_{0}&\gamma_{1}&\varepsilon_{2}&\ldots\\ 0&\alpha_{1}&\gamma_{2}&\ldots\\ \vdots&\vdots&\vdots&\ddots\end{bmatrix},\quad\mathbf{y}=\begin{bmatrix} \delta_{0}&\varphi_{1}&0&\ldots\\ \beta_{0}&\delta_{1}&\varphi_{2}&\ldots\\ 0&\beta_{1}&\delta_{2}&\ldots\\ \vdots&\vdots&\vdots&\ddots\end{bmatrix},\] \[\mathbf{E}=\frac{1}{1-q}\mathbf{I}+\frac{1}{\sqrt{1-q}}\mathbf{y},\quad\mathbf{D}=\frac{1}{1-q}\mathbf{I}+\frac{1}{\sqrt{1-q}}\mathbf{x},\] _and boundary vectors_ \[\langle W|=(1,0,0,\ldots),\quad|V\rangle=(1,0,0,\ldots)^{T}.\] _Then \(\mathbf{D}\), \(\mathbf{E}\), \(\langle W|\) and \(|V\rangle\) satisfy the DEHP algebra (3.6)._ **Theorem 3.4** (Theorem 1 in [10]).: _Assume that \(q,\alpha,\beta,\gamma,\delta\) satisfy (3.5) and that \(AC<1\) (where \(A,B,C,D\) are defined in Definition 3.2). Suppose that \(\mathbf{D}\), \(\mathbf{E}\), \(\langle W|\), \(|V\rangle\) form the USW representation of the DEHP algebra given in Proposition 3.3. Then for any \(0<t_{1}\leq\cdots\leq t_{N}\), we have_ \[\left\langle W\bigg{|}\prod_{i=1}^{N}(\mathbf{E}+t_{i}\mathbf{D})\bigg{|}V \right\rangle=\frac{1}{(1-q)^{N}}\mathbb{E}\left[\prod_{i=1}^{N}\left(1+t_{i} +2\sqrt{t_{i}}Y_{t_{i}}\right)\right],\] _where \(\left(Y_{t}\right)_{t\geq 0}\) is the Askey-Wilson process with parameters \((A,B,C,D,q)\)._ We will utilize the following corollary of the above theorem, which itself has not appeared in the literature. **Corollary 3.5**.: _Our assumptions are the same as in Theorem 3.4. 
Then for any \(v_{1},\ldots,v_{N}\in\mathbb{R}\) and \(0<t_{1}\leq\cdots\leq t_{N}\), we have_
\[\left\langle W\bigg{|}\prod_{i=1}^{N}\left((1-q)\left(\mathbf{E}+t_{i}\mathbf{D}\right)+v_{i}\right)\bigg{|}V\right\rangle=\mathbb{E}\left[\prod_{i=1}^{N}\left(1+t_{i}+2\sqrt{t_{i}}Y_{t_{i}}+v_{i}\right)\right].\]
Proof.: This follows from expanding the bracket and using Theorem 3.4 multiple times.

### Stationary measure in terms of the Askey-Wilson process

In this subsection we prove Theorem 1.26 expressing the stationary measure of the fused vertex model on a strip in terms of the Askey-Wilson process. We begin by recalling the matrix product ansatz for the stationary measure given in Theorem 1.24. Suppose \(\mathcal{A}\) is a \(\mathbb{C}\)-algebra with linear representation space \(H\). Assume \(\mathbf{d},\mathbf{e}\in\mathcal{A}\), \(\langle W|\in H^{*}\) and \(|V\rangle\in H\) satisfy:
\[\mathbf{d}\mathbf{e}-q\mathbf{e}\mathbf{d}=1-q,\quad\langle W|\left(\mathpzc{a}\mathbf{e}-\mathpzc{c}\mathbf{d}+1\right)=0,\quad\left(\mathpzc{b}\mathbf{d}-\mathpzc{d}\mathbf{e}+1\right)|V\rangle=0. \tag{3.8}\]
We then define for \(0\leq\zeta\leq I\),
\[\mathsf{M}_{\zeta}^{I}(u):=\sum_{\begin{subarray}{c}\zeta_{1}+\dots+\zeta_{I}=\zeta\\ \zeta_{1},\dots,\zeta_{I}\in\{0,1\}\end{subarray}}\prod_{a\in[[1,I]]}^{\longrightarrow}\mathsf{M}_{\zeta_{a}}\left(uq^{-\frac{I+1}{2}+a}\right)\in\mathcal{A}, \tag{3.9}\]
where \(\mathsf{M}_{0}(u)=u+\mathbf{e}\) and \(\mathsf{M}_{1}(u)=\frac{1}{u}+\mathbf{d}\). Define \(M_{j}^{\uparrow}=\mathsf{M}_{j}^{I}\left(1/\kappa\right)\) and \(M_{j}^{\rightarrow}=\mathsf{M}_{j}^{I}\left(\kappa\right)\) for \(0\leq j\leq I\). Then the stationary measure of the fused vertex model on a strip on any down-right path \(\mathcal{P}\) is given by:
\[\mu_{\mathcal{P}}(\tau_{1},\dots,\tau_{N})=\frac{\langle W|M_{\tau_{1}}^{p_{1}}\times\dots\times M_{\tau_{N}}^{p_{N}}|V\rangle}{\langle W|(\sum_{j=0}^{I}M_{j}^{p_{1}})\times\dots\times(\sum_{j=0}^{I}M_{j}^{p_{N}})|V\rangle}, \tag{3.10}\]
where \(0\leq\tau_{1},\dots,\tau_{N}\leq I\) are occupation variables on outgoing edges \(p_{1},\dots,p_{N}\in\{\uparrow,\rightarrow\}\) of \(\mathcal{P}\). We first observe that the relations (3.8) are a linear transformation of the DEHP algebra.

**Proposition 3.6**.: _Relations (3.8) between \(\mathbf{d}\), \(\mathbf{e}\), \(\langle W|\) and \(|V\rangle\) are equivalent to the DEHP algebra (3.6) between \(\mathbf{D}=\frac{1}{1-q}(1+\mathbf{d})\), \(\mathbf{E}=\frac{1}{1-q}(1+\mathbf{e})\), \(\langle W|\) and \(|V\rangle\), with parameters:_
\[(\alpha,\beta,\gamma,\delta)=\left(\frac{(1-q)\mathpzc{a}}{\mathpzc{a}-\mathpzc{c}-1},\frac{(1-q)\mathpzc{b}}{\mathpzc{b}-\mathpzc{d}-1},\frac{(1-q)\mathpzc{c}}{\mathpzc{a}-\mathpzc{c}-1},\frac{(1-q)\mathpzc{d}}{\mathpzc{b}-\mathpzc{d}-1}\right). \tag{3.11}\]
Proof.: This can be seen by substituting \(\mathbf{d}=(1-q)\mathbf{D}-1\) and \(\mathbf{e}=(1-q)\mathbf{E}-1\) into (3.8).
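To illustrate the substitution in the proof above for the bulk relation: with \(\mathbf{d}=(1-q)\mathbf{D}-1\) and \(\mathbf{e}=(1-q)\mathbf{E}-1\) one computes
\[\mathbf{d}\mathbf{e}-q\mathbf{e}\mathbf{d}=(1-q)^{2}\left(\mathbf{D}\mathbf{E}-q\mathbf{E}\mathbf{D}\right)-(1-q)^{2}(\mathbf{D}+\mathbf{E})+(1-q),\]
so the first relation in (3.8) is equivalent to \(\mathbf{D}\mathbf{E}-q\mathbf{E}\mathbf{D}=\mathbf{D}+\mathbf{E}\), the first relation of the DEHP algebra (3.6). The boundary relations are matched in the same way, which yields the parameters in (3.11).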
**Definition 3.7**.: We recall the fused vertex model on a strip defined in Definition 1.11, which has parameters \(q,\kappa,a,\mathpzc{b},\mathpzc{c},\mathpzc{d}\) satisfying: \[0<q<1,\quad 0<\kappa<q^{\frac{I-1}{2}},\quad\mathpzc{a},\mathpzc{b},\mathpzc{c}, \mathpzc{d}>0,\quad\mathpzc{a}-\mathpzc{c}>q^{\frac{1-I}{2}}/\kappa,\quad \mathpzc{b}-\mathpzc{d}>q^{\frac{1-I}{2}}/\kappa.\] We will use the alternative parameterization \(q,\kappa,A,B,C,D\) given by (3.11) and then by (3.7): \[(\mathpzc{a},\mathpzc{b},\mathpzc{c},\mathpzc{d})\stackrel{{ \eqref{eq:def_def_def_def}}}{{\longrightarrow}}(\alpha,\beta, \gamma,\delta)\stackrel{{\eqref{eq:def_def_def_def}}}{{ \longrightarrow}}(A,B,C,D).\] For any fixed \(0<q<1\) and \(0<\kappa<q^{\frac{I-1}{2}}\), this is a bijection from \(\{(\mathpzc{a},\mathpzc{b},\mathpzc{c},\mathpzc{d}):\mathpzc{a},\mathpzc{b}, \mathpzc{c},\mathpzc{d}>0,\mathpzc{a}-\mathpzc{c}>q^{\frac{1-I}{2}}/\kappa, \mathpzc{b}-\mathpzc{d}>q^{\frac{1-I}{2}}/\kappa\}\) to a certain sub-region of \(\{(A,B,C,D):A,C>0,B,D\in(-1,0]\}\). Proof of Theorem 1.26.: By (3.10), we have that for any \(t_{1},\dots,t_{N}>0\), \[\mathbb{E}_{\mu_{\mathcal{P}}}\left[\prod_{i=1}^{N}t_{i}^{\tau_{i}}\right]= \sum_{0\leq\tau_{1},\dots,\tau_{N}\leq I}\mu_{\mathcal{P}}(\tau_{1},\dots,\tau_ {N})\prod_{i=1}^{N}t_{i}^{\tau_{i}}=\frac{\left\langle W\left|\prod_{i\in[[1,N] ]}^{\longrightarrow}\left(\sum_{\zeta=0}^{I}t_{i}^{\zeta}\mathsf{M}_{\zeta}^{p_ {i}}\right)\left|V\right\rangle}{\left\langle W\left|\prod_{i\in[[1,N]]}^{ \longrightarrow}\left(\sum_{\zeta=0}^{I}M_{\zeta}^{p_{i}}\right)\left|V\right \rangle\right\rangle}. \tag{3.12}\] For any \(t>0\) we have: \[\sum_{\zeta=0}^{I}t^{\zeta}\mathsf{M}_{\zeta}^{\rightarrow} =\sum_{\zeta=0}^{I}t^{\zeta}\mathsf{M}_{\zeta}^{I}\left(\kappa \right)=\sum_{\zeta=0}^{I}t^{\zeta}\sum_{\begin{subarray}{c}\zeta_{1}+\dots+ \zeta_{I}=\zeta\\ \zeta_{i}\in\{0,1\}\end{subarray}}\prod_{a\in[[1,I]]}^{\longrightarrow} \mathsf{M}_{\zeta_{a}}\left(q^{-\frac{I+1}{2}+a}\kappa\right)\] \[=\sum_{\zeta_{1},\dots,\zeta_{I}\in\{0,1\}}\prod_{a\in[[1,I]]}^{ \longrightarrow}\left(t\mathsf{d}\varsigma_{a}\mathsf{M}_{\zeta_{a}}\left(q^{- \frac{I+1}{2}+a}\kappa\right)\right)\] \[=\prod_{a\in[[1,I]]}^{\longrightarrow}\left(\mathsf{M}_{0}\left(q ^{-\frac{I+1}{2}+a}\kappa\right)+t\mathsf{M}_{1}\left(q^{-\frac{I+1}{2}+a} \kappa\right)\right)\] \[=\prod_{a\in[[1,I]]}^{\longrightarrow}\left(t\mathbf{d}+ \mathbf{e}+tq^{\frac{I+1}{2}-a}\kappa^{-1}+q^{-\frac{I+1}{2}+a}\kappa\right)\] \[=\prod_{a\in[[1,I]]}^{\longrightarrow}\left((t\mathbf{D}+ \mathbf{E})(1-q)+tq^{\frac{I+1}{2}-a}\kappa^{-1}+q^{-\frac{I+1}{2}+a}\kappa-t-1 \right).\] The formula for \(\sum_{\zeta=0}^{I}t^{\zeta}M_{\zeta}^{\uparrow}\) is parallel with above, with the only difference that \(\kappa\) on the RHS is replaced by \(1/\kappa\). Therefore we have: \[\sum_{\zeta=0}^{I}t^{\zeta}M_{\zeta}^{p_{i}}=\prod_{a\in[[1,I]]}^{\longrightarrow} \left((t\mathbf{D}+\mathbf{E})(1-q)+tq^{\frac{I+1}{2}-a}\kappa^{v_{i}}+q^{- \frac{I+1}{2}+a}\kappa^{-v_{i}}-t-1\right),\] where we recall that \(v_{i}=1\) if \(p_{i}=\uparrow\) and \(v_{i}=-1\) if \(p_{i}=\rightarrow\). 
For any \(0<t_{1}\leq\cdots\leq t_{N}\), by Corollary 3.5, we have: \[\left\langle W\bigg{|}\prod_{i\in[[1,N]]}^{\longrightarrow} \left(\sum_{\zeta=0}^{I}t_{i}^{\zeta}M_{\zeta}^{p_{i}}\right)\bigg{|}V\right\rangle =\left\langle W\bigg{|}\prod_{i\in[[1,N]]}^{\longrightarrow} \prod_{a\in[[1,I]]}^{\longrightarrow}\left((t_{i}\mathbf{D}+\mathbf{E})(1-q)+ t_{i}q^{\frac{I+1}{2}-a}\kappa^{v_{i}}+q^{-\frac{I+1}{2}+a}\kappa^{-v_{i}}-t_{i}-1 \right)\bigg{|}V\right\rangle \tag{3.13}\] \[=\mathbb{E}\left[\prod_{i=1}^{N}\prod_{a=1}^{I}\left(2\sqrt{t_{i} }Y_{t_{i}}+t_{i}q^{\frac{I+1}{2}-a}\kappa^{v_{i}}+q^{-\frac{I+1}{2}+a}\kappa^ {-v_{i}}\right)\right].\] Taking \(t_{1}=\cdots=t_{N}=1\) in the above equation, we get: \[\left\langle W\bigg{|}\prod_{i\in[[1,N]]}^{\longrightarrow}\left( \sum_{\zeta=0}^{I}M_{\zeta}^{p_{i}}\right)\bigg{|}V\right\rangle =\mathbb{E}\left[\prod_{i=1}^{N}\prod_{a=1}^{I}\left(2Y_{1}+q^{ \frac{I+1}{2}-a}\kappa^{v_{i}}+q^{-\frac{I+1}{2}+a}\kappa^{-v_{i}}\right)\right] \tag{3.14}\] \[=\mathbb{E}\left[\prod_{a=1}^{I}\left(2Y_{1}+q^{\frac{I+1}{2}-a} \kappa+q^{-\frac{I+1}{2}+a}\kappa^{-1}\right)^{N}\right],\] where the last equality above follows from \[\prod_{a=1}^{I}\left(2Y_{1}+q^{\frac{I+1}{2}-a}\kappa+q^{-\frac{I+1}{2}+a} \kappa^{-1}\right)=\prod_{a=1}^{I}\left(2Y_{1}+q^{\frac{I+1}{2}-a}\kappa^{-1}+ q^{-\frac{I+1}{2}+a}\kappa\right),\] which follows from changing the index \(a\) to \(I+1-a\). Theorem 1.26 follows from putting (3.13) and (3.14) into (3.12). We conclude the proof. ### Mean arrow density and phase diagram: Proof of Theorem 1.27 We prove Theorem 1.27 on the limits of the mean arrow density of the stationary measure of the fused vertex model on a strip. We recall from the statement of the theorem that \(\mathcal{P}^{N}\) is a down-right path on the strip with width \(N\), with \(0\leq\phi_{N}\leq N\) many horizontal edges, and that we are assuming \(\phi_{N}/N\) converges to \(\lambda\in[0,1]\) as \(N\rightarrow\infty\). The next result offers an expression of the mean density in terms of the 'partition function': **Lemma 3.8**.: _For each \(N\in\mathbb{Z}_{+}\), we define the following function on \(t>0\):_ \[Z_{N}(t)=\mathbb{E}\left[\prod_{a=1}^{I}\left(2\sqrt{t}Y_{t}+tq^{\frac{I+1}{2} -a}\kappa+q^{-\frac{I+1}{2}+a}\kappa^{-1}\right)^{\phi_{N}}\prod_{a=1}^{I} \left(2\sqrt{t}Y_{t}+tq^{\frac{I+1}{2}-a}\kappa^{-1}+q^{-\frac{I+1}{2}+a}\kappa \right)^{N-\phi_{N}}\right]. \tag{3.15}\] _Then we have:_ \[\mathbb{E}_{\mu_{\mathcal{P}^{N}}}\left[\frac{1}{N}\sum_{i=1}^{N}\tau_{i} \right]=\frac{\partial_{t}Z_{N}(t)|_{t=1}}{NZ_{N}(1)}. \tag{3.16}\] Proof.: Taking \(t_{1}=\cdots=t_{N}=t\) in (3.13), we have: \[Z_{N}(t)=\left\langle W\bigg{|}\prod_{i\in[[1,N]]}^{\longrightarrow}\left( \sum_{\zeta=0}^{I}t^{\zeta}M_{\zeta}^{p_{i}}\right)\bigg{|}V\right\rangle= \sum_{0\leq\tau_{1},\ldots,\tau_{N}\leq I}t^{\sum_{i=1}^{N}\tau_{i}}\langle W|M _{\tau_{1}}^{p_{1}}\ldots M_{\tau_{N}}^{p_{N}}|V\rangle\] Therefore we have: \[\partial_{t}Z_{N}(t)|_{t=1}=\sum_{0\leq\tau_{1},\ldots,\tau_{N}\leq I}\left( \sum_{i=1}^{N}\tau_{i}\right)\langle W|M_{\tau_{1}}^{p_{1}}\ldots M_{\tau_{N}}^{ p_{N}}|V\rangle,\quad Z_{N}(1)=\left\langle W\bigg{|}\left(\sum_{\zeta=0}^{I}M_{ \zeta}^{p_{1}}\right)\ldots\left(\sum_{\zeta=0}^{I}M_{\zeta}^{p_{N}}\right) \bigg{|}V\right\rangle\] From the matrix product ansatz expression (3.10) of the stationary measure \(\mu_{\mathcal{P}^{N}}\) we conclude (3.16). We now begin the proof of Theorem 1.27. 
Proof of Theorem 1.27.: By Lemma 3.8, to obtain the limits of the mean arrow density, we need to analyze the \(N\to\infty\) asymptotic behaviors of \(Z_{N}(1)\) and \(\partial_{t}Z_{N}(t)|_{t=1}\). We first define some functions that will be useful in the proof. For \(1\leq a\leq I\), we define: \[h_{a}^{\uparrow}(t,y)=2\sqrt{t}y+tq^{\frac{f_{1}+1}{2}-a}\kappa+q^{-\frac{f_{1 }+1}{2}+a}\kappa^{-1},\quad h_{a}^{\rightarrow}(t,y)=2\sqrt{t}y+tq^{\frac{f_{1 }+1}{2}-a}\kappa^{-1}+q^{-\frac{f_{1}+1}{2}+a}\kappa, \tag{3.17}\] and then define \(h^{\uparrow}(t,y)=\prod_{a=1}^{I}h_{a}^{\uparrow}(t,y)\) and \(h^{\rightarrow}(t,y)=\prod_{a=1}^{I}h_{a}^{\rightarrow}(t,y)\). Denote \(\psi_{N}:=N-\phi_{N}\). By (3.15) we can write: \[Z_{N}(t)= \mathbb{E}\left[h^{\uparrow}(t,Y_{t})^{\phi_{N}}h^{\rightarrow} (t,Y_{t})^{\psi_{N}}\right]=\int_{-\infty}^{\infty}h^{\uparrow}(t,y)^{\phi_{N }}h^{\rightarrow}(t,y)^{\psi_{N}}\nu\left(dy;A\sqrt{t},B\sqrt{t},C/\sqrt{t}, D/\sqrt{t},q\right)\] \[= \int_{-1}^{1}h^{\uparrow}(t,y)^{\phi_{N}}h^{\rightarrow}(t,y)^{ \psi_{N}}f\left(y,A\sqrt{t},B\sqrt{t},C/\sqrt{t},D/\sqrt{t},q\right)dy\] \[+\sum_{y_{j}(t)\in F\left(A\sqrt{t},B\sqrt{t},C/\sqrt{t},D/\sqrt{ t},q\right)}h^{\uparrow}(t,y_{j}(t))^{\phi_{N}}h^{\rightarrow}(t,y_{j}(t))^{ \psi_{N}}p\left(y_{j}(t);A\sqrt{t},B\sqrt{t},C/\sqrt{t},D/\sqrt{t},q\right), \tag{3.18}\] where \(F(A\sqrt{t},B\sqrt{t},C/\sqrt{t},D/\sqrt{t},q)\) is the set of atoms generated by \(A\sqrt{t},B\sqrt{t},C/\sqrt{t},D/\sqrt{t}\) and \[f\left(y,A\sqrt{t},B\sqrt{t},\frac{C}{\sqrt{t}},\frac{D}{\sqrt{t}},q\right)= \frac{\left(q,tAB,AC,AD,BC,BD,CD/t;q\right)_{\infty}}{2\pi(ABCD;q)_{\infty} \sqrt{1-y^{2}}}\left|\frac{\left(e^{2i\theta_{y}};q\right)_{\infty}}{\left(A \sqrt{te^{i\theta_{y}}},B\sqrt{te^{i\theta_{y}}},\frac{C}{\sqrt{t}}e^{i\theta_ {y}},\frac{D}{\sqrt{t}}e^{i\theta_{y}};q\right)_{\infty}}\right|^{2} \tag{3.19}\] is the continuous part density, where \(y=\cos\theta_{y}\in[-1,1]\). We make some observations of the Askey-Wilson measure \(\nu\left(dy;A\sqrt{t},B\sqrt{t},C/\sqrt{t},D/\sqrt{t},q\right)\) for \(t\) close to \(1\). Recall that \(A,C>0\), \(-1<B,D<0\) and \(AC<1\). Hence the atoms are generated by \(A\sqrt{t}\) in the high density phase and by \(C/\sqrt{t}\) in the low density phase. Since \(A,C\notin\{q^{-l}:l\in\mathbb{N}_{0}\}\), for \(t\) in small neighborhood of \(1\), the number of atoms is constant, and the positions of atoms \(y_{j}(t)\) and the masses \(p\left(y_{j}(t);A\sqrt{t},B\sqrt{t},C/\sqrt{t},D/\sqrt{t},q\right)\) are smooth functions of \(t\). The Askey-Wilson measure is supported in \([-1,\infty)\), where \(h^{\uparrow}(t,y)\) and \(h^{\rightarrow}(t,y)\) are strictly positive and strictly increasing functions of \(y\). In the rest of the proof we denote the continuous part density (3.19) by \(g(t,y)\). By Fact 3.13 in the proof of [13, Theorem 3.11], there exists a smooth function \(\theta(t,z)\) defined on a small neighborhood of \(\{1\}\times\{|z|=1\}\subset\mathbb{R}\times\mathbb{C}\) which cannot take \(0\) as its value, such that: \[g(t,y):=f\left(y,A\sqrt{t},B\sqrt{t},C/\sqrt{t},D/\sqrt{t},q\right)=\sqrt{1-y^ {2}}\theta(t,z),\] where \(z=e^{i\theta_{y}}\) for \(y=\cos\theta_{y}\in[-1,1]\). As a corollary, there exists a small \(\varepsilon>0\), such that functions \(g(t,y)\), \(\partial_{t}g(t,y)\) and \(\partial_{t}g(t,y)/g(t,y)\) are bounded on the region \((t,y)\in(1-\varepsilon,1+\varepsilon)\times[-1,1]\). 
In particular, one can take differentiation under the integral sign in \(\partial_{t}\left(\int_{-1}^{1}h^{\uparrow}(t,y)^{\phi_{N}}h^{\rightarrow}(t,y) ^{\psi_{N}}g(t,y)dy\right)\Big{|}_{t=1}\). By the observation above, we are now able to obtain the limits of mean density: 1. (High density phase \(A>1\)). When \(t\) is close to \(1\), atoms are generated by \(A\sqrt{t}\). We can observe that as \(N\to\infty\) both \(Z_{N}(1)\) and \(\partial_{t}Z_{N}(t)|_{t=1}\) are dominated by the largest atom \(y_{0}(t)=\frac{1}{2}\left(A\sqrt{t}+\frac{1}{A\sqrt{t}}\right)\): \[Z_{N}(1) \sim h^{\uparrow}(1,y_{0}(1))^{\phi_{N}}h^{\rightarrow}(1,y_{0}(1))^{ \psi_{N}}p(y_{0}(1);A,B,C,D,q),\] (3.20) \[\partial_{t}Z_{N}(t)|_{t=1} \sim\partial_{t}\left(h^{\uparrow}(t,y_{0}(t))^{\phi_{N}}h^{ \rightarrow}(t,y_{0}(t))^{\psi_{N}}\right)|_{t=1}p(y_{0}(1);A,B,C,D,q),\] (3.21) where we use \(u(N)\sim v(N)\) to denote \(\lim_{N\to\infty}u(N)/v(N)=1\). Hence we obtain: \[\lim_{N\to\infty}\frac{\partial_{t}Z_{N}(t)|_{t=1}}{NZ_{N}(1)}=\lambda\frac{ \partial_{t}h^{\uparrow}(t,y_{0}(t))|_{t=1}}{h^{\uparrow}(1,y_{0}(1))}+(1- \lambda)\frac{\partial_{t}h^{\rightarrow}(t,y_{0}(t))|_{t=1}}{h^{\rightarrow}(1,y_{0}(1))},\] (3.22) where we have used \(\lim_{N\to\infty}\phi_{N}/N=\lambda\) and \(\lim_{N\to\infty}\psi_{N}/N=1-\lambda\). For each \(1\leq a\leq I\) we have: \[\frac{\partial_{t}h_{a}^{\uparrow}(t,y_{0}(t))|_{t=1}}{h_{a}^{\uparrow}(1,y_{0} (1))} =\frac{\partial_{t}\left(2\sqrt{t}y_{0}(t)+tq^{\frac{f_{1}+1}{2}-a} \kappa+q^{-\frac{f_{1}+1}{2}+a}\kappa^{-1}\right)|_{t=1}}{2y_{0}(t)+q^{\frac{f_ {1}+1}{2}-a}\kappa+q^{-\frac{f_{1}+1}{2}+a}\kappa^{-1}}\] \[=\frac{\partial_{t}\left(At+1/A+tq^{\frac{f_{1}+1}{2}-a}\kappa+q^{- \frac{f_{1}+1}{2}+a}\kappa^{-1}\right)|_{t=1}}{A+1/A+q^{\frac{f_{1}+1}{2}-a} \kappa+q^{-\frac{f_{1}+1}{2}+a}\kappa^{-1}}=\frac{A\kappa}{A\kappa+q^{-\frac{f_ {1}+1}{2}+a}}.\] Therefore \[\frac{\partial_{t}h^{\uparrow}(t,y_{0}(t))|_{t=1}}{h^{\uparrow}(1,y_{0}(1))}=\sum_ {a=1}^{I}\frac{\partial_{t}h^{\uparrow}_{a}(t,y_{0}(t))|_{t=1}}{h^{\uparrow}_{ a}(1,y_{0}(1))}=\sum_{a=1}^{I}\frac{A\kappa}{A\kappa+q^{-\frac{I+1}{2}+a}}.\] Similarly, \[\frac{\partial_{t}h^{\rightarrow}(t,y_{0}(t))|_{t=1}}{h^{\rightarrow}(1,y_{0} (1))}=\sum_{a=1}^{I}\frac{A\kappa^{-1}}{A\kappa^{-1}+q^{-\frac{I+1}{2}+a}}.\] Hence by Lemma 3.8 and (3.22), we have \[\lim_{N\rightarrow\infty}\mathbb{E}_{\mu_{p^{N}}}\left[\frac{1}{N}\sum_{i=1} ^{N}\tau_{i}\right]=\lim_{N\rightarrow\infty}\frac{\partial_{t}Z_{N}(t)|_{t=1 }}{NZ_{N}(1)}=\lambda\sum_{a=1}^{I}\frac{A\kappa}{A\kappa+q^{-\frac{I+1}{2}+a }}+(1-\lambda)\sum_{a=1}^{I}\frac{A\kappa^{-1}}{A\kappa^{-1}+q^{-\frac{I+1}{2} +a}}.\] 2. (Low density phase \(C>1\)). When \(t\) is close to \(1\), atoms are generated by \(\frac{C}{\sqrt{t}}\). Both \(Z_{N}(1)\) and \(\partial_{t}Z_{N}(t)|_{t=1}\) are dominated by the largest atom \(y_{0}(t)=\frac{1}{2}\left(\frac{C}{\sqrt{t}}+\frac{\sqrt{t}}{C}\right)\). By a calculation that is very similar with the high density phase, we are able to obtain: \[\lim_{N\rightarrow\infty}\mathbb{E}_{\mu_{p^{N}}}\left[\frac{1}{N}\sum_{i=1} ^{N}\tau_{i}\right]=\lim_{N\rightarrow\infty}\frac{\partial_{t}Z_{N}(t)|_{t=1 }}{NZ_{N}(1)}=\lambda\sum_{a=1}^{I}\frac{\kappa}{\kappa+Cq^{-\frac{I+1}{2}+a} }+(1-\lambda)\sum_{a=1}^{I}\frac{\kappa^{-1}}{\kappa^{-1}+Cq^{-\frac{I+1}{2}+ a}}.\] 3. (Maximal current phase \(A<1\), \(C<1\)). When \(t\) is close to \(1\) there is no atom, i.e. 
\[Z_{N}(t)=\int_{-1}^{1}h^{\uparrow}(t,y)^{\phi_{N}}h^{\rightarrow}(t,y)^{\psi_{N}}g(t,y)dy.\] (3.23) We can observe that as \(N\rightarrow\infty\) we have: \[\lim_{N\rightarrow\infty}\frac{\partial_{t}Z_{N}(t)|_{t=1}}{NZ_{N}(1)} =\lim_{N\rightarrow\infty}\frac{\int_{-1}^{1}\partial_{t}\left(h^{\uparrow}(t,y)^{\phi_{N}}h^{\rightarrow}(t,y)^{\psi_{N}}\right)|_{t=1}g(1,y)dy}{N\int_{-1}^{1}h(1,y)^{N}g(1,y)dy}\] \[=\lambda\lim_{N\rightarrow\infty}\frac{\int_{-1}^{1}\frac{\partial_{t}h^{\uparrow}(t,y)|_{t=1}}{h(1,y)}h(1,y)^{N}g(1,y)dy}{\int_{-1}^{1}h(1,y)^{N}g(1,y)dy}+(1-\lambda)\lim_{N\rightarrow\infty}\frac{\int_{-1}^{1}\frac{\partial_{t}h^{\rightarrow}(t,y)|_{t=1}}{h(1,y)}h(1,y)^{N}g(1,y)dy}{\int_{-1}^{1}h(1,y)^{N}g(1,y)dy},\] (3.24) where we notice that \(h^{\uparrow}(1,y)=h^{\rightarrow}(1,y)\) and denote them both by \(h(1,y)\). We observe that: \[\frac{\partial_{t}h^{\uparrow}(t,y)|_{t=1}}{h(1,y)}=\sum_{a=1}^{I}\frac{\partial_{t}h^{\uparrow}_{a}(t,y)|_{t=1}}{h^{\uparrow}_{a}(1,y)}=\sum_{a=1}^{I}\left(\frac{1}{2}+\frac{q^{\frac{I+1}{2}-a}\kappa-q^{-\frac{I+1}{2}+a}\kappa^{-1}}{2h^{\uparrow}_{a}(1,y)}\right).\] Hence we have: \[\frac{\int_{-1}^{1}\frac{\partial_{t}h^{\uparrow}(t,y)|_{t=1}}{h(1,y)}h(1,y)^{N}g(1,y)dy}{\int_{-1}^{1}h(1,y)^{N}g(1,y)dy}=\sum_{a=1}^{I}\left(\frac{1}{2}+\frac{q^{\frac{I+1}{2}-a}\kappa-q^{-\frac{I+1}{2}+a}\kappa^{-1}}{2}\frac{\int_{-1}^{1}\frac{h(1,y)^{N}}{h^{\uparrow}_{a}(1,y)}g(1,y)dy}{\int_{-1}^{1}h(1,y)^{N}g(1,y)dy}\right).\] (3.25) We use a similar method to [1, section 4.2] (see also [13]) to take limits. We write: \[g(1,y)=\frac{(q,AB,AC,AD,BC,BD,CD;q)_{\infty}}{2\pi(ABCD;q)_{\infty}\sqrt{1-y^{2}}}\left|\frac{(e^{2i\theta_{y}};q)_{\infty}}{(Ae^{i\theta_{y}},Be^{i\theta_{y}},Ce^{i\theta_{y}},De^{i\theta_{y}};q)_{\infty}}\right|^{2}\] \[=\sqrt{1-y^{2}}\frac{2(q,AB,AC,AD,BC,BD,CD;q)_{\infty}}{\pi(ABCD;q)_{\infty}|(Ae^{i\theta_{y}},Be^{i\theta_{y}},Ce^{i\theta_{y}},De^{i\theta_{y}};q)_{\infty}|^{2}}|(qe^{2i\theta_{y}};q)_{\infty}|^{2}.\] Set \(y=1-\frac{u}{2N}\). Fix \(u>0\); then as \(N\rightarrow\infty\) we have \(e^{i\theta_{y}}\to 1\), hence \((qe^{2i\theta_{y}};q)_{\infty}\rightarrow(q;q)_{\infty}\). Therefore \[g\left(1,1-\frac{u}{2N}\right)\sim 2\mathfrak{c}\sqrt{\frac{u}{N}},\quad\text{and}\quad g\left(1,1-\frac{u}{2N}\right)\leq M\sqrt{\frac{u}{N}},\] where \(M\) is a large enough constant and \[\mathfrak{c}=\frac{(q;q)_{\infty}^{3}(AB,AC,AD,BC,BD,CD;q)_{\infty}}{\pi(ABCD;q)_{\infty}(A,B,C,D;q)_{\infty}}.\] In the rest of the proof we write \(r_{a}=2+q^{\frac{I+1}{2}-a}\kappa+q^{-\frac{I+1}{2}+a}\kappa^{-1}\). We have \[h\left(1,1-\frac{u}{2N}\right)=\prod_{a=1}^{I}\left(r_{a}-\frac{u}{N}\right)=\prod_{a=1}^{I}r_{a}\prod_{a=1}^{I}\left(1-\frac{u}{Nr_{a}}\right).\] Therefore \[\int_{-1}^{1}h(1,y)^{N}g(1,y)dy=\int_{0}^{\infty}1_{u\leq 4N}h\left(1,1-\frac{u}{2N}\right)^{N}g\left(1,1-\frac{u}{2N}\right)\frac{du}{2N}\] \[=\frac{\left(\prod_{a=1}^{I}r_{a}\right)^{N}}{2N^{\frac{3}{2}}}\int_{0}^{\infty}1_{u\leq 4N}\prod_{a=1}^{I}\left(1-\frac{u}{Nr_{a}}\right)^{N}g\left(1,1-\frac{u}{2N}\right)\sqrt{N}du.\] By \(g\left(1,1-\frac{u}{2N}\right)\leq M\sqrt{\frac{u}{N}}\) the right-hand side can be bounded by a constant times \(\exp\left(-\sum_{a=1}^{I}\frac{u}{r_{a}}\right)\sqrt{u}\), which is integrable over \(u\in(0,\infty)\). Hence we can use the dominated convergence theorem to take \(N\to\infty\).
We use \(g\left(1,1-\frac{u}{2N}\right)\sim 2\mathfrak{c}\sqrt{\frac{u}{N}}\) to get: \[\int_{-1}^{1}h(1,y)^{N}g(1,y)dy\sim\frac{\left(\prod_{a=1}^{I}r_{a}\right)^{N}}{2N^{\frac{3}{2}}}\int_{0}^{\infty}\exp\left(-\sum_{a=1}^{I}\frac{u}{r_{a}}\right)2\mathfrak{c}\sqrt{u}du. \tag{3.26}\] This integral can be explicitly evaluated using the formula \(\int_{0}^{\infty}e^{-su}\sqrt{u}du=\frac{1}{2}\sqrt{\frac{\pi}{s^{3}}}\) for \(s>0\). However, we do not need to explicitly evaluate this integral. For any \(1\leq b\leq I\), we have: \[\int_{-1}^{1}\frac{h(1,y)^{N}}{h_{b}^{\uparrow}(1,y)}g(1,y)dy=\int_{0}^{\infty}1_{u\leq 4N}\frac{h\left(1,1-\frac{u}{2N}\right)^{N}}{h_{b}^{\uparrow}\left(1,1-\frac{u}{2N}\right)}g\left(1,1-\frac{u}{2N}\right)\frac{du}{2N}\] \[=\frac{\left(\prod_{a=1}^{I}r_{a}\right)^{N}}{2N^{\frac{3}{2}}r_{b}}\int_{0}^{\infty}1_{u\leq 4N}\frac{\prod_{a=1}^{I}\left(1-\frac{u}{Nr_{a}}\right)^{N}}{\left(1-\frac{u}{Nr_{b}}\right)}g\left(1,1-\frac{u}{2N}\right)\sqrt{N}du.\] The right-hand side can be bounded by a constant times \(\exp\left(-\sum_{a=1}^{I}\frac{u}{r_{a}}+\frac{u}{2r_{b}}\right)\sqrt{u}\), which is integrable over \(u\in(0,\infty)\). Hence we can use the dominated convergence theorem to take \(N\to\infty\): \[\int_{-1}^{1}\frac{h(1,y)^{N}}{h_{b}^{\uparrow}(1,y)}g(1,y)dy\sim\frac{\left(\prod_{a=1}^{I}r_{a}\right)^{N}}{2N^{\frac{3}{2}}r_{b}}\int_{0}^{\infty}\exp\left(-\sum_{a=1}^{I}\frac{u}{r_{a}}\right)2\mathfrak{c}\sqrt{u}du. \tag{3.27}\] Combining (3.26) and (3.27), we get for any \(1\leq b\leq I\): \[\lim_{N\to\infty}\frac{\int_{-1}^{1}\frac{h(1,y)^{N}}{h_{b}^{\uparrow}(1,y)}g(1,y)dy}{\int_{-1}^{1}h(1,y)^{N}g(1,y)dy}=\frac{1}{r_{b}}.\] By (3.25), one can evaluate: \[\lim_{N\to\infty}\frac{\int_{-1}^{1}\frac{\partial_{t}h^{\uparrow}(t,y)|_{t=1}}{h(1,y)}h(1,y)^{N}g(1,y)dy}{\int_{-1}^{1}h(1,y)^{N}g(1,y)dy}=\sum_{a=1}^{I}\left(\frac{1}{2}+\frac{q^{\frac{I+1}{2}-a}\kappa-q^{-\frac{I+1}{2}+a}\kappa^{-1}}{2r_{a}}\right)=\sum_{a=1}^{I}\frac{\kappa}{\kappa+q^{-\frac{I+1}{2}+a}}.\] Similarly, we also have \[\lim_{N\to\infty}\frac{\int_{-1}^{1}\frac{\partial_{t}h^{\rightarrow}(t,y)|_{t=1}}{h(1,y)}h(1,y)^{N}g(1,y)dy}{\int_{-1}^{1}h(1,y)^{N}g(1,y)dy}=\sum_{a=1}^{I}\frac{\kappa^{-1}}{\kappa^{-1}+q^{-\frac{I+1}{2}+a}}.\] Therefore, by (3.24), we have: \[\lim_{N\to\infty}\mathbb{E}_{\mu_{p^{N}}}\left[\frac{1}{N}\sum_{i=1}^{N}\tau_{i}\right]=\lim_{N\to\infty}\frac{\partial_{t}Z_{N}(t)|_{t=1}}{NZ_{N}(1)}=\lambda\sum_{a=1}^{I}\frac{\kappa}{\kappa+q^{-\frac{I+1}{2}+a}}+(1-\lambda)\sum_{a=1}^{I}\frac{\kappa^{-1}}{\kappa^{-1}+q^{-\frac{I+1}{2}+a}}.\] Combining the three phases above, we conclude the proof.
2309.15742
T5APR: Empowering Automated Program Repair across Languages through Checkpoint Ensemble
Automated program repair (APR) using deep learning techniques has become an important area of research in recent years, aiming to automatically generate bug-fixing patches that can improve software reliability and maintainability. However, most existing methods either target a single language or require high computational resources to train multilingual models. In this paper, we propose T5APR, a novel neural program repair approach that provides a unified solution for bug fixing across multiple programming languages. T5APR leverages CodeT5, a powerful pre-trained text-to-text transformer model, and adopts a checkpoint ensemble strategy to improve patch recommendation. We conduct comprehensive evaluations on six well-known benchmarks in four programming languages (Java, Python, C, JavaScript), demonstrating T5APR's competitiveness against state-of-the-art techniques. T5APR correctly fixes 1,985 bugs, including 1,442 bugs that none of the compared techniques has fixed. We further support the effectiveness of our approach by conducting detailed analyses, such as comparing the correct patch ranking among different techniques. The findings of this study demonstrate the potential of T5APR for use in real-world applications and highlight the importance of multilingual approaches in the field of APR.
Reza Gharibi, Mohammad Hadi Sadreddini, Seyed Mostafa Fakhrahmad
2023-09-27T15:54:08Z
http://arxiv.org/abs/2309.15742v3
# T5APR: Empowering Automated Program Repair Across Languages through Checkpoint Ensemble ###### Abstract Automated program repair (APR) using deep learning techniques has become an important area of research in recent years, aiming to automatically generate bug-fixing patches that can improve software reliability and maintainability. However, most existing methods either target a single language or require high computational resources to train multilingual models. In this paper, we propose T5APR, a novel neural program repair approach that provides a unified solution for bug fixing across multiple programming languages. T5APR leverages CodeT5, a powerful pre-trained text-to-text transformer model, and adopts a checkpoint ensemble strategy to improve patch recommendation. We conduct comprehensive evaluations on six well-known benchmarks in four programming languages (Java, Python, C, JavaScript), demonstrating T5APR's competitiveness against state-of-the-art techniques. T5APR correctly fixes 1,985 bugs, including 1,442 bugs that none of the compared techniques has fixed. We further support the effectiveness of our approach by conducting detailed analyses, such as comparing the correct patch ranking among different techniques. The findings of this study demonstrate the potential of T5APR for use in real-world applications and highlight the importance of multilingual approaches in the field of APR. Automated program repair Neural program repair Deep learning Transformer ## 1 Introduction Software bugs are unavoidable in software development and can lead to security breaches, system failures, and user dissatisfaction, making it crucial to detect and fix them efficiently. However, manual debugging is time-consuming, particularly when dealing with large and intricate software systems. The growing need for high-quality and reliable software, coupled with the increasing complexity of software systems, has led to a surge of interest in automated program repair (APR). APR is an evolving research area that automatically fixes software bugs to enhance software reliability and maintainability. APR can potentially save developers time and effort, improve software quality and maintenance, and enable faster and more frequent software releases (Huang et al., 2023; Gao et al., 2022; Gazzola et al., 2018; Monperrus, 2018; Le Goues et al., 2019). Recent advancements in machine learning have shown promise in improving APR by adopting deep learning techniques, such as sequence-to-sequence, neural machine translation (NMT), and graph-to-sequence models, to automatically generate correct patches for buggy source code (Zhang et al., 2023; Zhong et al., 2022). These techniques can learn patterns from large code repositories and generate bug-fixing patches with state-of-the-art performance. APR tools that use these techniques are called neural program repair tools. Neural program repair has been mostly implemented with supervised learning on past bug-fixing commits to generate patches as sequences of tokens or edits, given a buggy code and its context (Chakraborty et al., 2022; Chen et al., 2019; Ding et al., 2020; Jiang et al., 2021; Li et al., 2020; Lutellier et al., 2020; Ye et al., 2022; Zhu et al., 2021). However, most existing APR methods are limited by their language-specificity and their high computational cost. They are either expensive to train for multiple programming languages (Lutellier et al., 2020) or focus on a single language domain. 
Although they may generalize to other languages, they are rarely implemented and evaluated for other languages. This restricts their applicability and scalability across different languages and domains of code. For example, CoCoNuT (Lutellier et al., 2020) and CIRCLE (Yuan et al., 2022) are two of the few approaches that are evaluated on multiple programming languages. To achieve multilingual repair, CoCoNuT trains separate models for each programming language, which requires large amounts of resources. CIRCLE uses continual learning on a single model but still needs measures to prevent catastrophic forgetting of the model and also a re-repairing post-processing strategy to provide patches for different languages. To address these limitations, this paper proposes T5APR, a novel multilingual neural repair method that leverages the power of transformer sequence-to-sequence models (Vaswani et al., 2017). T5APR is based on CodeT5 (Wang et al., 2021), a pre-trained model for code generation and understanding. T5APR fine-tunes CodeT5 in multitask learning style on a dataset of buggy and fixed code snippets and uses checkpoint ensemble (Chen et al., 2017) from different training steps to generate candidate patches. As shown in Figure 1, we train a unified model to fix bugs from various programming languages and use language control codes in our input prompt to distinguish between different languages. This approach enables us to achieve efficient and scalable training with low resource consumption. T5APR then ranks and validates the generated candidate patches using the project's test suite to select the most suitable one. We evaluate our method on six benchmarks including Defects4J (Just et al., 2014), Bears (Madeiral et al., 2019), QuixBugs (Lin et al., 2017), Codelfaws (Tan et al., 2017), ManyBugs (Le Goues et al., 2015), and BugAID (Hanam et al., 2016), and compare its performance with 11 existing APR methods. Results show that T5APR can generate correct patches for various types of bugs in different languages, and achieves state-of-the-art performance in terms of both repair effectiveness (i.e., correct fixes) and efficiency (i.e., ranking of fixes). Across all the benchmarks and from 5,257 bugs, T5APR fixes 1,985 of them; meaning the first plausible patch that it generates is semantically equivalent to the developer's patch. The main contributions of this paper are as follows: * We introduce T5APR, a novel multilingual neural repair approach that offers a solution for bug fixing across various programming languages (Section 2). * We address the challenge of resource efficiency in training without introducing additional models for each language by modeling multilingual APR with multitask learning (Section 2.6). * We propose a checkpoint ensemble strategy, enhancing the effectiveness of generated patches (Section 2.7). * We evaluate T5APR on six benchmarks and compare its performance with 11 state-of-the-art APR methods, achieving competitive performance (Sections 3 and 4). * We provide an open-source implementation of T5APR, and our results are publicly available to foster further research and practical applications: [https://github.com/h4iku/T5APR](https://github.com/h4iku/T5APR). Figure 1: Illustration of T5APR for multilingual program repair. We discuss related work in Section 5, and Section 6 concludes the paper and suggests future directions. ## 2 Approach ### Overview Figure 2 shows the overview of T5APR's structure. 
T5APR leverages CodeT5 as its base model and combines the outputs of multiple checkpoints to achieve improved performance in automated program repair (APR). T5APR involves two stages: training and inference. During the training stage, the CodeT5 model is fine-tuned on a large-scale multilingual dataset of buggy and fixed code snippets. In the inference stage, multiple checkpoints are used to generate candidate patches given buggy code and its context. The checkpoints are selected from different steps of the fine-tuning process. Finally, we rank the candidate patches based on a combination of their rank in each checkpoint and their likelihood score, and validate them using the project's test suite to select the most suitable patch. APR tools usually return the highest-ranked patch that compiles and passes the test cases, known as the first plausible patch. The following sections will describe this process in detail. ### Data extraction The process of training T5APR involves the extraction of vast amounts of data consisting of the buggy lines (source), their surrounding context to enhance the model's understanding of the faulty code, and their corresponding fixed versions (target). The training data is collected from multiple open-source projects in different programming languages, ensuring a broad representation of real-world code scenarios. Figure 2: Overview of T5APR. For the buggy context, we adopt the "immediate buggy context," which refers to the function or method containing the buggy lines. Although other context choices, such as the entire file content or context obtained through data flow analysis or program slicing are possible, the immediate buggy context is often short and can easily be obtained for a multilingual model. The bigger the context, the more likely it is to have the needed ingredients to fix the bug (Ding et al., 2020; Yang et al., 2021), but we have to give the model a longer sequence, which increases the computation time and resources. Therefore, we aim to create a balanced training environment that efficiently captures essential information for APR without sacrificing computational resources. The context-independence of our model allows for future incorporation of different context types to potentially improve performance (Chen et al., 2019). In our experiments, we train the T5APR model on the dataset collected by CoCoNuT (Lutellier et al., 2020). This dataset follows the same criteria drawn above and consists of tuples of buggy, context, and fixed hunks of code extracted from the commit history of various open-source projects. A hunk is a set of consecutive lines of code change extracted from the commit history. For example, in Figure 2(a), the lines that start with (-) are buggy lines, lines that start with (+) are fixed lines, and the whole flatten function with the buggy lines is the context. The same applies to Figure 2(b), except that the fixed hunk has more than one line here. CoCoNuT uses a keyword-based heuristic to identify bug-fixing commits from their commit messages (Ray et al., 2016; Mockus and Votta, 2000). They manually examined random commits and confirmed the effectiveness of this filtering process. However, we further clean the data during preprocessing to ensure high-quality training instances. ### Preprocessing This section describes the steps taken to prepare the input training data for the T5APR model. 
Before feeding the data to the model, we apply several preprocessing steps to enhance data quality and reduce computational resource requirements (Raffel et al., 2020): * **Comment removal**: Comments are removed from both sources and targets. This step ensures that the model focuses solely on the functional aspects of the code. * **Deduplication**: We deduplicate the training data across source, context, and target based on their string representation, disregarding whitespace characters. This eliminates duplicate instances with the same functionality that only differ in whitespace characters, significantly reducing the dataset size without compromising the diversity of the code snippets. * **Identical source and target removal**: Instances with identical source and target are discarded. This also includes instances with both empty source and target, and instances where their source and target only differ in comments. Such instances do not represent actual bug fixes, and they provide no meaningful information for learning. * **Empty target filtering**: Instances with empty targets are removed from the training data. Although this may negatively affect the model's ability to generate deletion operator patches, we demonstrate that the model can still effectively generate empty patches. In cases where the model does not produce an empty patch because it is a single operator, we manually add an empty patch to the beginning of the patch list. * **Source length filtering**: We filter instances based on the after-tokenization length of their source (excluding context). This step ensures that the code snippets are compatible with the model's input length constraints and only complete patches are used. Figure 3: Examples of buggy lines, buggy context, and fixed lines. ### Code representation and tokenization We represent buggy lines and their associated context in a unified format using a special delimiter token : for the tokenization process: input = prefix buggy_lines : context where prefix is a language-specific control code that distinguishes different programming languages and is added to the beginning of each example, following previous works that use T5-based models (Berabi et al., 2021; Raffel et al., 2020; Wang et al., 2021). In the case of multiline bugs, we concatenate lines using whitespace and put them right after each other. This input is then tokenized and truncated if necessary to ensure that its size remains below the model's maximum limit. We only use instances that their prefix + buggy_lines and target_lines lengths after tokenization are less than or equal to the model's maximum input and output sizes. Therefore, the truncation only affects the context part of each instance, if necessary. We also tokenize the target in the same manner but independently and without a prefix or context. To tokenize inputs and targets, we use a pre-trained RoBERTa-based subword tokenizer, which uses byte-level byte-pair-encoding (BPE) (Sennrich et al., 2016). We use the tokenizer that comes with CodeT5 and is trained to be efficient in tokenizing source code. By using a tokenizer trained specifically on code, we can reduce the number of generated tokens, which in turn improves model's training performance and output generation. Subword tokenization algorithms split rare words into smaller, meaningful pieces, while leaving common words intact. A BPE tokenizer allows us to include rare project-specific tokens in our vocabulary by breaking them into smaller pieces (Jiang et al., 2021). 
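To make the input construction concrete, the sketch below builds one prefixed source–context string and tokenizes it with the publicly released CodeT5 tokenizer. The example snippets, the exact spelling of the language prefix, and the printed identifier are illustrative assumptions rather than our exact implementation.

```python
# Illustrative sketch (assumed example values): building and tokenizing one training input.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-small")

prefix = "Python"                              # language-specific control code (assumed spelling)
buggy_lines = "while arr:"                     # localized buggy hunk; multiline hunks joined by spaces
context = "def flatten(arr): while arr: ..."   # immediate buggy context (the enclosing function)
fixed_lines = "while isinstance(arr, list):"   # target, tokenized separately and without context

# Unified input format: prefix, buggy lines, the ":" delimiter, then the context.
source = f"{prefix} {buggy_lines} : {context}"

encoded_source = tokenizer(source, truncation=True, max_length=512)
encoded_target = tokenizer(fixed_lines, truncation=True, max_length=256)

# The byte-level BPE tokenizer splits rare, project-specific identifiers into known subwords.
print(tokenizer.tokenize("getConfigurationStepParser"))
print(len(encoded_source["input_ids"]), len(encoded_target["input_ids"]))
```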
Subword tokenization gives the model a reasonable vocabulary size while trying to minimize the out-of-vocabulary (OOV) problem (Karampatsis et al., 2020). The final output comprises tokenized source-context pairs and tokenized target labels. We represent the encoded input in the following format: <s>prefix \(b_{1}b_{2}...b_{n}\):\(c_{1}c_{2}...c_{m}\)</s> where \(n\) and \(m\) denote the number of buggy and context tokens, respectively. Tokens <s> and </s> mark the beginning and end of a sequence. For example, for a Python source a = 1 + 2 with a context b = 3, the encoded input would be <s>Python a = 1 + 2 : b = 3</s>. ### CodeT5 CodeT5 undergoes four pre-training tasks to acquire its code-aware capabilities: 1. Masked span prediction that randomly selects spans of arbitrary lengths to mask and then uses the decoder to predict these masked spans marked with some sentinel tokens. 2. Identifier tagging that trains the model to understand whether a code token is an identifier or not. 3. Masked identifier prediction that masks all identifiers in the code and uses a unique mask token for all occurrences of one specific identifier. 4. Bimodal dual generation that considers the generation of natural text from source code and source code from natural text (NL \(\leftrightarrow\) PL). All these tasks are formulated as sequence-to-sequence tasks. Tasks 1, 2, and 3 are part of identifier-aware denoising pre-training that enhances the model's understanding of code syntax, structure, and semantics. Task 4 is effective for conversions between text and code, such as generating comments for code or generating code snippets from a description, as GitHub Copilot does. ### Fine-tuning CodeT5 Fine-tuning is the process of adapting a pre-trained model to a specific task using task-specific data. In our approach, we fine-tune the CodeT5 model for multilingual APR. This involves training a unified model on a multilingual dataset comprising multiple programming languages; in our case Java, Python, C, and JavaScript. Our training data contains three columns: the buggy hunk (source), the surrounding buggy context function, and the corresponding fixed hunk (target). The fine-tuning process adjusts the parameters of the pre-trained CodeT5 model to better suit the specific task of APR by minimizing a cross-entropy loss function that measures the discrepancy between the model's predictions and the ground-truth target fixes. As the model encounters task-specific data, it continues to learn and update its parameters to improve performance on the repair task. We fine-tune all languages simultaneously in batches that contain samples from all programming languages while using a prefix to identify each language. This approach leverages multitask learning (i.e., considering repairing each language as a separate task), enlarges the dataset, and allows knowledge transfer between bug-fixing tasks in different languages. By combining data from different languages (Section 2.4), we facilitate multilingual learning, where the model learns from bugs and fixes across various programming languages. Notably, this strategy proves particularly effective for handling bugs that are common across all languages (Berabi et al., 2021). We fine-tune the model for a specified number of epochs denoted as \(i\) while saving a checkpoint every \(j\) steps, resulting in \(k\) checkpoints as shown in Figure 2.
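As a rough illustration of this fine-tuning setup, the following sketch fine-tunes CodeT5-small on a toy mixed-language batch with Hugging Face's Seq2SeqTrainer and saves intermediate checkpoints. The toy examples, the save_steps value, and the other hyperparameters are placeholders (the tuned values are given later in the implementation details), so this is a minimal sketch rather than our training script.

```python
# Minimal sketch of multitask fine-tuning with periodic checkpointing (placeholder values).
from transformers import (AutoTokenizer, DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments, T5ForConditionalGeneration)

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-small")
model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-small")

def encode(source, target):
    features = tokenizer(source, truncation=True, max_length=512)
    features["labels"] = tokenizer(target, truncation=True, max_length=256)["input_ids"]
    return features

# A batch mixes samples from different languages; the prefix identifies each language.
train_dataset = [
    encode("Python while arr: : def flatten(arr): ...", "while isinstance(arr, list):"),
    encode("Java return null; : T createValue(String s) { return null; }",
           "throw new ParseException(s);"),
]

args = Seq2SeqTrainingArguments(
    output_dir="t5apr-checkpoints",
    num_train_epochs=1,                  # i training epochs
    per_device_train_batch_size=2,
    learning_rate=1e-4,
    lr_scheduler_type="constant",
    save_steps=1,                        # save a checkpoint every j steps -> k checkpoints
    save_total_limit=5,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
)
trainer.train()  # cross-entropy between predicted and developer-written fixed hunks
```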
### Checkpoint ensemble Because of the diverse nature of bugs and fixes, a single model with optimal parameters may not generalize well (Lutellier et al., 2020). Ensemble learning has been a popular technique in machine learning for enhancing model performance and robustness (Dietterich, 2000). In the context of transformer models, ensemble learning involves training multiple instances of the same model with different initialization or hyperparameters and then combining their results to obtain a final output. Prior approaches in APR have utilized ensemble models, which combine multiple distinct models to generate bug-fixing patches (Jiang et al., 2021, 2023; Lutellier et al., 2020). While this method has shown effectiveness in fixing more bugs, it often entails a significant computational cost due to training multiple specialized models with different inputs, hyperparameters, and, in some cases, even distinct architectures. In contrast, we adopt a checkpoint ensemble approach for T5APR, which not only improves performance but also reduces training overhead (Chen et al., 2017; Das et al., 2020). Instead of training separate models, we exploit the diverse capabilities of the model at different training steps by saving and utilizing multiple checkpoints. We save \(k\) checkpoints during the model training process, where \(k\) represents the number of checkpoints used in the ensemble. The saved checkpoints have complementary abilities to generate patches for different types of bugs and contribute to the quality of patch ranking. ### Patch generation and ranking Having obtained \(k\) checkpoints from the trained model, we now proceed with generating patches for each bug. We apply the same data preparation and tokenization steps to the localized buggy hunk and its context as described in Section 2.4, with the only difference being that we truncate all long instances to match the model size without discarding any of them. In some cases, the buggy hunk is not within a function. Although our training data always contains functions, we do not discard these bugs. Instead, we let the context be empty and try to generate patches using only the buggy lines. Next, we generate candidate patches for a given example using the model. To achieve this, we use beam search with a specific beam size \(t\) on each checkpoint, resulting in the generation of \(t\) best patches from each checkpoint based on the maximum likelihood estimation score of each sequence. In total, we obtain \(k\times t\) patches through the checkpoint ensemble as shown in Figure 2. To consolidate the generated patches from different checkpoints, we combine, deduplicate, and rerank patches by applying the following steps: 1. We normalize whitespace characters in the generated patches to ensure consistency. 2. We merge and sort the patches for each hunk according to their checkpoint ranks, breaking ties using the sequence scores (i.e., the likelihood score of each sequence generated by each checkpoint). 3. We remove patches that are identical to the buggy source, as they do not contribute to the repair process. 4. We deduplicate the patches, keeping only the unique ones, and retain the first patch in the list in case of duplicates. 5. Finally, to account for the possibility of removing buggy lines, we add an empty patch at the beginning of the list for any bug that lacks one, as an empty patch corresponds to removing the buggy lines. For single-hunk bugs, the generated list is ready for validation. 
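The single-hunk part of this procedure can be sketched as follows; the checkpoint directory names are placeholders, and the sorting simply implements the merge-by-checkpoint-rank rule with sequence-score tie-breaking from step 2, so this is an illustration of the consolidation logic rather than our exact code.

```python
# Illustrative sketch: ensemble generation and candidate-patch consolidation.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("Salesforce/codet5-small")
checkpoint_dirs = ["ckpt-1", "ckpt-2", "ckpt-3", "ckpt-4", "ckpt-5"]  # placeholder paths

def generate_candidates(source, beam_size=100):
    """Return (checkpoint_rank, sequence_score, patch_text) triples from all checkpoints."""
    inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=512)
    candidates = []
    for ckpt in checkpoint_dirs:
        model = T5ForConditionalGeneration.from_pretrained(ckpt)
        out = model.generate(**inputs, num_beams=beam_size, num_return_sequences=beam_size,
                             max_length=256, output_scores=True, return_dict_in_generate=True)
        for rank, (seq, score) in enumerate(zip(out.sequences, out.sequences_scores)):
            candidates.append((rank, score.item(),
                               tokenizer.decode(seq, skip_special_tokens=True)))
    return candidates

def consolidate(candidates, buggy_source):
    normalize = lambda s: " ".join(s.split())                 # 1. normalize whitespace
    ranked = sorted(candidates, key=lambda c: (c[0], -c[1]))  # 2. checkpoint rank, then score
    patches, seen = [], set()
    for _, _, patch in ranked:
        patch = normalize(patch)
        if patch == normalize(buggy_source):                  # 3. drop patches identical to the bug
            continue
        if patch in seen:                                     # 4. deduplicate, keep first occurrence
            continue
        seen.add(patch)
        patches.append(patch)
    if "" not in seen:                                        # 5. ensure an empty (deletion) patch first
        patches.insert(0, "")
    return patches
```

Sorting by checkpoint rank before sequence score keeps each checkpoint's highest-confidence suggestions near the top of the combined list, which is how the checkpoints complement one another in the final ranking.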
However, for multi-hunk bugs (bugs that require changes in more than one code location), we undertake additional processing to reduce the search space of patches (Saha et al., 2019). Specifically, we focus on multi-hunk patches that exhibit the same changes across all hunks (Maderial and Durieux, 2021). We identify identical patches among those generated for all hunks of a bug and retain only the patches present in all hunks. The patches are then sorted based on the maximum of sequence score among each hunk's patches. Consequently, we obtain a single patch that can be applied to all hunks, significantly reducing the number of patches to validate. These final candidate patches undergo further validation to select the correct fixes. ### Patch validation Test-suite-based APR uses test-suite as program correctness specification. In this stage, we validate the candidate patches obtained from the previous stage by applying them to the original source code, compiling the patched code, and running the developer-written test suite. The goal is to filter out patches that do not compile or fail to pass the test cases in the project's test suite. To validate the candidate patches, we follow these steps: We apply each patch to the buggy location of the source code by replacing the buggy lines with the corresponding fixed lines from the generated patch. We then compile the patched code and run the test suite. To make this process faster, if possible, we first run the triggering test cases, and if all of them pass, we proceed to run the rest of the test cases that make the buggy version pass to avoid regression. We make sure to omit flaky tests from this process. The patched program is considered valid if it passes all the test cases that the buggy project passed and passes the triggering test cases that previously failed on the buggy project. This validation approach aligns with common practices in many APR studies (Lutellier et al., 2020). The resulting patches that pass the validation process are referred to as plausible patches. However, it is essential to compare these plausible patches to the ground truth (i.e., developer-written patches) to assess whether they correctly fix the bug or only overfit to the test cases (Qi et al., 2015; Smith et al., 2015). ## 3 Experimental setup ### Research questions The research questions that we aim to answer in this paper are: * **RQ1 (Effectiveness and generalizability)**: How does T5APR compare with state-of-the-art APR methods in terms of repair effectiveness and generalizability? * **RQ2 (Multiple plausible patches)**: How does the consideration of multiple plausible patches improve T5APR's repair effectiveness? * **RQ3 (Ablation study)**: What is the impact of checkpoint ensemble on T5APR's performance? * **RQ4 (Multilingual and monolingual)**: How does the effectiveness of T5APR's multilingual model compare with monolingual models for each programming language? ### Datasets Training dataWe use the same dataset provided by CoCoNuT (Lutellier et al., 2020) on GitHub1 for training the T5APR model. The dataset consists of tuples of buggy, context, and fixed hunks of code from the commit history of various open-source projects hosted on platforms such as GitHub, GitLab, and BitBucket. The dataset covers multiple programming languages including Java, Python, C, and JavaScript, making it ideal for training a multilingual APR model. 
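For concreteness, such bug-fixing tuples can be represented as records with one entry per hunk, for example with the Hugging Face Datasets library that we use later for data preparation; the field names and the two sample records below are hypothetical and only illustrate the shape of the data.

```python
# Hypothetical records illustrating the (language, buggy, context, fixed) training tuples.
from datasets import Dataset

records = [
    {"language": "Python",
     "buggy": "while arr:",
     "context": "def flatten(arr): while arr: ...",
     "fixed": "while isinstance(arr, list):"},
    {"language": "Java",
     "buggy": "int m = (lo + hi) / 2;",
     "context": "int mid(int lo, int hi) { int m = (lo + hi) / 2; return m; }",
     "fixed": "int m = lo + (hi - lo) / 2;"},
]
train_data = Dataset.from_list(records)
print(train_data)  # features: ['language', 'buggy', 'context', 'fixed'], num_rows: 2
```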
Table 1 provides a summary of the dataset statistics, including the cutoff year of collected data, the number of projects, instances before preprocessing, and the number of instances after preprocessing and tokenization for each programming language. Footnote 1: [https://github.com/lin-tan/CoCoNu-Artifact](https://github.com/lin-tan/CoCoNu-Artifact) CoCoNuT finds the date of the earliest bug in each evaluation benchmark and collects commits that were made before that date and discards instances committed after that to avoid overlapping train and evaluation data. The cutoff year in Table 1 is the year that the data is collected until that year. Other work has also used this data (Jiang et al., 2021, 2023; Ye et al., 2022; Yuan et al., 2022). In our experiment, we set the maximum input length of the model to 512 tokens, and the maximum output length to 256. Figure 4 shows the distribution of the source and target of training data instances based on their number of tokens after preprocessing but before size filtering. The \(x\)-axis indicates the token length range, while the \(y\)-axis represents the count of instances. Instances with shorter token lengths are more abundant, and as token length increases, the count of instances gradually decreases. This figure shows that our choice of maximum input and output length is reasonable and covers most of the training data. Table 1 shows that from the total of 2,644,305 instances after preprocessing, we retain 2,324,030 instances after size filtering, which is 87.89% of the data. \begin{table} \begin{tabular}{l r r r r r} \hline \hline Language & Cutoff year & Projects & Instances & After preprocessing & After size filtering \\ \hline Java & 2006 & 45,180 & 3,241,966 & 1,125,599 & 1,009,268 \\ Python & 2010 & 13,899 & 480,777 & 302,727 & 264,842 \\ C & 2005 & 12,577 & 2,735,506 & 671,119 & 586,893 \\ JavaScript & 2010 & 10,163 & 2,254,253 & 544,860 & 463,027 \\ \hline \hline \end{tabular} \end{table} Table 1: Summary of training data instances before and after preprocessing. Figure 4: Distribution of training data instances by their token length. Bug benchmarksWe evaluate the performance of T5APR on a diverse set of benchmarks, spanning multiple programming languages and encompassing various types of bugs (Sobreira et al., 2018; Ye et al., 2021). We use the following benchmarks in our evaluation: Defects4J (Java) (Just et al., 2014), Bears (Java) (Madeiral et al., 2019), QuixBugs (Java and Python) (Lin et al., 2017), Codeflaws (C) (Tan et al., 2017), ManyBugs (C) (Le Goues et al., 2015), and BugAID (JavaScript) (Hanam et al., 2016). These benchmarks collectively cover a wide range of real-world software defects and coding challenges. Defects4J is a database and framework of real-world bugs from 17 well-known open-source Java projects. We follow prior work (Jiang et al., 2023; Ye et al., 2022; Zhu et al., 2021) and separate Defects4J into two versions: Defects4J (v1.2) and Defects4J (v2.0). Defects4J (v1.2) contains 395 bugs and Defects4J (v2.0) contains 444 additional bugs that are only available in the v2.0 version. Bears benchmark is a collection of bugs from 72 Java projects hosted on GitHub and extracted using their continues integration status history. QuixBugs contains 40 bugs from the Quixey Challenge problems in both Java and Python. The programs are small classic algorithms in a single file. Codeflaws is a set of bugs from Codeforces programming competition in C where each program is a single file. 
ManyBugs contains bugs from large popular open-source C projects. BugAID benchmark consists of 12 examples of common bug patterns in JavaScript described in Hanam et al. (2016). Table 2 provides detailed statistics for each benchmark. The table includes the number of bugs present in each benchmark, the count of bugs that are removed from consideration because they are either duplicates of other bugs or their buggy and fixed version has no change, the remaining number of bugs eligible for evaluation, and the total number of bugs that we attempt to repair. These statistics offer insights into the scale of each benchmark and the scope of our experimental evaluation. ### Implementation details and parameters ImplementationWe implement T5APR in Python and use the Hugging Face Transformers library (Wolf et al., 2020) with PyTorch (Paszke et al., 2019) backend for training the model. Data preparation and preprocessing are performed using the Hugging Face Datasets library (Lhoest et al., 2021), which is based on Apache Arrow for efficient data processing. We use the CodeT5 checkpoints that were trained using identifier-aware denoising pre-training objective for 100 epochs. CodeT5 has multiple variants with different sizes and number of parameters. We fine-tune the small model (CodeT5-small) that has a total of 60M parameters. Although bigger models tend to perform better, it has been shown that the small model is also relatively capable (Berabi et al., 2021; Wang et al., 2021). We leave using other model sizes of CodeT5 to future work due to resource limitations. The CodeT5 tokenizer's vocabulary size is 32,100, of which 32,000 tokens are obtained from the pre-training dataset with non-printable characters and low-frequency tokens (occurring less than three times) filtered and 100 special tokens for padding (<pad>), masking (<mask>), marking the beginning and end of a sequence (<s>, </s>), and representing unknown tokens (<unk>). The choice of hyperparameters, such as the learning rate, batch size, or number of training epochs can have a significant impact on the performance of the fine-tuned model. These hyperparameters are typically tuned using a separate validation set, which is held out from the training data and used to evaluate the model's performance on unseen examples. \begin{table} \begin{tabular}{l r r r r} \hline \hline Benchmark & Bugs & Removed & Remained & Attempted to Repair \\ \hline Defects4J (v1.2) & 395 & 2 & 393 & 331 \\ Defects4J (v2.0) & 444 & 0 & 444 & 357 \\ Bears & 251 & 0 & 251 & 83 \\ QuixBugs (Java) & 40 & 0 & 40 & 37 \\ QuixBugs (Python) & 40 & 0 & 40 & 40 \\ Codeflaws & 3,903 & 7 & 3,896 & 3,863 \\ ManyBugs & 185 & 4 & 181 & 130 \\ BugAID & 12 & 0 & 12 & 10 \\ \hline Total & 5,270 & 13 & 5,257 & 4,851 \\ \hline \hline \end{tabular} \end{table} Table 2: Evaluation benchmark statistics. We employ the Optuna optimization framework (Akiba et al., 2019) and use the AdamW optimizer (Loshchilov and Hutter, 2018) to conduct hyperparameter search. We randomly divide our Python training dataset into a training and a validation set, with 5,000 instances in the validation set and the rest in the training set. This separation is only for hyperparameter tuning and later for training, we use the entire Python dataset. The evaluation criteria for hyperparameter tuning are the exact match and the BLEU score (Papineni et al., 2002). Exact match is the ratio of instances where the predicted sequence exactly matches the ground truth sequence. 
BLEU score looks at how many n-grams in the model's output match the n-grams in the ground truth sequence and is a measure of the similarity between the model output and the ground truth sequence, which we compute using the sacreBLEU library (Post, 2018). We define the objective metric for hyperparameter optimization as the sum exact match and BLEU score as follows: \[\text{objective metric}=\text{exact match}\times 100+\text{BLEU score} \tag{1}\] The BLEU score from sacreBLEU ranges from 0 to 100, with 100 being the best possible score. Therefore, we multiply exact match by 100 to have the same range values for both metrics. The hyperparameter search space is defined as follows: The learning rate ranges from \(1e-5\) to \(1e-3\), training epochs range from 1 to 5 epochs, training batch size has a search range from 4 to 16, beam size to 5, and learning rate scheduler type includes constant, cosine, linear, and polynomial. After hyperparameter tuning, we set the final hyperparameters as follows: the train batch size is set to 8, the training epochs to 1, the learning rate to \(1e-4\), and the learning rate scheduler type to constant. We also use mixed precision of FP16 for faster training. During training, we set \(k=5\) and save five checkpoints at each 20% steps of the training epoch. In our decision to use five checkpoints, we draw inspiration from the previous successful approaches of CoCoNuT, CURE, and KNOD, which also employ five to ten models in their ensemble. This number has proven effective in related work, and we adopt it as a reasonable starting point for our ensemble. We show that more checkpoints result in more bug fixes, as also shown by CoCoNuT and CURE. For the final inference and patch generation of benchmarks, we set the beam size to 100 to generate 100 patches from each checkpoint. A larger beam size would improve the results (Tufano et al., 2019; Ye et al., 2022), but due to resource limitations, we chose a beam size of 100. For parsing source files and extracting buggy context, we utilize the Tree-sitter2 parsing library and the lexers available in Pygments3. These libraries can tokenize and parse many programming languages, making them suitable for our multilingual approach. We also use Unidiff4 to parse the diff of the buggy and fixed codes and extract the location of buggy hunks. Footnote 2: [https://tree-sitter.github.io/tree-sitter/](https://tree-sitter.github.io/tree-sitter/) Footnote 3: [https://pygments.org/](https://pygments.org/) Footnote 4: [https://github.com/matiasb/python-unidiff](https://github.com/matiasb/python-unidiff) InfrastructureWe train our model on a server with 4 cores of an Intel Xeon Platinum 8259CL CPU, 16 GB RAM, and an NVIDIA T4 GPU with 16 GB VRAM. For evaluation, we use another system with a 6-core Intel Core i7-8750H CPU, 16 GB RAM, and an NVIDIA GeForce GTX 1060 GPU with 6 GB VRAM. ### Patch assessment Patches that can be compiled and pass the project's test suite are called plausible. However, a plausible patch may not fix the bug if the test suite is weak and does not cover all the cases (Qi et al., 2015). This is called the overfitting problem when the patch only works for the test cases and not for the problem (Smith et al., 2015). Therefore, we use the following criteria to determine if a plausible patch is correct (Ye and Monperrus, 2023): * It is identical to the developer-provided patch. 
* It is identical to correct patches generated by existing techniques that have undergone public review by the community in open-source repositories. * We judge it semantically equivalent to the developer-provided patch using rules described by Liu et al. (2020). To reduce the potential for errors in this process, we make all generated patches publicly available for public judgment and review. ### Analysis procedure We compare T5APR against recent approaches that report their results under perfect fault localization setting. Techniques that use perfect fault localization are given the exact location of the bug. We identify the location of each bug with the help of human-written patches and their diff. Some approaches use different fault localization algorithms or implementations to find the buggy location, which makes it difficult to only compare the repair capabilities of each approach. Recent studies suggest that perfect fault localization is the preferred way to evaluate APR approaches, as it allows a fair comparison of APR techniques without depending on the fault localization method (Liu et al., 2019, 2020). We compare T5APR with 11 state-of-the-art tools including eight Java APR tools: SequenceR (Chen et al., 2019), TBar (Liu et al., 2019), DLFix (Li et al., 2020), CURE (Jiang et al., 2021), Recoder (Zhu et al., 2021), RewardRepair (Ye et al., 2022), and KNOD (Jiang et al., 2023). One C tool: SOSRepair (Afzal et al., 2019). Two tools that use large language models: Codex (Prenner et al., 2022) and ChatGPT (Sobania et al., 2023). Lastly, CoCoNuT (Lutellier et al., 2020), which is evaluated on all four programming languages. We did not include CIRCLE (Yuan et al., 2022) in the evaluation since they have not validated their candidate patches using benchmarks' test suite and only reported exact match results across their generated patches. To compare with these approaches, we follow the previous works and only consider the first plausible patch generated by T5APR that successfully compiles and passes the test suite (Durieux et al., 2019; Liu et al., 2019; Lutellier et al., 2020; Zhong et al., 2023). There might be correct patches further down the plausible patch list, but we analyze those in another section. We obtain results from each tool's paper or repository. For those who do not report results with perfect localization or for some benchmark in their paper, we use results from other publications that evaluate said tools under perfect localization setting (Liu et al., 2020; Zhong et al., 2023). To compute patch ranking, compilable patch rate, and unique bugs that T5APR can fix compared with other approaches, we obtain the list of candidate patches and fixed bugs for each approach on each benchmark from their respective repositories (Those that provide it). ## 4 Results and discussion ### RQ1: Effectiveness and generalizability Table 3 presents the results of evaluating the performance of T5APR on multiple benchmarks and against a selection of state-of-the-art APR tools. The table shows the name and the total number of considered bugs for each benchmark below its name. The results are displayed as \(c/p\), where \(c\) is the number of correct patches that are ranked first as the first plausible patch by an APR technique, and \(p\) is the total number of plausible patches. We also show, in parentheses, the number of bugs that have identical patches to the developer-written patch. 
A dash (-) indicates that the tool has not been evaluated on the benchmark or does not support the programming language of the benchmark, to the best of our knowledge. For the ManyBugs and BugAID benchmarks, we could not validate their patches; therefore we cannot show the number of plausible patches, and only report the number of correct patches that we manually identified. We highlight T5APR's performance on these benchmarks as follows. Overall, T5APR fixes 1,985 bugs across all the benchmarks, with 1,413 of them patched identically to the developer's patch. Results show that T5APR outperforms or equals all other approaches on all the evaluated benchmarks except for Defects4J (v1.2) and ManyBugs. For Defects4J, we observe that T5APR achieves competitive results, particularly in Defects4J (v2.0), where it generates correct fixes for 56 bugs and outperforms all other approaches. In Defects4J (v1.2), KNOD performs better than T5APR by fixing 71 bugs, while T5APR fixes 67 bugs. It should be noted that CoCoNuT, CURE, and KNOD use a substantially larger beam size of 1000 versus the 100 that T5APR uses, and it has been shown that a larger beam size leads to more correct patches (Tufano et al., 2019; Ye et al., 2022). Notice that if we consider all the generated plausible patches (Table 7), T5APR reaches 72 correctly fixed bugs. This shows the need for a better patch ranking strategy in future studies (Kang and Yoo, 2022). In the case of Bears, T5APR demonstrates its potential by correctly repairing 24 bugs, outperforming all the compared tools. Results in these two benchmarks indicate T5APR's capability in addressing real-world bugs in Java programs. In the QuixBugs benchmark, which encompasses both Java and Python programs, T5APR shows robust performance, repairing 25 Java bugs and 29 Python bugs. The correct-to-plausible patch ratio for both versions is about 96%, where only one of the generated plausible patches is not correct. In the QuixBugs (Java) version, T5APR fails to fix the SQRT bug, and in the QuixBugs (Python) version, it fails to fix the DEPTH_FIRST_SEARCH bug, which is correctly fixed further down in the patch list. Similarly, in Codeflaws, a C programming language benchmark, T5APR showcases its robustness by correctly repairing a substantial 1,764 bugs, outperforming its only other contender, CoCoNuT. Turning to the ManyBugs benchmark, T5APR's performance remains competitive by repairing 15 bugs, positioning itself among the top-performing tools, though it is outperformed by SOSRepair. In the BugAID benchmark, T5APR achieves the highest repair rate, successfully fixing 5 out of the 12 bugs, demonstrating its competence in addressing JavaScript bugs. Out of the bugs fixed by T5APR for the Defects4J (v1.2), Defects4J (v2.0), Bears, Codeflaws, and ManyBugs benchmarks, 10, 4, 4, 36, and 2 of them are multi-hunk, respectively. These are fixed using the strategy described in Section 2.8. The remaining benchmarks either do not contain multi-hunk bugs or have no multi-hunk bugs successfully fixed by T5APR. Comparing T5APR against existing state-of-the-art methods, we consistently observe competitive or superior performance across various benchmarks. The overall results suggest T5APR's effectiveness in repairing a wide range of software defects and its ability to handle bugs from different programming languages. **Patch ranking.** To provide a comprehensive understanding of the effectiveness of T5APR's patch ranking strategy, we analyze the ranking of correct patches generated by each approach.
Figure 5 presents the patch ranking information at different thresholds. Each line in the plot corresponds to a different approach, including T5APR, CURE, RewardRepair, and KNOD. The \(x\)-axis represents the ranking thresholds, while the \(y\)-axis indicates the number of correctly fixed bugs in each threshold. We only consider tools whose patch ranking information and generated candidate patches we have access to. T5APR outperforms all other approaches in correct patch ranking except for Top-200 on QuixBugs (Java), where CURE performs better. We can also see that although KNOD reaches better results than T5APR for Defects4J (v1.2) (71 vs. 67 bugs in Table 3), T5APR fixes more bugs up to the Top-500 generated candidate patches. KNOD generates fixes for the rest of the bugs in ranks higher than 500 due to using a larger beam size. Overall, 310 of the correct patches generated by T5APR are ranked first in the candidate patch list. **Unique bug fixes.** Figure 6 presents the unique and overlapped number of bugs repaired by individual approaches from Table 3 for all the benchmarks. For benchmarks with more than four tools, we select the three best tools for that benchmark and combine the fixed bugs of the remaining tools under "Others". Across all the benchmarks, T5APR fixes 1,442 bugs that other tools do not fix. On the Defects4J benchmark, T5APR and KNOD complement each other by fixing 21 and 25 unique bugs for v1.2 and v2.0, respectively. On the ManyBugs benchmark, T5APR shows good complementarity and, together with SOSRepair, fixes 20 unique bugs. Overall, results show that T5APR complements the compared existing works on all the evaluated benchmarks. We provide a few examples of the bugs that T5APR can fix. Figure 7 shows the fix generated by T5APR and the ground-truth patch for the Cli 40 bug from the Defects4J (v2.0) benchmark, which other tools do not fix. To fix this bug, T5APR notices that the header of the context method has throws ParseException, which prompts T5APR to synthesize an exception throw statement. The only difference between T5APR's and the developer's patch is the exception message, where T5APR uses the string parameter of the method while the developer writes a custom message.
\begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline Tool & Defects4J (v1.2) & Defects4J (v2.0) & Bears & QuixBugs (Java) & QuixBugs (Python) & Codeflaws & ManyBugs & BugAID \\ & 393 bugs & 444 bugs & 251 bugs & 40 bugs & 40 bugs & 3,896 bugs & 181 bugs & 12 bugs \\ \hline SOSRepair (Afzal et al., 2019) & - & - & - & - & - & - & **16/23** & - \\ SequenceR (Chen et al., 2019) & 12/19 & - & 16/26 & 15/16 & - & - & - & - \\ TBar (Liu et al., 2019b) & 53/84 & - & - & - & - & - & - & - \\ DLFix (Li et al., 2020) & 39/68 & - & - & - & - & - & - & - \\ CoCoNuT (Lutellier et al., 2020) & 44/85 & 21/31 & 19/33 & 13/20 & 19/21 & 423/716 & 7/- & 3/- \\ CURE (Jiang et al., 2021) & 57/104 & 19/- & - & **25/34** & - & - & - & - \\ Recoder (Zhu et al., 2021) & 64/69 & - & 5/17 & 17/17 & - & - & - & - \\ RewardRepair (Ye et al., 2022) & 45/- & 45/- & - & 20/- & - & - & - & - \\ Codex (Prenner et al., 2022) & - & - & - & 14/- & 23/- & - & - & - \\ ChatGPT (Sobania et al., 2023) & - & - & - & - & 19/- & - & - & - \\ KNOD (Jiang et al., 2023) & **71/85** & 50/82 & - & **25/30** & - & - & - & - \\ \hline T5APR & 67/94 (46) & **56/103** (35) & **24/33** (12) & **25/26** (18) & **29/30** (24) & **1,764/2,359** (1,259) & 15/- (14) & **5/- (5)** \\ \hline Total & & & & & & & & \\ \hline \hline \end{tabular} \end{table} Table 3: The number of correctly fixed bugs and comparison with state-of-the-art approaches. Results are from the first plausible patch by each method and are shown as _correct/plausible_. Values in parentheses are bugs with identical patches to the developer's. The highest number of correct patches for each benchmark is highlighted in bold. * public static <T> T createValue(final String str, final Class<T> clazz) throws ParseException { - return null; + throw new ParseException(str); (a) T5APR's patch. * - return null; + throw new ParseException("Unable to handle the class: " + clazz); (b) Developer-written patch. Figure 5: Ranking information of correct patches. Figure 6: Number of unique and overlapped bug fixes of T5APR and other tools. Figure 7: Fix for Cli 40 bug from Defects4J (v2.0) benchmark. Figure 8 gives the T5APR patch for Bears-32, which is only fixed by T5APR. For this bug, T5APR adds a null check for the returned value of getStep(), which is a common pattern in Java. The developer's patch for this bug is shown in Figure 2(b). T5APR's patch is semantically equivalent to the developer's patch. Figure 9 shows another Bears benchmark bug patched by T5APR. This is a complete generation patch for Bears-46 with no lines to remove and is identical to the developer's patch. For this bug, T5APR also generates a null check and then, based on the Set return type of the context method, returns an empty set collection. Figure 10 shows the patch for the same bug, LIS, from the QuixBugs benchmark in both Java (Figure 10(a)) and Python (Figure 10(b)). The generated patch is similar for both languages, but T5APR adapts the syntax to each programming language. Figure 11 shows the T5APR and developer's patch for multi-hunk bug 465-B-bug-16282461-16282524 from Codeflaws, with identical patches for both hunk locations. T5APR finds the fix lower in the context and copies it to the buggy location. It also adds an if condition before it to avoid changing it if it is already true. **Compilable patch rate.** Furthermore, we evaluate the compilable patch rate of candidate patches generated by different approaches, which reflects the tool's ability to generate syntactically correct and developer-like code.
Table 4 presents the average compilation rate across different top-X values of generated candidate patches. T5APR has a slightly lower compilation rate than RewardRepair for all the top-X values. RewardRepair has a semantic training step that rewards compilable patches in its model training through backpropagation, which greatly helps it in generating more compilable candidate patches. All tools have a decreasing compilation rate as the X value increases, which means that tools generate more non-compilable patches as they generate more candidate patches (Ye et al., 2022). We use an interval for the top-200 of CoCoNuT and CURE since they only report compilation rate for top-100 and top-1000 values. Moreover, we filtered our considered bugs to be in the same set of bugs as RewardRepair for this comparison since we target more bugs. Like previous approaches (Jiang et al., 2021; Ye et al., 2022), we combine Defects4J (v1.2) and QuixBugs (Java) patches. We directly list the numbers reported by Ye et al. (2022) for SequenceR, CoCoNuT, CURE, and RewardRepair. Table 5 shows the average compilable patch rate of the top-X candidate patches for each benchmark. The Codeflaws benchmark has the highest compilation rate among all the benchmarks, followed by QuixBugs (Java). The Bears Figure 10: Fix for LIS bug from QuixBugs benchmark. Figure 8: T5APR’s fix for Bears-32 bug from Bears benchmark. Figure 9: T5APR’s fix for Bears-46 bug from Bears benchmark. benchmark has the lowest compilation rate among all the benchmarks, followed by Defects4J (v2.0). This may indicate some characteristics of Codeflaws and QuixBugs (Java) bugs that make them easier to compile than other benchmarks, and some characteristics of Bears and Defects4J (v2.0) bugs that make them harder to compile. Validation time costWe also assess the time cost of the validation effort required to reach correct patches. Table 6 provides an overview of the time takes to find correct patches, the number of validated patches until the first correct patch is found, and the maximum instances of timeouts to reach a correct patch. Codeflaws has both the lowest and highest time to reach a correct patch because Codeflaws programs are small and fast to compile and test. Most of the time consumed in validating benchmarks is for bugs with timeout patches since we have to wait for each patch for a certain time to timeout. \begin{table} \begin{tabular}{l c c c} \hline \hline Model & Top-30 & Top-100 & Top-200 \\ \hline SequenceR & 33\% & - & - \\ CoCoNuT & 24\% & 15\% & 6\%-15\% \\ CURE & 39\% & 28\% & 14\%-28\% \\ RewardRepair & 45.3\% & 37.5\% & 33.1\% \\ \hline T5APR & 42.7\% & 36.2\% & 32.3\% \\ \hline \hline \end{tabular} \end{table} Table 4: Average compilable patch rate of the Top-X candidate patches in Defects4J (v1.2) and QuixBugs (Java). Figure 11: Fix for 465-B-bug-16282461-16282524 from Codeflaws benchmark. \begin{table} \begin{tabular}{l c c c} \hline \hline Benchmark & Top-30 & Top-100 & Top-200 \\ \hline Defects4J (v1.2) & 33.2\% & 30.1\% & 27.7\% \\ Defects4J (v2.0) & 31.8\% & 28.8\% & 26.3\% \\ Bears & 34.6\% & 26.4\% & 23.6\% \\ QuixBugs (Java) & 49.1\% & 42.6\% & 37.1\% \\ Codeflaws & 73.5\% & 70.3\% & 67.3\% \\ \hline Overall & 67.8\% & 64.8\% & 61.9\% \\ \hline \hline \end{tabular} \end{table} Table 5: Average compilable patch rate of the Top-X candidate patches. Overall in our multilingual experiment, we validated 1,172,267 patches across all the benchmarks that took about 27 days. 
Training the multilingual model for one epoch on a single GPU took about 17 hours. ### RQ2: Multiple plausible patches Table 7 compares the results when only the first plausible patch is considered versus when all plausible patches are considered. The "top-X" thresholds consider only the first X plausible patches generated. The "all" threshold encompasses all plausible patches generated for each bug. We find 2,309 correct patches when we consider all plausible patches, which is an increase of 344 from the first plausible patch. This means that 344 correct patches are not ranked as the first plausible patch by T5APR and an incorrect patch passes the test cases due to test suite limitation. This limitation is an issue that most test suite-based APR tools have in common. The number of correct patches increases the most when we raise the threshold from top-1 to top-5, which means that most of the correct patches are ranked within the top-5 plausible patches by T5APR. This is important since according to a recent study, 72% of developers are only willing to review up to five patches in practice (Noller et al., 2022). The increase in the number of correct patches is smaller when the threshold is raised from top-5 to all. ### RQ3: Ablation study We employ an ensemble approach, combining the outputs of individual checkpoints, to enhance the overall performance of T5APR. Table 8 shows the performance of each checkpoint independently for different benchmarks. Each checkpoint result includes the manually added deletion patch. Compared with Table 3, we can see that the combination of checkpoints has better results than each checkpoint independently. This demonstrates the effectiveness of our ensemble approach (Dietterich, 2000). Overall, checkpoint5 fixes more bugs than others, but considering each benchmark independently we can see that the best performing checkpoint varies. This suggests that checkpoints complement each other and learn different patterns for different bugs. We further analyze how each checkpoint contributes to the overall pool of correct patches. Table 9 shows the contribution of each checkpoint and the manual empty patch. Almost all the checkpoints contribute to the final results. The added manual deletion patch also has a positive contribution. Figure 12 shows the result of incrementally adding the patches of each checkpoint to the generated patches of previous checkpoints. 
We can see that for most benchmarks, adding more checkpoints leads to better results both when \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline \multirow{2}{*}{Benchmark} & \multicolumn{4}{c}{Time to correct} & \multicolumn{4}{c}{Validated until correct} & \multicolumn{2}{c}{Timeouts} \\ \cline{2-9} & min & max & median & mean & min & max & median & mean & max \\ \hline Defects4J (v1.2) & 00:00:06.962 & 00:37:33.473 & 00:01:23.657 & 00:03:07.288 & 1 & 234 & 8.0 & 31.22 & 2 \\ Defects4J (v2.0) & 00:00:04.437 & 00:18:58.074 & 00:01:00.709 & 00:02:54.237 & 1 & 322 & 12.0 & 53.66 & 3 \\ Bears & 00:00:20.088 & 01:24:41.593 & 00:03:21.152 & 00:10:33.795 & 1 & 194 & 8.5 & 32.42 & 0 \\ QuixBugs (Java) & 00:00:07.328 & 00:29:58.495 & 00:00:48.338 & 00:04:12.138 & 1 & 250 & 14.0 & 56.64 & 13 \\ QuixBugs (Python) & 00:00:00.630 & 00:20:09.557 & 00:00:02.718 & 00:01:27.222 & 2 & 269 & 7.0 & 58.76 & 20 \\ Codeflaws & 00:00:00.333 & 02:33:47.090 & 00:00:04.578 & 00:00:55.622 & 1 & 358 & 9.0 & 37.74 & 150 \\ \hline Overall & 00:00:00.333 & 02:33:47.090 & 00:00:05.748 & 00:01:13.520 & 1 & 358 & 9.0 & 38.45 & 150 \\ \hline \hline \end{tabular} \end{table} Table 6: Statistics for validation until correct patch. Time is in HH:MM:SS.ffff format. \begin{table} \begin{tabular}{l c c c c} \hline \hline Benchmark & Top-1 & Top-5 & Top-10 & All \\ \hline Defects4J (v1.2) & 67 & 72 & 72 & 72 \\ Defects4J (v2.0) & 56 & 64 & 65 & 65 \\ Bears & 24 & 25 & 25 & 26 \\ QuixBugs (Java) & 25 & 25 & 25 & 25 \\ QuixBugs (Python) & 29 & 30 & 30 & 30 \\ Codeflaws & 1,764 & 1,990 & 2,017 & 2,071 \\ ManyBugs & - & - & - & 15 \\ BugAID & - & - & - & 5 \\ \hline Total & 1,965 & 2,206 & 2,234 & 2,309 \\ \hline \hline \end{tabular} \end{table} Table 7: Number of correctly fixed bugs based on plausible ranking when all plausible patches are considered. considering the first plausible patch and all the plausible patches. However, for some benchmarks, such as BugAID, adding more checkpoints does not improve the results. This confirms the similar findings of Lutellier et al. (2020). ### RQ4: Multilingual and monolingual In addition to the multilingual model, we also train models under the same setting as the multilingual model but only using training data of a single programming language. We then use monolingual models to generate patches for the benchmarks in the same language. Table 10 shows the comparison of the first plausible correct patches of multilingual and monolingual models. The multilingual model outperforms the monolingual models for most benchmarks, except for ManyBugs and BugAID. Note that for ManyBugs and BugAID, running the validation step could change the results, and there might be correct patches that we have missed. Furthermore, the multilingual model fixes 426 unique bugs across all the benchmarks that the monolingual models do not fix, while the monolingual models fix 120 unique bugs. This highlights the benefit of leveraging multiple programming languages for training as it transfers bug patterns across languages. ### Threats to validity In this section, we outline possible threats that could impact the validity of our experimental findings and discuss how we mitigated them. A major threat to internal validity is the potential fault in manual patch correctness assessment, which may result in misclassification or bias due to a lack of expertise or mistakes (Ye et al., 2021). This is a common threat for all program repair results based on manual assessment (Ye et al., 2021). 
To alleviate this threat, we compared our patches to patches generated by existing tools and carefully checked their semantic equivalency to reference developer patches. For results of other tools, we use the reported performance number in the paper of approaches and cross-checked them with patches in their repositories. Another threat to internal validity relates to potential fault in the implementation and hyperparameter configuration we used. We have double-checked our implementation, and to ensure reproducibility, we used fixed manual seed values wherever possible. To further mitigate these threats, we make all generated patches and our source code publicly available for verification and review by other researchers. \begin{table} \begin{tabular}{l c c c c c} \hline \hline Benchmark & checkpoint1 & checkpoint2 & checkpoint3 & checkpoint4 & checkpoint5 \\ \hline Defects4J (v1.2) & 51/73 & 55/75 & 56/78 & 53/78 & 59/78 \\ Defects4J (v2.0) & 43/76 & 45/78 & 51/85 & 49/84 & 50/83 \\ Bears & 14/27 & 15/26 & 20/32 & 19/29 & 18/27 \\ QuixBugs (Java) & 15/17 & 15/16 & 16/17 & 18/18 & 18/18 \\ QuixBugs (Python) & 18/19 & 19/20 & 22/23 & 23/24 & 24/24 \\ Codeflaws & 1,317/1,981 & 1,318/1,959 & 1,374/2,012 & 1,381/2,028 & 1,379/2,011 \\ ManyBugs & 11/- & 12/- & 12/- & 13/- & 12/- \\ BugAID & 5/- & 5/- & 5/- & 5/- & 5/- \\ \hline Total & 1,474/2,193 & 1,484/2,174 & 1,556/2,247 & 1,561/2,261 & 1,565/2,241 \\ \hline \hline \end{tabular} \end{table} Table 8: Result of each checkpoint independently shown as _correct/plausible_. This also includes the manually added patch to each checkpoint. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Benchmark & manual & checkpoint1 & checkpoint2 & checkpoint3 & checkpoint4 & checkpoint5 \\ \hline Defects4J (v1.2) & 8/10 & 11/15 & 8/12 & 19/25 & 10/19 & 11/13 \\ Defects4J (v2.0) & 8/11 & 4/14 & 10/16 & 14/23 & 9/16 & 11/23 \\ Bears & 2/4 & 1/1 & 1/11 & 12/14 & 6/8 & 2/5 \\ QuixBugs (Java) & 1/1 & 9/10 & 5/5 & 6/6 & 4/4 & 0/0 \\ QuixBugs (Python) & 4/5 & 5/5 & 4/4 & 8/8 & 8/8 & 0/0 \\ Codeflaws & 240/291 & 268/389 & 301/406 & 306/398 & 346/462 & 303/413 \\ ManyBugs & 2/- & 2/- & 2/- & 3/- & 3/- & 3/- \\ BugAID & 0/- & 0/- & 3/- & 2/- & 0/- & 0/- \\ \hline Total & 265/322 & 300/434 & 334/444 & 370/474 & 386/517 & 330/454 \\ \hline \hline \end{tabular} \end{table} Table 9: How many of the patches come from each checkpoint. Results are shown as _correct/plausible_. A third threat to internal validity comes from using CodeT5 as our base model, which is trained on large amounts of open-source code snippets. This means that its training data could overlap with our evaluation benchmarks. This issue is hard to address since retraining this model would require significant resources. However, some factors could mitigate this concern: First, the overlapping data, if any, would be a very small fraction of the training data. Second, both the correct and incorrect program versions would likely be present in the training data without any labels indicating which one is correct or incorrect since the pre-training objective of CodeT5 is different, and it was never specifically trained for the task of program repair. The same issue is present in approaches that use Codex or ChatGPT models (Prenner et al., 2022; Sobania et al., 2023). Our APR training data is collected up to the date of bugs in our evaluation benchmark to avoid any overlap. 
A threat to external validity is a threat that our approach might not be generalizable to fixing bugs outside the tested bugs benchmarks as shown by Durieux et al. (2019) by the phenomenon of "benchmark overfitting" in program repair. We use six different benchmarks in four different programming languages with up to 5,257 real-world bugs to address this issue. Evaluation on more benchmarks (e.g., Bugs.jar (Saha et al., 2018), BugsJS (Gyimesi et al., 2019), and BugsInPy (Widyasari et al., 2020)) could be done in the future. ## 5 Related work Automated program repair (APR) is a rapidly evolving and diverse field with a wide range of research and development efforts. In this section, we highlight some of the works that closely relate to our work while acknowledging that there \begin{table} \begin{tabular}{l c c} \hline Benchmark & Multilingual & Monolingual \\ \hline Defects4J (v1.2) & 67/94 & 55/93 \\ Defects4J (v2.0) & 56/103 & 52/98 \\ Bears & 24/33 & 21/34 \\ QuixBugs (Java) & 25/26 & 21/24 \\ QuixBugs (Python) & 29/30 & 26/28 \\ Codeflaws & 1,764/2,359 & 1,482/2,357 \\ ManyBugs & 15/- & 16/- \\ BugAID & 5/- & 6/- \\ \hline Total & 1,985/2,645 & 1,679/2,634 \\ \hline \end{tabular} \end{table} Table 10: Comparison of results of multilingual and monolingual models. Results are shown as _correct/plausible_. Figure 12: Results with incremental addition of checkpoints. are many other important contributions. For a more comprehensive survey of APR literature, we direct readers to recent surveys in the field (Gao et al., 2022; Huang et al., 2023; Le Goues et al., 2019; Monperrus, 2018). One well-established class of APR techniques is search-based methods that involve syntactic manipulation of code by applying different edit patterns. These techniques use a search algorithm to iteratively explore the space of possible code changes in order to find a plausible patch. Examples of such techniques include GenProg (Le Goues et al., 2012), SimFix (Jiang et al., 2018), and VarFix (Wong et al., 2021), among others. Another class of APR approaches involves semantic analysis of code. These techniques use a set of constraint specifications to transform the program repair problem into a constraint solver problem and identify potential fixes that preserve the program's intended semantics. Examples of these techniques include SemFix (Nguyen et al., 2013), Angelix (Mechtaev et al., 2016), and SOSRepair (Afzal et al., 2019). Template-based methods generate repair patches by applying a predefined program fix template to the faulty code. A fix template specifies how to modify the code to fix a certain type of bug. Fix templates can be either manually defined (Liu et al., 2019) or automatically mined from code repositories (Koyuncu et al., 2020; Liu et al., 2019). Recently, learning-based techniques have gained traction in automatically fixing software bugs by learning from extensive code repositories. These techniques mostly employ neural networks to automatically generate correct code from buggy source code and are called neural program repair tools. These tools use a variety of sequence-to-sequence, neural machine translation (NMT), graph-to-sequence, and other deep learning models to generate patches as sequences of tokens (Chen et al., 2019; Jiang et al., 2021; Lutellier et al., 2020; Ye et al., 2022) or edits (Chakraborty et al., 2022; Ding et al., 2020; Li et al., 2020). 
Thanks to the strong learning capabilities of these models, neural program repair techniques learn to capture the relationship between buggy and correct code, eliminating the need for manual design of fix patterns or feature templates and have outperformed many existing approaches. These works mostly use supervised learning on past bug-fixing commits. There are also works that use self-supervised learning that automatically generate training samples (Allamanis et al., 2021; Yasunaga and Liang, 2020, 2021; Ye et al., 2023). Our work uses a sequence-to-sequence model and belongs to the supervised learning category. Tufano et al. (2019) present an empirical study on using NMT to learn how to fix bugs in Java code. The authors mine a large dataset of bug-fixing commits from GitHub and extract method-level pairs of buggy and fixed code. They abstract the code to reduce the vocabulary size and train an encoder-decoder model to learn how to transform buggy code into fixed code. Chen et al. (2019) introduce SequenceR, an end-to-end approach to program repair based on sequence-to-sequence learning on source code that sees the APR task as a translation from buggy to correct source code. A copy mechanism is used to handle the large and unlimited vocabulary of code, and an abstract buggy context is constructed to capture the relevant information for generating patches. The model is trained on a large dataset of one-line bug-fixing commits from GitHub and evaluated on CodRep (Chen and Monperrus, 2018) and Defects4J (Just et al., 2014) benchmarks. Zhu et al. (2021) present Recoder, an approach for APR that uses a syntax-guided edit decoder with a provider/decider architecture to generate patches that are syntactically correct and context-aware. Recoder also introduces placeholder generation to handle project-specific identifiers and generates edits rather than modified code. KNOD is proposed by Jiang et al. (2023), which presents a tree decoder and a domain-rule distillation module. The tree decoder directly generates abstract syntax trees of patches in three stages: parent selection, edge generation, and node generation. The domain-rule distillation module enforces syntactic and semantic rules on the decoder during both training and inference phases. Ye et al. (2022) propose RewardRepair, a neural repair model that incorporates compilation and test execution information into the training objective. RewardRepair uses a discriminative model to reward patches that compile and pass test cases, and penalize patches that are identical to the buggy code or introduce regression errors. The reward signal modulates the cross-entropy loss before backpropagation, guiding the neural network to generate high-quality and more compilable patches as shown in Table 4. RewardRepair has two training phases, a syntactic training and a semantic one. The work by Jiang et al. (2021) introduces CURE, a code-aware NMT technique for APR. CURE uses three techniques to improve the search space and the search strategy for generating patches: subword tokenization, pre-trained programming language model, and code-aware beam search to generate more compilable patches. CURE demonstrates the effectiveness of applying code awareness to NMT models for the APR task. CURE differs from our approach in several aspects. 
CURE pre-trains its model exclusively using Java code and a standard causal language modeling task for pre-training a GPT model, while we use CodeT5, an encoder-decoder model that is pre-trained on multiple programming languages and more diverse pre-training tasks to better understand source code. Additionally, CURE's tokenizer is trained on their Java corpus, while ours is trained on the multilingual corpus of CodeT5. In contrast to CURE, we use the vanilla beam search to generate patches, which is simpler and faster than the code-aware one that CURE uses but may be less effective. However, beam search is an independent component, and we can incorporate the code-aware beam search into our tool in future work. Most of these works target Java or are only evaluated on Java language benchmarks. Two works that are closest to our work and are evaluated on multiple programming languages are CoCoNuT by Lutellier et al. (2020) and CIRCLE by Yuan et al. (2022). CoCoNuT is an APR technique that uses NMT to learn from bug fixes in open-source repositories. CoCoNuT has three main contributions: A context-aware NMT architecture that uses two separate encoders with fully convolutional layers to represent the buggy line and its surrounding context; An ensemble approach that combines different models with different levels of complexity to capture various repair strategies; Cross-language portability that allows CoCoNuT to be applied to four programming languages (Java, Python, C, and JavaScript). Our approach differs from this work in several ways. First, our approach can handle multiple programming languages with a unified model, unlike CoCoNuT, which requires individual models for each programming language. Second, we use a pre-trained programming language model that learns from a large software codebase to capture code syntax and developer-like coding style. Third, we use checkpoint ensemble for training efficiency, as opposed to CoCoNuT's model ensemble for each language. We combine these techniques to form a novel APR architecture using a text-to-text transformer model for patch generation across multiple programming languages. Yuan et al. (2022) propose a method for APR that can handle multiple programming languages using continual learning. The method consists of five components: a prompt-based data representation, a T5-based model, a difficulty-based example replay, an elastic-based parameter updating regularization, and a re-repairing mechanism. The prompt-based data representation converts the bug-fixing task into a fill-in-the-blank task that is suitable for the pre-trained T5 model. The difficulty-based example replay and the elastic-based regularization are two continual learning strategies that prevent the model from catastrophic forgetting. The re-repairing mechanism is a post-processing step that corrects the errors caused by crossing languages. The major differences between CIRCLE and our work are the following: We formulate APR as a multitask learning task, which is simpler and allows the trained model to remain relevant for a long time (Lutellier et al., 2020), so we do not need frequent retraining. By using a specific control prefix for each language, we do not need re-repairing of generated patches and we get patches in the correct syntax of the target language. We also use a pre-trained code tokenizer and model instead of a pre-trained NLP tokenizer and model for better performance on APR tasks. Tokenizers that are trained on code usually generate fewer tokens for source code. 
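As a rough illustration of this point, the snippet below tokenizes the same Java statement with a code-pretrained tokenizer and a natural-language one; the two checkpoint names are only plausible examples, and the exact token counts depend on the tokenizer versions.

```python
from transformers import AutoTokenizer

# Checkpoint names are illustrative; any code-pretrained vs. NL-pretrained pair makes the same point.
code_tok = AutoTokenizer.from_pretrained("Salesforce/codet5-base")
nl_tok = AutoTokenizer.from_pretrained("t5-base")

stmt = 'if (values == null) { throw new ParseException("Unable to handle the class: " + clazz); }'

code_tokens = code_tok.tokenize(stmt)
nl_tokens = nl_tok.tokenize(stmt)

# A code-trained vocabulary tends to keep identifiers, braces, and operators as fewer pieces.
print(len(code_tokens), code_tokens[:10])
print(len(nl_tokens), nl_tokens[:10])
```
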
Also, tokenizers that are only trained on natural text often fail to handle code tokens well. This is why CIRCLE needs the re-repairing step to fix unknown tokens of the generated patches. Several studies have explored the potential of large language models like OpenAI's Codex and ChatGPT for APR. Prenner et al. (2022) evaluate the performance of Codex on the QuixBugs benchmark. Codex is a GPT-3-like model that can generate code snippets from natural language descriptions or partial code inputs. The authors experiment with different prompts to trigger Codex's bug-fixing ability, such as providing hints, docstrings, or input-output examples. They find that Codex is competitive with state-of-the-art neural repair techniques, especially for Python, and that the choice of prompt has a significant impact on the repair success. Similarly, Sobania et al. (2023) assess the automatic bug fixing performance of ChatGPT. They compare ChatGPT with Codex, CoCoNuT, and several standard APR approaches on the QuixBugs benchmark. The study finds that ChatGPT has similar performance to Codex and CoCoNuT, and outperforms the standard APR approaches in terms of the number of fixed bugs. The authors also analyze the types of responses generated by ChatGPT and show that providing hints to ChatGPT can improve its success rate. Although most of the APR techniques focus on single-hunk bugs, the challenge of addressing multi-hunk bugs has attracted attention as well (Li et al., 2022; Saha et al., 2019; Wong et al., 2021; Ye and Monperrus, 2023). T5APR also targets a limited subset of multi-hunk bugs with the potential for further expansion in future work. There are also multilingual repair works that fix compilation and syntax issues (Joshi et al., 2023; Yasunaga and Liang, 2021). However, they target a different problem from our work, where programs compile successfully, and we fix dynamic errors. ## 6 Conclusion In this paper, we proposed T5APR, a novel approach for automated program repair (APR) that leverages the power of the CodeT5 text-to-text transformer model. Our method addresses program repair challenges across various programming languages, offering a unified solution for bug fixing. Our approach has several noteworthy contributions. We demonstrated the ability of T5APR to efficiently handle multiple programming languages, fixing bugs in Java, Python, C, and JavaScript. The checkpoint ensemble strategy further improves the reliability and performance of our approach, providing more robust patches for different bugs. We conducted an extensive experimental evaluation to highlight the effectiveness, generalizability, and competitiveness of T5APR. T5APR correctly fixed 1,985 bugs out of 5,257 bugs across six benchmarks, including 1,442 bugs that other compared state-of-the-art repair techniques did not fix. Moreover, the patch ranking comparison showed the promising performance of T5APR in terms of generating high-ranking patches. In addition to the contributions of this research, there are several directions for future work that can further enrich the field of APR. We can investigate the selection of context windows and the impact of using larger context windows beyond the immediate buggy context to find the optimal balance between information and computational efficiency. We can use a more advanced training and checkpoint selection process to further enhance T5APR's performance. 
Additionally, we can extend our approach to handle more complex scenarios, such as multi-hunk bugs with different changes in multiple locations. We can also further expand the supported languages and evaluation benchmarks, even languages that were not part of the pre-training of CodeT5 but have similar syntax and semantics, as multilingual learning especially helps knowledge transfer in low-resource languages (Zugner et al., 2020). As the field progresses, the development of explainable patch generation techniques (Kang et al., 2023; Liang et al., 2019; Monperrus, 2019) and close collaboration with software developers could foster the usability and trustworthiness of automated repair solutions.
2309.05551
OpenFashionCLIP: Vision-and-Language Contrastive Learning with Open-Source Fashion Data
The inexorable growth of online shopping and e-commerce demands scalable and robust machine learning-based solutions to accommodate customer requirements. In the context of automatic tagging classification and multimodal retrieval, prior works either defined a low generalizable supervised learning approach or more reusable CLIP-based techniques while, however, training on closed source data. In this work, we propose OpenFashionCLIP, a vision-and-language contrastive learning method that only adopts open-source fashion data stemming from diverse domains, and characterized by varying degrees of specificity. Our approach is extensively validated across several tasks and benchmarks, and experimental results highlight a significant out-of-domain generalization capability and consistent improvements over state-of-the-art methods both in terms of accuracy and recall. Source code and trained models are publicly available at: https://github.com/aimagelab/open-fashion-clip.
Giuseppe Cartella, Alberto Baldrati, Davide Morelli, Marcella Cornia, Marco Bertini, Rita Cucchiara
2023-09-11T15:36:03Z
http://arxiv.org/abs/2309.05551v1
# OpenFashionCLIP: ###### Abstract The inexorable growth of online shopping and e-commerce demands scalable and robust machine learning-based solutions to accommodate customer requirements. In the context of automatic tagging classification and multimodal retrieval, prior works either defined a low generalizable supervised learning approach or more reusable CLIP-based techniques while, however, training on closed source data. In this work, we propose OpenFashionCLIP, a vision-and-language contrastive learning method that only adopts open-source fashion data stemming from diverse domains, and characterized by varying degrees of specificity. Our approach is extensively validated across several tasks and benchmarks, and experimental results highlight a significant out-of-domain generalization capability and consistent improvements over state-of-the-art methods both in terms of accuracy and recall. Source code and trained models are publicly available at: [https://github.com/aimagelab/open-fashion-clip](https://github.com/aimagelab/open-fashion-clip). Keywords:Fashion Domain Vision-and-Language Pre-Training Open-Source Datasets ## 1 Introduction In the era of digital transformation, online shopping, and e-commerce have experienced an unprecedented surge in popularity. The convenience, accessibility, and variety offered by these platforms have revolutionized the way consumers engage with retail. Such digital shift creates an immense volume of data, therefore, the need for scalable and robust machine learning-based solutions to accommodate customer requirements becomes increasingly vital [42, 48, 49]. In the fashion domain, this includes tasks such as cross-modal retrieval [18, 27], recommendation [10, 21, 39], and visual product search [2, 32, 4, 4, 32], which play a crucial role in enhancing user experience, optimizing search functionality, and enabling efficient product recommendation systems. To address these challenges, innovative solutions that combine vision-and-language understanding have been proposed [19, 30, 50]. Although prior works have made noteworthy contributions in the fashion domain, they still suffer from some deficiencies. Approaches like [4] are able to well fit a specific task but struggle to adapt to unseen datasets and exhibit sub-optimal performance when faced with domain shifts. This results in poor zero-shot capability. On the contrary, other techniques have employed CLIP-based methods [26, 47], which offer better generalization capabilities thanks to the pre-training on large-scale datasets. Some works as [8], have often relied on closed-source data, limiting their applicability and hindering the ability to reproduce and extend results. Therefore, there remains a need for a scalable and reusable method that can leverage open-source fashion data with varying levels of detail while demonstrating improved generalization and performance. In response to the aforementioned challenges, in this paper, we propose OpenFashionCLIP, a vision-and-language contrastive learning method that stands out from previous approaches in several ways. We adopt open-source fashion data from multiple sources encompassing diverse styles and levels of detail. Specifically, we adopt four publicly available datasets for the training phase, namely FashionIQ [44], Fashion-Gen [36], Fashion200K [20], and iMaterialist [17]. 
We believe this approach not only enhances transparency and reproducibility but also broadens the accessibility and applicability of our technique to a wider range of users and domains. The contrastive learning framework employed in OpenFashionCLIP enables robust generalization capabilities, ensuring consistent performance even in the presence of domain shifts and previously unseen data. Our method adopts a fashion-specific prompt engineering technique [6, 16, 35] and is able to effectively learn joint representations from multiple domains. OpenFashionCLIP overcomes the limitations of supervised learning approaches and closed-source data training, facilitating seamless integration between visual and textual modalities. Extensive experiments have been conducted to evaluate the effectiveness of OpenFashionCLIP across diverse tasks and benchmarks. We provide a comparison against CLIP [35], OpenCLIP [43] and a recent CLIP-based method fine-tuned on closed-source fashion data, namely FashionCLIP [8]. The experimental results highlight the significant out-of-domain generalization capability of our method. Notably, our fine-tuning strategy on open-source fashion data yields superior performance compared to competitors in several metrics, thus underscoring the benefits of leveraging open-source datasets for training. ## 2 Related Work The ever-growing interest of customers in e-commerce has made the introduction of innovative solutions essential to enhance the online experience. On this basis, recommendation systems play a crucial role and numerous works have been introduced [10, 11, 21, 39]. An illustrative example is the automatic creation of capsule wardrobes proposed in [21], where given an ensemble of garments and accessories the proposed method provided some possible visually compatible outfits. One of the most significant challenges of this task is the understanding of what visual compatibility means. To this aim, Cucurull _et al._[10] addressed the compatibility prediction problem by exploiting the context information of fashion items, whereas Sarkar _et al._[39] exploited a Transformer-based architecture to learn an outfit-level token embedding which is then fed through an MLP network to predict the compatibility score. In addition, De Divitiis _et al._[11] introduced a more fine-grained control over the recommendations based on shape and color. Generally, users desire to seek a specific article in the catalog with relative ease, therefore, designing efficient multimodal systems represents another important key to success for the fashion industry. A considerable portion of user online interactions fall into the area of multimodal retrieval, the task of retrieving an image corresponding to a given textual query, and vice versa. Prior works range from more controlled environments [23, 27] to in-the-wild settings [18], where the domain shift between query and database images is a challenging problem. Beyond recommendations and retrieval, another research line that is currently attracting attention is the one of virtual try-on, both in 3D [29, 37, 38] and 2D [13, 14, 15, 24, 31, 33, 46]. Virtual try-on aims to transfer a given in-shop garment onto a reference person while preserving the pose and the identity of the model. A related area is the one marked by fashion image editing [5, 12, 34]. 
While Dong _et al._[12] conditioned the fashion image manipulation process on sketches and color strokes, other approaches [5, 34] introduced for the first time a multimodal fashion image editing conditioned on text. Specifically, Pernus _et al._[34] devised a GAN-based iterative solution to change specific characteristics of the given image based on a textual query. Baldati _et al._[5], instead, focused on the creation of new garments exploiting latent diffusion models and conditioning the generation process on text, sketch, and model's pose. Solving the aforementioned downstream tasks has been made possible due to large-scale architectures explicitly trained on fashion data which effectively combine vision-and-language modalities to learn more powerful representations [19, 50]. Recent approaches exploit CLIP embeddings [35] to obtain more scalable and robust solutions able to generalize to different domains without supervision [8], but the closed source data training represents the main flaw. ## 3 On the Adaptation of CLIP to the Fashion Domain ### Fashion-Oriented Contrastive Learning Despite the significant scaling capability of large vision-and-language models such as CLIP, such a property comes at a cost. The pre-training of these models is usually conducted on datasets that contain million [35], or even billion [40] image-text pairs that, however, are gathered from the web and thus very noisy. Unfortunately, such coarse-grained annotations have been shown to lead to sub-optimal performance for vision-and-language learning [9, 25]. Moreover, the adaptation of CLIP to the specific domain of fashion is far from trivial. Indeed, a significant part of the images contained in these datasets is associated with incomplete captions or even worse, with simple and basic tags collected exploiting posts uploaded on the web by general and non-fashion-expert users. Considering these flaws, an adaptation of CLIP to a specific domain, uniquely relying on a vanilla pre-trained version, would not enable the attainment of optimal results. In our context, training on fashion-specific datasets containing fine-grained descriptions of garments and fashion accessories becomes crucial to obtain powerful representations while guaranteeing generalization and robustness to solve the tasks demanded by the fashion industries. ### CLIP Preliminaries Contrastive learning is a self-supervised machine learning technique that aims to learn data representations by constructing a powerful embedding space where semantically related concepts are close while dissimilar samples are pushed apart. On this line, the vision-and-language domain has already capitalized on such a learning technique. The CLIP model [35] represents the most common and illustrative method for connecting images and text in a shared multimodal space. The CLIP architecture consists of a text encoder \(g_{\phi}\) and an image encoder \(f_{\theta}\), trained on image-caption pairs \(\mathcal{S}=\{(x_{i},t_{i})\}_{i=1}^{N}\). The image encoder \(f_{\theta}\) embeds an image \(x\in\mathcal{X}\) obtaining a visual representation \(\mathbf{v}=f_{\theta}(x)\). In the same manner, the text encoder \(g_{\phi}\) takes as input a tokenized string \(\tilde{t}\) and returns a textual embedding \(\mathbf{u}=g_{\phi}(\tilde{t})\). 
For each batch \(\mathcal{B}\) of image-caption pairs \(\mathcal{B}=\{(x_{i},t_{i})\}_{i=1}^{L}\), where \(L\) is the batch size, the objective is to maximize the cosine similarity between \(\mathbf{v}_{i}\) and \(\mathbf{u}_{i}\) while minimizing the cosine similarity between \(\mathbf{v}_{i}\) and \(\mathbf{u}_{j}\), \(\forall j\neq i\). The CLIP loss can be formally expressed as the sum of two symmetric terms: \[\mathcal{L}_{contrastive}=\mathcal{L}_{T2I}+\mathcal{L}_{I2T}, \tag{1}\] Figure 1: Overview of our proposed method. We fine-tune both encoders and the linear projection layers toward the embedding space. \[\mathcal{L}_{T2I}=-\frac{1}{L}\sum_{i=1}^{L}\log\frac{\exp(\tau\mathbf{u}_{i}^{T}\mathbf{v} _{i})}{\sum_{j=1}^{L}\exp(\tau\mathbf{u}_{i}^{T}\mathbf{v}_{j})}, \tag{2}\] \[\mathcal{L}_{I2T}=-\frac{1}{L}\sum_{i=1}^{L}\log\frac{\exp(\tau\mathbf{v}_{i}^{T}\bm {u}_{i})}{\sum_{j=1}^{L}\exp(\tau\mathbf{v}_{i}^{T}\mathbf{u}_{j})}, \tag{3}\] where \(\tau\) represents a temperature parameter. ### Open Source Training In the fashion domain, several datasets, characterized by multimodal annotations from human experts, have been introduced. Differently from prior work [8] that fine-tuned CLIP on a private dataset, we devise a contrastive learning strategy entirely based on open-source data. An overview of the proposed CLIP-based fine-tuning is shown in Fig. 1. In detail, we adopt four publicly available datasets: **Fashion-Gen [36].** The dataset contains a total of \(325,536\) high resolution images (\(1360\times 1360\)) with \(260,480\) samples for the training set and \(32,528\) images both for validation and test set. In addition, \(48\) main categories and \(121\) fine-grained categories (_i.e. subcategory_) are defined. **Fashion IQ [44].** There are \(77,684\) images, divided into three main categories (dresses, shirts, and tops&tees), with product descriptions and attribute labels. **Fashion200K [20].** It contains \(209,544\) clothing images from five categories (dresses, tops, pants, skirts, and jackets) and an associated textual description. **iMaterialist [17].** It is a multi-label dataset containing over one million images and \(8\) groups of \(228\) fine-grained attributes. These datasets are characterized by different levels of detail of the image annotations. FashionIQ has been proposed to accomplish the task of interactive image retrieval, therefore, the captions are relative to what should be modified in the source image to retrieve the target image. On the contrary, iMaterialist Figure 2: Qualitative samples from the training datasets. only contains attributes while Fashion-Gen and Fashion200K present more semantically rich descriptions. As a pre-processing step, we apply lemmatization and extract the noun chunks from the textual descriptions. Noun chunks are sequences of words that include a noun and any associated word that modifies or describes that noun (_e.g._ an adjective). In particular, we adopt the spaCySS NLP library to extract noun chunks. Data pre-processing is performed for all datasets, except for iMaterialist which only contains simple attributes, thus making such an operation unnecessary. For the sake of clarity, from now on we refer to \(t_{i}\) as the pre-processed caption after noun chunks extraction. Examples of image-caption pairs from the training datasets are reported in Fig. 2. 
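To make Eqs. (1)-(3) concrete, the following is a minimal PyTorch sketch of the symmetric contrastive objective for one batch of image and text embeddings. It assumes the embeddings have already been produced by \(f_{\theta}\) and \(g_{\phi}\), and the default value of \(\tau\) is only indicative (in CLIP the logit scale is learned); this is an illustration, not the exact training code.

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, tau=100.0):
    """Symmetric contrastive loss of Eqs. (1)-(3); image_emb and text_emb have shape (L, d)."""
    v = F.normalize(image_emb, dim=-1)   # v_i = f_theta(x_i), L2-normalized for cosine similarity
    u = F.normalize(text_emb, dim=-1)    # u_i = g_phi(t_i),  L2-normalized
    logits = tau * (v @ u.t())           # tau * <v_i, u_j>, shape (L, L)
    targets = torch.arange(v.size(0), device=v.device)
    # cross_entropy averages over the batch, which supplies the 1/L factor in Eqs. (2)-(3)
    loss_i2t = F.cross_entropy(logits, targets)      # L_I2T: match each image to its caption
    loss_t2i = F.cross_entropy(logits.t(), targets)  # L_T2I: match each caption to its image
    return loss_i2t + loss_t2i
```
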
Footnote §: [https://github.com/explosion/spaCy](https://github.com/explosion/spaCy) Compared to FashionCLIP which was trained on approximately \(700k\) images, our training set is much larger and sums to \(1,147,929\) image-text pairs. In detail, during fine-tuning, we construct each batch so that it contains image-text pairs from all the different data sources. Considering the great number of pairs of the complete training dataset, we fine-tune all the pre-trained weights of the CLIP model. Indeed, only training the projections toward the embedding space would not allow to fully effectively capture the properties of the data distribution. ### Prompt Engineering Prompt engineering is the technique related to the customization of the prompt text for each task. Providing context to the model has been shown to work well in a wide range of settings. Following prior works [6, 16, 35], we provide our model with a fashion-specific context defining a template of prompts related to our application domain. Specifically, given a template of prompts \(\mathcal{P}=\{(p_{i})\}_{i=1}^{|\mathcal{P}|}\), at each training step, we select a random \(p_{i}\in\mathcal{P}\) for each image-caption pair \((x_{i},t_{i})\in\mathcal{B}\). The caption \(t_{i}\) is concatenated to \(p_{i}\) obtaining the final CLIP input. The complete fashion-specific template includes the following prompts: "a photo of a", "a photo of a nice", "a photo of a cool", "a photo of an expensive", "a good photo of a", "a bright photo of a", "a fashion studio shot of a", "a fashion magazine photo of a", "a fashion brochure photo of a", "a fashion catalog photo of a", "a fashion press photo of a", "a zalando photo of a", "a yoox photo of a", "a yoox web image of a", "an asos photo of a", "a high resolution photo of a", "a cropped photo of a", "a close-up photo of a", "a photo of one". ## 4 Experimental Evaluation In this section, we describe the open-source datasets used as benchmarks together with the tasks performed to assess the scalability and robustness of our approach. ### Benchmark Datasets We validate our approach across three different datasets: **DeepFashion**[27] contains over \(800,000\) images and is divided into several benchmarks. In our experiments, we employ the attribute prediction subset which contains \(40,000\) images and \(1,000\) different attributes. **Fashion-MNIST**[45] is based on the Zalando catalog and consists of \(60,000\) training images, a test set of \(10,000\) examples, and \(10\) categories. All images are in grayscale and have a \(28\times 28\) resolution. Following [8], we apply image inversion, thus working on images with a white background. **KAGL** is a subset of [1] and contains \(44,441\) images equipped with textual annotations including the master category, the sub-category, the article type, and the product description. In detail, we filter out all those images not belonging to the _'apparel'_ master category and kept the images depicting humans, resulting in a total of \(21,397\) samples, \(8\) sub-categories, and \(58\) article types. Qualitative examples of the adopted benchmarks are reported in Fig. 3. ### Implementation Details We train the final model for \(60\) epochs using a batch size of \(2048\). To save memory, we adopt the gradient checkpointing technique [7]. AdamW [28] is employed as optimizer, with \(\beta_{1}\) set to \(0.9\) and \(\beta_{2}\) equal to \(0.98\), epsilon of \(1e-6\), and weight decay equal to \(0.2\). 
A learning rate of \(5e-7\) and automatic mixed precision are applied. For a fair comparison with competitors, we select the ViT-B/32 backbone as the image encoder. During training, we apply the prompt engineering strategy described in Sec. 3.4. As a pre-trained CLIP model, we refer to the OpenCLIP implementation [22] trained on LAION-2B [40] composed of 2 billion image-text pairs. In the evaluation phase, following the pre-processing procedure of [35], we resize the image along the shortest edge and apply center crop. ### Zero-shot Classification In our context, zero-shot classification refers to the task of classification on unseen datasets characterized by different data distributions compared to the training datasets. The task is crucial to assess the transfer capability of the model to adapt to new and Figure 3: Samples from the benchmark datasets. unseen domains. Following the standard CLIP evaluation setup [35], we perform classification by embedding the image and all \(k\) categories. Regarding prompt engineering, we always append every category {label} to the same generic prompt "a photo of a". We feed each category prompt through the CLIP textual encoder \(g_{\phi}\) obtaining a set of feature vectors \(\mathcal{U}=\{(\mathbf{u}_{i})\}_{i=1}^{N}\). In the same manner, we feed the image \(x_{i}\) through the CLIP image encoder to get the embedded representation \(\mathbf{v}=f_{\phi}(x_{i})\). To classify the image we compute the cosine similarity between \(\mathbf{v}\) and each text representation \(\mathbf{u}_{i}\). The predicted category is the one with the highest similarity. Experiments have been conducted on the test splits of Fashion-MNIST, KAGL, and on the attribute prediction benchmark of DeepFashion. Our model is compared against the original CLIP model [35], which was trained on the private WIT dataset, OpenCLIP [43] pre-trained on LAION-400M [41] and LAION-2B [40], and FashionCLIP [8] that was fine-tuned on closed source data from _Farfetch_. Note that we have reproduced the results of FashionCLIP by exploiting the source code released by the authors and adapting it to our tasks and settings. The task is evaluated considering three well-known metrics, namely accuracy@\(k\), recall@\(k\), and weighted \(F1\) score. The accuracy@\(k\) computes the number of times the correct label is among the top \(k\) labels predicted by the model. The recall@\(k\), instead, measures the number of relevant retrieved items with respect to the total number of relevant items for a given query. The weighted F1 score accounts for the class distribution in the dataset by calculating the F1 score for each class individually and then averaging based on the class frequencies. Quantitative results on Fashion-MNIST and KAGL are summarized in Table 1. The first aspect to mention is the improvement against CLIP and OpenCLIP on all metrics and both datasets, indicating the effectiveness of our fine-tuning strategy enabling a strong generalization and adaptation of our model to the specific fashion domain. Compared to FashionCLIP, our model shows better performance on Fashion-MNIST, while when tested on the 58 article types of KAGL, the results are comparable. OpenFashionCLIP performs better with the increase of the number of considered categories. Table 2 shows the results on the attribute prediction benchmark of the DeepFashion dataset. 
Categories of this dataset are attributes describing different garment charac \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & & & & \multicolumn{3}{c}{**F-MNIST**} & \multicolumn{3}{c}{**KAGL**} \\ \cline{4-9} **Model** & **Backbone** & **Pre-Training** & **Fine-tuned** & Acc@1 & F1 & Acc@1 & Acc@5 & Acc@10 & F1 \\ \hline CLIP & ViT-B/16 & OpenAI WIT & ✗ & 69.29 & 67.88 & 31.54 & 70.08 & 90.09 & 36.04 \\ CLIP & ViT-B/32 & OpenAI WIT & ✗ & 69.51 & 66.56 & 21.44 & 66.13 & 84.97 & 27.70 \\ OpenCLIP & ViT-B/32 & LAION-400M & ✗ & 81.62 & 81.16 & 33.69 & 76.60 & 89.23 & 37.89 \\ OpenCLIP & ViT-B/32 & LAION-2B & ✗ & 83.69 & 82.75 & 46.18 & 84.49 & 95.44 & 51.23 \\ \hline FashionCLIP & ViT-B/32 & LAION-2B & ✓ & 82.23 & 82.03 & **52.90** & 85.41 & 93.40 & **54.48** \\ **OpenFashionCLIP** & ViT-B/32 & LAION-2B & ✓ & **84.33** & **84.19** & 45.97 & **88.30** & **96.46** & **53.85** \\ \hline \hline \end{tabular} \end{table} Table 1: Category prediction results on the Fashion-MNIST and the KAGL datasets. \begin{table} \begin{tabular}{l c c c c c c c c c} \hline \hline & & & & \multicolumn{3}{c}{**Overall Recall**} & \multicolumn{3}{c}{**Per-Class**} & \multicolumn{1}{c}{**Recall**} \\ \cline{4-9} **Model** & **Backbone** & **Pre-Training** & **Fine-tuned** & \multicolumn{1}{c}{R@3} & \multicolumn{1}{c}{R@5} & \multicolumn{1}{c}{R@10} & \multicolumn{1}{c}{R@3} & \multicolumn{1}{c}{R@5} & \multicolumn{1}{c}{R@10} \\ \hline CLIP & ViT-B/16 & OpenAI WIT & ✗ & 8.00 & 11.40 & 17.54 & 13.31 & 17.42 & 24.54 \\ CLIP & ViT-B/32 & OpenAI WIT & ✗ & 7.35 & 10.30 & 16.60 & 11.39 & 15.13 & 21.67 \\ OpenCLIP & ViT-B/32 & LAION-400M & ✗ & 12.58 & 17.22 & 25.64 & 17.9 & 22.81 & 30.71 \\ OpenCLIP & ViT-B/32 & LAION-2B & ✗ & 13.07 & 17.70 & 26.13 & 19.35 & 24.31 & 32.51 \\ \hline FashionCLIP & ViT-B/32 & LAION-2B & ✓ & 15.19 & 20.83 & 32.37 & 17.30 & 22.27 & 30.56 \\ **OpenFashionCLIP** & ViT-B/32 & LAION-2B & ✓ & **24.47** & **32.97** & **45.77** & **28.67** & **36.07** & **47.28** \\ \hline \hline \end{tabular} \end{table} Table 2: Attribute recognition results on the DeepFashion dataset. teristics (_e.g_. v-neck, sleeveless, etc.), therefore we leverage the recall metric in this setting to account for the multi-label nature of the dataset. In particular, we evaluate both the per-class recall@\(k\) and the overall recall@\(k\) among all attributes. In this case, our solution outperforms FashionCLIP by a consistent margin, highlighting the effectiveness of our training strategy with data of different annotation detail granularity. ### Cross-modal Retrieval Cross-modal retrieval refers to the task of retrieving relevant contents from a multi-modal dataset using multiple modalities such as text and images. Different modalities should be integrated to enable an effective search based on the user's input query. Cross-modal retrieval can be divided into two sub-tasks: image-to-text and text-to-image retrieval. In the first setting, given a query image \(x\), we ask the model to retrieve the first \(k\) product descriptions that better match the image. On the opposite, in text-to-image retrieval, given a text query, the first \(k\) images that better correlate with the input query are returned. In Table 3, we evaluate our fine-tuning method on the KAGL dataset in terms of recall@\(k\) with \(k=1,5,10\). OpenFashionCLIP performs better compared to FashionCLIP on both settings and according to all recall metrics, thus further confirming the effectiveness of our proposal. 
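Both the zero-shot classification protocol of Sec. 4.3 and the cross-modal retrieval task above reduce to ranking candidates by cosine similarity in the shared embedding space. The sketch below illustrates this with the OpenCLIP library; the pretrained tag, category names, and image path are placeholders, and the weights loaded here are a generic OpenCLIP checkpoint rather than the released OpenFashionCLIP one.

```python
import torch
import open_clip
from PIL import Image

# ViT-B/32 backbone as in the paper; the pretrained tag is an illustrative OpenCLIP tag.
model, _, preprocess = open_clip.create_model_and_transforms("ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")
model.eval()

labels = ["dress", "shirt", "sneaker", "handbag"]          # example category names
prompts = tokenizer([f"a photo of a {label}" for label in labels])
image = preprocess(Image.open("query.jpg")).unsqueeze(0)   # placeholder image path

with torch.no_grad():
    img_feat = model.encode_image(image)
    txt_feat = model.encode_text(prompts)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    txt_feat = txt_feat / txt_feat.norm(dim=-1, keepdim=True)
    sims = img_feat @ txt_feat.t()                          # cosine similarities, shape (1, num_labels)

# Zero-shot classification: the highest-similarity prompt wins.
print("predicted:", labels[sims.argmax().item()])
```

Text-to-image retrieval is the same operation applied to a gallery: score all gallery image features against a query text feature, sort, and report recall@\(k\) over the ranked list.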
### Effectiveness of Prompt Engineering Finally, in Table 4, we evaluate the individual contribution of prompt engineering in our fine-tuning method. We present the ablation study on all considered benchmarks. The first line of the table (_i.e_. \(w/o\) prompt engineering) refers to the case where we perform fine-tuning without using the fashion-specific template described in Sec. 3.4 but employing a fixed prompt (_i.e_. "**a photo of a"). Notably, Fashion-MNIST is used for the classification task, DeepFashion for retrieval, and KAGL for both. As the results demonstrate, the idea to construct a fashion-specific set of prompts clearly performs well across all cases except for the KAGL classification benchmark. We argue that in general, domain-specific prompt engineering represents a key factor to obtain greater domain adaptation of the CLIP model. ## 5 Conclusion In this paper, we introduced OpenFashionCLIP, a vision-and-language contrastive learning method designed to address the scalability and robustness challenges posed by \begin{table} \begin{tabular}{l c c c c c c c c} \hline \hline & \multicolumn{2}{c}{**F-MNIST**} & \multicolumn{2}{c}{**KAGL**} & \multicolumn{2}{c}{**DeepFashion**} & \multicolumn{2}{c}{**KAGL**} \\ \cline{2-9} **Model** & Acc@1 & F1 & Acc@1 & F1 & R@3 & R@3 (cls) & R@1 (I2T) & R@1 (T2I) \\ \hline w/o prompt engineering & 83.21 & 82.99 & **47.51** & 47.3 & 20.34 & 25.21 & 7.47 & **7.73** \\ **OpenFashionCLIP** & **84.33** & **84.19** & 45.97 & **53.85** & **24.47** & **28.67** & **7.57** & **7.73** \\ \hline \hline \end{tabular} \end{table} Table 4: Ablation study to assess the validity of the prompt engineering technique. \begin{table} \begin{tabular}{l l l c c c c c c c} \hline \hline & & & & \multicolumn{3}{c}{**Image-to-Text**} & \multicolumn{3}{c}{**Text-to-Image**} \\ \cline{4-9} **Model** & **Backbone** & **Pre-Training** & **Fine-tuned** & R@1 & R@5 & R@10 & R@1 & R@5 & R@10 \\ \hline FashionCLIP & ViT-B/32 & LAION-2B & ✓ & 6.61 & 19.23 & 28.66 & 6.97 & 19.14 & 27.49 \\ **OpenFashionCLIP** & ViT-B/32 & LAION-2B & ✓ & **7.57** & **20.72** & **30.38** & **7.73** & **20.58** & **28.56** \\ \hline \hline \end{tabular} \end{table} Table 3: Cross-modal retrieval results on the KAGL dataset. the fashion industry for online shopping and e-commerce. By leveraging open-source fashion data from diverse sources, OpenFashionCLIP overcomes limitations associated with closed-source datasets and enhances transparency, reproducibility, and accessibility. Our strategy, characterized by the fine-tuning of all pre-trained weights across all CLIP layers together with the adoption of a context-specific prompt engineering technique, effectively enables better adaption to our specific domain. We evaluated our strategy on three benchmarks and demonstrated that the proposed solution led to superior performance over the baselines and competitors achieving better accuracy and recall in almost all settings. #### Acknowledgements This work has partially been supported by the European Commission under the PNRR-M4C2 (PE00000013) project "FAIR - Future Artificial Intelligence Research" and the European Horizon 2020 Programme (grant number 101004545 - ReInHerit), and by the PRIN project "CREATIVE: CRoss-modal understanding and gEnerATIon of Visual and tExtual content" (CUP B87G22000460001), co-funded by the Italian Ministry of University.
2302.08307
Unitary symmetries in wormhole geometry and its thermodynamics
From a geometric point of view, we show that the unitary symmetries $U(1)$ and $SU(2)$ stem fundamentally from Schwarzschild and Reissner-Nordstr\"om wormhole geometry through spacetime complexification. Then, we develop quantum tunneling which makes these wormholes traversable for particles. Finally, this leads to wormhole thermodynamics.
Ahmed Farag Ali, Emmanuel Moulay, Kimet Jusufi, Hassan Alshal
2022-12-30T19:29:34Z
http://arxiv.org/abs/2302.08307v1
# Unitary symmetries in wormhole geometry and its thermodynamics ###### Abstract From a geometric point of view, we show that unitary symmetries U(1) and SU(2) stem fundamentally from Schwarzschild and Reissner-Nordstrom wormhole geometry through spacetime complexification. Then, we develop quantum tunneling which makes these wormholes traversable for particles. Finally, this leads to wormhole thermodynamics. "As Above So Below" THOTH pacs: 04.60.-m; 04.60.Bc; 04.70.Dy ###### Contents * I Introduction * II Schwarzschild wormhole geometry * III Reissner-Nordstrom wormhole geometry * IV Quantum tunneling and wormhole thermodynamics * IV.1 Schwarzschild wormhole case * IV.2 Reissner-Nordstrom wormhole case * V Concluding remarks ## I Introduction Einsiten-Rosen wormhole was introduced to understand the geometric meaning of mass and charge of the elementary particles in Ref. [1] and then was developed by many authors [2; 3; 4; 5; 6; 7; 8; 9; 10]. The geometric description of physical concepts was a cornerstone of several approaches to quantum gravity. These approaches include noncommutative geometry [11], string theory [12], loop quantum gravity [13] and twistor theory [14]. In this article, we focus our attention on a fundamental question: _is there a conceptual connection between unitary symmetries and wormhole geometry_? We argue that it is possible to find unitary symmetries, such as U(1) and SU(2), from Schwarzschild and Reissner-Nordstrom wormhole geometry through spacetime complexification if a new Euclidean metric on a complex Hermitian manifold is provided. This motivates us to compute quantum tunneling, which indicates that these wormholes could be traversable for particles. Finally, this allows us to introduce wormhole thermodynamics that is consistent with black hole thermodynamics [15; 16]. The article is organized as follows. We start with the Schwarzschild wormhole geometry in Section II, and we connect its complex geodesics with U(1) and SU(2) symmetries by using spacetime complexification. We also provide a new Euclidean metric on a Hermitian complex manifold. In Section III, the massless exotic Reissner-Nordstrom wormhole geometry is also connected with the same unitary symmetries, and a discussion about the classical Reissner-Nordstrom wormhole geometry and the SU(3) symmetry is addressed. Quantum tunneling for particles is studied in Section IV and lead to wormhole thermodynamics. Finally, concluding remarks are given in Section V. ## II Schwarzschild wormhole geometry It is historically known that Einstein and Rosen (ER) introduced their ER bridge, or wormhole idea, to resolve the particle problem in General Relativity (GR) [1]. The ER bridge contrives a geometric meaning of particle properties, such as mass and charge, in the spacetime, where mass and charge are nothing but bridges in the spacetime. The ER bridge idea can be summarized as follows. The Schwarzschild metric is given by \[ds^{2}=-\left(1-\frac{2M}{r}\right)^{-1}dr^{2}-r^{2}\left(d\theta^{2}+\sin^{2 }\theta d\phi^{2}\right)+\left(1-\frac{2M}{r}\right)dt^{2}. \tag{1}\] where \(M>0\). It has both the physical singularity existing at \(r=0\) that cannot be removed, and the coordinate singularity at \(r=2M\) that can be removed by choosing another coordinate system. Einstein and Rosen suggested a coordinate system which resolves the coordinate singularity at \(r=2M\) by choosing the following transformation \[u^{2}=r-2M\, \tag{2}\] leading to \(4u^{2}du^{2}=dr^{2}\). 
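For completeness, the intermediate algebra (spelled out here for convenience) follows directly from \(r=u^{2}+2M\):

\[1-\frac{2M}{r}=\frac{u^{2}}{u^{2}+2M}\,,\qquad dr^{2}=4u^{2}\,du^{2}\ \Rightarrow\ \left(1-\frac{2M}{r}\right)^{-1}dr^{2}=4\left(u^{2}+2M\right)du^{2},\qquad r^{2}=\left(u^{2}+2M\right)^{2}.\]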
In the new coordinate system, one obtains for \(ds^{2}\) the expression \[ds^{2}=-4(u^{2}+2M)du^{2}-(u^{2}+2M)^{2}(d\theta^{2}+\sin^{2}\theta d\phi^{2})+\frac{u^{2}}{u^{2}+2M}dt^{2}. \tag{3}\] One may notice in this coordinate system that \(u\) takes real values for \(r>2M\) and imaginary values for \(r<2M\). As \(u\) varies from \(-\infty\) to \(\infty\), one finds that \(r\) varies from \(+\infty\) to \(2M\) and then from \(2M\) to \(+\infty\). In that sense, the \(4-\)dimensional spacetime can be described by two congruent sheets that are connected by a hyperplane at \(r=2M\), and that hyperplane is the so-called "bridge". Thus, Einstein and Rosen interpreted mass as a bridge in the spacetime. This draws our attention to look closely at the case when \(r<2M\), where the variable \(u\) consequently takes imaginary values. The geodesics in the \(u-\)coordinate system are of two different kinds in the two regions: in the region \(r>2M\) they follow real trajectories, while in the region \(r<2M\) they follow imaginary trajectories. But as we cannot "stitch" a real space and a complex space together, we prefer to _complexify_ the whole spacetime. It might be enticing to impose real spatial indices to formulate complex geodesics, as the spatial coordinate \(u\) is what motivates us to consider _spacetime complexification_, but the wiser choice is to cook one complex dimension from the spatiotemporal dimensions and the other complex dimension from the leftover spatial dimensions. Also, we classify the geodesics based on real and imaginary parts in the two sheets of the wormhole. This is crucial for developing a more consistent theory of gravity, for the following reasons: * The manifold in GR is chosen to be a pseudo-Riemannian manifold [17], which is connected and guarantees general covariance and continuous coordinate transformations on the manifold. But a basic question emerges: what does the region \(r<2M\) develop into under diffeomorphisms? The answer should include that there must be a geometric structure, by the covariance principle, that corresponds to this region in wormhole geometry. * The physical singularity at \(r=0\) is irremovable by coordinate transformation in GR [18], which implies the importance of studying the region connected with \(r=0\), even in the coordinates that give wormholes, as it is likely to have a correspondence in wormhole geometry. * In wormhole geometry, the \(u\) values become imaginary for \(r<2M\). Imaginary values in physics play a crucial role in building unitary symmetries. We are interested in understanding the effect of this imaginary region in wormhole geometry, knowing that the role of complex numbers in QM is recognized to be a central one [19]. In order to complexify a spacetime \(\mathcal{N}\), or to think of \(\mathcal{N}\to\mathbb{R}^{4}\) as \(\mathcal{M}\to\mathbb{C}^{2}\), we introduce a complex manifold \(\mathcal{M}\) of two complex dimensions \(\zeta\) and \(\eta\). We consider a point \(p\in\mathcal{M}\) so that \(p=(\zeta,\eta)\) defines the complex coordinates in some local chart \[\zeta=\zeta_{1}+i\zeta_{2}\;, \tag{4a}\] \[\eta=\eta_{1}+i\eta_{2}\;, \tag{4b}\] where the complex coordinates induce the parameter space of the real parameters \((\zeta_{1},\eta_{1},\zeta_{2},\eta_{2})\) on \(\mathcal{M}\). 
In that sense, the full geodesic in wormhole geometry would read \[\lambda_{1}(\zeta_{1},\eta_{1},\zeta_{2},\eta_{2}) =\lambda_{1}(\zeta_{1},\eta_{1})\in\mathbb{R}\, \tag{5a}\] \[\lambda_{2}(\zeta_{1},\eta_{1},\zeta_{2},\eta_{2}) =\lambda_{2}(\zeta_{2},\eta_{2})\in\mathbb{R}\, \tag{5b}\] such that \(g_{\mu\nu}\) becomes Hermitian. We will come to the importance of this in a little bit. But for now, we study the effect of the elements of a group \(G\), as linear operators, on a complex manifold and the coordinate transformations related to \(G\). Such operations define a set of homomorphisms from \(G\) to the general linear group \(\mathrm{GL}(n,\mathbb{C})\), and such homomorphisms to the general linear group define an \(n-\)dimensional matrix representation. The matrix representation is useful when it works on any manifold chart, i.e. without fixing the manifold's basis. In that sense, a matrix representation of \(G\) is a realization of \(G\) elements as matrices affecting an \(n-\)dimensional complex space of column vectors. Additionally, the change of the manifold's basis results in conjugation of the matrix representation of \(G\). Furthermore, a matrix representation on a manifold and a group operation on a manifold are two equivalent concepts. The later defines the group orbits and group stabilizers. It is interesting to study groups of Lie isometries and their symmetries of manifolds which the \(G\) elements act transitively on. We define the isotropy group as \(G_{p}=\{g\in G,\ gp=p\},\ p\in\mathcal{M}\), and the orbit of \(G\) through \(p\) by \(Gp=\{gp,\ g\in G\}\simeq G/G_{p}\). And an orbit becomes a stabilizer if \(G\equiv G_{p}\) at \(p\in\mathcal{M}\). The transformations (2)-(4) show that a matrix \(g_{\mu\nu}\) should belong to the general linear group \(\mathrm{GL}(4,\mathbb{R}):=\{T\in\mathrm{M}_{4}(\mathbb{R}):\det T\neq 0\}\), where \(\mathrm{M}_{4}(\mathbb{R})\) is the space of all real \(4\times 4\) matrices. We can exploit the bijective relation \(\mathrm{GL}(n,\mathbb{C})\leftrightarrow\mathrm{GL}(2n,\mathbb{R})\) to complexify the spacetime. Without loss of generality, we try \(n=2\) such that the last relation means \(Z\mapsto T:=\mathbb{R}Z\) on the elementary level for the complex matrix \(Z\in\mathrm{M}_{2}(\mathbb{C})\). This means the \(2\times 2\) complex matrices \(Z\) can be characterized as \(4\times 4\) real matrices \(T\) such that they preserve the action of the linear complex structure \(J:\mathcal{M}\rightarrow\mathcal{M}\) on the metric and the manifold. The complex structure is characterized by \(J^{2}=-I\) for a manifold \(\mathcal{M}\) upon which \(Z\) acts. It is worth noting that the action of \(J\) on \(\mathcal{M}\) complexifies the tangent bundle \(T\mathcal{M}^{\mathbb{C}}\) and introduces the conjugate tangent bundle too. So to complexify spacetime, we need to construct the conjugate group \(T^{\dagger}HT:=\mathrm{SU}(2)\cap\mathcal{M}\), where \(H\) is a \(2\times 2\) Hermitian complex form of \(g_{\mu\overline{\nu}}=h_{\mu\nu}+ik_{\mu\nu}\), i.e. \(g_{\mu\nu}=g_{\overline{\mu\nu}}\), and \(\mathrm{SU}(2)\) is the special unitary subgroup. This guarantees the invariance of the Hermitian form \[\langle T\zeta,T\eta\rangle=\langle\zeta,\eta\rangle=\zeta_{1}\zeta_{2}-\eta_{1} \eta_{2}. \tag{6}\] We know that some \(Z\in\mathrm{GL}(2,\mathbb{C})\) can be defined as the _special linear subgroup_\(\mathrm{SL}(2,\mathbb{C}):=\{T\in\mathrm{GL}(2,\mathbb{C}):\det T=1\}\). 
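As a brief aside, the \(\mathrm{GL}(n,\mathbb{C})\leftrightarrow\mathrm{GL}(2n,\mathbb{R})\) correspondence and the complex structure \(J\) used above can be made concrete with a small symbolic check for the simplest case \(n=1\) (our own illustration, not part of the paper): the complex number \(a+ib\) is realised as the real matrix \(aI+bJ\), complex multiplication is preserved, and \(J^{2}=-I\).

```python
# Toy illustration of GL(1,C) sitting inside GL(2,R) and of the complex structure J.
import sympy as sp

a, b, c, d = sp.symbols('a b c d', real=True)
I2 = sp.eye(2)
J = sp.Matrix([[0, -1], [1, 0]])          # linear complex structure, J^2 = -I
print(sp.simplify(J*J + I2))              # zero matrix

def real_form(x, y):                      # x + i*y  ->  x*I + y*J
    return x*I2 + y*J

# complex multiplication is preserved: (a+ib)(c+id) = (ac-bd) + i(ad+bc)
lhs = sp.expand(real_form(a, b) * real_form(c, d))
rhs = real_form(a*c - b*d, a*d + b*c)
print(sp.simplify(lhs - rhs))             # zero matrix
```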
Moreover, there exists another subgroup known as the _unitary subgroup_\(\mathrm{U}(2):=\{T\in\mathrm{GL}(2,\mathbb{C}):T^{\dagger}I_{(1,1)}T=I_{(1,1)}\}\), where \(T^{\dagger}\) is the conjugate transpose of \(T\) and \(I_{(1,1)}=diag(1,-1)\) as in the previously mentioned Hermitian form. Finally, the compact Lie _special unitary subgroup_ is defined as \(\mathrm{SU}(2):=\mathrm{U}(2)\cap\mathrm{SL}(2,\mathbb{C})\). The importance of unitary subgroups stems from the textbook fact that every finite subgroup of \(\mathrm{GL}(2,\mathbb{C})\) is _conjugate_ to a subgroup of \(\mathrm{U}(2)\), and the proof is based on the GL-invariance, that is, the unitary representation preserves the length of any vector belonging to \(\mathcal{M}\). If we restrict \(H\) subgroup to be the diagonal matrices in \(\mathrm{SU}(2)\subset\mathrm{M}_{2}(\mathbb{C})\), the cosets \(T^{\dagger}H\) would _partition_ the manifold associated with \(\mathrm{SU}(2)\). We will see the importance of this when we reach the process of _Hopf fibration_. Now, a Lie topological group \(G\), including \(\mathrm{SU}(2)\), acts _continuously_ on \(\mathcal{M}\) by a set of homeomorphisms \(\Phi:G\times\mathcal{M}\to\mathcal{M};(g,p)\to\Phi_{p}(g)=gp\). This action is called _proper_ for any compact group. That is, any proper map inside \(\Psi:G\times\mathcal{M}\to\mathcal{M}\times\mathcal{M}\), such that \((g,p)\to\Psi(g)(p)=(gp,p)\), should have a compact inverse. Since the isotropy group \(\mathrm{SU}(2)_{p}\) is compact, then its representation \(\chi_{p}:\mathrm{SU}(2)_{p}\to\mathrm{GL}(2,T_{p}\mathcal{M}^{\mathbb{C}})\) is continuous, where \(\chi_{p}\in\mathrm{Is}_{p}(g)\) the linear isotropy of the group element \(g\). Such representation sends the group elements \(g\) into their diffeomorphic actions \(d_{g}\in\mathrm{Diff}(\mathcal{M})\) on \(\mathcal{M}\), where \(d_{g}:=\sqrt{ds^{2}}\) is the linear isotropy of the group element \(\mathrm{Is}_{p}(g)\) associated with the invariant distance or the manifold metric [20]. A manifold \(\mathcal{M}\) is biholomorphically equivalent to \(\mathbb{C}\) when the holomorphic automorphisms \(\mathrm{Aut}(\mathcal{M})\) of the manifold are isomorphic to \(\mathrm{Aut}(\mathbb{C})\). Then, the action of \(\mathrm{SU}(2)\) identifies the rotationally symmetric complex manifolds [21]. We are interested in the case when the \(\mathrm{SU}(2)\)-orbit of \(p\) is the orthogonal group \(O_{p}\). In this case, and with the help of the conjugation of vectors by the complex structure, \(T_{p}\mathcal{M}^{\mathbb{C}}\) can be split into \(V\oplus iV\) for any \(V\in T_{p}\mathcal{M}^{\mathbb{C}}\)[22]. Therefore, \(O_{p}\) becomes real hypersurface orbits of \(\mathcal{M}\). For the sake of convenience, it is suggested to represent \(\mathrm{SU}(2)\) action in terms of coordinate charts at every point like Eq. (4). Now, the function \(\varphi\colon\mathbb{C}^{2}\to\mathrm{M}_{2}(\mathbb{C})\) defined by \[\varphi(z_{1},z_{2})=\begin{pmatrix}z_{1}&-\overline{z_{2}}\\ z_{2}&\overline{z_{1}}\end{pmatrix}\;. \tag{7}\] verifies \(\varphi(S^{3})=\mathrm{SU}(2)\), see for instance [23, Example 16.9]; and the details of finding the equivariant maps, that relate \(q\in S^{3}\) to \(p\in O_{p}\) of \(\mathcal{M}\), and CR-diffeomorphism structure of \(S^{3}\) are in Ref [21]. 
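A direct symbolic check (ours, not from the paper) that the map \(\varphi\) of Eq. (7) indeed lands in \(\mathrm{SU}(2)\) once \((z_{1},z_{2})\) is restricted to \(S^{3}\): both the determinant and the Gram matrix \(\varphi^{\dagger}\varphi\) reduce to \(|z_{1}|^{2}+|z_{2}|^{2}\), which equals 1 on the unit sphere.

```python
# Check that phi(S^3) lies in SU(2), cf. Eq. (7).
import sympy as sp

a1, b1, a2, b2 = sp.symbols('alpha1 beta1 alpha2 beta2', real=True)
z1, z2 = a1 + sp.I*b1, a2 + sp.I*b2
Phi = sp.Matrix([[z1, -sp.conjugate(z2)], [z2, sp.conjugate(z1)]])

norm = a1**2 + b1**2 + a2**2 + b2**2                         # |z1|^2 + |z2|^2
print(sp.simplify(sp.expand(Phi.det()) - norm))              # 0 -> det(Phi) = 1 on S^3
print(sp.simplify(sp.expand(Phi.H * Phi) - norm*sp.eye(2)))  # zero matrix -> Phi unitary on S^3
```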
The equivariant diffeomorphism \(f:S^{3}\to O_{p}\), where \(f(g(p))=g(f(p))\), establishes the correspondence between the parameters \((\zeta_{1},\eta_{1},\zeta_{2},\eta_{2})\) of the point \(p\in\mathcal{M}\) and the unitary action of \(\mathrm{SU}(2)\) on \(q=(z_{i},z_{2})\in S^{3}\) endowed from the fact that every unitary representation on a Hermitian vector space \(V\) is a direct sum of the _irreducible representations_ of the group. This is crucial for finding the _group orbit_, or _congruence classes_, containing the conjugate subgroups of \(\mathrm{SU}(2)\). It is known that setting one of the \(z_{i}=0\) will define the other \(z_{j}\) to be the longitude of \(\mathrm{SU}(2)\) corresponding to the conjugate class \(T^{\dagger}HT\), where \(H\subset\mathrm{SU}(2)\). The equivariant diffeomorphism relates any \(H\) as a metric \(g_{\mu\nu}\) to a _diagonalizeble_ matrix \(\mathcal{H}\) on \(S^{3}\) using \(\varphi(z_{1},z_{2})\), and a _diagonalized_\(\mathcal{H}\) can be read as \[\mathcal{H}^{\prime}=T^{\dagger}\mathcal{H}T=\begin{pmatrix}\xi&0\\ 0&\overline{\xi}\end{pmatrix}\;, \tag{8}\] where for any wormhole sheet we define \(\xi=\kappa+i\lambda\) and \(z_{i}=\alpha_{i}+i\beta_{i},\ i=1,2\) such that \[T\mathcal{H}^{\prime}T^{\dagger}=\begin{pmatrix}z_{1}&-\overline{z_{2}}\\ z_{2}&\overline{z_{1}}\end{pmatrix}\begin{pmatrix}\xi&0\\ 0&\overline{\xi}\end{pmatrix}\begin{pmatrix}\overline{z_{1}}&\overline{z_{2}} \\ -z_{2}&z_{1}\end{pmatrix}=\begin{pmatrix}z_{1}\overline{z_{1}}\xi+z_{2} \overline{z_{2}}\overline{\xi}&z_{1}\overline{z_{2}}\xi-z_{1}\overline{z_{2}} \overline{\xi}\\ \\ \overline{z_{1}}z_{2}\xi-\overline{z_{1}}z_{2}\overline{\xi}&z_{1}\overline{z_{ 1}}\overline{\xi}+z_{2}\overline{z_{2}}\overline{\xi}\end{pmatrix}\;. \tag{9}\] So, if we want to return back to the \(\mathbb{R}^{4}\) space, and for any \(\xi\), the last transformation us to define the real vector \(x:=(x^{1},x^{2},x^{3},x^{4})\) as \[x^{1} = \kappa\;, \tag{10}\] \[x^{2} = (\alpha_{1}^{2}+\beta_{1}^{2}-\alpha_{2}^{2}-\beta_{2}^{2}) \lambda\;,\] (11) \[x^{3} = 2(\alpha_{1}\beta_{2}+\alpha_{2}\beta_{1})\lambda\;,\] (12) \[x^{4} = 2(\alpha_{1}\alpha_{2}-\beta_{1}\beta_{2})\lambda\;. \tag{13}\] This means the complex geodesics on the sheet \(1\) and sheet \(2\) are endowed with a \(\mathrm{SU}(2)\) symmetry, which is guaranteed by the conjugacy \(T^{\dagger}HT\) that defines all longitudes of \(\mathrm{SU}(2)\). Also, this may introduce a connection between external geometry and internal symmetry, it may show us a geometric origin/meaning for the unitary symmetry in physics1. As we can notice, SU(2) symmetry for geodesics is local symmetry because the parameters \((\alpha_{1},\beta_{1},\alpha_{2},\beta_{2})\) depend on the position on wormhole. So, if the coordinates \(\zeta\) and \(\eta\) render geodesics Footnote 1: For a \(\text{GL}(2n,\mathbb{R})\) of \(V\) or a Lorentz transformation on the flat Minkowski spacetime in particular, the group can be determined uniquely by its action on the null vectors that correspond to \(S^{2}\). \[\lambda_{1} = \vartheta_{1}(\alpha_{2},\beta_{2})\, \tag{14}\] \[\lambda_{2} = \vartheta_{2}(\alpha_{2},\beta_{2})\, \tag{15}\] where \(\vartheta_{1}\) and \(\vartheta_{2}\) are continuous functions in \(x^{i}\), and \(\alpha_{1}^{2}+\beta_{1}^{2}=1\), then SU(2) symmetry reduces to U(1). In that sense, SU(2) introduces local symmetry of complex vectors on the two sheets of wormholes, and U(1) symmetry introduces a local symmetry of the complex vector on the same sheet. 
It is worth noting that \(S^{3}\) is diffeomorphic with SU(2) [24, p. 127]. Moreover, \(S^{3}\) can be seen as a fiber bundle following the diagram \[S^{1}\,\longrightarrow\,S^{3}\,\stackrel{{\pi}}{{ \longrightarrow}}\,S^{2}\, \tag{16}\] with the Hopf fibration plotted in Figure 1 and used in physics in [25; 26] and for wormholes in [27]. Now, we are ready to complexify the wormhole metric. First, we rearrange Eq. (3) such that it becomes \[ds^{2}=+\left[\left(1-\frac{2M}{r(\zeta)}\right)dt^{2}-(u^{2}+2M)^{2}\sin^{2} \theta d\phi^{2}\right]-\left[4(u^{2}+2M)du^{2}+(u^{2}+2M)^{2}d\theta^{2} \right]. \tag{17}\] Figure 1: Hopf fibration of \(S^{3}\) (see [https://philogb.github.io/page/hopf/](https://philogb.github.io/page/hopf/)) Since we adopt the parameter \(u(r)\) as defined in Eq. (2), we also define the complex parameter \(\zeta\), its squared length, and any infinitesimal change in it as \[\zeta = \frac{1}{2}(u^{2}+2M)e^{i\theta}\, \tag{18}\] \[\zeta\overline{\zeta} = \frac{(u^{2}+2M)^{2}}{4}=\frac{r^{2}(u)}{4}\,\] (19) \[d\zeta = \frac{1}{u^{2}+2M}u\zeta du+i\zeta d\theta. \tag{20}\] Eq. (20) gives \[d\zeta d\overline{\zeta}=u^{2}du^{2}+\frac{1}{4}(u^{2}+2M)^{2}d\theta^{2}\, \tag{21}\] or \[4(d\zeta d\overline{\zeta}+2Mdu^{2})=4(u^{2}+2M)du^{2}+(u^{2}+2M)^{2}d\theta^ {2}. \tag{22}\] Meanwhile Eq. (19) yields \[(\zeta d\overline{\zeta}+\overline{\zeta}d\zeta)^{2}=u^{2}(u^{2}+2M)^{2}du^{2 }=\left(2\sqrt{\zeta\overline{\zeta}}-2M\right)4(\zeta\overline{\zeta})du^{2}\, \tag{23}\] or \[du^{2}=\frac{(\zeta d\overline{\zeta}+\overline{\zeta}d\zeta)^{2}}{4(\zeta \overline{\zeta})\left(2\sqrt{\zeta\overline{\zeta}}-2M\right)}. \tag{24}\] Substitute the last result in the LHS of Eq. (22) to get \[4\left[d\zeta d\overline{\zeta}+M\frac{(\zeta d\overline{\zeta}+\overline{ \zeta}d\zeta)^{2}}{2(\zeta\overline{\zeta})\left(2\sqrt{\zeta\overline{\zeta }}-2M\right)}\right]=4(u^{2}+2M)du^{2}+(u^{2}+2M)^{2}d\theta^{2}. \tag{25}\] In addition, the stereographic projection of \(r(x,y,z)\) on the complex plane of \(\zeta(\kappa,\lambda)\) with \(\frac{x}{\kappa}=\frac{y}{\lambda}=1-z\) give [28] \[\sin^{2}\theta=\frac{4\zeta\overline{\zeta}}{\left(1+\zeta\overline{\zeta} \right)^{2}}. \tag{26}\] Furthermore, set \[\eta=t+iM\phi \tag{27}\] such that \[dt^{2} = \frac{1}{4}(d\eta+d\overline{\eta})^{2}\, \tag{28}\] \[d\phi^{2} = \frac{1}{4M^{2}}(d\eta-d\overline{\eta})^{2}. \tag{29}\] Finally, we substitute Eq. (25,26,28,29) to get \[ds^{2}=\left(1-\frac{M}{\sqrt{\zeta\overline{\zeta}}}\right)(d\eta+d\overline{ \eta})^{2}-4\frac{(\zeta\overline{\zeta})^{2}}{M^{2}\left(1+\zeta\overline{ \zeta}\right)^{2}}(d\eta-d\overline{\eta})^{2}-4\left[d\zeta d\overline{\zeta}+ M\frac{(\zeta d\overline{\zeta}+\overline{\zeta}d\zeta)^{2}}{2(\zeta \overline{\zeta})(2\sqrt{\zeta\overline{\zeta}}-2M)}\right] \tag{30}\] which is not yet a _manifestly_ Hermitian metric despite being a general \(2-\)dimensional metric of such complex manifold. In order to make the metric (17) Hermitian, we need to consider the following coordinate redefinition \[d\tilde{u}=\frac{2(u^{2}+2M)}{u}du\, \tag{31}\] or \[\tilde{u}=\int_{-2M}^{u}\frac{2(u^{2}+2M)}{u}du=u^{2}-4M^{2}+4M\ln\left(-\frac {u}{2M}\right)=u^{2}-4M^{2}+4M\ln\left(\frac{u}{2M}\right)+i\pi \tag{32}\] where \(u^{2}-4M^{2}+4M\ln\left(\frac{u}{2M}\right)\in\mathbb{R}\). 
Then, the metric (17) becomes \[ds^{2}=-\left[\frac{u^{2}(\tilde{u})}{u^{2}(\tilde{u})+2M}(dt^{2}-d\tilde{u}^ {2})+(u^{2}(\tilde{u})+2M)^{2}d\Omega^{2}\right]\, \tag{33}\] which is not Hermitian yet as \(\tilde{u}\in\mathbb{C}\). However, \(e^{\tilde{u}}\in\mathbb{R}\) indeed. In order to improve the previous metric into a Hermitian, we use the following Rindler-like coordinates together with Wick rotation and the complex coordinates \[X =e^{\tilde{u}}\cosh t\, \tag{34a}\] \[T =e^{\tilde{u}}\sinh t,\ T\to iT\,\] (34b) \[\eta =X+iT\,\] (34c) \[d\theta^{2}+\sin^{2}\theta d\phi^{2} =\frac{d\zeta d\overline{\zeta}}{\left(1+\frac{1}{4}\zeta \overline{\zeta}\right)^{2}}\, \tag{34d}\] such that the relevant metric becomes \[ds^{2}=h(\eta+\overline{\eta})d\eta d\overline{\eta}+k(\eta+\overline{\eta}) \frac{d\zeta d\overline{\zeta}}{\left(1+\frac{1}{4}\zeta\overline{\zeta} \right)^{2}}\, \tag{35}\] which is an Euclidean metric of the corresponding \(2-\)dimensional Hermitian complex manifold for arbitrary real valued functions \(h(\eta+\overline{\eta})\) and \(k(\eta+\overline{\eta})\), see [29, pages 44-45] for more details. There are many other ways to render a Hermitian metric. Whether the metric is real or Hermitian, the process of complexification visualizes how Schwarzschild wormholes behave in the realm of complex geometry. This is an important result as it could help studying the _wavefunction of wormholes_ upon analyzing the geometry as a Quantum Field Theory (QFT) in complex curved spacetime [30]. **Remark 1**.: _Consider the de Sitter-Schwarzschild metric_ \[ds^{2}=-\left(1-\frac{2M}{r}-\frac{\Lambda}{3}r^{2}\right)^{-1}dr^{2}-r^{2} \left(d\theta^{2}+\sin^{2}\theta d\phi^{2}\right)+\left(1-\frac{2M}{r}-\frac{ \Lambda}{3}r^{2}\right)dt^{2}\, \tag{36}\] _where \(\Lambda>0\). By using the results about depressed cubic equations given for instance in [31], the polynomial \(f(r)=-\frac{\Lambda}{3}r^{3}+r-2M\) has three real roots if and only if \(\Lambda<\frac{1}{9M^{2}}\). If \(\Lambda>\frac{1}{9M^{2}}\), then \(f(r)\) has one real root and two complex conjugate roots. In wormhole geometry, real horizon means the possibility to measure beyond it, and complex horizon means the impossibility to measure beyond it._ ## III Reissner-Nordstrom wormhole geometry In order to understand the geometric origin of the charge, Einstein and Rosen [1] investigated the following exotic Reissner-Nordstrom metric \[ds^{2}=-\left(1-\frac{2M}{r}-\frac{Q^{2}}{r^{2}}\right)^{-1}dr^{2}-r^{2}\left( d\theta^{2}+\sin^{2}\theta\,d\phi^{2}\right)+\left(1-\frac{2M}{r}-\frac{Q^{2}}{ r^{2}}\right)dt^{2}\, \tag{37}\] where \(M>0\) and \(Q>0\) for exotic matter with negative energy density. If we consider the following transformation \[u^{2}=r^{2}-2Mr-Q^{2}\, \tag{38}\] it leads to \(u^{2}du^{2}=(r-M)^{2}dr^{2}\). In the new \(u\) coordinate system, one obtains for \(ds^{2}\) the expression \[ds^{2}=-\frac{r^{2}}{(r-M)^{2}}du^{2}-\left(u^{2}+2Mr+Q^{2}\right)\left(d\theta ^{2}+\sin^{2}\theta\,d\phi^{2}\right)+\frac{u^{2}}{u^{2}+2Mr+Q^{2}}dt^{2}. \tag{39}\] We have \((r-M)^{2}=u^{2}+M^{2}+Q^{2}\). 
Consider the continuous and positive function \(u\mapsto\tilde{f}(u)\) defined by \[r=\sqrt{u^{2}+M^{2}+Q^{2}}+M:=\tilde{f}(u)\, \tag{40}\] and one obtains for \(ds^{2}\) the expression \[ds^{2}=-\frac{\tilde{f}(u)^{2}}{u^{2}+M^{2}+Q^{2}}du^{2}-\left(u^{2}+2M\tilde {f}(u)+Q^{2}\right)\left(d\theta^{2}+\sin^{2}\theta\,d\phi^{2}\right)+\frac{u^ {2}}{u^{2}+2M\tilde{f}(u)+Q^{2}}dt^{2} \tag{41}\] We find the coordinate \(u\) vanishes at the event horizons when \(r_{1}=M-\sqrt{M^{2}+Q^{2}}\) and \(r_{2}=M+\sqrt{M^{2}+Q^{2}}\). In the \(u\) coordinate, the bridge at \(r=r_{2}\) verifies \(r_{1}<0<M<r_{2}\). The metric (41) is defined properly until \(r=M\) and the singularity at \(r=r_{2}\) is removed. So, we obtain two regions for the first sheet: * \(u\) has imaginary value when \(r\) varies from \(0\) to \(r_{2}\); * \(u\) has real value from \(0\) to \(+\infty\) when when \(r\) varies from \(r_{2}\) to \(+\infty\). Similarly, we have two regions for the other sheet. When \(0<r<M\), the function \(\tilde{f}(u)\) in the metric (41) must be replaced by \(-\sqrt{u^{2}+M^{2}+Q^{2}}+M\). As in [1] and for sake of simplicity, we consider that \(M=0\). In that case, the metric (41) reduces to \[ds^{2}=-du^{2}-(u^{2}+Q^{2})(d\theta^{2}+\sin^{2}\theta d\phi^{2})+\frac{u^{2} }{u^{2}+Q^{2}}dt^{2}. \tag{42}\] It is possible to obtain a metric very similar to (35) by using similar calculations, except that \(h\) and \(k\) become functions in \(Q\) but not in \(M\). Calculations are left to the reader. **Remark 2**.: _First, consider the classical Reissner-Nordstrom metric_ \[ds^{2}=-\left(1-\frac{2M}{r}+\frac{Q^{2}}{r^{2}}\right)^{-1}dr^{2}-r^{2}\left( d\theta^{2}+\sin^{2}\theta\,d\phi^{2}\right)+\left(1-\frac{2M}{r}+\frac{Q^{2}}{ r^{2}}\right)dt^{2}, \tag{43}\] _where \(M>0\) and \(Q>0\). We choose that_ \[M>Q>0. \tag{44}\] _Consider the following transformation_ \[u^{2}=r^{2}-2Mr+Q^{2}\, \tag{45}\] _which gives \(u^{2}du^{2}=(r-M)^{2}dr^{2}\). In the new \(``u"\) coordinate system, one obtains for \(ds^{2}\) the expression_ \[ds^{2}=-\frac{r^{2}}{(r-M)^{2}}du^{2}-\left(u^{2}+2Mr-Q^{2}\right)\left(d \theta^{2}+\sin^{2}\theta\,d\phi^{2}\right)+\frac{u^{2}}{u^{2}+2Mr-Q^{2}}dt^{2}. \tag{46}\] _We have \((r-M)^{2}=u^{2}+M^{2}-Q^{2}\). For \(r>M\), and by using condition (44), we obtain_ \[r=\sqrt{u^{2}+M^{2}-Q^{2}}+M:=\tilde{g}(u)\, \tag{47}\] _w _with \(u\mapsto\tilde{g}(u)\) continuous and positive. In the new coordinate system, one obtains for \(ds^{2}\) the expression_ \[ds^{2}=-\frac{\tilde{g}(u)^{2}}{u^{2}+M^{2}-Q^{2}}du^{2}-\left(u^{2}+2M\tilde{g}( u)-Q^{2}\right)\left(d\theta^{2}+\sin^{2}\theta\,d\phi^{2}\right)+\frac{u^{2}}{u^{2 }+2M\tilde{g}(u)-Q^{2}}dt^{2} \tag{48}\] _We find the coordinate \(u\) vanishes at the event horizons when \(r_{1}=M-\sqrt{M^{2}-Q^{2}}\) and \(r_{2}=M+\sqrt{M^{2}-Q^{2}}\). In the \(u\) coordinate, the bridge at \(r=r_{2}\) verifies \(0<r_{1}<M<r_{2}\). The metric (48) is defined until \(r=M\) and the singularity at \(r=r_{2}\) is removed. So, we obtain three regions for the first sheet:_ * \(u\) _has real value from_ \(Q^{2}\) _to_ \(0\) _when_ \(r\) _varies from_ \(0\) _to_ \(r_{1}\)_;_ * \(u\) _has imaginary value when_ \(r\) _varies from_ \(r_{1}\) _to_ \(r_{2}\)_;_ * \(u\) _has real value from_ \(0\) _to_ \(+\infty\) _when when_ \(r\) _varies from_ \(r_{2}\) _to_ \(+\infty\)_._ _We also have three regions for the other sheet. When \(0<r<M\), the function \(\tilde{g}(u)\) must be replaced by \(-\sqrt{u^{2}+M^{2}-Q^{2}}+M\) in the metric (48). 
The situation is therefore different from those presented previously and additional studies will be necessary. Then, we know that \(\mathrm{SU}(3)\) follows the diagram_ \[SU(2)\,\longrightarrow\,SU(3)\,\stackrel{{\pi}}{{ \longrightarrow}}\,S^{5}\, \tag{49}\] _where \(\pi\) is the projection, see for instance [32, Proposition 13.11]. The special unitary group \(\mathrm{SU}(3)\) is the nontrivial \(\mathrm{SU}(2)-\)bundle over \(S^{5}\), see for instance [33, Section 3]. Moreover, \(S^{5}\) is diffeomorphic with \(\mathrm{SU}(3)/\,\mathrm{SU}(2)\)[24, p. 127]. However, the way of building an Euclidean metric on a complex Hermitian manifold involving the \(\mathrm{SU}(3)\) symmetry is an open problem. We also notice that if \(M^{2}\leq Q^{2}\), then the polynomial \(P(r)=r^{2}-2M+Q^{2}\) is always positive such that the change of variable \(u\) cannot provide unitary symmetries. Contrary to the Schwarzschild and exotic Reissner-Nordstrom wormhole geometry, the classical Reissner-Nordstrom wormhole geometry implies the mass-charge condition (44) which is also used to avoid naked singularities [34, Section 12.6]._ ## IV Quantum Tunneling and Wormhole Thermodynamics Discovering unitary symmetries in wormhole geometry motivates us to explore the quantum properties of wormholes. Being traversable for a wormhole is a challenge, see for instance [8; 9]. Wormholes are generally non-traversable for classical matter [6] but they can be modified to be traversable by removing event horizons, see [35; 36; 37; 38] for Schwarzschild-like wormholes and [39; 40] for Reissner-Nordstrom-like wormholes. We know that particles are subject to quantum tunneling, which makes Schwarzschild and Reissner-Nordstrom wormholes traversable for particles while keeping event horizons. A similar idea has been used since the seminal works of Bekenstein in [15] and Hawking in [16] for studying the black hole radiation. In this section, we develop quantum tunneling and wormhole thermodynamics by computing the Hawking temperature. ### Schwarzschild wormhole case First, we point out and observe an interesting fact about the radial null curves in the wormhole metric (3) by setting \(ds^{2}=d\theta=d\phi=0\), yielding \[\frac{du}{dt}=\pm\frac{u}{2(u^{2}+2M)}. \tag{50}\] The above quantity defines the "coordinate speed of light" for the wormhole metric, and as we can see there is a horizon with a coordinate location \(u=0\) yielding \[\left.\frac{du}{dt}\right|_{u=0}\longrightarrow 0. \tag{51}\] The presence of the horizon implies that the quantum tunneling of particles from "another universe" to our universe can form Hawking radiation and, consequently, detecting particles by a distant observer located in our universe. We can study the tunneling of different massless or massive spin particles; and in the present work, we focus on studying the tunneling of vector particles. The motion of a massive vector particle of mass \(m\), described by the vector field \(\psi^{\mu}\), _might_ be studied by the Proca equation (PE), which reads [41] \[\nabla_{\mu}\nabla^{[\mu}\psi^{\nu]}-\frac{m^{2}}{\hbar^{2}}\psi^{\nu}=\frac{ 1}{\sqrt{-g}}\partial_{\mu}\left[\sqrt{-g}\partial^{[\mu}\psi^{\nu]}\right]- \frac{m^{2}}{\hbar^{2}}\psi^{\nu}=0 \tag{52}\] where, from the metric (3), we define the determinant \(\sqrt{-g}=2u^{2}(u^{2}+2M)^{2}\sin\theta\), and \[\nabla_{[\mu}\psi_{\nu]}=\frac{1}{2}(\nabla_{\mu}\psi_{\nu}-\nabla_{\nu}\psi_ {\mu}):=\psi_{\mu\nu}. 
\tag{53}\] The corresponding action is \[S=-\int d^{4}x\sqrt{g}\left(\frac{1}{2}\psi_{\mu\nu}\psi^{\mu\nu}+\frac{m^{2} }{\hbar^{2}}\psi_{\mu}\psi^{\mu}\right). \tag{54}\] Then in any curvilinear coordinates, and using the Bianchi-Ricci identity \(\nabla_{[\lambda}\psi_{\mu\nu]}=0\), we get the true version of Eq. (52) as a QFT in curved spacetime equations of motion \[\nabla^{\nu}\nabla_{[\nu}\psi_{\mu]}-{\cal R}_{\mu}^{\ \nu}\psi_{ \nu}-\frac{m^{2}}{\hbar^{2}}\psi_{\mu}=0\, \tag{55a}\] \[\nabla_{\lambda}\nabla^{\lambda}\psi_{\mu\nu}+{\cal C}_{\mu\nu}^{ \ \kappa\lambda}\psi_{\kappa\lambda}-\left(\frac{{\cal R}}{3}+\frac{m^{2}}{\hbar^ {2}}\right)\psi_{\mu\nu}=0\, \tag{55b}\] where \({\cal R}_{\mu\nu}\) and \({\cal R}\) are the Ricci tensor and the Ricci scalar respectively, and \({\cal C}_{\mu\nu\kappa\lambda}\) is the trace-free conformal curvature tensor. Taking the flat limit \(g_{\mu\nu}\rightarrow\eta_{\mu\nu}\) changes the essence of the last two equations to become Lorentz invariant. Solving tunneling equations exactly is quite hard. So, we apply the WKB approximation method \[\psi_{\nu}=C_{\nu}(t,u,\theta,\phi)e^{\left(\frac{i}{\hbar}(S_{0}(t,u,\theta, \phi)+h\,S_{1}(t,u,\theta,\phi)+...)\right)}. \tag{56}\] Taking into the consideration the symmetries of the metric (3) given by three corresponding Killing vectors \((\partial/\partial_{t})^{\mu}\) and \((\partial/\partial_{\phi})^{\mu}\), we may choose the following ansatz for the action \[S_{0}(t,u,\theta,\phi,\psi)=Et+R(r,\theta)-j\phi\, \tag{57}\] where \(E\) is the energy of the particle, and \(j\) and \(l\) denotes the angular momentum of the particle corresponding to the angles \(\phi\) and \(\psi\), respectively. If we keep only the leading order of \(\hbar\), we find a set of four differential equations. These equations can help us to construct a \(4\times 4\) matrix \(\aleph\), which satisfies the following matrix equation \[\aleph(C_{1},C_{2},C_{3},C_{4})^{T}=0. \tag{58}\] We solve for the radial part to get the following integral \[R_{\pm}=\pm\int\frac{2\sqrt{E^{2}-\frac{u^{2}}{u^{2}+2M}\left[m^{2}+\Delta(u) \right]}}{{\cal F}(u)}du\, \tag{59}\] where \[\Delta(u)=\frac{(\partial_{\theta}R)^{2}}{(u^{2}+2M)^{2}}+\frac{j^{2}}{(u^{2} +2M)^{2}\sin^{2}\theta}\, \tag{60}\] and \[{\cal F}(u)=\frac{u}{u^{2}+2M}={\cal F}^{\prime}(u)|_{u=0}(u-u_{h})+\cdots \tag{61}\] Now, there is a singularity in the above integral when \(u_{h}=0\), meaning that \({\cal F}\to 0\). So in order to find the Hawking temperature, we now make use of the equation \[\lim_{\epsilon\to 0}{\rm Im}\frac{1}{u-u_{h}\pm i\epsilon}=\delta(u-u_{h})\, \tag{62}\] where \(u_{h}=0\). In this way we find \[{\rm Im}R_{\pm}=\pm\frac{2E\pi}{{\cal F}^{\prime}(u)|_{u=0}}. \tag{63}\] Using \(p_{u}^{\pm}=\pm\partial_{u}R_{\pm}\), for the total tunneling rate gives \[\Gamma=\exp\left(-\frac{1}{\hbar}{\rm Im}\oint p_{u}{\rm d}r\right)=\exp\left[ -\frac{1}{\hbar}{\rm Im}\left(\int p_{u}^{+}{\rm d}u-\int p_{u}^{-}{\rm d}u \right)\right]=\exp\left(-\frac{4E\pi}{\hbar{\cal F}^{\prime}(u)|_{u=0}} \right). \tag{64}\] It is interesting that, for the black hole case, there is a temporal part contribution due to the connection of the interior region and the exterior region of the black hole. In the wormhole case, we don't have such a contribution. We can finally obtain the Hawking temperature for the wormhole by using the Boltzmann factor \(\Gamma=\exp(-E/T)\), and setting \(\hbar\) to unity, so that it results with \[T=\frac{{\cal F}^{\prime}(u)|_{u=0}}{4\pi}=\frac{1}{8\pi M}. 
\tag{65}\] This is interesting result as it shows that the Hawking temperature for the Schwarzschild wormhole coincides with the Schwarzschild black hole temperature. We can verify the above result for the Hawking temperature using a topological method based on the Gauss-Bonnet theorem reported in Ref [42, 43]. Let's now rewrite the metric (3) in a form of \(2-\)dimensional Euclidean spacetime given by \[ds^{2}=4(u^{2}+2M)du^{2}+\frac{u^{2}}{u^{2}+2M}d\tau^{2}. \tag{66}\] The Hawking temperature can be found from [42] \[T=\frac{\hbar c}{4\pi\chi k_{B}}\sum_{j\leq\chi}\int_{u_{h}}\sqrt{g}\,{\cal R} \,du. \tag{67}\] Applying this equation for the wormhole metric (66), we find the Ricci scalar \[{\cal R}=\frac{4M}{(u^{2}+2M)^{3}} \tag{68}\] and \(\sqrt{g}=2u\). Setting \(\hbar=c=k_{B}=1\), using the fact that the Euler characteristic of Euclidean geometry is \(\chi=1\) at the wormhole horizon \(u_{h}=0\), we solve the integral (67) and obtain \[T=\frac{1}{4\pi}\int_{0}^{\infty}\frac{4M}{(u^{2}+2M)^{3}}2\,u\,du=\frac{1}{8 \pi M}. \tag{69}\] which coincides with the Hawking temperature (65) obtained via tunneling. ### Reissner-Nordstrom wormhole case Here we shall consider a tunneling from massless RN wormhole geometry using metric (42). For the radial null curve, and by setting \(ds^{2}=d\theta=d\phi=0\), we obtain \[\frac{du}{dt}=\pm\frac{u}{\sqrt{u^{2}+Q^{2}}}\, \tag{70}\] and therefore we see that the points \(u=0\) play the role of the horizon as \(du/dt\to 0\), provided that \(u=0\). This indicates that there could be a quantum tunneling associated with the horizon. To find the Hawking temperature, we can apply the WKB approximation given by Eq. (56) along with the action (57). Consequently, we construct a \(4\times 4\) matrix \[\mathrm{M}(D_{1},D_{2},D_{3},D_{4})^{T}=0\, \tag{71}\] where, for the radial part, we get the following integral \[R_{\pm}=\pm\int\frac{\sqrt{E^{2}-\frac{u^{2}}{Q^{2}+u^{2}}\left[m^{2}+\xi(u) \right]}}{\mathcal{G}(u)}du\, \tag{72}\] with \[\xi(u)=\frac{(\partial_{\theta}R)^{2}}{(Q^{2}+u^{2})^{2}}-\frac{j^{2}}{\sin^{ 2}\theta(Q^{2}+u^{2})^{2}}\, \tag{73}\] and \[\mathcal{G}(u)=\frac{u}{\sqrt{Q^{2}+u^{2}}}=\mathcal{G}^{\prime}(u)|_{u=0}(u- u_{h})+\cdots \tag{74}\] Now, there is a singularity in the above integral when \(u_{h}=0\), meaning that \(\mathcal{G}\to 0\). In order to find the Hawking temperature at \(u_{h}=0\), we consider \[\mathrm{Im}R_{\pm}=\pm\frac{E\pi}{\mathcal{G}^{\prime}(u)|_{u=0}}. \tag{75}\] Using \(p_{u}^{\pm}=\pm\partial_{u}R_{\pm}\), the total tunneling rate gives \[\Gamma=\exp\left(-\frac{2E\pi}{\hbar\mathcal{G}^{\prime}(u)|_{u=0}}\right). \tag{76}\] Boltzmann factor \(\Gamma=\exp(-E/T)\) leads to define the temperature as \[T=\frac{\mathcal{G}^{\prime}(u)|_{u=0}}{2\pi}=\frac{1}{2\pi Q}. \tag{77}\] Let's now derive the Hawking temperature using a topological method based on the Gauss-Bonnet theorem. To do so, we need to rewrite the metric (42) in a form of \(2-\)dimensional Euclidean spacetime given by \[ds^{2}=du^{2}+\frac{u^{2}}{u^{2}+Q^{2}}d\tau^{2}. \tag{78}\] For the Ricci scalar, we obtain \[\mathcal{R}=\frac{6Q^{2}}{(Q^{2}+u^{2})^{2}}\, \tag{79}\] and \(\sqrt{g}=u(u^{2}+Q^{2})^{-1/2}\). At the wormhole horizon \(u_{h}=0\), we obtain \[T=\frac{1}{4\pi}\int_{0}^{\infty}\frac{6\,u\,Q^{2}}{(Q^{2}+u^{2})^{5/2}}du= \frac{1}{2\pi\,Q}\, \tag{80}\] which coincides with the Hawking temperature (77) obtained via tunneling. 
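Both closed-form temperatures can be cross-checked symbolically. The following sympy snippet (our own verification, not part of the original text) evaluates the two Gauss–Bonnet integrals, Eqs. (69) and (80), and recovers \(1/(8\pi M)\) and \(1/(2\pi Q)\) exactly.

```python
# Symbolic cross-check of the Gauss-Bonnet temperature integrals (69) and (80).
import sympy as sp

u = sp.symbols('u', positive=True)
M, Q = sp.symbols('M Q', positive=True)

T_schw = sp.integrate(4*M/(u**2 + 2*M)**3 * 2*u, (u, 0, sp.oo)) / (4*sp.pi)
T_rn   = sp.integrate(6*u*Q**2/(Q**2 + u**2)**sp.Rational(5, 2), (u, 0, sp.oo)) / (4*sp.pi)

print(sp.simplify(T_schw - 1/(8*sp.pi*M)))   # 0, i.e. T = 1/(8*pi*M), Eq. (69)
print(sp.simplify(T_rn - 1/(2*sp.pi*Q)))     # 0, i.e. T = 1/(2*pi*Q), Eq. (80)
```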
## V Concluding remarks We closely examined Schwarzschild and Reissner-Nordstrom wormhole geometry and obtained the unitary symmetries U(1) and SU(2) through spacetime complexification; the study draws attention to the possibility that wormholes could illustrate the relation between unitary symmetries and spacetime geometry. Additionally, we developed wormhole thermodynamics for Schwarzschild and Reissner-Nordstrom wormholes through quantum tunneling. The results are consistent with those of Hawking and Bekenstein for black hole thermodynamics, and they imply that particles can cross these wormholes. This could be related to the ER=EPR conjecture [44] and to the new experimental findings obtained from studying traversable wormholes/EPR pair entanglement within quantum computing regimes [45]. We hope to report on these important results in the future. ## Acknowledgement AFA would like to thank Shaaban Khalil and Ammar Kassim for the discussions. K.J would like to thank Dejan Stojkovic for interesting comments during the preparation of this work.
2309.15024
Synthia's Melody: A Benchmark Framework for Unsupervised Domain Adaptation in Audio
Despite significant advancements in deep learning for vision and natural language, unsupervised domain adaptation in audio remains relatively unexplored. We, in part, attribute this to the lack of an appropriate benchmark dataset. To address this gap, we present Synthia's melody, a novel audio data generation framework capable of simulating an infinite variety of 4-second melodies with user-specified confounding structures characterised by musical keys, timbre, and loudness. Unlike existing datasets collected under observational settings, Synthia's melody is free of unobserved biases, ensuring the reproducibility and comparability of experiments. To showcase its utility, we generate two types of distribution shifts-domain shift and sample selection bias-and evaluate the performance of acoustic deep learning models under these shifts. Our evaluations reveal that Synthia's melody provides a robust testbed for examining the susceptibility of these models to varying levels of distribution shift.
Chia-Hsin Lin, Charles Jones, Björn W. Schuller, Harry Coppock
2023-09-26T15:46:06Z
http://arxiv.org/abs/2309.15024v1
# Synthia's Melody: A Benchmark Framework for Unsupervised ###### Abstract Despite significant advancements in deep learning for vision and natural language, unsupervised domain adaptation in audio remains relatively unexplored. We, in part, attribute this to the lack of an appropriate benchmark dataset. To address this gap, we present **Synthia's melody**, a novel audio data generation framework capable of simulating an infinite variety of 4-second melodies with user-specified confounding structures characterised by musical keys, timbre, and loudness. Unlike existing datasets collected under observational settings, Synthia's melody is free of unobserved biases, ensuring the reproducibility and comparability of experiments. To showcase its utility, we generate two types of distribution shifts--domain shift and sample selection bias--and evaluate the performance of acoustic deep learning models under these shifts. Our evaluations reveal that Synthia's melody provides a robust testbed for examining the susceptibility of these models to varying levels of distribution shift. Chia-Hsin Lin\({}^{\star}\)Charles Jones\({}^{\star}\)Bjorn W. Schuller\({}^{\star\dagger}\)Harry Coppock\({}^{\star}\)\({}^{\star}\)\({}^{\star}\)GLAM, Imperial College London, UK \({}^{\dagger}\)Chair EIHW, University of Augsburg, Germany [email protected], {charles.jones17, bjoern.schuller, harry.coppock}@imperial.ac.uk ## 1 Introduction While deep learning models achieve impressive performance across domains such as imaging [1], text [2], and audio [3], they are prone to learning _shortcuts_ - features not representative of the intended task [4]. For example, image classifiers trained to recognise animals may instead depend on spuriously correlated background features [5], and natural language models may falsely rely on sentiment when predicting review quality [6]. Similar effects manifest across many data types and learning tasks, encompassing imaging, text [7], audio [8], and reinforcement learning [9]. This tendency poses credibility problems for deploying deep learning methods in high-stakes settings. Notably, shortcuts were leveraged heavily during the COVID-19 pandemic response to achieve falsely high diagnostic accuracy for SARSCoV2 infection using patient respiratory audio [10, 11]. Although models may not extract the intended features, shortcut learning needs not be an issue if shortcuts are present at both training and deployment time. In practice, this is unlikely to be the case [12]. We may instead view shortcut learning as part of a larger problem of _distribution shift_ (or dataset shift), where the joint distribution of inputs and outputs differs between settings [13]. When building models, we usually assume that training and testing data are identically and independently distributed (_i.i.d_) [14]. However, shifts between training and testing stages are ubiquitous in practice [15]. Trained models may not be robust to dataset shifts in unseen environments. To address the issue, research on domain adaptation methods aims to improve the robustness of models [16, 17, 18, 19]. However, most of the research focuses on the image and text domains, while relatively few address audio. We attribute this to the lack of common benchmark datasets in audio akin to coloured-MNIST [12] in vision and MNLI [20] in texts. 
In audio, [21] inject biases into observational speech datasets with five data augmentation methods: mp3 compression, additive white noise, loudness normalisation, non-speech zeroing, and \(\mu\)-law encoding. Although the results show that models are prone to the injected shortcuts, unobserved artefacts in the original speech data, such as microphone mismatch, could still alter the experimental outcomes. Moreover, the research only considers cases where training data is 100% or 0% perturbed, which limits the ability to observe model behaviours with varying shift levels. To foster the corresponding research in audio, we propose a fully synthesised data generation mechanism called **Synthia's melody**. The mechanism generates 4-second melody samples in the form of.wav file at a sample rate of 16 000 Hz for a naive key (binary) classification task. The simulated data exhibit several attractive properties. First, it ensures the reproducibility of experiment results, as the data depend only on pre-written scripts in contrast to most audio data collected under observational settings. Second, researchers can generate desired shifts by modifying the mechanism parameters. Third, the generated data are interpretable by humans: researchers can hear the generated melody and feel differences if shifts occur. The example usage is shown in Figure 1. Code and audio samples are available at [https://github.com/cyth](https://github.com/cyth) Figure 1: **Example usage of Synthia’s melody**. Here, we see the steps to generate a dataset of 20 samples where the music timbre and prediction label are 100% correlated. The generated data has a causal structure represented by the causal graph (a). We point the user to the GitHub README.md file for a full set of instructions and use cases. ## 2 Background **Domain adaptation** Let \(\mathcal{X}\) and \(\mathcal{Y}\) denote the input and label space, respectively. We denote the domain \(D\) with \(D=\{x^{(i)},y^{(i)}\}_{i=1}^{n}\sim p^{D}(x,y)\), where \(x\in\mathcal{X}\), \(y\in\mathcal{Y}\), and \(p^{D}(x,y)\) is the joint distribution of the corresponding random variables \(X\), \(Y\) that generates \(D\). Given that dataset shift exists, the goal of domain adaptation methods is to learn a predictive function \(h:\mathcal{X}\rightarrow\mathcal{Y}\) from the source (training) domain \(D_{train}\) that minimises the predictive risk \(R\) in a similar but unseen domain \(D_{test}\), given that \(p^{test}(x,y)\neq p^{train}(x,y)\). The predictive risk function can be written as \[R=\min_{h}\mathbb{E}_{(x,y)\in D^{test}}[\ell(h(x),y)], \tag{1}\] where \(\mathbb{E}\) is the expectation and \(\ell(\cdot,\cdot)\) is the loss function 1. Footnote 1: The equation for \(R\) represents the theoretical risk we aim to minimise in the target domain \(D_{test}\). It is important to note that we cannot directly compute this risk during training given the unsupervised domain adaptation setting. **Related music theory** We set the generated data in the context of music. Readers unfamiliar with music theory are referred to [22, 23]. A melody is composed of a sequence of musical tones. Each musical tone has four auditory attributes: pitch, duration, timbre, and loudness. In signal processing, such attributes correspond to frequency, time, waveshape, and amplitude. In Western music, there are 24 music keys; half are major, and the other half are minor. For most people, major keys sound happy and minor keys sound sad. Each music key has seven notes in its corresponding scale. 
The set of 7 notes is distinct in each of the 24 keys. Certain combinations of the seven notes form chords. Common chords include triads and sevenths, where three notes form triads and sevenths are formed by four. Roman numerals often denote chords. For example, the first triad and fifth seventh in a major key are denoted as I and V\({}_{7}\), respectively. We say a melody is in a key if it is composed by the chords in that key. ## 3 Method To simulate melodies, we need to define four auditory attributes of each musical tone: pitch, duration, timbre, and loudness. **Oscillator and ADSR envelope** We determine timbre and loudness with two components: oscillator and ADSR envelope. An oscillator generates waves with a given frequency (pitch) and amplitude (loudness). Oscillators with different wavesheg create different timbres. We use sine, square, sawtooth, and triangle oscillators in this study. The ADSR envelope alters the amplitude of waves that oscillators generate. The amplitude change in a melody can be "stable", "increase", or "decrease", where details can be found in the Appendix, amplitude change. **Melody generation** Given timbre and loudness defined, we define pitch and duration, or melodies, by using random sampling algorithms. To generate a melody, we randomly draw a music key \(K\) among 12 major/minor keys given a label \(Y\in\{\text{major},\text{minor}\}\). We then randomly draw \(N\) chords \(C_{i=1}^{N}\) in the key \(K\), where \(N\) is uniformly sampled from some integer set. We consider 10 chords: \(C_{\text{major}}=\{\text{I}\text{ ii iii IV V vii}\text{ ii},\text{V}_{7},\text{vii}^{ \text{o}}\tau_{\text{j}}\}\) and \(C_{\text{minor}}=\{\text{i ii}\text{ iii}\text{ iii}^{*}\text{ iv V VI vii}\text{ ii}^{\text{o}}\tau_{\text{j}},\text{V}_{7},\text{vii}^{ \text{o}}\tau_{\text{j}}\}\) for major and minor samples, respectively. For each chord, we sample their duration \(T_{i=1}^{N}\) independently from some continuous distribution. If \(\sum T_{i=1}^{N}\) is less than 4 seconds, we repeat the melody until the targeted duration is reached. The melody generation algorithm is detailed in the Appendix, algorithm 1. **Data generation** We generate 50 000 melodies with random seeds from 0 to 49 999 and perform train-val split to obtain 40 000 training and 10 000 validation data. We generate 10 000 melodies with random seeds from 55 000 to 64 999. The process is repeated for four timbres: sine, square, sawtooth, and triangle. We fix the amplitude to "stable" for all samples2. As such, the training, validation, and test sets of the four timbres are acoustically indistinguishable except for their timbres. Footnote 2: experimenting with using different amplitudes, e. g., “increase” or “decrease” or a custom config left for future work. **Shifts considered** We represent distribution shifts with causal graphs [24, 25], where the shifts are treated as outcomes of interventions. We focus on anticausal tasks [26], where the input \(X\) is caused by the label \(Y\), such that \(Y\to X\). We consider two types of shifts: domain shift and sample selection bias [13] detailed in Figure 2. Domain shift (Figure 1(a)) refers to cases when the observed covariate \(X\) is affected by some mapping \(f\), which varies across training and testing. Given varying \(f\), the learnt distribution \(p^{train}(y|x)\) has no guarantee to be the same as \(p^{test}(y|x)\). 
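(As a concrete illustration of the oscillator-based synthesis described in Section 3, the following is our own toy Python sketch, not the Synthia's melody implementation, which additionally samples keys, chords, and durations at random and shapes the amplitude with an ADSR envelope. It renders two 2-second triads with a sine oscillator at the framework's 16 kHz sample rate.)

```python
import numpy as np

SR = 16_000  # sample rate used by Synthia's melody

def sine_osc(freq_hz, dur_s, amp=0.3):
    t = np.linspace(0.0, dur_s, int(SR * dur_s), endpoint=False)
    return amp * np.sin(2 * np.pi * freq_hz * t)

def chord(freqs_hz, dur_s):
    # sum the oscillator outputs of the chord tones and renormalise
    return sum(sine_osc(f, dur_s) for f in freqs_hz) / len(freqs_hz)

I_CHORD = [261.63, 329.63, 392.00]   # C4, E4, G4: the I triad of C major
V_CHORD = [392.00, 493.88, 587.33]   # G4, B4, D5: the V triad of C major
melody = np.concatenate([chord(I_CHORD, 2.0), chord(V_CHORD, 2.0)])  # 4-second clip
```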
The goal of a conditional classifier is to learn the domain-invariant conditional distribution \(p(y|x_{0})\) via \(X\) given \(X_{0}\) unobserved. Sample selection bias (Figure 1(b)) refers to cases when the sample section process \(V\) depends on both \(X\) and \(Y\), and the dependency varies between training and testing time. We denote the selection process with a binary variable \(V\), where the event \(V=1\) indicates that the sample is being selected and \(V=0\), otherwise. Given the varying dependency of \(V\), the learnt distribution in training \(p^{train}(y|x)\) may not be applicable to that of testing time \(p^{test}(y|x)\). **Shift construction** We use timbre to construct two types of dataset shifts. For domain shift, we represent two domains by samples generated by sine and square waves. We consider 12 shift levels by gradually replacing the sine sample with the square ones in the training data until the number of sine and square samples are equal. For sample selection bias, we construct a biased sample by correlating the timbre with the prediction label. Specifically, we generate major samples with sine waves and minor ones with square waves. We consider 11 shift levels by varying the proportion of biased samples in the training data. We evaluate trained models on three test sets: in-distribution, neutral, and anti-bias sets. The in-distribution set is the test set that has the same proportion of biased sample as the training set. The neutral set has no correlation between the timbre Figure 2: The two types of shift considered in this study. and the prediction label. The anti-bias set has the same proportion of the biased samples as the training set, while the bias is reverted, e. g. major samples in square waves and minor ones in sine waves. The three test sets are built by extracting the required samples from the sine and square test sets to maintain low compute and time costs. **Model** We consider two models: baseline and Domain-Adversarial Neural Network (DANN). The baseline model is a SampleCNN[27] with 9 Res-2 blocks[28]. We set all kernel sizes, max-pooling sizes to 3 and stride sizes to 1 in all convolutional layers. The DANN model is a SampleCNN trained with the domain adversarial training algorithm developed by [16]. A DANN consists of a feature encoder, a classifier, and a domain discriminator. The domain discriminator distinguishes samples in sine and square waves. To make the results of the two models comparable, we use 1 Res-2 block in the feature encoder, 8 Res-2 blocks in the label classifier, and 2 Res-2 blocks in the domain discriminator. In this way, the total parameter sizes of the feature encoder and classifier are the same as the baseline model. ## 4 Result The main purpose of Synthia's melody is to provide a tool researchers can use to evaluate audio-based machine learning models in different dataset shift settings. To demonstrate Synthia's melody's utility, we evaluate the baseline and DANN models across varying levels of shift for a range of different shifts. **Domain Shift** Figure 3 details the logit output of the baseline model when evaluated on varying levels of domain shift. Here, we see the model performs poorly when no square samples appear in the training set, predicting all square samples as minor. Interestingly, we see a drastic change in behaviour when just two square samples are injected into the training, with the spread of logit scores becoming considerably broader. 
Inspecting Figure 4, which details the corresponding accuracy scores, we see that, despite the behaviour change in logit output, a considerable proportion of square samples are needed before model performance approaches that of an in-distribution test set. Here, we demonstrate how Synthia's Melody can be used to uncover interesting relationships between domain shift, model architecture, and performance. **Sample Selection Bias** Figure 6 shows the logit outputs of baseline models on the neutral test set with varying levels of sample selection bias in training data. We examine the model score from two aspects: key (Figure 5(a)) and the confounding wave shape (Figure 5(c)). At shift levels greater than 0.6, less confident predictions are made, first with major and then with minor classes. When the shift strength is the largest (shift level=1.0), the models do not learn the target task at all, making bias-leading predictions based on wave shapes only - see Figure 5(c). Figure 4: **Test accuracy of baseline models trained on datasets with 12 levels of domain shift. The experiment is repeated five times. The line represents the sample mean of the five test accuracies. The 95% error bands are calculated with \(\pm\) 2 standard deviations from the sample mean, assuming that the test accuracy follows a normal distribution.** Figure 5: **Comparison between baseline and DANN models test accuracy across 11 levels of sample selection bias. The experiment is repeated five times for baseline models and three times for DANN. The line represents the sample mean of the five test accuracies. The bands are calculated with \(\pm\) 1 standard deviation from the sample mean, assuming that the test accuracy follows a normal distribution.** Figure 3: **Distribution of model score of baseline models on the square test set with varying number of square samples (N) in the training data. Each point represents the sigmoid prediction of a square test sample. Colours represent the music key, where major samples are expected to have scores close to 1 and minors with 0.** accuracy on neutral and anti-bias sets starts to drop. Specifically, when the shift level equals 1.0, the anti-bias accuracy becomes 0.0, suggesting the baseline model does not learn any signals except the biases. Here, the shortcut is so strong that it acts as a mask, preventing the model from learning any other features despite having ample capacity. This is corroborated by the no-better-than-random neutral test set performance. Figure 7 shows the baseline model and DANN test accuracy on melodies with unseen wave shapes during training (sawtooth and triangle) as the level of sample selection bias (sine and square) increases in the training set. As shown in Figure 5, the test accuracy drops as the shift level increases, identifying a reliance on leveraging sine vs square timbre for key prediction. The result shows that DANN is more robust and generalises to unseen wave shapes better than the baseline model. Interestingly, the DANN trained on data with a shift level of 0.4 performs better on unseen wave shapes than ones trained with a lower shift level, such as 0.0 and 0.1. This suggests that injecting a small amount of bias in the training data may actually boost the DANN model's robustness. We hypothesise that a mild amount of biases increases the DANN model's incentive to learn features invariant to the bias feature as the reverted gradients from the discriminator will get larger. 
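To make the "reverted gradients" mechanism discussed here concrete, a gradient-reversal layer can be sketched in a few lines; this is our own generic PyTorch-style illustration, not the authors' implementation. The forward pass is the identity, while the backward pass multiplies the domain discriminator's gradient by \(-\lambda\) before it reaches the feature encoder.

```python
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)           # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # reverse and scale the incoming gradient; lam itself receives no gradient
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# typical use inside a DANN forward pass:
#   features      = encoder(waveform)
#   key_logits    = classifier(features)
#   domain_logits = discriminator(grad_reverse(features, lam))
```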
There is also the factor of the discriminator acting as a regulariser, injecting noise into the system, through the inverted gradients. Figure 6: **Comparison of Model and DANN model scores on the neutral test set.** (a) and (c) show the scores of the baseline model, while (b) and (d) show the scores of the DANN model. Figure 7: **Test accuracy of DANN on samples with unseen wave shapes during training.** The x-axis represents the proportion of the biased samples in the sine/square training data. The experiment is repeated five times for baseline models and three times for DANN. The line represents the sample mean and the error bands are calculated with \(\pm\) 1 standard deviation from the sample mean. ## 5 Conclusion We presented Synthia's melody, a robust framework for examining the susceptibility of deep audio models to distribution shifts. All melody samples are free of unobserved biases, given their synthetic nature. We detailed novel model behaviour under varying levels of shifts constructed via music timbre. We considered two distribution shifts, domain shift and sample selection bias, and two types of models, a SampleCNN baseline and DANN, where DANN is a SampleCNN trained with a domain-adaptation algorithm. In two types of shift, we showed baseline models make more confident predictions with lower shift levels and notably, in sample selection bias, we showed the injected bias stops the baseline models from learning the target task even if they have enough capacity to do so. The evaluation demonstrates that DANN is more robust to the injected bias and can boost the model's robustness up to, but not including, extreme levels of shift. Synthia's melody provides a robust testbed that allows for reproducible results and serves as an important evaluation framework for the development of future domain adaptation algorithms. Moving forward, the insights gained from Synthia's melody offer vital avenues for enhancing the resilience of deep audio models.
2309.03646
Differential geometric bifurcation problems in pde2path -- algorithms and tutorial examples
We describe how some differential geometric bifurcation problems can be treated with the MATLAB continuation and bifurcation toolbox pde2path. The basic setup consists in solving the PDEs for the normal displacement of an immersed surface $X\subset\mathbb{R}^3$ and subsequent update of $X$ in each continuation step, combined with bifurcation detection and localization, followed by possible branch switching. Examples treated include some minimal surfaces such as Enneper's surface and a Schwarz-P-family, some non-zero constant mean curvature surfaces such as liquid bridges and nodoids, and some 4th order biomembrane models. In all of these we find interesting symmetry breaking bifurcations. Some of these are (semi)analytically known and thus are used as benchmarks.
Alexander Meiners, Hannes Uecker
2023-09-07T11:25:08Z
http://arxiv.org/abs/2309.03646v1
# Differential geometric bifurcation problems in pde2path - algorithms and tutorial examples ###### Abstract We describe how some differential geometric bifurcation problems can be treated with the Matlab continuation and bifurcation toolbox pde2path. The basic setup consists in solving the PDEs for the normal displacement of an immersed surface \(X\subset\mathbb{R}^{3}\) and subsequent update of \(X\) in each continuation step, combined with bifurcation detection and localization, followed by possible branch switching. Examples treated include some minimal surfaces such as Enneper's surface and a Schwarz-P-family, some non-zero constant mean curvature surfaces such as liquid bridges and nodoids, and some 4th order biomembrane models. In all of these we find interesting symmetry breaking bifurcations. Some of these are (semi)analytically known and thus are used as benchmarks. ###### Contents * 1 Introduction * 2 Geometric background, and data structures * 2.1 Differential geometry * 2.2 Default data and initialization of a pde2path struct p * 2.3 pde2path setup for discrete differential geometry * 2.3.1 Discrete differential geometry FEM operators * 2.3.2 The pde2path library Xcont * 3 Example implementations and results * 3.1 Spherical caps * 3.2 Some minimal surfaces * 3.2.1 Prescribing one component of \(X\) at the boundary * 3.2.2 A Plateau problem * 3.2.3 Bifurcation from the Enneper surface * 3.3 Liquid bridges and nodoids * 3.3.1 Nodoid theory * 3.3.2 Nodoid continuation with fixed boundaries * 3.3.3 Short nodoids * 3.3.4 Long nodoids * 3.4 Nodoids with pBCs in \(z\) * 3.5 Triply periodic surfaces * 3.5.1 The Schwarz P minimal surface (family) * 3.5.2 CMC companions of Schwarz P * 3.6 Fourth order biomembranes, cylinders * 4 3.6.1 Continuation in the spontaneous curvature * 3.6.2 Intermezzo: Other radii * 3.6.3 Continuation in surface tension * 3.7 Biomembrane caps * 4 Summary and outlook * A Spheres, hemispheres, VPMCF, and an alternative setup * A.1 Spheres * A.2 Hemispheres * A.3 Spherical caps via 2D finite elements ## 1 Introduction There are various algorithms and toolboxes for the numerical continuation and bifurcation analysis for solutions of partial differential equations (PDEs), for instance AUTO [10], Coco[11], BifurcationKit.jl[21], and pde2path[21, 22]. In its standard setup, pde2path is for PDEs for functions \(u:\Omega\times\Lambda\to\mathbb{R}^{N}\), where \(\Omega\subset\mathbb{R}^{d}\) is a fixed domain, \(d=1,2\), or \(3\), \(N\in\mathbb{N}\), and \(\Lambda\subset\mathbb{R}^{p}\) is a parameter domain, or for time-dependent functions \(u:I\times\Omega\times\Lambda\to\mathbb{R}^{N}\), \(I\subset\mathbb{R}\), which then includes the continuation and bifurcation of time periodic orbits. Essentially, this also applies to BifurcationKit.jl, while wrt PDEs AUTO originally focusses on 1D boundary value problems, and Coco in principle allows great flexibility by delegating the PDE definition/discretization to the user. However, none of these packages seem directly applicable to differential geometric PDEs in parametric form, which deal directly with manifolds, e.g., surfaces in 2D, which are not graphs over a fixed domain. There are well established numerical methods for the discretization of such PDEs, for instance the surface finite element method (surface FEM) [11], but there seem to be few algorithms or packages which combine these with continuation and bifurcation. 
Two notable exceptions are the algorithm from [14], and the SurfaceEvolver, for which bifurcation aspects are for instance discussed in [1]. Here we present an extension of pde2path aimed at geometric PDE bifurcation problems. We focus on constant mean curvature (CMC) surfaces, which are not necessarily graphs, and with, e.g., the mean curvature, or the area or enclosed volume as the primary bifurcation parameter. See Fig. 1 for a preview of the type of solutions we compute. For \(X\) a two dimensional surface immersed in \(\mathbb{R}^{3}\), we for instance want to study the parameter dependent problem \[H(X)-H_{0} =0, \tag{1a}\] \[V(X)-V_{0} =0, \tag{1b}\] possibly with boundary conditions (BCs) in (a), where \(H(X)\) is the mean curvature at each point of \(X\), and \(V(X)\) is the (properly defined) volume enclosed by \(X\). The system (1) is obtained for minimizing the area \(A(X)\) under the volume constraint \(V(X)=V_{0}\), i.e., as the Euler-Lagrange equations for minimizing the energy \[E(X)=A(X)+H_{0}(V(X)-V_{0}), \tag{2}\] and \(V_{0}\in\mathbb{R}\) typically plays the role of an "external continuation parameter", while \(H_{0}\), which for instance describes a spatially constant pressure difference for interfaces between fluids, is "free". Following [14], our setting for (1) and generalizations is as follows. Let \(X_{0}\) be a surface satisfying (1) for some \(V_{0}\) and \(H_{0}\), and define a new surface via \(X=X_{0}+u\,N_{0}\), \(u:X_{0}\to\mathbb{R}\) with suitable boundary conditions, where \(N_{0}:X_{0}\to\mathbb{S}^{2}\) is (a choice of) the unit normal vector field of \(X_{0}\) Then (1) reads \[G(u,\widetilde{H})=H(X)-\widetilde{H}=0\text{ (with boundary conditions if applicable)},\] (3a) which is a quasilinear elliptic equation for \[u\] coupled to the volume constraint \[q(u)=V(X)-\widetilde{V}. \tag{3b}\] Thus, \[\text{after solving (\ref{eq:volume}) for $u,\widetilde{H},\widetilde{V}$ we can update $X_{0}=X_{0}+uN_{0}$, $H_{0}=\widetilde{H},V_{0}=\widetilde{V}$, and repeat}. \tag{4}\] We generally compute (approximate), e.g., the mean curvature \(H\) from a surface FEM discretization of \(X\), see SS2.3. Alternatively, we may assume a parametrization \(\phi:\Omega\to\mathbb{R}^{3}\) of \(X\) over some bounded domain \(\Omega\subset\mathbb{R}^{2}\), and compute, e.g., \(H\) from a classical 2D FEM mesh in \(\Omega\), although this is generally more complicated and less robust than using surface meshes, and we mainly review it for completeness in App. A.3. Both approaches can and usually must be combined with adaptive mesh refinement and coarsening as \(X\) changes. Both can also be applied to other geometric PDEs, also of higher order, for instance fourth order biomembrane model, see SS3.6. In this case, the analog of (3a) can be rewritten as a system of (2nd order) PDEs for a vector valued \(u\), and the same ideas apply. Figure 1: Preview of solutions (solution branches) we compute. (a) \(H\) over \(V\) for spherical caps, and sample solutions, §3.1. The colors indicate \(u\) in the last step, yellow\(>\)blue, and thus besides giving visual structure to \(X\) indicate the “direction” of the continuation. \(H\) is negative since here \(N\) is the outer normal. (b) Enneper minimal surface (a bounded part, with the boundary shown in red), §3.2.3. (c) A liquid bridge between two circles, with excess volume and hence after a symmetry breaking bifurcation, §3.3.3. (d) A nodoid with periodic BCs, cut open for plotting, §3.4. (e) A Schwarz P surface, §3.5.1. 
Samples (b)–(e) are each again from branches of solutions of the respective problems, see Figures 8, 9,14, and 15 for the bifurcation diagrams. Our work comes with a number of demos which are subdirectories of pde2path/demos/geomtut, see Table 1. The rather large number of demos is aimed at showing versatility, and, more importantly, is due to our own needs for extensive testing, in particular of various BCs, and of different mesh handling strategies. Table 2 summarizes acronyms and notation used throughout, and Fig.2 explains the basic installation steps for pde2path. See also [4] for a quick overview of installation and usage of pde2path, and of all demos coming with pde2path, and [10] or [11, Chapters 5 and 6] for getting started with pde2path via simple classical PDEs. **Remark 1.1**: Here we focus on stationary problems of type (1), which give critical points of the volume preserving mean curvature flow (VPMCF). A time \(t\) dependent 2D manifold \(X(t)\subset\mathbb{R}^{3}\) deforms by mean curvature flow (MCF) if, assuming the correct sign for \(H\), i.e., \(H>0\) for \(X\) bounding a convex body and \(N\) the inner normal, \[\dot{X}=-H(X)N. \tag{5}\] This is the \(L^{2}\) gradient flow for the area functional \(A(X)\), and can be considered as a quasilinear parabolic PDE, at least on short times. For closed and compact \(X\) there always is finite time blowup (generically by shrinking to a "spherical point"), and we refer to [14] for an introduction to this huge field, which inter alia heavily relies on maximum (comparison) principles. \begin{table} \begin{tabular}{l l} directory & remarks \\ \hline spcap1 & Spherical caps via surface meshes, introductory demo. \\ bdcurve & Experiments on minimal surfaces with different boundary curves. \\ enneper & Bifurcation from Enneper’s surface, closely related to bdcurve. \\ nodDBC & Nodoids with Dirichlet BCs, including so called liquid bridges. \\ nodpBC & Nodoids with periodic BCs. \\ TPS & Triply Periodic Surfaces, here Schwarz P. \\ biocyl & Helfrich cylinders with clamped BCs as an example of a 4th order problem. \\ biocaps & Disk type solutions as a variant of biocyl. \\ \hline spheres & Continuation of spheres, and tests for VPMCF, SSA.1. \\ hemispheres & Continuation of hemispheres on a supporting plane, and VPMCF, SSA.2. \\ spcap2 & Spherical caps via 2D FEM in the preimage, SSA.3. \\ \end{tabular} \end{table} Table 1: Demo directories in pde2path/demos/geomtut. The first two and last three are rather introductory and not dealing with bifurcations. \begin{table} \begin{tabular}{l|l|l|l} \(X\) & surface immersed in \(\mathbb{R}^{3}\) & \(N=N(X)\) & surface unit normal \\ \(A\)=\(A(X)\)=\(A(u)\) & area of \(X\), resp. of \(X\)=\(X_{0}\)+\(uN_{0}\) & \(V=V(X)\) & (algebraic) volume, e.g., (14) \\ \(H=H(X)\) & mean curvature, e.g., (12) & \(K=K(X)\) & Gaussian curvature \\ \(G(u,\lambda)=0\) & generic form of a PDE such as & \(\mathrm{ind}(X)\) & index, i.e., number of unstable \\ & (3a), \(\lambda\) as a generic parameter & & eigenvalues of linearization \\ \(L=\partial_{u}H(u)\) & Jacobi op. 
(with BCs) & \(q(u,\lambda)=0\) & generic constraint such as (3b) \\ \hline BC & boundary condition & DBC/NBC & Dirichlet/Neumann BC \\ pBC & periodic BC & PC & phase condition \\ BP/FP & branch/fold point & CMC & constant mean curvature \\ TPS & triply periodic surface & TPMS & triply periodic minimal surface \\ MCF & mean curvature flow & VPMCF & volume preserving MCF \\ \end{tabular} \end{table} Table 2: Notations and acronyms; for given \(X_{0}\), quantities of \(X=X_{0}+uN_{0}\) will also be considered as functions of \(u\), e.g., \(A(u)=A(X_{0}+uN_{0})\). * Make a directory, e.g., myp2p anywhere in your path. Download pde2path from [11] to myp2p and unpack, which gives you the pde2path home directory myp2p/pde2path. * In Matlab, change into pde2path/ and call setpdepath to make the libraries available. (We also recommend ilupack [1], which is not used here but otherwise in pde2path for large scale problems). * Test, i.e: change directory into pde2path/demos/geomtut/spcap1, load the script cmds1.m into the editor (i.e., type edit cmds1.m at the command line). To get an understanding what command does what, we then recommend to run cmds1.m in "cell-mode", i.e., to proceed "cell-by-cell". * Find a demo that is closest to the problem you want to study; copy this demo directory to a new directory myproplem/ or any other name (we recommend not as a subdirectory of pde2path but _somewhere else_, for instance in a subdirectory myproplems of myp2p. In myproplem, modify the relevant files (usually at least *init and cmds) and explore. The VPMCF reads \[\dot{X}=-(H(X)-\overline{H})N,\quad\overline{H}=\frac{1}{A(X)}\int_{X}H\, \mathrm{d}S, \tag{6}\] and for closed \(X\) conserves the enclosed volume \(V(X)\). For non-closed \(X\) one typically studies Neumann type BCs on "support planes", see, e.g., [10] and in most cases the analysis is done near axisymmetric states such as spheres, spherical caps, and cylinders. In general, the existence and regularity theory for (6) is less well understood than for (5) due to the lack of general maximum principles for (6). Our notion of stability of solutions of (1) (indicated by thick lines in bifurcation diagrams, while branches of unstable solutions are drawn as thinner lines) refers to (6) if we have an active volume constraint such as (3b), and to (5) if not, with the exception of the fourth order problems in SS3.6, see Remark 3.12. Moreover, by \(\mathrm{ind}(X)\) we denote the number of unstable eigenvalues of the linearization of (the discretization of) (3), including the constraints (if active). We also provide very basic setups to numerically integrate (5) and (6) by explicit Euler stepping. This often has to be combined with mesh adaptation, and in this case \(A\) does not necessarily decrease monotonously for MCF. Moreover, our VPMCF typically conserves \(V\) only up to \(0.5\%\) error. Thus, both are not necessarily efficient or highly accurate, but can be used to generate initial guesses for the continuation of steady states of (1). See SS3.1, SS3.2.3 (MCF) and SSA.1, SSA.2 (VPMCF) for examples, and, e.g., [1, 2, 3] for much more sophisticated numerical algorithms for geometric flows including (5) and (6), and detailed discussion. \(\rfloor\) The remainder of this note is organized as follows: In SS2 we review some differential geometric background, and the pde2path data structures and functions to deal with geometric PDEs. The central SS3 discusses the main demos, and in SS4 we give a summary, and an outlook on ongoing and future work. 
In SSA we comment on the further demos spheres and hemispheres, which do not show bifurcations but deal with VPMCF and Neumann type free BCs, and present a classical FEM setup for spherical caps. See also [14] for supplementary information (movies) on some of the rather complicated bifurcation diagrams we obtain. Figure 2: Installation of pde2path (version 3.1), and a “typical” directory structure of myp2p. Geometric background, and data structures ### Differential geometry We briefly review the geometric PDE setup, and recommend [10, 11, 12] for further background, among many others. Throughout, let \(\Sigma\) be a 2D connected compact orientable manifold, with coordinates \(x,y\), and possibly with boundary \(\partial\Sigma\), and for some \(\alpha\in(0,1)\) immersed by \(X\in C^{2,\alpha}(\Sigma,\mathbb{R}^{3})\). By pulling back the standard metric of \(\mathbb{R}^{3}\) we obtain the first and second fundamental forms on \(\Sigma\) expressed via \(X\) as \[g=\begin{pmatrix}g_{11}&g_{12}\\ g_{12}&g_{22}\end{pmatrix}=\begin{pmatrix}\|X_{x}\|^{2}&\langle X_{x},X_{y} \rangle\\ \langle X_{x},X_{y}\rangle&\|X_{y}\|^{2}\end{pmatrix},\qquad h=\begin{pmatrix} h_{11}&h_{12}\\ h_{21}&h_{22}\end{pmatrix}=\begin{pmatrix}\langle X_{xx},N\rangle&\langle X_{ xy},N\rangle\\ \langle X_{xy},N\rangle&\langle X_{yy},N\rangle\end{pmatrix}, \tag{7}\] with unit normal \(N\), which we consider as a field on \(\Sigma\), or locally on \(X\), which will be clear from the context. The mean curvature \(H\) then is \[H=\frac{1}{2}\frac{h_{11}g_{22}-2h_{12}g_{12}+h_{22}g_{11}}{g_{11}g_{22}-g_{1 2}^{2}}, \tag{8}\] which is the mean of the minimal and maximal normal curvatures \(\kappa_{1}\) and \(\kappa_{2}\). The Gaussian curvature is \[K=\kappa_{1}\kappa_{2}. \tag{9}\] The sign of \(H\) depends on the orientation of \(X\), i.e., on the choice of \(N\). A sphere has positive \(H\) iff \(N\) is the inner normal. The Gaussian curvature does not depend on \(N\) or any isometry of \(\Sigma\) (Gauss' Theorema egregium). A generalization of the directional derivative of a function \(f\) to vector fields or tensors is the covariant derivative \(\nabla_{Z}\) for some vector field \(Z\) on \(X\). For a vector field \(Y\), the covariant derivative in the \(j\)'th coordinate direction is defined as \(\nabla_{j}Y_{i}:=\frac{\partial\dot{\nabla}_{Z}}{\partial x_{j}}+\Gamma^{i}_{ jk}Y_{k}\), and for a 1-form \(\omega\) we have \(\nabla_{j}\omega_{i}:=\frac{\partial\omega_{i}}{\partial x_{j}}-\Gamma^{i}_{ jk}\omega_{k}\), with the Christoffel symbols \(\Gamma^{i}_{jk}=\frac{1}{2}g^{jl}(\partial_{x_{j}}g_{kl}+\partial_{x_{k}}g_{ jl}-\partial_{x_{l}}g_{jk})\), where \(g^{ij}\) are the entries of \(g^{-1}\), and where we use Einstein's summation convention, i.e., summation over repeated indices. The covariant derivative is linear in the first argument, giving a general definition of \(\nabla_{Z}Y\) with some vector field \(Z\), and if \(f\) is a function on \(X\), then \[\nabla_{Z}f=\langle g\nabla f,Z\rangle_{\mathbb{R}^{2}}\,. \tag{10}\] Throughout we are dealing with surfaces (2 dimensional manifold immersed into \(\mathbb{R}^{3}\)), hence the gradient \(\nabla\) is the _surface gradient_, i.e., the usual gradient \(\nabla_{\mathbb{R}^{d}}\) in \(\mathbb{R}^{3}\) projected onto the tangent space, \[\nabla f=\nabla_{\mathbb{R}^{3}}f-\langle\nabla_{\mathbb{R}^{3}}f,N\rangle\,N, \tag{11}\] which later will be needed to (formulate and) implement phase conditions, and, e.g., Neumann type BCs. 
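As a simple explicit illustration of (7)–(9) (and of the role of the orientation), consider the cylinder \(X(x,y)=(R\cos x,R\sin x,y)\) with outer normal \(N=(\cos x,\sin x,0)\): then
\[g=\begin{pmatrix}R^{2}&0\\ 0&1\end{pmatrix},\qquad h=\begin{pmatrix}-R&0\\ 0&0\end{pmatrix},\qquad\text{hence}\qquad \kappa_1=-\frac1R,\ \ \kappa_2=0,\qquad H=-\frac{1}{2R},\qquad K=0.\]
Such explicit values serve as convenient checks for the discrete curvature operators of §2.3.1 (cf. Fig. 3 for the unit sphere, where \(H=-1\), \(K=1\) with the outer normal).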
This also gives the _Laplace Beltrami operator_ via \[\Delta f=g^{ij}\nabla_{i}\nabla_{j}f,\] which then also applies to general tensors. Using the Gauss-Weingarten relation \(\frac{\partial^{2}X}{\partial x_{i}\partial x_{j}}=\Gamma^{k}_{ij}\frac{ \partial X}{\partial x_{k}}+h_{ij}N\) we obtain \[\Delta X=g^{ij}\nabla_{i}\nabla_{j}X=g^{ij}\left(\frac{\partial^{2}X}{\partial x_{i} \partial x_{j}}-\Gamma^{i}_{jk}\frac{\partial X}{\partial x_{k}}\right)=g^{ij}h _{ij}N=2H(X)N=2\vec{H}(X),\] where \(\vec{H}(X)\) is called the mean curvature vector, and \[H(X)=\frac{1}{2}\left\langle\Delta X,N\right\rangle. \tag{12}\] The area of \(X\) is \[A(X)=\int_{X}\,\mathrm{d}S, \tag{13}\] and, based on Gauss' divergence theorem, the (algebraic) volume is \[V(X)=\frac{1}{3}\int_{X}\left\langle X,N\right\rangle\,\mathrm{d}S. \tag{14}\] If \(X\) is a closed manifold bounding \(\Omega\subset\mathbb{R}^{3}\), i.e., \(\partial\Omega=X\), and \(N\) the outer normal, then \(V(X)=|\Omega|\) is the physical volume. If \(X\) is not closed, then we typically need to add a third of the flux of \(\vec{x}\) through the open ends to \(V(X)\) (see the examples below). We denote the set of all immersed surfaces with the same boundary \(\gamma\) by \[\mathcal{N}_{\gamma}=\{X:X\text{ is an immersed surface as above and }\partial X=\gamma\}. \tag{15}\] The following Lemma states that all immersions \(Y\in\mathcal{N}_{\gamma}\) close to \(X\) are graphs over \(X\) determined by a function \(u\) as \(Y=X+uN\), which justifies our numerical approach (4). The condition that \(Y\) has the same boundary as \(X\) in general cannot be dropped, as obviously motions of \(\partial X\) tangential to \(X\) cannot be captured in the form \(X+uN\); see SS3.2.1 for an illustration. **Lemma 2.1**: _[_KPP17_]__. For \(X\in C^{2,\alpha}(\Sigma,\mathbb{R}^{3})\) with boundary \(\partial X=\gamma\) there exists a neighborhood \(U\subset C^{2,\alpha}(\Sigma,\mathbb{R}^{3})\) of \(X\) such that for all \(Y\in U\cap\mathcal{N}_{\gamma}\) there exists a diffeomorphism \(\phi:\Sigma\to\Sigma\) and a \(u\in C^{2,\alpha}(\Sigma)\) such that_ \[Y\circ\phi=X+u\,N. \tag{16}\] Assume that a CMC surface \(X_{0}\) with boundary \(\partial X_{0}\)=\(\gamma\) and volume \(V(X_{0})\)=\(V_{0}\) belongs to a family of CMC surfaces \(X_{t}\), \(t\in(-\varepsilon,\varepsilon)\) for some \(\varepsilon>0\). For example, the spherical caps \(S_{t}\) from Fig. 1(a) with the boundary \(\gamma=\{(x,y,0)\in\mathbb{R}^{3}:x^{2}+y^{2}=1\}\) are a family of CMC immersions fully described by the height \(t\in\mathbb{R}\). By Lemma 2.1, the parameter \(t\) uniquely defines \(u\) in a small neighborhood, i.e., \(X_{t}=X+u\,N\), and the system of equations for \(u\) reads \[H(u)-H_{0}=0, \tag{17}\] for some \(H_{0}\in\mathbb{R}\), where we abbreviate \(H(u)=H(X+u\,N)\), etc. If we consider variational vector fields at \(X_{0}\) in the form \(\psi=\frac{\partial X_{t}}{\partial t}|_{t=0}=uN\), and additionally assume that \(X_{t}\in\mathcal{N}_{\gamma}\), then necessarily \[u|_{\partial X}=0,\text{ Dirichlet boundary conditions (DBCs)}. \tag{18}\] Such \(X_{t}\) are called _admissible variations_ in [13, SS2.1], and we have the following results on derivatives of \(A\) and \(V\). 
**Lemma 2.2**: _[_11_, SS2.1]_ _For an admissible one parameter variation \(X_{t}\) of \(X\in C^{2,\alpha}(\Sigma)\) and variational vector fields \(\psi=\frac{\partial X}{\partial t}\big{|}_{t=0}=uN\) the functions \(t\mapsto A(t)=A(X_{t})\) and \(t\mapsto V(t)=V(X_{t})\) are smooth, and_ \[V^{\prime}(0)=\int_{X_{0}}u\,\mathrm{d}S,\quad A^{\prime}(0)=-2\int_{X_{0}}H_{0 }u\,\mathrm{d}S,\text{ and }A^{\prime\prime}(0)=-\int_{X_{0}}(\Delta u+\|S_{0}\|^{2}u)u\, \mathrm{d}S, \tag{19}\] _where \(\|S_{0}\|^{2}=4H_{0}^{2}-2K_{0}\) with the Gaussian curvature \(K_{0}\). Thus_ \[\frac{\mathrm{d}}{\mathrm{d}t}H(X_{t})\Big{|}_{t=0}=-\Delta u-\|S_{0}\|^{2}u, \tag{20}\] _and the directional derivative (20) is given by the self-adjoint Fredholm operator \(L\) on \(L^{2}(X_{0})\) with_ \[L=\partial_{u}H(0)=-\Delta-\|S_{0}\|^{2},\quad\text{with DBCs}. \tag{21}\] **Remark 2.3**: a) The operator in (21) without BCs is called _Jacobi operator_, and a nontrivial kernel function is called a _Jacobi field_ on \(X=X_{0}\). An immersion \(X\) with a Jacobi field satisfying the BCs is called _degenerate_. The Fredholm property allows the use of the Crandall-Rabinowitz bifurcation result [10]: Given a \(C^{1}\) branch \((t_{0}-\varepsilon,t_{0}+\varepsilon)\ni t\mapsto X_{t}\), if \(X_{t}\) is non-degenerate for \(t\in(t_{0}-\varepsilon,t_{0})\cup(t_{0},t_{0}+\varepsilon)\), and if at \(t_{0}\) a _simple_ eigenvalue \(t\mapsto\mu_{0}(t)\) crosses transversally, i.e., \(\mu(t_{0})=0\), \(\mu_{0}^{\prime}(t_{0})\neq 0\), then a branch \(\widetilde{X}_{t}\) bifurcates at \(t_{0}\). See also [10] for a formulation via Morse indices \(\mathrm{ind}(X_{t})\)=number of negative eigenvalues of \(L\), counted with multiplicity, used to find bifurcation points in families of nodoids, which we shall numerically corroborate in SS3.3.2. An equivariant version can be found in [11, Theorem 5.4], applied to bifurcations of triply periodic minimal surfaces, for which linearizations always have a trivial 5 dimensional kernel due to translations and rotations, see SS3.5 for numerical illustration. See also [12, 13, 14] and [15, Chapters 2 and 3] for general discussion of Crandall-Rabinowitz type results, and of Krasnoselski type results (odd multiplicity of critical eigenvalues, based on degree theory), including equivariant versions. b) Besides the (zero) DBCs (18) corresponding to a fixed boundary \(\partial X=\gamma\), we shall consider so called free boundaries of Neumann type. This means that \(\partial X\subset\Gamma\), where \(\Gamma\subset\mathbb{R}^{3}\) is a fixed 2D support manifold (e.g., a plane), and that \(X\) intersects \(\Gamma\) orthogonally. Following [13], we summarize the second derivative of \(A\) in this case as follows: if \(h_{\Gamma}\) is the second fundamental form of \(\Gamma\), and \(\psi=uN\), then \(N|_{\partial X}\) is tangent to \(\Gamma\), such that \(h_{\Gamma}(N,N)\) is well defined, and \[A^{\prime\prime}(0)=-\int_{X_{0}}(\Delta u+\|S_{0}\|^{2}u)u\,\mathrm{d}S-\int _{\partial X_{0}}h_{\Gamma}(N,N)u^{2}\,\mathrm{d}s. \tag{22}\] Note that the term \(\int_{\partial X_{0}}\ldots\) in (22) vanishes if \(\Gamma\) is a plane and hence \(h_{\Gamma}\equiv 0\). c) The formulas (19)-(21) translate to our discrete computational setting in a straightforward way, see SS2.3.1. However, given some \(X_{0}\), we compute \(X=X_{0}+uN_{0}\) via Newton loops for iterates \(u_{n}\) with derivatives (of \(V,A\) and \(H\)) evaluated at \(X_{n}=X_{0}+u_{n}N_{0}\), and hence the formulas are accordingly adjusted. 
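d) A classical illustration of a): for the closed unit sphere (no BCs) we have \(H_0=\pm1\), \(K_0=1\), hence \(\|S_0\|^2=4H_0^2-2K_0=2\), and the Jacobi operator (21) becomes \(L=-\Delta-2\). Its kernel is three dimensional, spanned by the restrictions of the coordinate functions \(x_1,x_2,x_3\) to the sphere (the degree one spherical harmonics, which are eigenfunctions of \(-\Delta\) with eigenvalue 2). The associated Jacobi fields \(u=\langle e_i,N\rangle\) are precisely the infinitesimal translations of the sphere, while rotations give no normal displacement; this is the simplest instance of the translational and rotational kernels mentioned above for the triply periodic case.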
\(\rfloor\) ### Default data and initialization of a pde2path struct p Before explaining the modifications needed for the geometric problems, we briefly review the standard setup of pde2path, and as usual assume that all problem data is contained in the Matlab struct p as in problem. In the standard FEM setting this includes the object p.pdeo (with sub-objects fem and grid), which provides methods to generate FEM meshes, to code BCs, and to assemble FEM matrices M (mass matrix) and K (e.g., Laplacian), or directly a rhs \(G\). Typical initializations and first continuation steps in the FEM setup (for semilinear problems) then run as follows, where steps 1,2 and 5 are usually combined into an init-function. 1. Call p=stanparam() to initialize most fields in p with default values (see source of stanparam.m for default fields and values). 2. Call a pdeo constructor, for instance p.pdeo=stanpdeo2D(p,vararg), where here and in the following vararg stands for variable arguments. 3. In a function oosetfemops (in the current directory), use p.pdeo.assema to generate a mass matrix p.mat.M and a stiffness matrix p.mat.K (typically corresponding to \(-\Delta\)), and possibly further FEM matrices, e.g., for BCs. 4. Use p.mat.M and p.mat.K in a function r=sG(p,u) to encode the PDE, and optionally the Jacobian in Gu=sGjac(p,u) (usually recommended, but numerical Jacobians are also supported). The input argument u contains the "PDE unknowns" \(u\) and the parameters appended at the end. If required by the problem, similarly create a function q=qf(p,u) for the constraints as in (3b), and a function qu=qder(p,u) for the derivatives of qf. 5. Initialize p.u with a first solution (or a solution guess, to be corrected in a Newton loop). 6. Call p=cont(p) to (attempt to) continue the initial solution in some parameter, including bifurcation detection, localization, and saving to disk. 7. Call p=swibra(dir,bpt,newdir) to attempt branch switching at branch point bpt in directory dir; subsequently, call p=cont(p) again, with saving in newdir. 8. Perform further tasks such as fold or branch-point continuation; use plotbra(dir,pt,vararg) to plot bifurcation diagrams, and plotsol(dir,pt,vararg) to plot sample solutions. **Remark 2.4**: The rhs, Jacobian, and some further functions needed/used to run pde2path on a problem p, are interfaced via function handles in p.fuha. For instance, you can give the function encoding your rhs \(G\) any name, e.g., myrhs, with signature res=myrhs(p,u), and then set p.fuha.sG=@myrhs, but you can also simply keep the "standard names" sG and sGjac and encode these in the respective problem directory. For many handles in p.fuha there are standard choices which we seldomly modify, e.g., p.fuha.headfu=@stanheadfu (the header for printouts). Functions for which the "default choice" is more likely to be modified include, e.g., * p.fuha.outfu=@stanbra, signature out=stanbra(p,u), branch output; * p.fuha.lss=@lss, signature [x,p]=lss(A,b,p), linear systems solver \(x=A^{-1}b\). Other options include, e.g., lssbel (bordered elimination) and lssAMG (preconditioned GMRES using ilupack[11]). During continuation, the current solution is plotted via plotsol(p), and similarly for a posteriori plotting (from disk). The behavior of plotsol is controlled by the subfields of p.plot (and possible auxiliary arguments), and if p.plot.pstyle=-1, then plotsol immediately calls a function userplot, to be user-provided. 
Such user functions naturally must be in the Matlab-path, typically in the current problem directory, which Matlab scans first when looking for a file. We sometimes also exploit this to overload pde2path library functions that need modifications for a given problem. \(\rfloor\) ### pde2path setup for discrete differential geometry #### 2.3.1 Discrete differential geometry FEM operators We recall a few discrete differential geometry operators from [13, 14], and shall use implementations of them from the gptoolbox[14]. Given a triangulation \(\texttt{X}\in\mathbb{R}^{n_{p}\times 3}\) (point coordinates) and \(\texttt{tri}\in\mathbb{R}^{n_{t}\times 3}\) (triangle corner indices) of \(X\), and the piecewise linear element "hat" functions \(\phi_{i}:X\to\mathbb{R}\), \(\phi_{i}(X_{j})=\delta_{ij}\), we have \[\int\nabla\phi_{i}\nabla\phi_{j}\,\mathrm{d}S=-\frac{1}{2}(\cot \alpha_{ij}+\cot\beta_{ij})=:L_{ij}, \tag{23}\] where \(\alpha_{ij}\) and \(\beta_{ij}\) are the angles opposite the edge \(e_{ij}\) from point \(X_{i}\) to point \(X_{j}\). For \(u:X\to\mathbb{R}\), \(u=\sum_{i=1}^{n_{p}}u_{i}\phi_{i}\), this yields the FEM stiffness matrix \(Lu\) corresponding to the Laplace-Beltrami operator \(-\Delta u\) weighted by the mass matrix \(M\). In [10] it is explained that for geometric problems, with possibly rather distorted triangles, instead of the full mass matrix \(M^{\mathrm{full}}\) with \[M^{\mathrm{full}}_{\ ij}=\int\phi_{i}\phi_{j}\,\mathrm{d}S, \tag{24}\] the Voronoi mass matrix \[M=\mathrm{diag}(A_{1},\ldots,A_{n_{p}}), \tag{25}\] should be expected to give better approximations, see also Fig. 3. Here, \(A_{i}=\sum_{j=1}^{n_{i}}A_{m}(T_{j})\) is the area of the Voronoi region at node \(i\), where \(T_{j}\), \(j=1,\ldots,n_{i}\) are the adjacent triangles, and \(A_{m}(T)\) is a "mixed" area: For non-obtuse \(T\), \(A_{m}(T)\) is the area of the rhomb with corners in \(X_{i}\), in the midpoints of the edges adjacent to \(X_{i}\), and in the circumcenter of \(T\), while for obtuse \(T\) we let \(A_{m}(T):=|T|/2\) if the angle at \(X_{i}\) is obtuse, and \(A_{m}(T):=|T|/4\) else. Alltogether, this yields the approximation \[-\Delta u=M^{-1}Lu, \tag{26}\] where \(M\) from (25) is diagonal, and \(L\) and \(M\) are evaluated very efficiently via cotmatrix and massmatrix from the gptoolbox, see Table 3. However, as we always consider our problems such as (3) in weak form, we let \(\mathtt{H}=-\frac{1}{2}\left\langle LX,N\right\rangle\), where for the vertex normals \(N\) we can use per_vertex_normals, and the weak form of, e.g., \(H-H_{0}\)=0 then is \[-\left\langle LX,N\right\rangle-2MH_{0}=0, \tag{27}\] again with Voronoi \(M\). Alternatively, we use [k,H,K,M]=discrete_curvatures(X,tri), where K and \(\mathtt{k}=(k_{1},k_{2})\) are the (weighted, i.e., weak) discrete Gaussian and principal curvatures per vertex; these are computed from a discrete version of the Gauss-Bonnet theorem.1 Namely Footnote 1: On a manifold \(X\) with boundary \(\partial X\) we have \(\int_{X}K\,\mathrm{d}S+\int_{\partial X}\kappa_{g}\,\mathrm{d}s=2\pi\chi(X)\) where \(\chi(X)\) is the Euler characteristic of \(X\), and \(\kappa_{g}\) is the geodesic curvature of \(\partial X\). This will play an important role for the biomembranes in §3.6. The discrete formula (28) is used at interior points of \(X\), while at boundary points \(X_{i}\) it is modified to \(K(X_{i})-\pi\). 
\[\mathtt{K}(X_{i})=2\pi-\sum_{j=1}^{n_{i}}\theta_{j},\quad\text{(and $k_{1}=H+\sqrt{D}$ and $k_{2}=H-\sqrt{D}$)}, \tag{28}\] where the \(\theta_{j}\) are the angles at \(X_{i}\), and where the discriminant \(D=H^{2}-K\) (which is non-negative in the continuous case) in the discrete case is set to 0 if negative. An approximations of \(K\) is then obtained (cheaply, since \(M\) is diagonal) from \[K=M^{-1}\mathtt{K}. \tag{29}\] There are further schemes for \(H\) and \(K\), with different convergence behaviors, see [11] and the references therein. Numerical experiments in [17] show that a variety of natural schemes for \(-\Delta\) in general do not converge, but that \(M^{-1}L=-\Delta+\mathcal{O}(h^{2})\) with Voronoi \(M\) at valence 6 nodes (6 neighbors) [17, Theorem 2.1], where \(h\) is a suitable triangle diameter. In Fig. 3 we give an illustration of the error and convergence behavior of our discrete \(H=-\frac{1}{2}M^{-1}\left\langle LX,N\right\rangle\) based on (26), and of \(K\) from (29) on (coarse) discretizations of the unit sphere, obtained via subdivided_sphere with 2 (a) resp. 3 (b) subdivisions, and with \(N\)=outer normal. Therefore, \(H=-1\), \(K=1\) are the exact values, and the two left columns indicate the convergence for \(H\), but also that the node valence plays a role on these otherwise very regular meshes.2 However, the last column shows that using \(M^{\text{full}}\) in this example, i.e., \(H_{\text{full}}=-\frac{1}{2}M^{\text{full}-1}\left\langle LX,N\right\rangle\) gives a significant error (and similarly in \(K\)), and in fact no convergence at the valence 5 nodes; see spheres/convtests.m for the source of these experiments. Footnote 2: Euler’s polyhedron formula yields that triangulations with all nodes of valence 6 do not exist, see, e.g., [1]. #### 2.3.2 The pde2path library Xcont Table 3 lists the main (for us) functions from the gptoolbox[18], which we interface by functions from the pde2path library Xcont. The most important new data for continuation of a surface \(X\) are p.X and p.tri, which essentially replace the data in p.pdeo.grid. The most important switch, which also modifies the behavior of some standard pde2path functions, is \[\text{p.sw.Xcont}=\left\{\begin{array}{ll}0&\text{legacy setting (no X),}\\ 1&\text{switch on X--continuation (default),}\\ 2&\text{refined setting for X--continuation.}\end{array}\right. \tag{30}\] The difference between p.sw.Xcont=1 and p.sw.Xcont=2 is as follows: For p.sw.Xcont=1 we update p.X after convergence of the Newton loop, i.e., set p.X=p.X+u*N0, p.up=u (for plotting, see Remark 2.5(b)), and u(1:p.np)=0 (zeroing out \(u\) for the next continuation step). For p.sw.Xcont=2 we do all this after each successful Newton-step, such that we obtain slightly different (more accurate) Jacobians as we also have a new \(N\). A side-effect is that the last update in the Newton loop and Figure 3: Discrete \(H\) (and \(K\)) on (coarse) meshes of the unit sphere (plots cropped). Two left columns: Convergence for \(H=-\frac{1}{2}M^{-1}\left\langle LX,N\right\rangle\) and \(K=M^{-1}\)K with Voronoi \(M\). Right column: No convergence for \(H\) (and similar for \(K\)) at valence 5 nodes when using \(M^{\text{full}}\). hence the p.up may be very small and will in general not represent the "direction" of continuation. For p.sw.Xcont=1, p.up will contain the complete step, and therefore we choose this as default. 
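For orientation, a minimal sketch of this update step is given below; the name myupdX and its exact signature are chosen ad hoc for illustration, and the library function updX listed in Table 4 below handles further details.
```
function p=myupdX(p,u) % sketch of the X-update after a converged Newton loop
N0=getN(p,p.X);        % normals of the current base surface X_0
p.X=p.X+u(1:p.np).*N0; % X=X_0+u*N_0, cf. (4)
p.up=u(1:p.np);        % keep u for plotting ("direction" of continuation)
p.u(1:p.np)=0;         % zero out u for the next continuation step
end
```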
For the spectral computations (bifurcation detection and localization) p.sw.Xcont=1 vs p.sw.Xcont=2 makes no difference as all spectral computations are done after convergence of the Newton loops, i.e., after the final update of p.X. Some main functions from the pde2path library Xcont are listed in Table 4, and can be grouped as follows: 1. Functions directly interfacing the gptoolbox such as N=getN(p) which in the default setting just calls N=per_vertex_normals(p.X,p.tri). These are meant as easy-to-change interfaces, as for a given problem it may be desired to, e.g., change the orientation of \(X\). For this, make a local copy of getN.m and there set N=-per_vertex_normals(p.X,p.tri). 2. "Template" functions to compute \(V(u)\) (corresponding to \(V(X)=V(X_{0}+uN_{0})\) in (14)), and \(A(u)\) from (13), and wrappers to use them in constraints such as \(\mathtt{q}=\mathtt{q}\mathtt{f}\mathtt{V}(\mathtt{p},\mathtt{u})=V(u)-V_{0}\), and their derivative, such as \(\mathtt{q}=\mathtt{q}\mathtt{j}\mathtt{a}\mathtt{c}\mathtt{V}(\mathtt{p}, \mathtt{u})=\partial_{u}V\). (Since \(V\) is a scalar function, here we use "\(\mathtt{j}\mathtt{a}\mathtt{c}\)" in a loose sense.) 3. Template convenience functions such as pplot for plotting p.X, and updX for updating \(X=X_{0}+uN_{0}\), cf. (4), and overloads of pde2path library function adapted to \(X\) continuation, e.g., getGupde, and a prototype oosetfemops, which for \(X\)-continuation usually should only serve as a dummy needed by other pde2path functions. 4. Functions related to mesh handling such as refineX, coarsenX and degcoarsenX, and associated elements-to-refine-selectors such as e2rsA (based on triangle areas) and e2rshape (based on triangle shapes). Additionally there are retrigX and moveX, based on [11], see Remark 2.6. 5. The template geomflow for geometric flows such as MCF, interfaced in p.fuha.flowf. All of these functions, in particular from 2. and 3. are "templates"; for a given problem it may be necessary or useful to make local copies of some of these functions in the problem directory and change them there. Additionally, there is convenience function p=stanparamX(p), which (re)sets some pde2path parameters to typical values for \(X\)-continuation. Another important interface to the gptoolbox is 1. getM(p), which for scalar problems should call M=massmatrix(p.X,p.tri,'Voronoi'), and for vector valued problems should build the system mass matrix from such "scalar" M, cf. SS3.6. However, for compatibility with the non-\(X\)-setting, getM is _not_ part of Xcont, but needs a local copy in each (\(X\)-continuation) problem directory. \begin{table} \begin{tabular}{l|l} function & remarks \\ \hline N=per\_vertex\_normals(X,tri) & normals; should be interfaced as N=getN(p,X), see Table 4 to possibly flip sign at one place; \\ L=cotmatrix(X,tri) & cotangent discrete Laplace–Beltrami \\ M=massmatrix(X,tri,type) & mass matrix, with type ’full’, ’barycentric’ or ’voronoi’. \\ K=discrete\_gaussian\_curvature(X,tri) & Gaussian curvature \(K\). \\ [k,H,K,M]=discrete\_curvatures(X,tri) & principal curvatures \(k=(k_{1},k_{2})\), and \(H\), \(K\), \(M\). \\ \hline [X,tri,DBC,NBC] & refinement, with DBC/NBC the Dirichlet/Neumann \\ =TrefineRGB(X,tri,DBC,NBC,elist) & boundary nodes, and elist the triangles to refine. Interfaced via p=refineX(p,sig); where sig is the usual factor of triangles to refine, and where after refinement p.u and p.tau are interpolated to the new mesh. See also TcoarsenRGB, with similar signature. 
\\ \end{tabular} \end{table} Table 3: Main functions used from [10] and [11, 12], and some small additions/mods. Here X\(\in\)\(\mathbb{R}^{n_{p}\times 3}\) are the point coordinates of the triangulation, and tri\(\in\mathbb{R}^{3\times n_{t}}\) the triangulation as rows of point numbers of triangles. **Remark 2.5**: a) As is, out=cmcbra(p,u) puts the data \[\text{out=[pars;V;A;E;meshq]} \tag{31}\] into out, where pars of length \(m=\texttt{length(u)}-\texttt{p.nu}\) is the (user defined) parameter vector of the problem, \(V\) and \(A\) are volume and area of \(X\), and \(E=A+\texttt{par(1)}V\), cf. (2), which assumes that \(H_{0}\) is at par(1), and is an _active_ continuation parameter. Next, meshq = \((\delta_{\text{mesh}},a_{\text{max}},a_{\text{min}},h_{\text{max}},h_{\text{ min}})\) gives measures of the mesh, computed in meshq=meshqdat(p). Here, \[\delta_{\text{mesh}}=\max(h/r)=\text{mesh distortion}, \tag{32}\] where, for each triangle, \(h\) is the maximum edge-length, and \(r\) the in-radius; \(a_{\text{max}}\) and \(a_{\text{min}}\) are the max and min of the triangle areas, and \(h_{\text{max,min}}\) are the largest and smallest edge lengths in the triangulation. Thus, \(\delta_{\text{mesh}}\) is our default mesh-distortion measure (one of several possible), see, e.g., Fig. 4 and Fig. 12 for plots of \(\delta_{\text{mesh}}\) along branches. For an equilateral triangle \(\delta_{\text{mesh}}=6/\sqrt{3}\approx 3.46\), and our experiments suggest that, as a rule of thumb, triangles with \(\delta_{\text{mesh}}<50\) are (still) of "reasonable shape". \begin{table} \begin{tabular}{l|l} function & remarks \\ \hline p=stanparamX(p) & convenience function to set pde2path parameters to “standard” values for \(X\)–continuation; to be called _after_ p=stanparam() during initialization. \\ \hline getN & interface to per\_vertex\_normals; to flip orientation, make local copy with \\ & function N=getN(p,X); N=per\_vertex\_normals(X,p.tri); end \\ getA, getV & get area \(A\) and volume \(V\) according to (13) and (14). \\ qV, qjacV & \(V\) as a constraint \(q\) for continuation, and its derivative \(\partial_{u}q\). \\ qA, qjacA & \(A\) as a constraint \(q\) for continuation, and its derivative \(\partial_{u}q\). \\ c2P & triangle center to point (nodes) interpolation matrix. \\ cmcbra & default branch output in our demos below. \\ \hline pplot & plot of p.X, usually called from userplot; see Remark 2.5b). \\ oosetfemops & usually only needed as a dummy (since called by pde2path library functions). \\ updX & update p.X after successful Newton steps. \\ plotHK & given p.X, plot \(H(X)\) and \(K(X)\) (mostly for checking). \\ \hline e2rsA, e2rsAi & element–to–refine–selector choosing triangles with large areas; e2rsAi chooses triangles with small areas, intended for coarsening. \\ e2rsshape1 & Select triangles according to (32). See also e2rsshape*. \\ meshqdat & mesh quality data, see (33). \\ refineX & mesh refinement of p.X; selecting triangles via p.fuha.e2rs and calling TrefineRGB (or Trefinelong if p.sw.rlong==1 for bisection of only longest edges of triangles). \\ coarsenX & coarsening of p.X, same syntax as refineX; i.e., selecting triangles via p.fuha.e2rs. \\ & Inverse to refineX, as only triangles from prior refinement can be coarsened (to their common ancestors). \\ degcoarsenX & coarsening of mesh by removing degenerate triangles (via gptoolbox). 
\\ retrigX & retriangulation of \(X\), based on [20]; using the adjacency in p.trig to generate a new (Delauney) triangulation of \(X\) while keeping the surface structure. \\ moveX & retriangulate and move points of \(X\) based on [20] and the associated code. \\ \hline geomflow & simple explicit time integrator for \(X\)=\(fN\), where \(f\)=p.fuha.flowf, e.g., \(f(X)\)= \(-H\) for mean curvature flow, implemented in mcff (with DBCs). \\ \end{tabular} \end{table} Table 4: Main functions from the library Xcont, which collects interfaces to functions from gptoolbox and [21], and some additions/mods; see sources for argument lists and detailed comments, and Remark 2.5 for further comments.. out=p.fuha.outfu is appended to bradat(p,u)=[count;type;ineg;lam;err;L2]3, and we list the component c for branch plotting for p.fuha.outfu=@cmcbra (with m the number of parameters): Footnote 3: six values, where count=step-counter, type\(\in\{0,1,2,3\}\) (regular point, BP, FP, Hopf point), ineg=number of unstable eigenvalues (if p.sw.spcalc\(>0\)), lam=value of primary continuation parameter, and err and L2 are _not_ meaningful in the Xcont setting, see bradat.m for details) \[\begin{array}{c|c|c|c|c|c}\mbox{c}&1\ldots\mbox{m}&\mbox{m+1}&\mbox{m+2}&\mbox {m+3}&\mbox{m+4}&\mbox{m+5}&\mbox{m+6}&\mbox{m+7}&\mbox{m+8}\\ \mbox{meaning}&\mbox{pars}&V&A&E&\delta_{\rm mesh}&a_{\rm max}&a_{\rm min}&h_ {\rm max}&h_{\rm min}\end{array}. \tag{33}\] Thus, to plot, e.g., \(V\) over the _active_ bifurcation parameter from a computed branch b1 into figure fnr, use plotbra(b1,fnr,m+1,varargin), where varargin gives many options for colors, labels, etc. Similarly, to plot \(\delta_{\rm mesh}\), use plotbra(b1,fnr,m+4,varargin). Moreover, c in plotbra(b1,fnr,c,varargin) can be a vector, and to, e.g., plot \(\delta_{\rm mesh}\) over \(V\) use c=[m+1, m+4]. b) pplot(p,fignr) (or pplot(dir,pt,fignr), where p is loaded from dir/pt) colors p.X by p.up, which contains \(u\) from the last continuation step and hence codes (together with \(N\)) the "continuation direction" (if p.sw.Xcont=1), or just the last Newton update (if p.sw.Xcont=2), see (30). The behavior of pplot can further be controlled by settings in the _global_ structure p2pglob.4 For instance, p2pglob.tsw controls the titles, e.g., \((V,H)\) for tsw=1, and \((A,H)\) for tsw=2, which assumes that the parameters are ordered as \((H,V,A)\). For flexibility, if tsw\(>4\), then pplot searches the current directory for a function mytitle to generate a title, see, e.g., demo geomtut/enneper. Again, pplot is a template, if necessary to be modified in the problem directory for customized plots, but all demos from geomtut/ only use the given library version. Footnote 4: This turned out to be convenient, i.e.: When plotting from file we very often want to change the look of plots “globally”, i.e., without first loading the point and then adapting settings. c) In all demos below we use indices p.idx and p.idN of boundary points to set boundary conditions. In this, p.idx should be thought as points for Dirichlet BCs and associated to p.DBC (the corresponding _edges_, updated in refineX and coarsenX), and p.idN as Neumann BCs with edges p.NBC. All of these can also be empty (for instance if \(X\) is closed, or has only one type of boundary), and again, the use of p.idx, p.idN, p.DBC and p.NBC should only be seen as a template, for instance to be modified if a given problem has several different boundaries. 
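For concreteness, a small post-processing sketch combining a) and b) could look as follows, assuming \(m=3\) parameters \((H,V,A)\) and branch/point names from the spcap1 demo of §3.1; the figure numbers and all names are only examples.
```
m=3;                                  % number of parameters, cf. (33)
plotbra('cap1',3,m+2);                % plot A over the primary parameter, fig. 3
plotbra('cap1',4,[m+1 m+4],'cl','k'); % plot delta_mesh over V, fig. 4
pplot('cap1','pt5',5);                % plot the surface X from cap1/pt5, fig. 5
```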
\(\rfloor\) **Remark 2.6**: a) For surface meshes (X, tri), mesh adaptation, i.e., refinement and coarsening, seems even more important than for standard (non-parametric) problems, because well behaved initial triangulations (well shaped triangles of roughly equal size) may deteriorate as \(X\) changes. The case of growing spherical caps in Fig. 1(a) is rather harmless as triangle sizes grow but shapes stay intact, and can easily be dealt with by refinement of the largest triangles. For this, in e2rsA we simply order the \(n_{t}\) triangles of tri by decreasing size, and from these choose the first \(\lfloor\sigma n_{t}\rfloor\) for refinement by refineX, i.e., we generally use \(\sigma=\mbox{p.nc.sig}\) as the parameter for the fraction of triangles to refine. The refinement can be either done as RGB if p.sw.rlong=0, or by refining only the longest edges of the selected triangles if p.sw.rlong=1. RGB is generally better if triangle shapes are crucial, but may result in rather long cascades to avoid hanging nodes (such that \(\sigma\) is only a lower bound for the fraction of triangles actually refined). Refine-long gives more control as _only_ the selected triangles are bisected (plus at most one more triangle for each one selected), but may lead to obtuse triangles, and it seems that as for standard FEM very obtuse triangles are more dangerous than very acute triangles. A short computation shows that, e.g., for a right-angled triangle refine-long increases \(\delta=h/r_{\rm in}\) by 45%; however, this can often be repaired by combining refine-long with retrigX, see b). See also [20] for a very useful discussion of mesh quality (in the planar setting, and in 3D). Conversely, coarsenX can be used to coarsen previously refined triangles, again from a list generated by p.fuha.e2rs, which should be reset from the one chosen for refinement. For instance, e2rsAi selects the \(\lfloor\sigma n_{t}\rfloor\) triangles of _smallest_ area, but these have to be from the list of previously refined triangles. degcoarsenX works differently: It calls the modification rmdegfaces of the gptoolbox function rm_degenerate_faces, and essentially aims to remove obtuse and acute triangles by collapsing (short) edges. This works in many cases but may result in hanging nodes such that the FEM no longer works. Both, refineX and degcoarsenX can be told to _not_ refine/coarsen boundary triangles, which is crucial for the case of periodic BCs. b) We also provide two small modifications of (actually interfaces to) code from [10]. In retrigX.m we generate a new (Delauney) triangulation of \(X\), keeping intact the surface structure of \(X\). This is in particular useful if \(X\) has been obtained from _long_ refinement, which typically results in nodes having 8 adjacent triangles (valence 8), while "standard" triangulations (and the output of retrigX) have valence 5 and 6, which generally seems to result in more robust continuations. In moveX we combine retrigX with motion of the points in \(X\) due to "truss forces" of the triangulation, aimed at more uniform edge lengths. Due to the similarity of the triangulation truss forces and surface tension, this works best for minimal surfaces (\(H\)=0), or otherwise for surfaces with small \(|H|\). \(\rfloor\) ## 3 Example implementations and results Our demos are meant to show how to set up different geometric bifurcation problems, in particular with different BCs. 
They mostly deal with classical minimal or more generally CMC surfaces, for instance the Enneper and Schwarz-P surfaces, and so called nodoids (including physically relevant liquid bridges). Many demos start with CMC surfaces of revolution, and our main interest then are bifurcations breaking the rotational symmetry. The minimal surfaces in SS3.2 are motivated by Plateau's problem of soap films spanning a given wire, and in SS3.6 we consider 4th order problems obtained from the Helfrich functional. All demos come with a number of function files namely (at least, with * a placeholder, usually to be replaced by a short problem name, cf. SS2.3): sG*.m describing the rhs of the problem; *init.m for initialization; userplot.m and getM.m for technical reasons (downward compatibility). Additionally, in some demos we overload functions from libs/Xcont, e.g., cmcbra.m for branch output. Finally, there are script files cmds*.m with * a number if there is more than one script. In our descriptions of the first demos, we give tables listing the used files and their purpose (starting with the scripts), and we give a few listings of (parts of) pertinent files to discuss important points. This becomes less for the later more advanced demos, for which we rather put more comments into the m-files themselves. ### Spherical caps We start with the continuation in volume \(V\) of spherical caps over the unit circle \(\gamma\) in the \(x\)-\(y\) plane, as previewed in Fig. 1(a). It is known [1, SS2.6] that no bifurcations occur, and hence this only serves as an introductory toy model. Table 5 gives an overview of the used files, and Listings 1-2 show the initialization scinit.m, the rhs sGsc.m, and the first script cmds1.m. The BCs are \(\partial X=\gamma=\{(x,y,0)\in\mathbb{R}^{3}:x^{2}+y^{2}=1\}\), which since they hold for the initial unit disk translate into \[u|_{\partial X}=0. \tag{34}\] **Remark 3.1**: a) We can as well continue directly in \(H\), without any constraints, and starting from the disk again obtain the same branch (see lines 14,15 of cmds1.m). Our setup in cmds1.m is motivated by applications, where typically the volume is the external parameter. I.e., the setup is a template how to use the volume (qfV) or area (qfA) constraints, together with the derivatives (qjacV and qjacA). Note that p.nc.ilam=[2,1] for using \(V\) as the _primary active_ parameter in cmds1, and p.nc.ilam=[3,1] for using \(A\), while \(H=\texttt{par}(\texttt{1})\) is a _secondary_ active parameter in both cases. b) Only the _active_ continuation parameters are updated in p.u; thus, when continuing only in \(H\), say, then to plot, e.g., \(A\) over \(H\) we cannot choose p.plot.bpcmp=3 (the parameter index of \(A\)), but must take p.plot.bpcmp=3+2=5. This is because the computed \(A\) is put second after the parameters in the output function cmcbra, and here we have three parameters \((H,V,A)\). But again, \(A=\texttt{par}(3)\) is only updated if \(3\in\texttt{p.nc.ilam}\), i.e., if \(A\) is an active parameter. During init, we call pde=diskpdeo2 to generate a temporary FEM object from which we extract the initial mesh to generate the initial p.X and store p.tri as the triangulation. Additionally we extract p.DBC as the (Dirichlet) boundary _edge_ index vectors, and p.idx as the boundary _point_ indices. This is in principle redundant, but it makes the setup of the DBCs in sGsc shorter. 
\begin{table}
\begin{tabular}{l|l}
cmds1.m & continuation in \((V,H)\) and in \((A,H)\), respectively. \\
cmds2.m, cmds3.m & tests of different mesh refinement options, and MCF tests. \\
getM.m & standard (Voronoi) mass matrix. \\
scinit.m & Init, data stored in p.u (including computed \(H,A\) and \(V\)), and in p.X and p.tri. \\
sGsc.m, scjac.m & rhs based on (23), and Jacobian based on (20). \\
mcff.m & mean curvature flow rhs \(f\); problem dependent via choice of getN. \\
\hline
cmcbra.m & local copy and mod of library function cmcbra.m to put error \(e(X)\) (36) on branch. \\
refufu.m & local copy and mod (and renaming) of stanufu.m to do adaptive mesh refinement based on \(e(X)\); “switched on” by setting p.fuha.ufu=@refufu. \\
coarsufu.m & similar to refufu.m, used for mesh coarsening of decreasing caps. \\
\end{tabular}
\end{table}
Table 5: Files in pde2path/demos/geomtut/spcap1; the last two are typical examples of (small) local mods of library functions.

```
1 function r=sGsc(p,u) % spherical cap PDE (more generally: any CMC with DBCs)
2 par=u(p.nu+1:end); H0=par(1); u=u(1:p.np); % split into PDE-u and parameters
3 N0=getN(p,p.X); X=p.X+u.*N0; N=getN(p,X);  % base normal, new X, new normal
4 M=getM(p,X); LB=cotmatrix(X,p.tri);        % mass matrix and Laplace-Beltrami
5 r=-0.5*dot(LB*X,N,2)+M*(H0*ones(p.np,1));  % rhs-PDE, i.e., H(X)-H0=0
6 r(p.idx)=u(p.idx);                         % Dirichlet BCs
```
```
1  p2pglob.tsw=1; p2pglob.vi=[20,40]; p2pglob.edc='k'; % plotting controls
2  %% init; pars will be overwritten in scinit
3  nx=12; h0=0; v0=0; a0=0; par=[h0;v0;a0]; % initial pars
4  p=scinit(nx,par); p=setfn(p,'cap1'); p.sol.ds=0.1;
5  p.sw.jac=0;  % numerical (0) or functional (1) jacs for G, speed no problem
6  p.sw.qjac=1; % numerical (0, too slow), hence functional (1) derivative for q
7  p.nc.ilam=[2 1]; p.nc.nq=1; p.fuha.qf=@qfV; p.fuha.qfder=@qjacV; % contin V
8  p.plot.bpcmp=1; p.nc.usrlam=[2 4]; % cmp for branch-plot, vals for forced output
9  p=cont(p,5); % go
10 %% alternate cont and mesh refinement based on triangle areas
11 p=load('cap1','pt5','cap1r'); p.sw.nobdref=0; p.sw.rlong=1; p.file.smod=2;
12 sig=0.2; for i=1:10; p=refineX(p,sig); p=cont(p,5); end
13 %% just cont in H (no constraints)
14 p=load('cap1','pt0','Hcont'); p.nc.nq=0; p.nc.ilam=1; p.sol.ds=-0.1; p.plot.bpcmp=5;
15 sig=0.2; for i=1:4; p=cont(p,5); p=refineX(p,sig); end % alternate ref. and cont
```
Listing 2: spcap1/sGsc.m, and start of cmds1.m (omitting plotting).

In cmds1.m we then continue the initial disk (with \(V=0\) and \(A=\pi\)) in \(V\). For this we switch on the constraint \(V(u)=V\) via p.nc.nq=1, p.fuha.qf=@qfV; p.fuha.qfder=@qjacV, with the Xcont library functions qfV and qjacV, and set p.nc.ilam=[2,1], cf. Remark 3.1. For mesh adaptation we use the triangle areas on \(X\) as selector, and refineX also updates p.DBC and p.idx, leading to Fig. 1(a), where we use repeated mesh refinement every 5th step. This way we can accurately continue to arbitrarily large \(V\), i.e., arbitrarily large "cap radius" \(R\), where \(H=1/R\) asymptotes to \(H=0\). In the second part of cmds1.m we continue in \(A\) and hence set p.nc.ilam=[3,1]; p.fuha.qf=@qfA; and p.fuha.qfder=@qjacA. This yields exactly the same branch as the continuation in \(V\), and all this works very robustly and fast.
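For completeness, a minimal sketch of such a volume constraint is given below, in the spirit of the library functions qfV and qjacV (whose actual implementations may differ in detail); the name myqfV is made up for illustration, the discretization of (14) is one plausible choice, and, as in this demo, \(V_0\) is assumed to be stored in par(2).
```
function q=myqfV(p,u) % sketch of a volume constraint q(u)=V(X_0+u*N_0)-V_0
par=u(p.nu+1:end); uu=u(1:p.np);  % split into PDE-u and parameters
N0=getN(p,p.X); X=p.X+uu.*N0;     % updated surface, cf. (4)
N=getN(p,X);                      % normals of X
M=massmatrix(X,p.tri,'voronoi');  % Voronoi mass matrix, cf. (25)
q=sum(M*dot(X,N,2))/3-par(2);     % discretization of (14), minus V_0=par(2)
end
% by (19), at u=0 the derivative dq/du is the (row) vector of nodal Voronoi
% areas, e.g., qu=sum(M,1); see qjacV for the actual library version
```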
**Remark 3.2**: The numerical Jacobians of \(G\) (for p.sw.jac=0 in line 5 of Listing 2) are sufficiently fast to not play a role for the speed of the continuation, at least for \(n_{p}<2000\), say, because Matlab's numjac can efficiently exploit the known sparsity (structure) of \(\partial_{u}G\), given by the sparsity structure of the Laplacian \(K\), or equivalently, by the sparsity structure of the (full, not Voronoi) mass matrix \(M\). On the other hand, if \(q\) implements some integral constraints, e.g., area or volume, then \(\partial_{u}q(u)\in\mathbb{R}^{n_{q}\times n_{p}}\) is dense, and numerical derivatives for \(\partial_{u}q\) are a serious bottleneck. For illustration, in cmds1.m we use the commands jaccheck and qjacheck, which are rather important for "debugging" when numerical Jacobians become too slow. Both return relative errors between functional and numerical Jacobians, and as a rule of thumb, in \(\partial_{u}G\) relative errors \(\leq 10^{-3}\) should be achieved, and do not affect the continuation or bifurcation results, and for \(\partial_{u}q\) even somewhat larger relative errors are usually no problem. \(\rfloor\) In cmds2.m we test different options for mesh adaptation, see Listing 3 and Fig. 4.5 The black line capr1 in (a), starting from cap1/pt10, corresponds to adaptation each 15th step, with "refinement factor" \(\sigma=0.3\) (fraction of triangles marked for refinement). As we choose p.sw.rlong=1 we only bisect the longest edge of a selected triangle, and the actual fraction of refined triangles is between \(\sigma\) and \(2\sigma\).6 For capr3 (red) we refine when the "error" \(e(X)\) exceeds p.nc.errbound = 0.04, where \(e(X)\) is also used for plotting and defined as follows: For given \(V\) we compute the (exact) \(H(V)\) of the associated (exact) spherical cap \(C(V)\) as Footnote 5: Fig. 4(a,b) shows essentially verbatim output from plotbra in cmds2.m, where the dots and numbers indicate the continuation step, subsequently used also in the sample plots as in (c). This also holds for all subsequent plots, and the only “manual adjustments” are the occasional repositioning of the numbers at the arrows by drag and drop, as this is not automatically optimized. Footnote 6: For p.sw.rlong=0 (RGB refinement with possibly longer cascades to avoid hanging nodes) \(\sigma\) is only a lower bound. \[H(V)=-\frac{\pi^{1/3}(3V+s-\pi^{2/3})(s-3V)^{1/3}}{s(3V+s)^{1/3}},\quad\text{ where}\quad s=\sqrt{9V^{2}+\pi^{2}}. \tag{35}\] We then define the "relative \(L^{2}\) error" \[e(X)=\|H(X)-H(V)\|_{L^{2}(X)}/|H(V)|, \tag{36}\] and put \(e(X)\) on the branch in the modified local copy cmcbra.m of the standard (library) cmcbra.m.7\(e(X)\) can then be plotted like any other output variable, and, moreover, can be used (without recomputing) in p.fuha.ufu (user function), which is called after each successful continuation step. The default (library) setting p.fuha.ufu=@stanufu essentially only gives printout, and to switch on the adaptive meshing we rename and modify a local copy as refufu.m, and set p.fuha.ufu=@refufu. Since \(e(X)\) is at position 13 in (our modified) out=cmcbra(p,u), and since out is appended to the six values from bradat, cf. 
Remark 2.5a), in refufu.m we then simply add the commands Footnote 7: We also put \(\|H(X)-H(V)\|_{\infty}/|H(V)|\) and \(z_{\max}(V)-z_{\max}\) on the branch, where \(z_{\max}(V)\) is the height of \(C(V)\) and \(z_{\max}\) the numerical height; these can then also be plotted via plotbra, and/or chosen as error indicators, but the \(L^{2}\) error seems most natural. Also note that \(e(X)\) is normalized by \(|H(V)|\) (which decays in \(V\)), but not by \(A\) (which increases with \(V\)). if brout(6+13)\(>\)p.nc.errbound; p=refineX(p,p.nc.sig); end. Another "natural" alternative is to refine when \[a_{\max}=\max(a_{1},\ldots,a_{nt})>\mbox{p.maxA}, \tag{37}\] i.e., when the maximum area of the nt triangles exceeds a chosen bound. This is _not_ an error estimator in any sense (as a plane can be discretized by arbitrary large triangles), but an ad hoc criterion, with typically an ad hoc choice of p.maxA, which could be correlated to \(H\). It is implemented in refufumaxA.m which, if \(\max A>\mbox{p.maxA}\), calls refineX with e2rsmaxA to select all triangles with \(A>(1-\sigma)\mbox{p.maxA}\). With \(\mbox{p.maxA}=0.3\) and \(\sigma=0.2\) this yields the magenta line in Fig.4(a). The samples in Fig.4(c) illustrate a refinement step on the black branch, yielding a "reasonable" mesh also at large \(V\). However, this naturally depends on the choice of steps between refinements (and on the refinement fraction sig and continuation stepsize ds). For the red line in Fig. 4(a), the refinement when the error \(e(X)\) exceeds the chosen bound p.nc.errbound is more genuinely adaptive, and this similarly holds for capr4 based on (37), see also cmds2.m for various further plots. (b) shows that the long-refinement generally yields a (mild) increase of the mesh distortion \(\delta_{\rm mesh}\), but overall the mesh-quality stays very good. Figure 4: Results from spcap1/cmds2.m. (a) Error \(e(X):=\|H-H(V)\|_{2}/|H(V)|\) for refinement each 15th step (capr1, black) (starting at step 10), when \(e(X)>\mbox{p.nc.errtol}=0.05\), using p.fuha.ufu=@refufu (capr3, red), and when \(\max(A)>0.3\) using p.fuha.ufu=@refufumaxA with \(\sigma=0.3\) (capr4, magenta). At \(V=200\), \(n_{p}=1452\) on capr1, \(n_{p}=1486\) on capr3, and \(n_{p}=636\) on capr4. (b) Mesh distortion \(\delta_{\rm mesh}=\max(h/r)\) (edge-length over in–radius). (c) Illustration of meshes before/after refinement at pt25; plots cropped at \(y=0\) for better visibility of the meshes, and the boundary at \(z=0\) marked in red. %alternaterefineandcont,hereref.each15thstep;singlestepsforsaving p=p0;p=setfn(p,'capr1');sig=0.3;nsteps=13; fori=1:5;p=refineX(p,sig);p=cont(p,1);p=cont(p,1);p=cont(p,nsteps);end %%refinewhenerrorexceedserrround,usingrefufl 10p=p0;p=setfn(p,'capr3');p.fuhu.ufu=prefufu;p.nc.errbound=0.04;p=cont(p,100);%%refinewhenmaxAexceedsp.maxA,usingrefuflumaxA p=p0;p=setfn(p,'capr3');p.fuha.ufu=prefufumaxA;p.fuha.ezrs=@e2rsmaxA; p.maxA=0.5;p=cont(p,100);%%errorplots;errorappendedatendofcmcbra,componentc=13 15lab=[2527];c=13;mclf(8);plotbra('capr1','pt71',8,c,'lab',lab,'fp',11); plotbra('capr3',8,c,'lab',[],'cl','r','fp',11); ``` Listing 3: Selection from spcap1/cmds2.m, refinement each 15thstep, \(e(X)\)-dependent refinement via setting p.fuha.ufu=@refufu and p.nc.errbound=0.04, and refinement based on (37). In cmds3.m and Fig. 5 we _decrease_\(V\) from \(V\approx 150\) (running the branch capr1 from Fig. 4 backwards), and test the MCF from a spherical cap at \(V\approx 15\). 
For both, because the shrinking of the caps gives mesh distortions, the main issue is that we now need to alternate continuation/flow and mesh-_coarsening_. For the continuation we give two options: similar to the refinement for increasing \(V\) in Fig. 4, we either coarsen after a fixed number of steps (black branch), or when \(\delta_{\text{mesh}}>8\) (magenta branch). Both here work efficiently only until \(V\approx 35\), after which new parameters for the coarsening should be chosen. For the MCF in (d) we similarly coarsen after a given number of time steps. With this we can flow back to the disk, more or less reached at \(t=3\), but the last plot in (d) shows that along the way we have strongly distorted meshes, which are somewhat repaired in the coarsening steps, and the final distortion with \(\delta_{\text{mesh}}\approx 30\) is not small but OK. ``` %%gobranchbackwards,cont-coarseningloop p=loadp('capr1','pt56','capr1b');p.sol.ds=-3.2;p=resetc(p);p.fuha.ezrs=@e2rsAi; p.file.smod=5;p0=p;sig=0.5;fori=1:8;p=coarsenX(p,sig);p=cont(p,5);end %%coarsenviacoursufu 5p=p0;p=setfn(p,'capr1d');p.fuha.ufu=@coarsufu;p.nc.errbound=8;p=cont(p,40); %%MCF,withinitiallargeV;tohandlemeshing,alternateflowandcoarsening 15%thismayrequiretrialanderrortobalancedt,flow-lengthnf,and 16%coarseningsigc.Firstsomegraphicssettings,thenloadandprepare: p2pglob.cut=0;p2pglob.vi=[30,40];p2pglob.cm='spring';p2pglob.tsw=4; p=loadp('capr','pt15','mcf');sigc=0.1;dt=0.0005;nfl=500;nplot=100; p.sw.nobocarsen=0;p.t=0;plotH(p);figure(1);title('t=0');%prepareMCF 20p.fuha.flow=@refif;t=0;ts=[];p.fuha.ezrs=@e2rsAi; %%theMCF/coarseningloop;repeatthiscellasdesired fori=1:4;[p.X,t,ts]=geomflow(p,t,ts,dt,nf,nplot);p=coarsenX(p,sigc);end ``` Listing 4: Selection from spcap1/cmds3.m; decreasing \(V\) by going backwards, and MCF; both need to be combined with coarsening. Omission between lines 5 and 14 deal with plotting, and further experiments are at the end of cmds3.m. **Remark 3.3**: The performance of the MCF as in Fig. 5, based on our simple explicit Euler stepping, depends on the choice of flow parameters, i.e., step size dt, number \(n_{\text{f}}\) of steps before coarsening, and coarsening factor \(\sigma\). With too weak coarsening (large \(n_{\text{f}}\), or small \(\sigma\)), triangles may degenerate. Too aggressive coarsening (large \(\sigma\)) may lead to wrong identification of boundary edges. Altogether, at this point we must recommend trial and error. We also tested the use of degcoarsenX instead of coarsenX (see cmds3.m) for backward continuation and for MCF, but this here does not give good results; see Fig.12 for a successful use of degcoarsenX in a related problem. \(\rfloor\) ### Some minimal surfaces Plateau's problem consists in finding soap films \(X\) spanning a (Jordan) curve (a wire) \(\gamma\) in \(\mathbb{R}^{3}\), and minimizing area \(A\). Mathematically, we seek a _minimal_ surface \(X\), i.e., \(H(X)\equiv 0\), with \(\partial X=\gamma\). Such problems have a long history, and already Plateau discussed non-uniqueness and bifurcation issues, called "limits of stability" in [14]. A classical example for which a bifurcation is known is Enneper's surface, see SS3.2.3. However, in the demo bdcurve we first start with other BCs, meant to illustrate options (and failures) for prescribing boundary values in our numerical setup. 
We introduce parameters \(\alpha\in\mathbb{R}\) and \(k\in\mathbb{N}\) (angular wave number) and a switch p.bcsw, and consider BCs of the form \[u|_{\partial X}=0,\quad(\text{for p.bcsw=0}), \tag{38}\] \[X_{3}|_{\partial X}=\alpha\sin(k\phi),\quad\phi=\arctan(y/x)\quad (\text{for p.bcsw=1}),\] (39) \[\partial X=\gamma(\cdot;\alpha,k)\quad(\text{for p.bcsw=2}), \tag{40}\] where \(\gamma\) in (40) is a prescribed boundary curve, depending on parameters \(\alpha,k\). Specifically, in SS3.2.2 we choose \[\gamma(\phi;\alpha,k)=\begin{pmatrix}\beta\cos(\phi)\\ \beta\sin\phi\\ \alpha\cos(k\phi)\end{pmatrix},\quad\phi\in[0,2\pi),\quad\beta=\sqrt{1-\alpha ^{2}\cos^{2}(k\phi)}. \tag{41}\] For (39), \(\partial X\) is not uniquely determined by the parameter \(\alpha\) (and fixed \(k\)), and this illustrates how our scheme (4) can fail, and that the condition \(Y\in\mathcal{N}_{C}\) in general cannot be dropped in Lemma 2.1. Relatedly, for (39) the continuation can genuinely depend on the continuation stepsize ds, as different predictors give different BCs (39). In other words, the problem is under-determined and the continuation algorithm itself "chooses" the BCs. Thus, (39) is a cautionary example, though Figure 5: Results from spcap1/cmds3.m. (a)-(c) continuation backwards in \(V\) from \(V{\approx}150\) (\(n_{p}\)=1452); coarsening each 5th step (capr1b, black, \(n_{p}\)=644 at \(V{=}40\)) vs coarsening when \(\delta_{\text{mesh}}>8\) (magenta, \(n_{p}\)=650 at \(V{=}40\))). (d) MCF from the spherical cap at \(V{\approx}15\). time series of \(A\) and \(V\), sample plots, and time series of \(\delta_{\text{mesh}}\) (last plot). Coarsening at times \(t=0.25j\), altogether from \(n_{p}=773\) at \(t=0\) to \(n_{p}=450\) at \(t=3\). it produces interesting minimal surfaces. On the other hand, (40) is a genuine DBC with unique continuation, which however requires a modification of the "standard" updX.m, and careful mesh handling. Together, (39) and (40) are meant to illustrate options. The condition on \(\beta\) in (41) yields that \(\|\gamma\|_{2}=1\), i.e., that \(\gamma\) lies on the unit sphere, for \(\alpha\in[0,1]\). Moreover, the projection of \(\gamma\) into the \(x\)-\(y\) plane is injective, and this is useful since we then can extract \(\phi\) from \(\partial X\). In SS3.2.3 we treat a variant of (41), associated to the Enneper surface, where the discretization of \(\gamma\) requires a further trick. On the other hand, \(\gamma\) from (41) becomes singular at \(\phi=j\pi/k\) as \(\alpha\to 1\), which is useful to test mesh-handling. Thus, SS3.2.2 and SS3.2.3 are quite related, but illustrate different effects. Table 6 shows the used files, Listing 5 shows sGbdcurve, and Listing 6 the "new" (compared to spcap1/) files needed to run the BCs (40). The other files are essentially as in spcap1/, except that we now have altogether five parameters (\(H,V,A,\alpha,k\)), and that we use the additional parameter p.bcsw. #### 3.2.1 Prescribing one component of \(X\) at the boundary In cmdsla.m (Listing 7) we continue (39) in \(\alpha\), starting with \(\alpha=0\) at the flat disk, and first with angular wave number \(k=2\). Some results are shown in Fig. 6. \begin{table} \begin{tabular}{l|l} cmds1a.m & continuation in \(\alpha\) (and \(H\)) for (39), see Fig.6; MCF tests in cmds1b. \\ cmds2.m & continuation in \(\alpha\) for (40), see Fig. 7. \\ bdcurveinit.m & Initialization, very similar to scinit. \\ sGbdcurve.m & very similar so sGsc, except for the BCs. 
\\ updX.m & mod of standard updX; for p.bcsw=2 setting the boundary curve. \\ bcX.m & user function to give \(\gamma\), here implementing (41). \\ \end{tabular} \end{table} Table 6: Files in pde2path/demos/geomtut/bdcurve. As we increase \(\alpha\), the surface lifts at \(\phi=\pi/4\) and \(\pi=5\pi/4\) according to \(X_{3}=\alpha\sin(2\phi)\), and sinks at \(\phi=3\pi/4\) and \(\phi=7\pi/4\). Near \(\alpha=0.5\) (b2/pt12), \(X\) becomes vertical at these angles, and hence our scheme (4) can no longer continue to fulfill the BCs. To better resolve the boundary we use some mesh-refinement _only at the boundary_. For this we choose p.fuha.e2rs=@e2rsbdry at b2/pt6 and obtain the blue branch (with a sample top view as last plot in (b)), which however naturally runs into the same continuation failure at \(\alpha\approx 0.5\). Although this was on quite coarse meshes, none of this changes on finer meshes, and hence this mainly serves as an example of necessary failure of the algorithm (4), and as an example of mesh refinement with e2rsbdry. The red branch in (a) together with sample (c) shows that here the branches are continuation stepsize ds dependent. In (d) we choose finer meshes and wave numbers \(k=1\) (blue branch b1) and \(k=4\) (b4, grey), and get analogous results up to continuation failure. In (e,f) we switch back to continuation in \((H,A)\) from b1/pt6, in both directions of positive (black branch) and negative (grey branch) \(H\). As \(\alpha\) is now fixed again, \(\partial X\) stays fixed even with the BCs (39). The branches are ds-independent again, and \(H\) asymptotes to nonzero \(\pm H_{\infty}\) as \(A\to\infty\). In cmds1b.m we run MCF (not Figure 6: Results from bdcurve/cmds1a.m. (a) Continuation in \(\alpha\) with BCs (34), \(k=2\): black and red branches with \(n_{p}\)=945 mesh points and fixed ds=0.05 (black) and ds=0.1 (red), illustrating the step–length dependence of the continuation; blue branch b2r via refinement at boundary. Samples in (b) and (c), partly with cropping. (d) Like (a,b) but on finer meshes, with \(k\)=1 and \(k\)=4, and with marking \(\partial X\) in red. All of (a–d) are _minimal_ surfaces, i.e., \(H\equiv 0\). (e,f) Switching back to continuation in \(H\) at b2/pt6. shown) from selected solutions from (f), where again we need to undo the refinement which happened, e.g., during the continuation \(H\) from b2Hb/pt0 to pt30, and thus we alternate between geomflow and coarsenX as in spcap1/cmds2.m and Fig.5. #### 3.2.2 A Plateau problem In cmds2.m we choose the BCs (40) with \(\gamma\) from (41). We again continue in \(\alpha\), for \(k=2,3\), starting at \(\alpha=0\) with the unit disk. The basic idea (Listing 6) to implement (40), (41) is to \[\text{set }\partial X=\gamma\text{ in }\text{upd}\text{X, and }u|_{\partial X }=0\text{ in }\text{sGbdcurv}. 
\tag{42}\] 1%%genuinebdcurve,firstwithk=2,5initialsteps nx=15;al=0;h0=0;v0=0;a0=0;k=2;par=[h0;v0;a0;al;k]; p=bdcurveinit(nx,par);p=setfn(p,'d2');p.nc.ilam=4;p.bcsw=2;p=cont(p,5); %%someboundaryrefinementandcoarsening,trialanderrortchoosesig p=loadp('d2','pt5');%reloadpoint(easierfortrialanderror) 6sigr=0.1;sigc=0.1;p=refineX(p,sigr);p=cont(p,2);p=degcoarsenX(p,sigc); %%containductionalternatingwithmoveX,andrefinementandcoarsen,parameters: nis=15;ncs=1;%innersteps(beforeref/coars),#cont-steps(beforemore) dt=0.1;nit=5;%stepsizeanditerationsinmoveX fori=1:3;%outerloop, l1forj=1:nis;%innerloop,alternatmoveXandcont p=moveX(p,dt,nit);pplot(p,20);p=cont(p,ncs); end p=refineX(p,sigr);p=degcoarsenX(p,sigc);%refinedcoarsen end ``` Listing 8: First 15 lines from bdcurve/cmds2.m, using the BCs (40). Figure 7 shows some results from cmds2.m. The crucial points are that as we increase \(\alpha\) (in particular beyond \(\alpha=0.2\), say) we * move mesh points via moveX(p,dt,it) after ncs continuation steps (here ncs=1); * after nis=15 "inner" steps refine \(X\) (introduce new points), here near the boundary, _and_ coarsen \(X\), here via degcoarsenX(p,sigc), to remove "bad" triangles. The parameter dt in moveX is the Euler step size to balance the "truss forces" [21] (nit gives the number of iterations), while sigc in degcoarsenX has a similar meaning as in refineX and coarsenX, i.e., giving the fraction of triangles to coarsen.8 Again, the parameters ncs, nis, sigr and sigc are generally highly problem dependent and it may require (educated) trial and error to find good values. In summary, Fig. 7 shows that with a good combination of moveX, refineX and degcoarsenX we can continue rather complicated minimal surfaces \(X\) (\(X\) with complicated boundary curve \(\gamma\)) with reasonable meshes.9 Footnote 8: In more detail, degcoarsenX can also be called as p=degcoarsenX(p,sigc,iter), where iter (default 5) gives the number of internal iterations, or as p=degcoarsenX(p,sigc,iter,keepbd) where keepbd=1 (default 0) means that boundary triangles are kept, which is mainly needed for periodic BCs, see §3.4. Footnote 9: As already said in Rem. 2.6b), due to the analogy between the truss forces and surface tension (constant in minimal surfaces) moveX works particularly well for minimal \(X\). #### 3.2.3 Bifurcation from the Enneper surface The Enneper surface is a classical minimal surface. Bounded parts of it can be parameterized by10 Footnote 10: see also Remark 3.9 for the Enneper–Weierstrass representation \[X_{E}=X_{E}(r,\vartheta)=\begin{pmatrix}r\cos(\vartheta)-\frac{r^{3}}{3}\cos(3 \vartheta)\\ -r\sin(\vartheta)-\frac{r^{3}}{3}\sin(3\vartheta)\\ r^{2}\cos(2\vartheta)\end{pmatrix},\quad(r,\vartheta)\in D_{\alpha}=[0,\alpha) \times[0,2\pi), \tag{43}\] see Fig.8. We start by reviewing some basic facts, see [10] and the references therein. For \(\alpha\leq 1/\sqrt{3}\), the boundary curve \[\gamma(\vartheta;\alpha)=\left(\alpha\cos(\vartheta)-\tfrac{\alpha^{3}}{3}\cos( 3\vartheta),-\alpha\sin(\vartheta)-\tfrac{\alpha^{3}}{3}\sin(3\vartheta), \alpha^{2}\cos(2\vartheta)\right),\quad\vartheta\in[0,2\pi) \tag{44}\] has a convex projection to the \(x\)-\(y\)-plane, and for \(1/\sqrt{3}<\alpha\leq 1\) the projection is still injective. This yields uniqueness (of the minimal surface spanning \(\gamma\)) for \(0<\alpha\leq 1\) (see [11] for \(\alpha\in(1/\sqrt{3},1]\)). 
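Since (43) and (44) are explicit, it is sometimes handy to plot a patch and its boundary curve directly, independently of the demo files. The following plain-MATLAB snippet (all names ad hoc; al plays the role of \(\alpha\)) is a minimal sketch of that.

```
% Minimal stand-alone evaluation and plot of the Enneper patch (43) and its
% boundary curve (44); not part of the demo files.
al = 1.2;                                   % patch radius alpha
[r,th] = meshgrid(linspace(0,al,40), linspace(0,2*pi,120));
XE = r.*cos(th) - r.^3/3.*cos(3*th);        % first component of (43)
YE = -r.*sin(th) - r.^3/3.*sin(3*th);       % second component of (43)
ZE = r.^2.*cos(2*th);                       % third component of (43)
surf(XE,YE,ZE,'FaceAlpha',0.8,'EdgeColor','none'); hold on
phi = linspace(0,2*pi,400);                 % boundary curve (44), r = alpha
plot3(al*cos(phi)-al^3/3*cos(3*phi), -al*sin(phi)-al^3/3*sin(3*phi), ...
      al^2*cos(2*phi),'r','LineWidth',2); axis equal
```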
For \(\alpha>1\) uniqueness of \(X_{E}\) fails, i.e., at \(\alpha=1\) we have a (pitchfork, by symmetry) bifurcation of different minimal surfaces spanning \(\gamma_{\alpha}\)[12]. This has been analyzed in detail in [10] as a two-parameter bifurcation problem, showing a so called cusp catastrophe.11 Footnote 11: See, e.g., [11, Example 1.30] and the references therein for comments on cusps (and other catastrophes). In the demo enneper we simply choose \(\alpha\) as a continuation/bifurcation parameter for \[H(X)=0,\quad\partial X=\gamma_{\alpha}, \tag{45}\] and get the pitchfork bifurcation at \(\alpha=1\). The used files bcX.m, cmds1.m, cmds2.m, enninit.m, sGenn.m, updX.m are very similar to those from the demo bdcurve, but we also include a Jacobian sGjacenn.m, a function mytitle.m for customized titles, and thinterpol.m, discussed next. The problem (45) is "easy" in the sense that we have the explicit parametrization (43) which we can use at any \(\alpha\), but like in SS3.2.2 it does require care with the meshing, and compared to (41) it requires an additional trick to update \(\vartheta\) on \(\partial X\) after mesh adaption (at the boundary): Since we cannot in general extract \(\vartheta\) from (44) from the projection to the \(x\)-\(y\) plane (which is not injective for \(\alpha>1\)), we keep a field \(\vartheta=\mathtt{p.th}\) associated to p.idx (the indices of \(\partial X\)) in the given discretization. Then, if p.X1 is obtained from refining p.X with new mesh-points p.nX on \(\partial X\), then we need to update the \(\vartheta=\mathtt{p.th}\) values of p.nX. This is done in p=thinterpol(p,idxold,thold) by * finding the (old-point) neighbors of the new points on \(\partial X\); * linear interpolation of the neighbors' \(\vartheta\) values to the new points. This is a question of indexing, and we refer to the source of thinterpol.m for comments. A refinement step thus takes the form idold=p.idx; thold=p.th; p=refineX(p,sigr); p=thinterpolol(p,idold,thold); see cmds1.m which produces Fig. 8. The other files in enneper/ are very much like in bdcurve/. Figure 7: Results from cmds2.m for BCs (40) with \(k\)=2 (black branch d2) and \(k\)=3 (blue branch d3). Along the way in, e.g., d2 we do 3 refinements and coarsenings, and the total number of mesh points only increases mildly from \(n_{p}\)=945 to \(n_{p}\)=1177. These are really two different (\(k\)=1 vs \(k\)=2) continuation problems, out of inifinitely many (\(k\)\(\in\)\(\mathbb{N}\)), and hence the two branches in (a) are from different problems and are both stable. At \(\alpha=1\) we find a supercritical pitchfork bifurcation from \(X_{E}\), branch e1 (black), to a branch e2 (blue) which breaks the \((x,y,z)\mapsto(-y,x,-z)\) symmetry of \(X_{E}\) (rotation by \(\pi/2\) around the \(z\) axis and mirroring at the \(z=0\) plane). The solutions "move up" (or down) in the middle, which decreases \(A\) compared to \(X_{E}\), cf. (c) vs (f). (d) illustrates that the (algebraic) volume \(V\) of \(X_{E}\) is always zero. The numerical continuation of e1 to large \(\alpha\) is no problem, using suitable mesh-adaption, even as \(\gamma(\cdot;\alpha)\) self-intersects for \(\alpha>\sqrt{3}\), because the associated parts of \(X_{E}\) do not "see" each other, cf. (e) for an example. The continuation of e2 to larger \(\alpha\) is more difficult, and fails for \(\alpha>1.5\), as for instance shortly after e1b/pt30 we can no longer automatically adapt the mesh near the top. 
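As an aside, the two-step \(\vartheta\)-update described above can be sketched in a few lines of plain MATLAB; this is a schematic only (it assumes new boundary points lie between their old neighbors and ignores the \(2\pi\) wrap-around), not the actual thinterpol.m, which works on the pde2path fields p.idx and p.th and the mesh connectivity.

```
% Schematic of the theta-update after boundary refinement: each new boundary
% point receives theta by linear interpolation between its two nearest old
% boundary points (inverse-distance weights). Names are ad hoc.
function thnew = interpTheta(Xbdold,thold,Xbdnew)
thnew = zeros(size(Xbdnew,1),1);
for i = 1:size(Xbdnew,1)
  d = sqrt(sum((Xbdold - Xbdnew(i,:)).^2,2));  % distances to old boundary points
  [ds,j] = sort(d);                            % two nearest old neighbors
  w = ds(2)/(ds(1)+ds(2));                     % closer neighbor gets more weight
  thnew(i) = w*thold(j(1)) + (1-w)*thold(j(2));
end
end
```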
However, physically the change of stability at the symmetry breaking pitchfork at \(\alpha=1\) is most interesting. Using suitable combinations of geomflow, refineX, degcoarsenX and moveX we can use MCF to converge for \(\alpha>1\) and \(t\to\infty\) to e2, from a variety of ICs, for instance from perturbations of e1, see Fig. 8(g,h), and enneperflow.avi in [13]. After convergence we can then again continue the steady state, see cmds2.m. Figure 8: Bifurcation from the Enneper surface \(X_{E}\), \(A\) over \(\alpha\) (a), and \(V\) over \(\alpha\) (d). At \(\alpha=1\) (e1/pt10 in (b)), the branch e1b (blue) with smaller \(A\) bifurcates from e1 (black), samples in (b,c) and (e,f). (g,h) MCF from perturbation of e1/pt23 to e2/pt30, samples showing \(H\). ### Liquid bridges and nodoids Weightless liquid bridges are CMC surfaces with prescribed boundary usually consisting of two parallel circles wlog centered on the \(z\)-axis at a fixed distance \(l\) and parallel to the \(x\)-\(y\) plane. Additionally there is a volume constraint, which makes the problem different from Plateau's problem. See for instance [12] and the references therein for physics background and results (experimental, numerical, and semi-analytical). We consider liquid bridges between two fixed circles \(C_{1}\) and \(C_{2}\) of \[\text{radius }r=r^{*}=1\text{, parallel to the }x\text{-}y\text{ axis and centered at }z=\pm l=\pm 1/2. \tag{46}\] A trivial solution \(X_{0}\) is the cylinder, with \(H=1/2\), volume \(V=2\pi l\) and area \(A=4\pi rl\) (without the top and bottom disks). Further explicit solutions are known in the class of surfaces of revolution, for instance nodoids. We first review some theory for nodoids with DBCs, and then continue basic liquid bridges (embedded nodoids), with bifurcations to non axial branches, see Figures 9 and 10. In Figure 11 we then start directly with nodoids with one "inner loop". Nodoids with "periodic" BCs are studied in [13], and numerically in SS3.4, where we also comment on the theory for these. #### 3.3.1 Nodoid theory In [10], a family of nodoids \(\mathcal{N}(r,R)\) is parameterized by the neck (smallest) radius \(r\) and the buckle (largest) radius \(R\). Let \(l\in\mathbb{R}\) and \(C_{1},C_{2}\subset\mathbb{R}^{3}\) be two circles of radius \(r^{*}\) centered at heights \(z=\pm l\) and parallel to the \(x\)-\(y\) plane. With the two parameters \(a,H\in\mathbb{R}\) the nodoids are parameterized by the nodary curve \[(x,z):[-t_{0},t_{0}]\to\mathbb{R}^{2},\quad t\mapsto\big{(}x(t),z(t)\big{)}= \Big{(}\tfrac{\cos t+\sqrt{\cos^{2}t+a}}{2|H|},\ \ \tfrac{1}{2|H|}\int_{0}^{t}\tfrac{\cos\tau+\sqrt{\cos^{2}\tau+a}}{\sqrt{\cos^{ 2}\tau+a}}\cos\tau\,\mathrm{d}\tau\Big{)}\,, \tag{47}\] which is then rotated around the \(z\) axis, i.e., \[\mathcal{N}_{t_{0}}:M\to\mathbb{R}^{3},\qquad(t,\theta)\mapsto\big{(}x(t)\cos \theta,x(t)\sin\theta,z(t)\big{)}\,, \tag{48}\] where \(M=[-t_{0},t_{0}]\times[0,2\pi)\). Thus, in terms of SS2.1 these nodoids are immersions of cylinders. While (47) only gives nodoids with an even number of self intersections (or none), shifting \(t_{0}\) also gives odd numbers of self intersections. From the immersion \(\mathcal{N}_{t_{0}}\), we can determine geometric quantities by evaluating the parametrization at the endpoints. 
For example the height and the radius are given by \[2l=\frac{1}{|H|}\int_{0}^{t_{0}}\frac{\cos t+\sqrt{\cos^{2}t+a}}{\sqrt{\cos^{ 2}t+a}}\cos t\,\mathrm{d}t,\quad r^{*}=\frac{\cos t_{0}+\sqrt{\cos^{2}t_{0}+a }}{2|H|}, \tag{49}\] and the buckle radius (at \(t=0\)) is \(R=\frac{1+\sqrt{1+a}}{2|H|}\). Implicitly, the equations in (49) define \(a(t_{0})\), hence also the mean curvature \(H\), and thus \(t_{0}\) parameterizes a family of nodoids \(t_{0}\mapsto\mathcal{N}_{t_{0}}\). Conversely, given \(r,l\) in (46), the implicit equation \[\frac{l}{2r}\left(\cos t_{0}+\sqrt{\cos^{2}t_{0}+a}\right)-\left(\sin t_{0}+ \int_{0}^{t_{0}}\frac{\cos^{2}\tau}{\sqrt{\cos^{2}\tau+a}}\,\mathrm{d}\tau \right)=0 \tag{50}\] defines all possible combinations of \(a\) and \(t_{0}\) satisfying the boundary condition, which we exploit to relate our numerics to results from [10], see Remark 3.6. In order to detect bifurcations from the family (48), we search for Jacobi fields vanishing on the boundary, cf. (21). The unit normal vector (field) of \({\cal N}_{t_{0}}\) is \[N=\left(\cos t\cos\theta,\cos t\sin\theta,\sin t\right),\quad t\in[-t_{0},t_{0}),\ \vartheta\in[0,2\pi),\] and for every fixed vector \(\vec{x}\in\mathbb{R}^{3}\), the function \(\langle\vec{x},N\rangle\) is a solution to (20). So the task is to find \(\vec{x}\) and \(t_{0}\) such that the Dirichlet BCs are fulfilled. The components of \(N\) have zeros if the nodoid meets the boundary horizontally (parallel to the \(x\)-\(y\) plane), which happens at \(t_{0}=\frac{\pi}{2}+n\pi\), or vertically, which happens at \(t_{0}=n\pi\) for \(n\in\mathbb{N}\). Choosing the unit basis \((e_{i})_{i=1,2,3}\), we have in the horizontal case that \(\langle e_{i},N\rangle\left|{}_{\partial{\cal N}_{t_{0}}}\right.=0\) for \(i=1,2\), and in the vertical case \(\langle e_{3},N\rangle\left|{}_{\partial{\cal N}_{t_{0}}}\right.=0\). **Lemma 3.4**: _[_12_, Lemma 3.4 and Proposition 3.6]_ _Consider the one parameter family \({\cal N}_{t_{0}}\). If for some \(t_{0}\in\mathbb{R}_{+}\) the normal vector at \(\partial{\cal N}_{t_{0}}\) is_ 1. \(N=\left(0,0,\nu(x)\right)\)_, then_ \(L=\partial_{u}H(u)\) _has a double zero eigenvalue._ 2. \(N=\left(\nu_{1}(x),\nu_{2}(x),0\right)\) _then_ \(L=\partial_{u}H(u)\) _has a simple zero eigenvalue._ _The immersions are isolated degenerate, i.e., there exists an \(\varepsilon>0\) such that \(({\cal N}_{t})_{t\in[t_{0}-\varepsilon,t_{0}+\varepsilon]}\) has a jump in the Morse index. In 1. this occurs for \(t_{0}=\frac{\pi}{2}+k\pi\), and in 2. for \(t_{0}=k\pi\), for every \(k\in\mathbb{N}\)._ Now general bifurcation results (see the discussion after Lemma 2.2) yield the existence of bifurcation points at the horizontal and vertical cases presented in Lemma 3.4. **Theorem 3.5**: _[_12_, Propositions 3.5 and 3.6]_ _In cases 1. and 2. in Lemma 3.4 we have bifurcation points for the continuation in \(H\). Moreover,_ 1. _if_ \(\psi=\langle e_{i},N\rangle\in\ker L\) _for_ \(i=1,2\)_, then the bifurcating branch breaks the axial symmetry;_ 2. _if_ \(\psi=\langle e_{3},N\rangle\in\ker L\)_, then the bifurcating branch breaks the_ \(z\mapsto-z\) _symmetry._ #### 3.3.2 Nodoid continuation with fixed boundaries Nodoids with DBCs at the (fixed) top and bottom circles are treated in the demo nodDBC. Table 7 lists the pertinent files. We treat two cases: * Short embedded nodoids (liquid bridges) in cmds1.m, starting from the cylinder (eventually continued to self-intersecting nodoids). 
* Long nodoids (with self-intersections from the start) in cmds2.m. \begin{table} \begin{tabular}{l|l} cmds1.m & continuation in \((A,H)\) of “short” nodoids, starting from a cylinder. First yielding classical liquid bridges, but eventually turning into self intersecting nodoids, requiring restarts. See Figs. 9,10, and cmds1plot.m for the plotting. \\ cmds1A2t.m & Relating the numerical BPs from cmds1.m to Theorem 3.5. \\ cmds2.m & Continuation in \((A,H)\) of “long” nodoids with one inner loop, see Figs. 11,12. \\ bdmov1.m, bdmov2.m & scripts to make movies of Fig. 9 and Fig. 11, see also [14]. \\ \hline nodinit.m, nodinit1.m & Initialization of “short” and “long” nodoids \\ sGndD.m, sGndDjac.m & rhs with DBCs, and Jacobian. \\ qfArot.m & area and rotational constraints, see qjacArot for derivative. \\ getN.m & overload of getN to flip \(N\). \\ coarsufu.m & mod of stanufu.m for adaptive coarsening, see Fig. 12. \\ \end{tabular} \end{table} Table 7: Files in pde2path/demos/geomtut/nodDBC; For solutions without axial symmetry we additionally need to set a rotational phase condition (PC): If \(X\) is a solution to (3), so is \(R_{\phi}X\), where \(\phi\) is the angle in the \(x\)-\(y\) plane, and \[R_{\phi}\vec{x}=\begin{pmatrix}\cos\phi&\sin\phi&0\\ -\sin\phi&\cos\phi&0\\ 0&0&1\end{pmatrix}\vec{x}. \tag{51}\] Thus, if \(\partial_{\phi}(R_{\phi}X)|_{\phi=0}=\frac{1}{x^{2}+y^{2}}\left(-y\partial_{x }X+x\partial_{y}X\right)\in\mathbb{R}^{3}\) is non-zero, then it gives a non-trivial kernel of \(L\), which makes continuation unreliable and bifurcation detection impossible. See, e.g., [10, SS3.5] for further discussion of such continuous symmetries. Here, to remove the kernel we use the PC \[q(u):=\int_{X}\left\langle\partial_{\phi}X_{0},X_{0}+uN_{0}\right\rangle\, \mathrm{d}S=\int_{X}\left\langle\partial_{\phi}X_{0},N_{0}\right\rangle u\, \mathrm{d}S=:\int_{X}\mathrm{d}\phi\ u\,\mathrm{d}S\stackrel{{!} }{{=}}0, \tag{52}\] where \(X_{0}\) is from the last step, with normal \(N_{0}\), where \(\phi\) is the angle in the \(x\)-\(y\) plane, and hence \(\partial_{\phi}X=-X_{2}\nabla_{X_{1}}X+X_{1}\nabla_{X_{2}}X\), where \(\nabla_{X_{j}}\) are the components of the surface gradient, cf. (10). On the discrete level we thus obtain the linear function \[q(u)=(\mathrm{d}\phi)^{T}u,\,\text{with derivative }\partial_{u}q=(\mathrm{d} \phi)^{T}, \tag{53}\] \(\mathrm{d}\phi=\langle-X_{2}\nabla_{X_{1}}X+X_{1}\nabla_{X_{2}}X,N\rangle\), node-wise, i.e., \(\nabla_{X_{j}}X\) is interpolated to the nodes via c2P, with Voronoi weights. We then add \(s_{\mathrm{rot}}q(u)\) to \(E\) from (2) with Lagrange multiplier \(s_{\mathrm{rot}}\), and thus modify the PDE to \(G(u):=H(u)-H_{0}+s_{\mathrm{rot}}\mathrm{d}\phi\stackrel{{!}}{{=}}0\). This removes the \(\phi\)-rotations of non-axisymmetric \(X\) from the kernel of \(\partial_{u}G(u)\), and, moreover, \(|s_{\mathrm{rot}}|<10^{-8}\) for all the continuations below. Since the (algebraic) volume \(V\) of self-intersecting nodoids is not intuitive, here we use continuation in area \(A\) and \(H\). Thus, we start with the constraint qfA with derivative qjacA. For non-axisymmetric branches we set up qfArot and its derivative, where we put (52) as a second component of qfA, and similarly for the derivatives, and when we bifurcate to a non-axisymmetric branch, we set p.nc.nq=2 (2 constraints, area and rotational phase) and p.fuha.qf=qqfArot. #### 3.3.3 Short nodoids Listing 9 shows how we initialize by either a cylinder (icsw=0) or parametrization (48). 
Here, pde.grid is a 2D rectangular FEM mesh, of which we use the second component as \(\phi\in[-\pi,\pi]\). Lines 16-20 implement (47) and (48) with _twice_ the line \(\phi=\pm\pi\) where the rotation around the \(z\)-axis closes. To obtain a mesh without duplicate points from this, in the last line of Listing 9 we use clean_mesh from the gptoolbox. h=par(1); a=par(4); po=pde.grid.p; x=po(1,:)'; phi=po(2,:)'; ificsw=0; p.X=[a.*cos(phi),a.*sin(phi),x]; % just cylinder else % KPP17 parametrization of nodoids 15 x1=(cos(x)+sqrt(a+cos(x).^2))/(2*abs(h)); x2=zeros(size(x)); xl=size(x,1); for i=1:xl y=@(t)1./(2*abs(h)).*(cos(t)+sqrt(cos(t).^2+a)).*cos(t)./sqrt(cos(t).^2+a); x2(i)=integral(y,0,x(i)); end 20 p.X=[x1.*cos(phi),x1.*sin(phi),x2]; % initial X end par(7)=max(p.X(:,3))-min(p.X(:,3)); % needed ingetV [p.X,p.tri]=clean_mesh(p.X,p.tri,'SelfIntersections','ignore'); ``` Listing 9: From nodBC/nodinit.m; setting initial p.X as a cylinder (icsw==0) or via the parametrization (47) and (48) with (x=t) and subsequent removal of duplicate points. 10%%1stBP,double,usegentauotchoosebifdirection aux.besw=0;aux.m=2;p1=qswibra('N','bpt1',aux); p=gentau(p1,[10],'N1');p.sol.ds=0.125;p.nc.tol=1e-5;p.sw.bifcheck=0; p=cont(p,2);%2stepswithoutPC,andwithbifcheck=0, p.nc.nq=2;p.nc.ilan=[316];p.fuha.qf=@qfArot;p.fuha.qfder=@qjacArot; p.sw.bifcheck=1;p.nc.tol=1e-8;p.sw.jac=0;p=cont(p,20); ``` Listing 10: From nodDBC/cmds1.m; branch switching at double BP, and continuation with rotational PC. Figure 9 shows results from cmds1.m (see also the movie nodDBCs.avi from [10] to go step by step through the bifurcation diagram). We start at the cylinder and first continue to larger \(A\) (black branch N). The first BP at \((A,H)\approx(12.24,1.29)\) is double with angular wave number \(m=1\). We simply select one of the kernel vectors to bifurcate, and do two steps without PC (blue branch N1, lines 11-13 of Listing 10). Then we switch on the rotational PC in line 14 and continue further. As predicted, BP1 occurs when \(X\) meets the lower and upper boundary circles horizontally, and the Figure 9: Bifurcation diagram of (mostly) embedded nodoids (a), with samples in (b,c) cut open at the \(x\)–\(z\) plane (\(y=0\)). Branches N (black), Nb (grey), N1 (blue), N2 (red), N4 (orange), N5 (green), N6 (light blue), N3-1 (magenta), and Nr1, Nr2 and Nr3 (“restarts” of N, grey). See text for details, and Fig. 10 for plots of N/pt52, Nr1/pt2, and Nr2/pt12. stability changes from N to N1.12 The second BP yields the \(m=2\) branch N2 (red). These results fully agree with those from [1]. The branch Nb (grey, with pt3) is the continuation of N to smaller \(A\) (and \(V\)), where the cylinder curves inward. Footnote 12: N up to BP1, Nb, and N1 are the only stable (in the sense of VPMCF) branches in Fig. 9, and hence physically most relevant; the further branches we compute are all unstable, and hence of rather mathematical than physical interest. The third BP on N is simple with \(z\mapsto-z\) symmetry breaking, yielding branch N3 (brown). On N3 there are secondary bifurcations, and following the first we obtain N3-1 (magenta). The 4th BP on N again has \(m=2\) but is different from the 2nd BP on N as the nodoid has already "curved in" at the boundary circles, which is inherited by the bifurcating branch N4 (orange). The 5th BP on N yields a skewed \(m=2\) nodoid N5 (green).13 After the fold, the mesh in N becomes bad at the necks, see N/pt52 in Fig. 10. 
Thus, for accurate continuation we use (48) to remesh, see Nr1/pt2 and Remark 3.6(a) and Fig. 10(a-c), yielding the branch Nr1 (grey) in Fig. 9(a). Nr1/pt12 in Fig. 10 shows that as after a number of steps the nodoid bulges further in, the mesh at the neck deteriorates again, and so we remesh again to Nr2 (light grey). The nodoid then self-intersects at \((A,H)\approx(22.9,1.05)\), and at Nr2/pt10 we do the next restart to Nr3. Using such remeshing we can continue the branch N (as Nr1, Nr2, Nr3,...) to many loops and self-intersections, with many further BPs as predicted in Lemma 3.4. In any case, although by branch switching from Nr1/bpt1 instead of from N/bpt6 we use a somewhat adapted mesh to compute branch N6 (red), we only compute a rather short segment of N6 because on N6 we quickly run into bad meshes again. See also SS3.3.4 for further comments/experiments on the meshing of nodoids. In Fig. 10(d) we illustrate the correspondence of our numerical results for the continuation in \(A\) to Theorem 3.5, see Remark 3.6(b). Footnote 13: BP5 is an example of a BP qualitatively predicted in [11, Prop.3.9] at large \(t_{0}\). **Remark 3.6**: a) For axi- and \(Z_{2}\) symmetric nodoids, we can easily extract \(a\)=\((2HR\)\(-\)\(1)^{2}-1\) from our numerical data, with \(R\) the radius on the \(z\)=0 plane. We can then numerically solve the second equation in (49), i.e., \(1=r^{*}=\dfrac{\cos t_{0}+\sqrt{\cos^{2}t_{0}+a}}{2|H|}\) for \(t_{0}\), and use this for restarts with a new mesh, for instance from N/pt52 to Nr1/pt1 in Fig. 10. b) Similarly, given \(r^{*}=1\) and \(l=0.5\), we can solve (50) for \(a\) and \(t_{0}\) in a continuation process, see cmds1A2t.m. Then computing \(A=A(a,t_{0})\) gives the black curve in Fig. 10(d), and intersecting the \(A\) values of our numerical BPs gives the \(t_{0}\) values for BP1, BP3 and BP6 as predicted, and explains the folds FP1 and FP2. In summary, the BPs on N, their multiplicities, and their relation to Theorem 3.5 (if applicable) are \[\begin{array}{c|cccccc}\text{BP number}&\text{BP1}&\text{BP2}&\text{BP3}& \text{BP4}&\text{BP5}&\text{BP6}\\ \text{multiplicity}&2&2&1&2&2&2\\ \text{Theorem 3.5}&1.&\text{NA}&2.&\text{NA}&\text{NA}&1.\\ t_{0}&\pi/2&1.995&\pi&3.377&3.622&3\pi/2\end{array} \tag{54}\] where NA means not applicable, and where for BP1, BP3 and BP6 we give the exact values, with as indicated in Fig. 9(c) very good agreement of the numerics.14 Footnote 14: This also holds for further BPs and folds, but we refrain from plotting these in the already cluttered BD in Fig. 9. #### 3.3.4 Long nodoids In nodDBC/cmds2a.m and Fig. 11 (see also nodDBCs.avi from [15]) we consider "long" nodoids with self-intersections.15 As a slightly more explicit alternative to (48), in nodinit1.m we now parameterize an initial axisymmetric nodoid \(\tilde{\mathcal{N}}_{r,R}\) following [14] by \[X:[-\pi/2,\pi/2]\times[0,2\pi]\to\mathbb{R}^{3},\qquad(x,\varphi)\mapsto\begin{pmatrix} \frac{r}{\delta(x,k)}\cos(\varphi)\\ \frac{r}{\delta(x,k)}\sin(\varphi)\\ RE(x,k)-rF(x,k)-Rk^{2}\frac{\sin(x)\cos(x)}{\delta(x,k)}\end{pmatrix}, \tag{55}\] where \(r\) and \(R\) are the neck and the buckle radius, \(F\) and \(E\) are the elliptic integrals of the first and second kind, \(k=\sqrt{(R^{2}-r^{2})/R^{2}}\), and \(\delta(x,k)=\sqrt{1-k^{2}\sin(x)}\). It turns out that here we again need to be careful with the meshes, and besides adaptive mesh refinement we also use suitable initial meshes. 
We discretize the box \([-\pi/2,\pi/2]\times[0,2\pi]\) (pre-image in (55)) by Chebychev nodes in \(x\) and equidistant nodes in \(y\). This is implemented in a slight modification of stanpdeo2D.m in the current directory, and adapted to the parametrization (55), which "contracts" the mesh for the loop in the middle. As we continue the axisymmetric branch lNA (black) to larger \(A\), the inner loop "contracts and moves out", cf. 1NA/pt2 vs 1NA/pt20 in Fig. 11(b). Along the way we find several BPs, the first yielding a branch lNA1 (blue) with broken \(z\mapsto-z\) symmetry. The next three BPs yield branches with angular wave numbers \(m{=}2\), \(m{=}1\), and \(m{=}3\). As we continue these branches to larger \(A\), the mesh quality deteriorates due to very acute triangles where the inner loop strongly contracts. This suggests coarsening by removing degenerate triangles, which we illustrate by example in Fig. 12, see also the end of cmds2.m. The red line in (a) shows the mesh-distortion along lNA2, and (b) shows a zoom of lNA2/pt10; the very acute triangles on the inner loop (\(\delta_{\text{mesh}}\approx 400\) at pt10) lead to stepsize reduction and eventual continuation failure.

Figure 10: (a–c) Continuation of Fig. 9; plots of (1/8th of) solutions on N before and after remeshing; Nr2 from Fig. 9 is from remeshing Nr1/pt12, and Nr3 from remeshing Nr2/pt10. (d) Results from cmds1A2t.m, see Rem. 3.6(b).

The brown line in (a) and the samples in (c,d) show results from the degcoarsenX-cont loop for i=1:6; p=degcoarsenX(p,sigc,nit,keepbd); p=cont(p,4); end; with sigc=0.5, nit=6, keepbd=1 (cf. footnote 8), starting at 1NA2/pt3. The distortion stays smaller (with \(\delta_{\rm mesh}\approx 50\), now attained at the boundary), and the continuation runs faster and more robustly (larger stepsizes feasible) than on the original mesh. The magenta line is from setting p.fuha.ufu=@coarsufu, which adaptively coarsens (cf. refufu.m for refinement in Fig. 4) when \(\delta_{\rm mesh}\) exceeds 100. Both variants yield quite similar results here, and naturally, similar use of degcoarsenX is also useful for the other nodoids from Fig. 11, and for those from Fig. 9, in addition to the very specific remeshing used there, which is only possible because of the explicit formulas. Nevertheless, we remark again that the parameters for degcoarsenX need trial and error for robustness and efficiency.

### Nodoids with pBCs in \(z\)

In [10], bifurcations of axisymmetric to non-axisymmetric nodoids are studied with the period (the "height") along the axis of revolution (wlog the \(z\)-axis) as the continuation/bifurcation parameter. This uses a different parametrization of the nodoids than (48) or (55), which we do not review here, as we shall again use (48) for the initialization. For fixed \(H=1\), [10] proves that there is an \(r_{0}>0\) such that for neck radii \(r>r_{0}\) (\(r<r_{0}\)) there are (are not) bifurcations from nodoids, and gives detailed asymptotics of bifurcation points in a regime (\(\tau\to-\infty\) in [10]) which corresponds to \((R-r)/R\to 0\) with outer radius \(R\), see below. In particular, the 2nd variation of the area functional around a given nodoid \(\mathcal{N}_{\tau}\) is analyzed with \(z\in\mathbb{R}\), i.e., for the full non-compact nodoid, not just for one period cell. This proceeds by Bloch wave analysis, and first establishes the band structure of the spectrum.
Using a parametrization similar to (55), a detailed analysis of the second variation of the area functional, and ultimately two different numerical methods, [14] shows that \(r_{0}=1/2\), and the first bifurcation (i.e., at \(r_{0}\)) leads to non-axisymmetric nodoids with angular wave number \(m=2\) and same periodicity in \(z\), i.e., Bloch wave number \(\alpha=0\) in [10]. Here we also consider periodic (in \(z\)) nodoids with fixed \(H=1\) using the height \(\delta\) as continuation/bifurcation parameter. We recover the primary bifurcation at \(r=r_{0}=1/2\) from [14], and Figure 11: (a) Bifurcation diagram of self–intersecting nodoids; branch 1NA (black) starts near \((A,H)=(9,1.3)\) and shows four BPs to 1NA1 (blue, broken \(z\)–symmetry), 1NA2 (red, \(m=2\)), 1N3A (magenta, \(m=1\)), and 1NA4 (green, \(m=4\)). Samples in (b). further bifurcations, see Figs. 13 and 14. **Remark 3.7**: Similar to SS3.3.2 we distinguish between "short" and "long" nodoids. Here, this merely corresponds to computing on one respectively two period cells in \(z\), and the main distinction is as follows: All 1-periodic solutions are naturally \(n\)-periodic for any \(n\in\mathbb{N}\). With respect to bifurcations, the 1-cell computations then correspond to Bloch wave numbers \(\alpha=0\) in [13]. For \(n\geq 2\) periods cells we obtain further discrete Bloch wave numbers, e.g., additionally \(\alpha=\pi\) for \(n=2\). This then allows bifurcations which simultaneously break the \(S^{1}\) and the \(Z_{2}\) symmetry of the symmetric nodoid, and this is illustrated in Fig. 14, which only gives a basic impression of the extremely rich bifurcation picture to be expected when the computational domain is expanded further in \(z\). To avoid clutter we refrain from putting the cases \(n=1\) and \(n=2\) in one figure. \(\rfloor\) Numerically, to set up "periodic boundary conditions in \(z\)", we proceed similar to the pde2path setup for periodic boundary conditions on fixed (preimage) domains, see [14, SS4.3]. The basic idea is to identify points on \(\partial X\) at \(z=\pm\delta\). Thus, before the main step \(X_{0}\mapsto X_{0}+uN_{0}\) for all our computations, we transfer the values of \(u\) from \(\{X_{3}=-\delta\}\) to \(\{X_{3}=\delta\}\) via a suitable "fill" matrix p.mat.fill, which has to be generated at initialization and regenerated after mesh-adaptation. The essential command is box2per, which calls getPerOp to create p.mat.fill (and p.mat.drop which is used to drop redundant variables), and which rearranges \(u\) by dropping the (redundant) nodal values at points which are filled by periodicity. Similar to SS3.3.2 we need a rotational PC for non-axisymmetric branches, but here for all computations we additionally need translational PCs in \(x,y\) and \(z\) directions, i.e. \(S_{i}\vec{x}=\vec{x}+e_{i}\). These translations act infinitesimally in the tangent bundle as \(S_{i}X_{0}=\nabla_{i}X_{0}\), and hence the pertinent PCs are \[q_{i}(u)=\langle\nabla_{i}X_{0},X_{0}+uN_{0}\rangle=\langle\nabla_{i}X_{0},N_{ 0}\rangle\,u,\quad i=1,2,3, \tag{56}\] Figure 12: Results from the end of cmds2.m, example of a degcoarsenX–cont loop (INA2c, brown), and adaptive coarsening (INA2cc, magenta) for 1NA2. (a) mesh quality \(\delta_{\rm mesh}=\max(h/r)\) over \(A\), original 1NA2 in red. (b) original 1NA2/pt10 (cut open), \(n_{p}\)=3430; (c,d) samples from 1NA2c with \(n_{p}=2744\) and \(n_{p}=2347\). with derivatives \(\partial_{u}q_{i}(u)=\langle\nabla_{i}X_{0},N_{0}\rangle\). 
Like (53), they are implemented node-wise, and their derivatives are added to \(G\) with Lagrange multipliers \(s_{x},s_{y},s_{z}\). Table 8 comments on the files used, and Listings 11-14 show the main new issues from the otherwise typical function and script files. p.nc.ilam=[6 7 8 9]; p.nc.nq=3; % 3 translational constraints p.fuha.qf=@qf; p.fuha.qfder=@qjac; p.sw.qjac=1; p.sol.ds=-0.01; p=cont(p,1); %% refine initial mesh (twice), in particular at boundary sig=0.2; p=loadpp('bN','pt1'); p.sw.rlong=1; p.sw.nobdref=0; 15 p=refineX(p,sig); sig=0.25; p=refineX(p,sig); Fig. 13 shows some results from cmds1.m. For robustness (essentially due to the strong contractions at the inner loops later in the branches) it turns out to be useful to initialize with a rather coarse mesh and after 1 or 2 steps refine by area. As we then decrease \(\delta\) from the initial \(\delta\approx 0.88\), we find the first BP at \(\delta\approx 0.82\) and with \(r=0.5\), corroborating [14], to the angular wave number \(m=2\) branch bN1. Using suitable mesh refinement along the way we can continue bN1 to small \(\delta\), where in particular we have multiple self-intersections; first, the inner loops extend the "height" \(\delta\) for \(\delta<\delta_{0}\approx 0.78\), and second the inner loops intersect in the plane \(z=0\) for \(\delta<\delta_{1}\approx 0.43\) (not shown), making the inner radius \(r=0\) (or rather undefined). The branch bN2 from the next BP at \(\delta\approx 0.54\) has \(m=3\), and otherwise behaves like the \(m=2\) branch. All these branches are rather strongly unstable, with \(\text{ind}(X)>4\), and Footnotes 12 and 15 again apply. Figure 13: (a) Bifurcation diagram of nodoids parametrized by height \(\delta\), fixed \(H=1\). The axisymmetric branch bN (black) starts near \(\delta=0.88\) via (48), and in direction of decreasing \(\delta\) shows a sequence of BPs to nodoids with broken \(S^{1}\) symmetry, here bN1 (blue, \(m=2\)) and bN2 (red, \(m=3\)). Samples in (b–f), with bN1r and bN2r after some refinement. As indicated in Remark 3.7, the branching behavior of the periodic nodoids very much depends on which period cell in \(z\) we prescribe, with Fig. 13 corresponding to one cell. To illustrate the richness that can be expected for larger cells, in cmds2.m and Fig. 14 we consider twice the minimal cell, see also nodpBC1.avi from [13]. This yields the same primary nodoid branch, and as a subset of bifurcations the bifurcations from Fig. 13, with two stacked copies of the solutions from Fig. 13 along all these branches. Additionally we have a new BP2 around \(\delta=1.44\), with small eigenvalues \(\mu_{1,2}\approx 0.0003\) and \(\mu_{3,4}\approx 0.004\), and the next eigenvalues are \(\mu_{5}\approx-0.67\) (simple) and \(\mu_{6,7}\approx 0.87\). The (approximate) kernel vectors associated to \(\mu_{1,3}\) are \(\phi_{1},\phi_{3}\) given in Fig. 14(d,e), and additionally we have \(\phi_{2}=R_{\pi/2}\phi_{1}\) and \(\phi_{4}=R_{\pi/2}\phi_{1}\) (rotation around the \(z\) axis). From the 4 small eigenvalues separated from the rest of the spectrum we might guess that BP2 is fourfold, which should have important consequences for the branching behavior at BP2, in the sense of "mixed modes", see [14, SS2.5.4]. However, \(\phi_{1}\) and \(\phi_{3}\) do not seem related by any symmetry, and, moreover, using qswibra and cswibra (see cmds2b.m) to search for solutions of the algebraic bifurcation equations at Figure 14: (a) Branching from the \(S^{1}\) nodoid from Fig. 
13 on twice the minimal cell, see text for details. BP2 only yields the "pure" modes \(\phi_{1}\) and \(\phi_{3}\) (and their rotations). Thus we conclude that BP2 is _not fourfold_, but corresponds to two double BPs close together.16 Footnote 16: See also §3.5.2, where we discuss the opposite effect in more detail: There, an analytically double (due to obvious symmetry) zero eigenvalue is split up, with a larger split there than between \(\mu_{1,2}\) and \(\mu_{3,4}\) here. To further corroborate this, in cmds2b.m we compute the "lN" branches for \(H_{0}=0.8\) and \(H_{0}=1.1\). In both cases we find a similar spectral picture at the pertinent BPs as at BP2 for \(H_{0}=1\) (4 small eigenvalues, well separated from the rest of the spectrum, with kernel vectors similar to Fig. 14(d,e)), but the two pertinent pairs are themselves more clearly separated, and \(\phi_{1},\phi_{3}\) flip order between \(H_{0}=0.8\) and \(H_{0}=1.1\). If BP2 at \(H_{0}=1\) was fourfold, then we would expect this to be due to symmetry, and hence to also hold for \(H_{0}\neq 1\). That this is not the case suggests that near \(H_{0}=1\) we rather have a "mode crossing" at BP2, and moreover do not expect bifurcating branches of "mixed modes". Therefore, in cmds2.m we simply do a branch switching in direction of the modes \(\phi_{1}\) (see Listing 15) and \(\phi_{3}\) separately, and obtain the branches lN2a (red) and lN2b (green).17 In detail, we do an initial step in direction \(\phi_{1}\) (resp. \(\phi_{3}\)), and then switch on the rotational PC as before. The branch lN2a continues to small \(\delta\) without problems. The branch lN2b folds back near \(\delta=1.18\), and after pt25 continuation fails, due to mesh degeneration in the inner bends. This can be fixed to some extent by careful mesh adaptation (yielding later continuation failure), but we do not elaborate on this here as Fig. 14 is mostly intended as an illustration of the rich bifurcation behavior over larger period cells. Footnote 17: This trick can also be summarized as follows: We do not localize each of the close–together BPs near BP2, which would require very small (arclength) stepsizes ds, and possibly many bisections. Instead, we just approximately localize some BP\({}^{*}\) near BP2, subsequently compute the (approximate) kernel at BP\({}^{*}\) using qswibra, and then select a kernel vector and try branch–switching in that direction. This “usually” works (in particular it works here), and is in particular useful if many BPs (including BPs of higher multiplicity) are close together. See also §3.5.2 for application of this trick in a different setting, and, e.g., [10, §10.1] for further discussion. ``` %%2ndBPpossiblydouble,butalgebraicbifurcationequations(ABEs)onlyyield %puremodes(seecmds2b).Hence,heresswitchoffABEsforspeed. %Forsmallds,twoseparateBPsmayalsobeddetected;buttreatingthem %viatheqswibra-trickshouldalwayswork aux.besw=0;%switchoffABEcomputations,justcomputekernel aux.mu2=0.01;aux.m=4;aux.ali=[];p1=qswibra('lN','bpt2',aux);%%goon1st p=gentau(p1,1,'lN2a',2);p.sol.ds=0.1;p.nc.tol=1e-4;p.sw.bifcheck=0; ``` 35p=cont(p,2);%twostepswithoutrotationalPC;thenswitchonandconfurther p.nc.nq=4;p.nc.ilam=[678910];p.fuha.qf=@qfrot;p.fuha.qfder=@qjacrot; p.file.count=p.file.count+1;p.nc.tol=1e-4;p.sol.ds=-0.01;p=cont(p,15); ``` Listing 15: Selection from nodpBC/cmds2.m. Branch switching from BP2 via qswibra and gentau. 
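To close this subsection, here is a self-contained schematic of the periodic identification in \(z\) described earlier in this subsection. It is not the pde2path box2per/getPerOp code; all names, and the matching of boundary nodes by their \((x,y)\) coordinates, are assumptions for illustration only.

```
% Schematic "fill"/"drop" operators for periodicity in z: values of u on the
% plane z=+del are copies of those on z=-del. X is np-by-3; nodes on the two
% planes are assumed to match in (x,y) up to tol. Illustration only, not the
% pde2path implementation; names are ad hoc.
function [fill,drop] = perOpsZ(X,del,tol)
np  = size(X,1);
bot = find(abs(X(:,3)+del) < tol);           % nodes on z = -del (kept)
top = find(abs(X(:,3)-del) < tol);           % nodes on z = +del (filled by periodicity)
partner = zeros(numel(top),1);               % bottom partner of each top node
for i = 1:numel(top)
  d = (X(bot,1)-X(top(i),1)).^2 + (X(bot,2)-X(top(i),2)).^2;
  [~,j] = min(d); partner(i) = bot(j);       % nearest (x,y)-match on z = -del
end
keep = setdiff((1:np)',top);                 % active unknowns
drop = sparse((1:numel(keep))',keep,1,numel(keep),np);   % u_act = drop*u_full
rows = [keep; top];
cols = [(1:numel(keep))'; arrayfun(@(k) find(keep==k), partner)];
fill = sparse(rows,cols,1,np,numel(keep));   % u_full = fill*u_act
end
```

In this schematic, the full nodal vector is recovered as u_full=fill*u_act before forming \(X_{0}+uN_{0}\), redundant unknowns are removed via drop, and, as remarked above for p.mat.fill, the operators have to be rebuilt after mesh adaptation.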
### Triply periodic surfaces Triply periodic surfaces (TPS) are CMC surfaces in \(\mathbb{R}^{3}\) which are periodic wrt three independent (often but not always orthogonal) directions. Triply periodic _minimal_ surfaces (TPMS) (this implicitly also means embedded, sometimes abbreviated as TPEMS) have been studied since H.A. Schwarz in the 19th century, and have found renewed interest partly due to the discovery of new TPMS by A. Schoen in the 1970ies, and due to important (partly speculative) applications of TPMS (and their non-zero \(H\) TPS companions) in crystallography, mechanics and biology. See for instance [1] and [11], and [12] for a long list of TPMS. From the PDE point of view, TPS solve (3) with periodic BCs on a bounding box. Some families of TPMS were studied as bifurcation problems in [13], using a cell length (period) in one direction as continuation/bifurcation parameter, and combined with numerical results from [1]. Much of the theory of TPMS is based on Enneper-Weierstrass representations. See Remark 3.9, where we relate some of our numerical results for the Schwarz P surface family to results from [10] obtained via Enneper-Weierstrass representations. A way to _approximate_ TPS is as zeros of Fourier expansions of the form \[F(\vec{r})=\sum_{k\in\mathbb{Z}^{3},|k|\leq N}F(k)\cos(2\pi k\vec{r}-\alpha(\vec{ r})).\] A simple first order approximation of the Schwarz P surface (cf. Fig.1(b)) is \[\text{Schwarz P surface }\approx\{(x,y,z)\in\mathbb{R}^{3}:\cos(x)+\cos(y)+\cos(z )=0\}, \tag{57}\] Better approximations with some higher order terms are known, also for many other "standard" TPS, see, e.g., [1] for a quantitative evaluation of such approximations. In the demo TPS we focus on the Schwarz P family, and some CMC companions.18 Footnote 18: The approximation (57), and higher order corrections, also arise from solving the amplitude equations for a Turing bifurcation on a simple cubic (SC) lattice, where hence the Schwarz P surface, or, depending on volume fractions a CMC compagnion of Schwarz P, occurs as the phase separator between “hot” and “cold” phases. See, e.g., [10] and [11, 12, 13], and similarly [21] for the occurence of Scherk’s surface in 3D Turing patterns. #### 3.5.1 The Schwarz P minimal surface (family) In TPS/cmds1.m we study continuation (and bifurcation) of the Schwarz P surface in the period \(\delta\) in \(z\)-direction, focusing on one period cell, i.e., the box \[B_{\delta}:=[-\pi,\pi)^{2}\times[-\delta/2,\delta/2). \tag{58}\] To get an initial (approximate) \(X\) on \(B_{2\pi}\), we use (57) and the mesh generator distmesh[14], on one eighth of \(B_{2\pi}\), which we then mirror to \(B_{2\pi}\). The continuation in \(\delta\) proceeds similar to SS3.4, by first scaling \(X=S_{\delta}\texttt{P.X}\) to period \(\delta\) in \(z\) and then setting \(X=X+uN\) and solving for \((u,\delta)\). Subsequently, the same scaling is applied in updX to set the new p.X. As in SS3.4 we have translational invariance in \(x,y\) and \(z\), and hence exactly the same PCs, implemented in qf.m, with derivatives in qjac.m. Somewhat differently from SS3.4 we now also "fill" \(X\) by taking the \(\partial X\) values from the left/bottom/front of the box to the right/top/back of the box. 
While \(u\) is still filled filled via \(u=\texttt{p.mat.fill}*u\), for filling \(X\) we compute matrices p.Xfillx, p.Xfilly, p.Xfillz (in Pinit.m, via Xfillmat, which calls getPerOpX) similar to p.mat.fill, but with \(-1\) (instead of \(1\)) where we want to transfer \(X\) values from one side of the box to the opposite side (assuming symmetry wrt the origin). Finally, it turns out that the continuation is slightly more robust if in getN we correct \(N\) at the boundaries to lie _in_ the boundaries of \(B_{\delta}\), see Remark 3.8. Figure 15 shows some results from cmdsP.m. Decreasing \(\delta\) from \(2\pi\) (P/pt1 in (b) at \(\delta=6.2732\)), \(X\) gets squashed in \(z\) direction, and at \(\delta=\delta_{1}\approx 5.9146\) we find a \(D_{4}\) symmetry breaking pitchfork bifurcation (with the two directions corresponding to interchanging the \(x\) and \(y\) axis wrt shrinking and expansion) to a branch P1, which then extends to large \(\delta\). On the other hand, increasing \(\delta\) from \begin{table} \begin{tabular}{l|l} cmds1.m & Schwarz P family, relation to Weierstrass representation in cmdsaux.m, Figs. 15 and 16. \\ cmds2.m & CMC companions of Schwarz P, Fig. 17 \\ Pinit.m & Initialization, based on (57) and distmesh. \\ getN.m & mod of standard getN, applying corrections at the boundaries of \(X\), see Remark 3.8. \\ getperOpX.m & here we also fill X; see also Xfillmat.m. \\ \end{tabular} \end{table} Table 9: Selected files in TPMS/, others (getA.m, qf.m, qjac.m, updX.m) are rather standard, with minor mods to account for the “filling” of the periodic boundaries. \(2\pi\) (branch pB, grey), we find a fold on the P branch at \(\delta=\delta_{f}\approx 6.408\). Both \(\delta\) values agree well with results from [11] based on the Enneper-Weierstrass representation, summarized in Fig. 15(h), see Remark 3.9. **Remark 3.8**: a) The results from Fig. 15 can also be obtained by choosing "Neumann" BCs on \(\partial B_{\delta}\). However, for other TPMS we need the pBCs. For instance, we can also continue the H surface family on a suitable (almost minimal) rectangular box, where however solutions fulfill pBCs but not Neumann BCs. Due to the necessary larger period cell, and due to branch points of higher multiplicity, the numerics for the H family are more elaborate, and these results will be presented elsewhere. b) In fact, in the local copy TPS/getN.m we apply a trick and zero out \(N_{1}\) at \(x{=}\pm\pi\), \(N_{2}\) at \(y{=}\pm\pi\) and \(N_{3}\) at \(z{=}\pm\delta/2\). Thus, \(N\) is forced to always lie in the cube's faces, yielding a "combination of NBCs and pBCs" in the sense that the trick forces \(X\) to meet the cube's faces orthogonally, while the pBCs keep \(X\) on opposite faces together. However, the trick is for convenience as without it we get the same branches but in a less robust way, i.e., requiring finer discretizations and smaller continuation stepsizes. \(\rfloor\) **Remark 3.9**: The Enneper-Weierstrass representation of a minimal surface is \[\big{(}x,y,z\big{)}=\mathrm{Re}\bigg{[}\mathrm{e}^{\mathrm{i}\vartheta}\int_ {p_{0}}^{p}(1-z^{2},\,\mathrm{i}(1+z^{2}),\,2z)R(z)\,\mathrm{d}z\bigg{]}, \tag{59}\] \(p_{0},p\in\mathcal{M}\) with \(\mathcal{M}\) a Riemannian surface, where \(\vartheta\) is called Bonnet angle, and \(R:\mathcal{M}\to\mathbb{C}\) is called Weierstrass function. The Enneper surface \(E\) from SS3.2.3 is given by the data \(\mathcal{M}=D_{\alpha}\) (disk of radius \(\alpha\)) and \(R(z)\equiv 1\). 
For TPMS, \(R\) is a meromorphic function, and \(\mathcal{M}\) consists of sheets connected at branch points given by poles of \(R\). See, e.g., [12, SS8] for a very readable introduction to Weierstrass data and the connection of minimal surfaces and holomorphic functions, [10] for a basic discussion Figure 15: (a) Bifurcations in the Schwarz P family, black (P) and grey (Pb) branch; bifurcating magenta branch (P1) breaks \(D_{4}\) symmetry. Samples in (b–g). Comparison with [11] in (h), cf. Remark 3.9. of the Weierstrass data of TPMS, [11] for identifying the Riemannian surface \(\mathcal{M}\) for the Schwarz P surface with \(S^{2}\times S^{2}\) by stereographic projection, where \(S^{2}\) is the unit sphere, and [10] for further examples for construction of TPMS from Weierstrass data. Following [11], we consider \(\mathcal{M}\) a double cover of \(\mathbb{C}\), and, for \(a\in(2,\infty)\), let \[R(z)=1/\sqrt{z^{8}+az^{4}+1}, \tag{60}\] where the Schwarz P surface with period cell \([-\pi,\pi)^{3}\) is obtained for \(\vartheta=0\) and \(a=14\).19 See also [12] for the explicit computation of a fundamental patch of Schwarz P based on (59) and (60) with \(a=14\) and a small planar preimage \(\subset\mathbb{C}\). Footnote 19: For \(\vartheta=\pi/2\) we obtain the Schwarz D family, and for \(\vartheta\approx 0.9073\) Schoen’s gyroid, as two further TPMS. Moreover, since these have the same Jacobians as Schwarz P, all bifurcation results from Schwarz P carry over to Schwarz D and the gyroid, but these appear to be much more difficult to treat in our numerical setting. In [11], \(a\) is taken as a bifurcation parameter along the Schwarz P family with the _periods_ for Schwarz P given by [11, SS7.3] \[E =2\int_{0}^{1}\frac{1-t^{2}}{\sqrt{t^{8}+at^{4}+1}}\,\mathrm{d}t +4\int_{0}^{1}\frac{\mathrm{d}t}{\sqrt{16t^{4}-16t^{2}+2+a}}\quad\text{(periods in $x$ and $y$)}, \tag{61}\] \[F =8\int_{0}^{1}\frac{t}{\sqrt{t^{8}+at^{4}+1}}\,\mathrm{d}t\quad \text{(period in $z$)}, \tag{62}\] up to homotheties (uniform scaling in all directions). We have \(\delta=2\pi F/E\) for our \(\delta\), and evaluating \(E,F\) numerically (or as elliptic integrals) and plotting \(\delta(a):=2\pi F/E\) as a function of \(a\) we get the blue curve in Fig. 15(h), which corresponds to [11, Fig.13]. In particular, \(\delta(a)\) has a maximum at \(a=a_{2}\approx 28.778\), and \(\delta(a_{2})=\delta_{f}\) agrees with our fold position in Fig. 15(a). On the other hand, with suitable mesh adaptation the branch P1 continues to at least \(\delta=10\). Next, [11] based on [11] gives a bifurcation from the P family at \(a=a_{1}\approx 7.4028\), and again we find excellent agreement \(\delta(a_{1})=\delta_{1}\) with our BP at \(\delta_{1}\). \(\rfloor\) **Remark 3.10**: a) The fact that P does not extend to "large" \(\delta\) (but folds back) has also been explained geometrically in [12], without computation of the fold position. b) The stability of Schwarz P (and hence also Schwarz D) on a minimal period cell and wrt _volume preserving_ variations is shown in [11]. However, "larger pieces" of P, e.g., P on \([-\pi,\pi)^{2}\times[-2\pi,2\pi)\) are _always_ unstable, even wrt volume preserving variations. See also [1, SS8] for a useful discussion, and illustrations. Numerically, in Fig. 15 we find: \(\mathrm{ind}(X)=2\) except on the segment \(\mathcal{S}\) of P (and Pb) between the fold and the BP at \(\delta_{1}\), where \(\mathrm{ind}(X)=1\). However, the most (and on \(\mathcal{S}\) only) unstable eigenvector has a sign, see Fig. 
16, and hence the solutions on \(\mathcal{S}\) are stable wrt volume preserving variations. \(\rfloor\) #### 3.5.2 CMC companions of Schwarz P In TPS/cmds2.m we compute some CMC companions of the Schwarz P surface, see Fig. 17, where all we have to do is set ilam=[1,5,6,7] as \(H\) sits at position \(1\) in the parameter vector (and the translational Lagrange multipliers at [5,6,7]). Continuing first to smaller \(H\) (black branch PH), \(X\) (the volume enclosed by \(X\) and the boundaries of the cube) "shrinks" and we find a BP at \(H\approx-0.1\). In the other direction (grey branch PHb), \(X\) (the volume enclosed by \(X\)) "expands", with a BP at \(H\approx 0.1\). The continuation of both these branches fails at \(H\approx-0.3\) and \(H\approx 0.3\) (respectively), though they can be continued slightly further with careful mesh adaptation. Our main purpose here is to show how symmetry considerations and some tricks can help to avoid numerical pitfalls. By symmetry, the BP PH/bpt1 (and similarly PHb/bpt1) must be double, although the smallest (in modulus) eigenvalues reported at PH/bpt1 are \(\mu_{1}\approx 0.005\) and \(\mu_{2}=0.02\).20 See Fig. 17(b) for PH/bpt1, and (c) for the (approximate) kernel vectors \(\phi_{1}\), \(\phi_{2}\). In fact, the plot in (b) (stronger correction along the \(z\)-axis) shows that at least the last step in the localization of PH/bpt1 violated the \(S_{4}\) symmetry of the (now fixed) cube, which explains the rather significant splitting of the in principle double eigenvalue \(\mu_{1}=0\). Clearly, we expect \(\phi_{1,2}\) to approximate two bifurcation directions, with \(D_{4}\) symmetry along the \(x\) axis (\(\phi_{1}\)) and \(y\) axis (\(\phi_{2}\)). By symmetry we then must have at least one more bifurcating branch, with \(D_{4}\) symmetry along the \(z\) axis. To find this bifurcation direction, we can use qswibra with numerical derivation and solution of the algebraic bifurcation equation (ABE) [12, SS3.2.2]. However, this is expensive and not always reliable. Here, the three bifurcation directions (oriented along \(x\), along \(y\), and along \(z\)) _are_ returned, but we have to relax the tolerance isotol for identifying solutions of the ABE as isolated. Alternatively, cf. also Footnote 17, we can use qswibra with aux.besw=0 (bifurcation equation switch\(=0\)) to let qswibra just compute and plot the (approximate) kernel \(\phi_{1},\phi_{2}\). This lets us guess to approximate the third direction as \(\phi_{3}=0.2\phi_{1}+\phi_{2}\). This turns out to be sufficiently accurate and gives the transcritical branch(es) za (dark green) and zb (other direction, lighter green). Footnote 20: Additionally, there is a simple negative eigenvalue \(\mu_{0}\approx-0.7\), and the next two eigenvalues are \(\mu_{3,4}\approx 0.5\), i.e., \(\mu_{1,2}\) are well separated from the rest of the spectrum. On za, the continuation fails after pt6. zb/pt6 is at \(H=0\) and corresponds to Pb/pt7 from Fig. 15 Subsequently, zb continues to PHb/bpt1, and is indeed identical to the branch(es) za2 (and zb2), transcritically bifurcating there. In particular, PHb/bpt1 is again double, and we can compute the three branches oriented along \(x,y\) or \(z\) as above (see cmds2.m). zb2 (light orange) then continues back to PH/bpt1, while zb2 fails after pt6 (last sample in (d)). 
The continuation failures of za and za2 after pt6 are due to poor meshes as the different boundaries of \(X\) come close to each other, like after PH/pt16 and PHb/pt15, and it seems difficult to automatically adapt these meshes. Figure 16: Selected eigenvectors at points as indicated, cf. Remark 3.9b). Top: the most unstable direction, which does not change sign. Bottom: the second eigenvector; in (a) this approximately spans the kernel. ### Fourth order biomembranes, cylinders The (dimensionless) Helfrich functional [11] is \[E=\int_{X}(H-c_{0})^{2}+bK\,\mathrm{d}S+\lambda_{1}(A-A_{0})+\lambda_{2}(V-V_{0}), \tag{63}\] where \(c_{0}\in\mathbb{R}\) is called spontaneous curvature parameter, \(b\in\mathbb{R}\) is called saddle-splay modulus, \(A\) and \(V\) are the area and the volume of \(X\), and \(\lambda_{1}\) and \(\lambda_{2}\) are Lagrange multipliers for area and volume constraints \(A(X)=A_{0}\) and \(V(X)=V_{0}\). For closed \(X\), \(\lambda_{1}\) corresponds to a surface tension, and \(\lambda_{2}\) to a pressure difference between outside and inside, and the Euler-Lagrange equation is \[\Delta H+2H(H^{2}-K)+2c_{0}K-2c_{0}^{2}H-2\lambda_{1}H-\lambda_{2}=0, \tag{64}\] together with the area and volume constraints. In this case, the term \(\mathrm{b}\int_{X}K\,\mathrm{d}S\) in (63) can be dropped due to the Gauss-Bonnet theorem, cf. Footnote 1, as \(\int_{X}K\,\mathrm{d}S=2\pi\chi(X)\) is a topological constant. If \(X\) is not closed, then usually the constraints \(A=A_{0}\) and \(V=V_{0}\) are dropped, and \(\lambda_{1}\) in (63) Figure 17: Some results from TPS/cmds2.m. Continuation of Schwarz P in \(H\) at fixed \(\delta=2\pi\). BD in (a): PH (black), PHb (grey), za (dark green) and zb (lighter green), and za2 and zb2 (orange), but altogether these are just two different branches. BP1 on PH and approximate kernel vectors in (b,c), and further samples in (d). See text for details. denotes an external surface tension parameter, and wlog \(A_{0}=0\), and \(\lambda_{2}\) is fixed to \(0\). If in \[\int_{X}K\,\mathrm{d}S=2\pi\chi(X)-\int_{\partial X}\kappa_{g}\,\mathrm{d}s \tag{65}\] we assume \(\gamma=\partial X\) to be parameterized by arclength, then the geodesic curvature \(\kappa_{g}\) is the projection of the curvature vector \(\gamma^{\prime\prime}(\vec{x})\) onto the tangent plane \(T_{\vec{x}}(X)\), see, e.g., [19, SS4.3]. If as before we restrict to normal variations \(\psi=uN\), which moreover fix the boundary, i.e., \[u|_{\partial X}=0, \tag{66}\] then \[\partial_{\psi}E=\int_{X}(\Delta H+2H(H^{2}-K)+2c_{0}K-2Hc_{0}^{2}-2\lambda_{1} H)u\,\mathrm{d}S+\int_{\partial X}(H-c_{0}+b\kappa_{n})\partial_{n}u\,\mathrm{d}s,\] where \(\kappa_{n}=\langle\gamma^{\prime\prime},N\rangle\) is the normal curvature of \(\gamma=\partial X\), i.e., the projection of the curvature vector onto the normal plane, see, e.g., [18] and the references therein. Thus we again obtain (64) (with \(\lambda_{2}=0\)), and additionally to (66) we can consider either of \[\partial_{n}u =0\text{ on }\partial X\text{ (clamped BCs, or Neumann BCs)}, \tag{67}\] \[H-c_{0}+b\kappa_{n} =0\text{ on }\partial X\text{ (stress free BCs)}. \tag{68}\] In the case of (67) we have \(\int_{\partial X}\kappa_{g}\,\mathrm{d}s=0\) in (65), and hence \(\int bK\,\mathrm{d}S\) again becomes constant and can be dropped from (63). Following [17], we first consider "Helfrich cylinders", i.e., (64) with cylindrical topology and BCs (66) and (67). 
In SS3.7 we then consider "Helfrich caps", i.e., disk type solutions with BCs (66) and (68). **Remark 3.11**: a) The original motivation of (63) are the shapes of closed vesicles with a lipid bilayer membrane, in particular red blood cells (RBCs). This motivated much work, e.g., [1, 2, 3, 4], aiming to understand the various shapes of RBCs, mostly in the axisymmetric case. See also [14, 15] for further biological and mechanical background, and [16] for non-axisymmetric shapes. Applying our algorithms to RBCs (without a priori enforcing any symmetry) we recover many of the results from the above references. However, the bifurcation diagrams quickly become very complicated, and therefore our results will be presented in detail elsewhere. See also [10] for a 1D version with an extremely rich bifurcation structure. b) For \(X\) not closed it is an open problem for what parameters, and boundaries and BCs, the minimization of (63) is a well-posed problem. In [17], the following conditions on \(c_{0},b\) and \(\lambda_{1}\) are posed for \(E\) with \(\lambda_{2}=0\) to be definite in the sense that \(E\geq C_{0}\) for some \(C_{0}>-\infty\) for all connected orientable surfaces \(X\) of regularity \(C^{2}\) with or without boundary: \[\text{(i) }\lambda_{1}\geq 0,\qquad\text{(ii) }-1\leq b\leq 0,\qquad\text{and (iii) }-bc_{0}^{2}\leq\lambda_{1}(1+b). \tag{69}\] This proceeds as in [17] by scaling properties of \(E\) for various surfaces composed of planes (of area \(A\)), cylinders (of lengths \(l\) and radius \(r_{c}\)), and (hemi)spheres (of radius \(r_{S}\)), and considering the asymptotics of \(E\) as \(A,l\to\infty\) and/or \(r_{c},r_{s}\to 0\). For instance, the condition (69)(i) arises most naturally by considering \(X\) to contain a plane with \(A\to\infty\), which for \(\lambda_{1}<0\) gives \(E\to-\infty\). On the other hand, in the physics literature no restrictions on \(c_{0},b\in\mathbb{R}\) are given, and in a given problem a fixed \(\partial X\) and the BCs (67) or (68) may make \(E\) definite for much larger ranges than given in (69). In our experiments below we do take parameters to rather extreme values, e.g., \(\lambda_{1}<0\) in Fig. 23 in the sense of a "detour", and \(b=-4\) in Fig. 27, where we find interesting solutions, which can then again be continued to moderate parameter regimes. For closed \(X\), we additionally in general have \(\lambda_{2}\neq 0\), and, moreover, what then matters is the _reduced_ volume defined by dividing \(V\) by the volume of the sphere with the same area. c) For \(\partial X\neq\emptyset\), and in particular for the cases of cylinders and caps considered below, we are not aware of analytic bifurcation results, although [10] presents some results for caps in a slightly different setting. \(\rfloor\) In the demo biocyl we consider a cylindrical topology along the \(x\)-axis with BCs (67). The equation and BCs thus are \[\Delta H+2H(H^{2}-K)+2c_{0}K-2c_{0}^{2}H-2\lambda_{1}H=0, \tag{70a}\] \[X_{2}^{2}+X_{3}^{2}|_{X_{1}=\pm 1}=\alpha^{2},\ \ \mbox{and}\ \ \ \partial_{x}X_{2}|_{X_{1}=\pm 1}=\partial_{x}X_{3}|_{X_{1}=\pm 1 }=0. \tag{70b}\] For \(c_{0}=0\) we have the explicit family \[X_{\rm cyl,\alpha}=(x,\alpha\cos\phi,\alpha\sin\phi),\ \ x\in[-1,1],\ \phi\in[0,2\pi),\ \mbox{with}\ \lambda_{1}=\frac{1}{4\alpha^{2}}, \tag{71}\] of Helfrich cylinders, and [11] proves various existence results for axisymmetric solutions near (71) and in other regimes in the \(\alpha\)-\(\lambda_{1}\) plane. 
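It is worth checking (71) directly: for the cylinder of radius \(\alpha\) the mean curvature \(H=1/(2\alpha)\) is constant (so \(\Delta H=0\)) and \(K=0\), and with \(c_{0}=0\) equation (70a) reduces to \(2H^{3}-2\lambda_{1}H=0\), which forces \(\lambda_{1}=H^{2}=1/(4\alpha^{2})\), exactly the value in (71). The following is a minimal standalone sympy sketch of this check (it is not part of the biocyl demo, and the symbol names are ours):

```
# Standalone check (not pde2path code): the cylinder family (71) solves (70a).
# For X_cyl,alpha we have H = 1/(2*alpha), K = 0, and H is constant, so Lap(H)=0.
import sympy as sp

alpha, lam1, c0 = sp.symbols("alpha lambda1 c0", positive=True)
H, K = 1 / (2 * alpha), 0

# residual of (70a): Lap(H) + 2H(H^2-K) + 2c0*K - 2c0^2*H - 2*lambda1*H
res = 2 * H * (H**2 - K) + 2 * c0 * K - 2 * c0**2 * H - 2 * lam1 * H

# with c0 = 0 the residual vanishes exactly for lambda1 = 1/(4*alpha^2), as in (71)
print(sp.solve(sp.Eq(res.subs(c0, 0), 0), lam1))   # -> [1/(4*alpha**2)]
```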
See also [21] for other shapes with cylindrical topology, which fit into our setting by prescribing other contours at \(x=\pm 1\) instead of the circles of radius \(\alpha\) in (70b). The basic setup again consists in setting \(X=X_{0}+uN_{0}\) (with here \(N\) the inner normal), and then writing (64) as a system of two second order equations for \((u_{1},u_{2})=(u,H)\), namely \[Lu_{2}+Mf(u_{2})+s_{\rm rot}q(X) =0, \tag{72a}\] \[Mu_{2}-H =0. \tag{72b}\] As before, \(L\) is the cotangent Laplacian, \(M\) the (Voronoi) mass matrix, and \(f(u_{2})=2u_{2}(u_{2}^{2}-K)-2\lambda_{1}u_{2}+2c_{0}K-2c_{0}^{2}u_{2}\). The mean curvature \(H=H(u_{1})\) is computed as \(H=\frac{1}{2}\left\langle LX,N\right\rangle\), the Gaussian curvature \(K=K(u_{1})\) is obtained from discrete_curvatures, cf. (28), and \(s_{\rm rot}q(u)\) implements the rotational PC with \(s_{\rm rot}\) an active parameter for branches of non-axisymmetric solutions. The reason for the reformulation of (70) as two second order equations (72) is that this way we can easily implement the _two_ BCs (70b), see sG.m. The external parameters for (70a) are \((\alpha,\lambda_{1},c_{0})\), and continuation in any of these yields interesting results. We first continue in \(c_{0}\) at fixed \(\alpha=1\), since we believe this is the numerically most challenging case due to the exact solution (74). Subsequently we continue in \(\alpha\), and again in \(c_{0}\) at different \(\alpha\), and finally in \(\lambda_{1}\), always starting from the Helfrich cylinder (71). Table 10 lists the command scripts in biocyl/. Besides cylinit.m, sG.m, and getM.m (see Remark 3.12), we then additionally have the PC qfrot.m and its derivative qfrotder.m, and the scripts bdmov1.m and bdmov2.m to produce movies of the BDs in Fig. 23. In hcylbra.m we put \begin{table} \begin{tabular}{l|l} and the mesh-quality data on the branch. In cylinit.m we set p.sw.Xcont=2, such that the colors in solution plots mainly give some visual structure to \(X\), but do not in general give the continuation direction \(uN\), cf. the remarks after (30). The PC multiplier \(|s_{\rm rot}|<10^{-4}\) on all the branches, and in all cases the final \(u\) plotted is of order \(10^{-6}\) or smaller. **Remark 3.12**: Since \(N\) is the inner normal, the stability for (70) refers to the Helfrich flow \[\dot{X}=-[\Delta H+2H(H^{2}-K)+2c_{0}K-2c_{0}^{2}H-2\lambda_{1}H]N, \tag{73}\] with BCs (70b). This is encoded in the dynamical mass matrix \(\mathcal{M}=\begin{pmatrix}M&0\\ 0&0\end{pmatrix}\) in getM, where \(M\) is the 1-component (Voronoi) mass matrix. \(\rfloor\) #### 3.6.1 Continuation in the spontaneous curvature In cmds1.m we fix \(\alpha=1\), and start with the Helfrich cylinder (71), hence \(\lambda_{1}=1/4\). We choose a rather coarse uniform initial mesh of np=1770 points, and first continue to \(c_{0}<0\), yielding the branch c0b in Fig. 18. The solutions contract in the middle and via two folds produce a bistability region around \(c_{0}=-6.25\), but otherwise no bifurcations. As the neck thins, we refine the mesh based on e2rsshape1, cf. (32), with p.sw.rlong=1 and combined with retrigX to avoid obtuse triangles, cf. Remark 2.6. The 2nd plot in (a) shows the distortion \(\delta_{\rm mesh}\), with refinements at pt25, pt35 and pt40, and the second row in (b) shows rather strong zooms into the necks, illustrating the refinement step from pt35 to pt36, and the reasonable mesh in the neck at point pt45. In Fig. 19 we continue to \(c_{0}>0\). 
This is initially easy, but after a first fold near \(c_{0}\approx 1.3\) and then decreasing \(c_{0}\) below \(c_{0}\approx 0.15\) it becomes difficult to maintain mesh distortion \(\delta<100\). We give the full result of our continuation but remark that the behavior on c0 (c0r after refine Figure 18: Results for (72) from biocyl/cmds1.m. \((\alpha,\varepsilon)=(1,1/4)\) fixed, continuation to \(c_{0}<0\), with a bistability region near \(c_{0}=-6.25\). Initially uniform mesh of \(n_{p}=1770\) points, and the 2nd plot in (a) indicates that with suitable refinement (at pt25 and pt35, to \(n_{p}=3561\)) the meshes stay good. Samples in (b), with zooms of the necks to illustrate the meshes. at pt46) after pt30, say, becomes mesh dependent. First, however, at \(c_{0}\approx 0.94\), we find a BP to a non-axisymmetric branch (c1, red), 3rd sample in (c). We then get a second fold at \(c_{0}\approx 0.08\pm\eta\), with \(\eta\in(-0.02,0.02)\) dependent on the mesh. Relatedly, the further continuation becomes somewhat non-smooth and quantitatively depends on the mesh, but qualitatively we get similar behavior for different meshes, namely: The neck becomes thin and short, and the solutions seem to approximate a solution \[X_{\text{2HS}}\text{ consisting of two hemispheres with radius 1, centered at }(x,y,z)=(\pm 1,0,0), \tag{74}\] and hence touching at \((0,0,0)\), see c0r/pt46, at \(c_{0}=0.47\). \(X_{\text{2HS}}\) is an exact solution of (70) with \(\lambda_{1}=1/4\) at \(c_{0}=1/2\), and given our various experiments with mesh refinement we believe that the branch cOr connects to \(X_{\text{2HS}}\). However, the third plot in (a), and the samples in (c) (at \(c_{0}\approx 0.47\), shortly before the supposed connection to \(X_{\text{2HS}}\)) show how the mesh quality seriously degrades as we approach the supposed connection. Also, we have, e.g., \(\text{ind}(X)=3\) at pt33 (with one unstable direction from the fold at \(c_{0}\approx 1.3\), and two from the double BP at \(c_{0}\approx 0.94\)), while the "almost-\(X_{\text{2HS}}\)"-solutions near pt46 have \(\text{ind}(X)=0\), i.e., are stable. Hence, since \(\text{ind}(X)\) should decrease by one at the (supposed) fold between pt33 and pt46, we expect another double BP between pt33 and pt46. However, the behavior of the branch is quantitatively mesh-dependent also near the left fold, and while the changes in \(\text{ind}(X)\) are detected, the bisection loops to localize the fold and/or the BPs do not converge. Thus, altogether the behavior Figure 19: Further results from biocyl/cmds1.m, \((\alpha,\varepsilon)=(1,1/4)\), continuation of Helfrich cylinder to \(c_{0}>0\) (blue branch c0, c0r after mesh refinement), with bifurcation to c1r (red). (a) BDs of area \(A\), energy \(E\), and mesh distortion \(\delta_{\text{mesh}}\). (b) A full sample at c0/pt10, different colormap (yellow\(>\)pink) for better visibility of mesh. (c) further samples, cut at \(x=0\). (d) zoom into solution close to \(X_{\text{2HS}}\). On c0, results after pt33 become somewhat mesh–dependent, but our experiments suggest that the branch connects to \(X_{\text{2HS}}\) at \(c_{0}=1/2\) (with c0r/pt46 shortly before that connection). after pt33_is conjectured_. On the other hand, for \(c_{0}\in(0.3,0.5)\), say, and in particular near pt46, the solutions like pt46 stay stable under mesh refinement. 
In any case, a pinch-off (topological change) which should occur after pt46 as we approach \(X_{\rm 2HS}\) cannot be resolved with our methods, but probably requires the use of some phase field method, see, e.g., [10].21 Footnote 21: For Neumann BCs (67), hemispheres are highly degenerate in the following sense: Setting up (70) over a _single_ circle at wlog at \(X_{1}=1\), and wlog with \((\alpha,\lambda_{1},c_{0})=(1,1/4,1/2)\) the single unit hemisphere is again an exact solution, but we are not able to continue this in \(\lambda_{1}\) or \(c_{0}\), i.e., it seems isolated. This is different for BCs (68), see §3.7. The results for c0r up to pt33 are mesh independent, including the bifurcation of the non-axisymmetric branch c1 at c0/bpt1. The mesh then also degrades on the non-axisymmetric branch (c1, and in a similar way like on the branch c0 but at larger \(c_{0}\)), and since next we focus on non-axisymmetric branches in a slightly different setting we do not further pursue c1 here. #### 3.6.2 Intermezzo: Other radii In cmds2.m we continue \(X_{\rm cyl,\alpha}\) in \(\alpha\), with fixed \((c_{0},\lambda_{1})=(0,1/4)\), see Fig. 20. This shows no bifurcations. Numerically, the case \(\alpha\searrow 0\) is challenging due to boundary layers developing at \(x=\pm 1\), see [12]. Alternating continuation and mesh adaptation based on e2rsshape1 we can reliably continue to \(\alpha=0.05\) and slightly further. In cmds3.m and Fig. 21 we repeat the continuation in \(c_{0}\) from Fig. 18 and Fig. 19 for different starting cylinders \(X_{\rm cyl,\alpha}\), namely \(\alpha=1.4\) in Fig. 21(a) and \(\alpha=0.6\) in (b-d). The main differences to the case \(\alpha=1\) are as follows: For both, \(\alpha=1.4\) and \(\alpha=0.6\), continuing to negative \(c_{0}\) we no longer have the two folds and associated bistability range as in Fig. 18, and the branches seem to extend to arbitrary large negative \(c_{0}\). We refrain from plotting this for \(\alpha=1.4\) in (a), and for \(\alpha=0.6\) in (b) we mainly remark that large negative \(c_{0}\) yields very long and thin necks, which can be handled with adaptive mesh refinement as in Fig. 18. Continuing to positive \(c_{0}\), for \(\alpha=1.4\) in (a), the main difference to Fig. 19 is that the branch c0r only has one fold (at \(c_{0}\approx 2.3\)) and then continues to negative \(c_{0}\), see b/c0r/pt51. The difference to \(\alpha=1\) is that a solution like \(X_{\rm 2HS}\) no longer exists, i.e., two hemispheres of radius \(\alpha=1.4\) (or any \(\alpha\neq 1\)) cannot be near a (genus 1) cylinder type solution. Additionally, the larger radius makes the continuation of the red non-axisymmetric branch b/c1qr easier wrt to mesh handling, see the last sample in (a). For \(\alpha=0.6\) in (c,d), the smaller radius gives a "pearling" behavior after the fold on the blue branch, and it seems likely that the branch s/c0r continues to a solution of type hemisphere-sphere-hemisphere, which would give similar problems as discussed for \(X_{\rm 2HS}\) in Fig. 19. Here we stop the continuation after pt30 (third sample in (d)), as further continuation requires excessive mesh adaptation and the solutions are unstable anyway. The pearling shape is also inherited by the non-axisymmetric branch s/c1 (last sample in (d)). Figure 20: Results for (72) from cmds2.m, continuation in \(\alpha\). 
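Both \(X_{\rm 2HS}\) from (74) and the hemisphere–sphere–hemisphere configuration suggested above are assembled from spherical pieces, and spheres of the right radius are exact equilibria of (70a): with \(H=1/r\), \(K=1/r^{2}\) and \(\Delta H=0\) the equation reduces to \(2c_{0}K-2c_{0}^{2}H-2\lambda_{1}H=0\), i.e., \(r=c_{0}/(c_{0}^{2}+\lambda_{1})\), which gives \(r=1\) at \((c_{0},\lambda_{1})=(1/2,1/4)\), consistent with \(X_{\rm 2HS}\). A standalone sympy sketch of this relation (again not part of the demos; names are ours):

```
# Standalone check (not pde2path code): spheres of radius r are equilibria of
# (70a); for a sphere (inner normal) H = 1/r, K = 1/r^2, Lap(H) = 0, 2H(H^2-K) = 0.
import sympy as sp

r, lam1, c0 = sp.symbols("r lambda1 c0", positive=True)
H, K = 1 / r, 1 / r**2
res = 2 * H * (H**2 - K) + 2 * c0 * K - 2 * c0**2 * H - 2 * lam1 * H

r_star = sp.solve(sp.Eq(res, 0), r)        # -> [c0/(c0**2 + lambda1)]
print(r_star)
print(r_star[0].subs({c0: sp.Rational(1, 2), lam1: sp.Rational(1, 4)}))  # -> 1
```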
#### 3.6.3 Continuation in surface tension In cmds4.m we return to fixed \(\alpha=1\) and continue in \(\lambda_{1}\), starting at \(\lambda_{1}=0.25\) and \(c_{0}=0.7\) corresponding to Fig 19, and in cmds4b.m we repeat this for \(c_{0}=0\), starting from the genuine Helfrich cylinder \(X_{\mathrm{cyl},\alpha}\). In both cases, for increasing \(\lambda_{1}\) the solutions initially only slightly change shape, but at larger \(\lambda_{1}\) (near \(\lambda_{1}=8\) for \(c_{0}=0\), and near \(\lambda_{1}=4\) for \(c_{0}=0.7\)) we get an S-shaped bistability region due to two consecutive folds, similar to the case of \(c_{0}\approx-6\) in Fig. 18 (with fixed \(\lambda_{1}=1/4\)).22 However, we find no bifurcations to non-axisymmetric branches. The case of decreasing \(\lambda_{1}\) is more interesting, and in Fig. 22 we only show the basic result for \(c_{0}=0\) and large \(\lambda_{1}>0\) for completeness. Footnote 22: Such folds were already found in [11, §6.3] for \(c_{0}=0\) in the 1D axisymmetric setting. Figure 21: cmds3.m, continuation in \(c_{0}\) for \((\alpha,\lambda_{1})=(1.4,1/5.6)\) in (a), and \((\alpha,\lambda_{1})=(0.6,2.4)\) in (b–d). Even if \(\lambda_{1}<0\) is in general unphysical, see Remark 3.11(b), continuation to \(\lambda_{1}<0\) yields interesting results, in particular for \(c_{0}>0\). For both, \(c_{0}=0.7\) and \(c_{0}=0\), for \(\lambda_{1}<0\) the solutions start to bulge out in the "middle" (near \(x=0\)), and there are folds around \(\lambda_{1}=-2\) after which we obtain pronounced "tire shapes". Moreover, we obtain bifurcations to \(D_{m}\)-symmetric branches, with increasing angular wave numbers \(m=1,2,3,4,\ldots\). For \(c_{0}=0.7\), the bifurcating branches return to \(\lambda_{1}>0\), such that in Fig. 23(a-f) we obtain four new solutions at \(\lambda_{1}=0.25\). For \(c_{0}=0\), the \(D_{m}\) symmetric branches initially behave similarly, but do _not_ reach \(\lambda_{1}>0\) and instead asymptote to \(\lambda_{1}=0_{-}\), with arbitrary large \(A\), see Fig. 24 and again Remark 3.11(b). Therefore we put \(c_{0}=0.7\) first in Fig.23. Nevertheless, for both \(c_{0}=0\) and \(c_{0}=0.7\), the BCs do not allow "large portions of planes" as solutions and thus yield the folds of the blue branches. Moreover, \(E\) stays bounded below (see Fig. 25), and also for \(c_{0}=0\) in Fig. 24 and Fig. 25(b) it seems that \(\lambda_{1}A\to 0\) as \(\lambda_{1}\nearrow 0\), i.e., that the area grows slower than \(|1/\lambda_{1}|\). The main issue in both, cmds4.m and cmds4b.m (which is essentially a copy of cmds4.m, with a different start (at \(c_{0}=0\)) of the blue branch and hence directory names starting with \(0\)), is the mesh adaptation for the strong budding of the \(D_{m}\) symmetric branches. In detail, to compute the branch, e.g., l1 (\(m=1\), sample in Fig. 23(d)) with refinement (sample in (e)) we proceed as follows. After branch switching at l0/bpt1 (double, by rotational symmetry, but again we just choose the first of the two kernel vectors) we do a few steps without rotational PC, which we then switch on. Subsequently, we continue with mesh adaptation based on area every five continuation steps. This works very robustly, also on the branches with \(m=2,3,4\) and similarly for \(c_{0}=0\) in Fig. 24, and allows continuation to large buds. This shows again that bulging _out_ (i.e., expanding) is typically not problematic wrt meshes. The only unbeautiful effect are the kinks on the \(m=1,2,3,4\) branches in Fig. 23(a), which occur at mesh-refinement. 
To have smoother branches we would need more frequent Figure 23: cmds4.m, continuation in \(\lambda_{1}\) at \(c_{0}=0.7\). In (a): axisymmetric branch l0b (dark blue, sample in (b)), l0 (blue, sample in (c))), and four bifurcating branches, l1 (red, refined to l1qr, samples in (d) and (e), with zoom), l2 (magenta), l3 (orange), l4 (green), samples in (f). The kinks of l1–14 are due to adapative mesh refinement every 5th step (after pt10), from \(\texttt{np}=2700\) to \(\texttt{np}\approx 3600\) at the end of each branch. but less strong refinement, but this is clearly not critical. The branches in Fig. 23 can be continued further by alternating cont and refineX, but this requires some fine-tuning of the cont-refineX loop parameters, and eventually the continuation of the branches l*qr fails again due to bad triangles in the necks, which apparently cannot easily be fixed. Thus, to keep the script cmds4.m simple we stop at the given points. Finally we note that the stabilities in Fig. 23 and Fig. 24 are as expected, namely ind\((X)=m\) on the \(D_{m}\) branch(es). ### Biomembrane caps In the demo biocaps we consider BCs (68) for the circle \(\partial X\)=\(\{x^{2}+y^{2}\)=\(\alpha^{2}\}\) in the \(x\)-\(y\) plane. Thus, \[\Delta H+2H(H^{2}-K)+2c_{0}K-2c_{0}^{2}H-2\lambda_{1}H=0,\] (75a) on \[X\], and, on \[\partial X\], \[u=0\] (since we keep \[\alpha\] fixed throughout), (75b) \[H-c_{0}+b\kappa_{n}=0. \tag{75c}\] In our experiments we fix \(\alpha=1\) and \(\lambda_{1}=1/4\), and first \(c_{0}=1/2\) and vary \(b\), and want to start with the upper unit hemisphere. Then \(H=K=1\) (choosing the inner normal for the hemisphere) and \(\kappa_{n}=1\) and hence (75c) requires \(b=-1/2\). The files in biocaps are as usual, with the main adaption in hcapbra.m to compute \(\int_{X}K\,\mathrm{d}S\) for the energy \(E\), see Remark 3.13c). Fig.26(a) shows the continuation in \(b\). This is mainly intended for subsequent continuation in \(c_{0}\) at negative \(b\), and (b) shows the case of \(b\approx-1.66\). The problem is symmetric under \((c_{0},X_{3})\mapsto-(c_{0},X_{3})\), and in particular at \(c_{0}=0\) we have the flat disk as an exact solution (for any \(b\)). See c00b/pt11 for a nearby solution with \(c_{0}\approx-0.07\), which lies between two folds with exchange of stability. This unstable part will feature interesting bifurcations to non-axisymmetric branches at more negative \(b\), see Fig. 27, while the remainder(s) of the axisymmetric branches are all stable, with the samples c00b/pt34 and c00/pt14 in 26(b) showing the typical behavior at strongly negative or positive \(c_{0}\), **Remark 3.13**: The main numerical challenges and used tricks for (75) are: a) To have a mesh as regular as possible for the initial hemisphere we use a classical subdivision and projection algorithm, see subdiv_hemsphere.m, modified from subdivided_sphere.m from the gptoolbox. Subsequently, we do one mesh-refinement with e2rsbdry as selector, as a good resolution near \(\partial X\) turns out helpful later. The initial mesh then has np=2245 nodes, which for instance at c01b-1q/pt24 is refined (by triangle area, i.e., e2rsA, followed by retrigK) to np=4475. The mesh quality in all our solutions stays quite good, i.e., \(\delta_{\rm mesh}<20\) for all solutions, and mostly \(\delta_{\rm mesh}<10\). b) The boundary \(\gamma=\partial X\) is parameterized by arclength as \(\gamma(\phi)=\alpha(\cos(\phi/\alpha),\sin(\phi/\alpha),0)\). 
Then \(\kappa=\gamma^{\prime\prime}=-\gamma/\alpha\) and the normal curvature on \(\partial X\) reads \(\kappa_{n}=-\dfrac{1}{\alpha}\left\langle N,X\right\rangle\), which is used to implement the BCs (75c). c) The "integral" sum(K) over the discrete Gaussian curvature K always evaluates to \(2\pi\chi(X)\), cf. Footnote 1. Thus we once more use Gauss-Bonnet \(\int_{X}K\,{\rm d}S=2\pi\chi(X)-\int_{\partial X}\kappa_{g}\,{\rm d}s\), where \(\kappa_{g}={\rm sign}(N_{3})\frac{1}{\alpha}\|N\times\gamma\|\), to compute the energy \(E\), and in hcapbra.m we evaluate \(\int_{\partial X}\kappa_{g}\,{\rm d}s\) by a trapezoidal rule. Another alternative is to zero out \(K|_{\partial X}\) before evaluating sum(K), and for comparison we also put that into hcapbra.m. In our computations we find a relative error \(<0.01\) (usually much smaller) between the two methods, when doing the initial refinement step near \(\partial X\) from (a). Figure 26: Initial results for (64),(68) from biocaps/cmds1.m. In (a) we continue in \(b\) starting at b/pt1 from the unit hemisphere with \((\alpha,\lambda_{1},c_{0},b)=(1,1/4,1/2,-1/2)\), to increasing \(b\) (branch b, black) and decreasing \(b\) (branch bb, grey). On b we go to the flat disk (last sample), while on bb the hemisphere bulges out. This is mainly intended for later continuation in \(c_{0}\), and in (b) we do so starting from bb/pt9 at \(b\approx-1.66\). This gives the double well shape for \(E\), with a short unstable segment between the two folds at \(c_{0}\approx\pm 0.33\). See Fig. 27 for the cases of \(b\approx-3.4\) (bb/pt20) and \(b\approx-4\). In Fig. 27 we repeat the continuation in \(c_{0}\) from Fig. 26(b) at more negative \(b\), namely \(b\approx-3.4\) in (a) and \(b\approx-4\) in (b). For lower \(b\), the unstable part of the \(c_{0}\) continuation expands, and we find two (or more, for even lower \(b\)) BPs between the left fold and \(c_{0}=0\), with azimuthal wave numbers \(m=1\) and \(m=2\). As before, these bifurcations are double by \(S^{1}\) symmetry, and to continue the bifurcating branches we set the usual rotational PC after two steps. The blue \(m=1\) branch then behaves similarly in (a) and (b), i.e., it becomes stable after a fold at \(c_{0}\approx-0.2\) (\(b=-3.4\)) resp. \(c_{0}\approx-0.27\) (\(b=-4\)). However, the \(m=2\) branch behaves differently: For \(b=-3.4\) it connects to the symmetric BP at \(c_{0}>0\). For \(b=-4\), the red branch c02b-2q first shows a secondary BP to a branch (c02b-2q-1, green) with broken \(\mathbb{Z}_{2}\) symmetry, and then shows a fold at \(c_{0}\approx-0.21\) where it becomes stable. The branch c02b-2q-1 also shows a fold, at \(c_{0}\approx-0.11\), after which however one unstable eigenvalue remains, i.e., \(\text{ind}(X)=1\) at, e.g., c02b-2q-1/pt20 (last sample in (c)). The somewhat non-smooth Figure 27: Further results from cmds1.m. (a,b) Continuation in \(c_{0}\) from bb/pt20, \((\alpha,\lambda_{1},b)=(1,0.25,-3.4)\), starting from \(c_{0}=0.5\), branches c01b (black, to decreasing \(c_{0}\)) and c01 (grey, to increasing \(c_{0}\)). There are two BPs on the unstable part of c01b for \(c_{0}<0\), and the symmetric BPs for \(c_{0}>0\). The blue branch c01b-1q has azimuthal wave number \(m=1\) and is stable after its fold. The red branch c01b-2q has \(m=2\) and connects to the symmetric BP at \(c_{0}>0\). The 2nd plot in (a) shows where the part \(b\int K\,\mathrm{d}S\) of \(E\) becomes dominant, taking into account the rather large \(|b|\). 
(c) Similarly starting at bb/pt24 with \(b\approx-4\); zoom of BD near upper left fold of the branch c02b (black) similar to c01b from (a). The blue branch is qualitatively as in (a), but now the \(m=2\) branch c02b-2q (red) also folds back giving stable solutions, and there is a secondary BP on it, giving the green branch c02b-2q-1. shape of the red and green branches is due to repeated and heavy mesh refinement, to, e.g., \(n_{p}=5560\) at c02b-2/pt30. In Fig. 28 we continue some \(m=1\) solutions from the blue branch in Fig. 27(b) in \(b\) and in \(\lambda_{1}\), with fixed \(c_{0}\). The unstable black solutions with \(c_{0}\approx-0.35\) in (a) continue to \(b\approx-1.8\), where the branch bifurcates from the axisymmetric branch. This is in contrast to Fig. 26 where for the continuation in \(b\) at \(c_{0}=1/2\) no bifurcations from the axisymmetric branch were found. The blue branch in Fig. 28(a) with \(c_{0}\approx-0.24\) shows a fold at \(b\approx-3.11\). From these and similar experiments, \(m=1\) solutions do not seem to exist for \(b\) somewhat larger than \(-2\), and no stable ones for \(b\) larger than \(-3\). In Fig. 28(b), the black unstable branch continues to large \(\lambda_{1}\), but the stable blue branch again shows a fold, at moderate \(\lambda_{1}\approx 0.44\). This indicates that solutions of this type also do not continue to "large" \(\lambda_{1}\). Nevertheless, while the physical (biological) significance of the parameter regimes and solutions in Figs. 26-28 certainly needs to be discussed, mathematically we have obtained some stable non-axisymmetric solutions. ## 4 Summary and outlook We explained the basic setup of the pde2path extension library Xcont for continuation of 2D submanifolds \(X\) (surfaces) of \(\mathbb{R}^{3}\), and gave a number of examples. These were partly introductory, e.g. the spherical caps in SS3.1, partly classical, e.g., the Enneper minimal surface in SS3.2.3, the nodoids in SS3.3.2 and SS3.4, and the TPS in SS3.5, and partly rather specific such as the Plateau problem in SS3.2.2 or the 4th order Helfrich type cylinders and caps in SS3.6 and SS3.7. Besides [11], and to some extent [10], there seem to be few numerical continuation and bifurcation experiments for such geometric problems for 2D surfaces, i.e., without imposing some axial symmetry, and we are not aware of a general software for such tasks. Figure 28: cms2.m, experiments with continuation of \(m=1\) solutions from Fig. 27 in \(b\) (a), and in \(\lambda_{1}\) (b). The starting points in (a) are from \(c\approx-0.35\) for the black branch bc1, and from \(c\approx-0.24\) for the blue branch bc2 (this is the same solution as c01b-1q/pt16). The continuation in \(\lambda_{1}\) in (b) has the same starting points and the same colors. The unstable branch (now with fixed \((c_{0},b)=(-0.35,-3.4)\)) continues to large \(\lambda_{1}\), but the blue branch with fixed \((c_{0},b)=(-0.24,-3.4)\) has a fold at \(\lambda_{1}\approx 0.44\). The basic setup for all our problems (except (64), which is of 4th order) is similar: We consider CMC surfaces, which mainly differ wrt constraints and/or boundary conditions. Along the way we explained a number of techniques/tricks which we expect to be crucial in many applications. A major problem for continuation (over longer parameter regimes) is the mesh handling as \(X\) changes and hence the mesh distorts. 
We explained how this (often) can be abated via moving of mesh points (moveX), refinement (refineX which sometimes should be combined with re-triangulation by retrigX) and coarsening (coarsenX), and coarsening of degenerate triangles (degcoarsenX), although the choice of the parameters controling these functions often requires some trial and error. In any case, \(X\) bulging out (increasing area) is usually harmless, but bulging in (the development of necks) is more challenging. This is a first step. With the demos we hope to give a pool of applications which users can use as templates for their own problems, and we are curious what other applications users will consider, and of course are happy to help if problems occur. As indicated above, our own further research, to be presented elsewhere, mostly since bifurcation diagrams become complicated, but also since some of the numerics require further tricks, includes: * Further classical minimal surfaces (and CMC companions) such as Schwarz H and Scherk surfaces (surface families); * Bifurcations of _closed_ vesicles in Helfrich type problems. * Coupling of membrane curvature and reaction-diffusion equations for proteins as in, e.g., [17]. ## Appendix A Spheres, hemispheres, VPMCF, and an alternative setup ### Spheres The demo spheres, containing only the (somewhat minimally necessary) files sphereinit.m, sG.m, getM.m, and cmds1.m, is mostly meant to illustrate volume preserving mean curvature flow (VPMCF) near spheres, see Fig. 29.23 We refer to sphereinit.m and cmds1.m for comments (and to geomflow.m and vpmcf.m from libs/Xcont for the VPMCF) and here only note: Footnote 23: Additionally, the demo contains convtest.m for convergence tests, see Fig. 3. 1. The comparison between \(rA/3\) and \(V\) in Fig. 29(a) shows a very small error which indicates that the solutions are good approximations of spheres. 2. For convex closed initial \(X\) (meaning that \(X=\partial\Omega\) for a convex domain \(\Omega\subset\mathbb{R}^{3}\)), the VPMCF converges to a sphere. See also, e.g., [11] for theoretical background. This also holds for "slightly" non-convex initial \(X\) as in Fig. 29(b).24 Footnote 24: In detail, \(X(0)\) here is obtained as \(X(0)=S_{r_{0}}+0.4(\sin(\vartheta)(|x|-r_{0})+\xi)N\), where \(\vartheta\) is the azimuth, and \(\xi=0.2(\texttt{rand}-0.5)\) with \(\texttt{rand}\in[0,1]\) a Matlab random variable on each node. 3. Our (explicit Euler) implementation of the VPMCF does not conserve \(V\) exactly, but with "reasonable accuracy", i.e.: Even for quite "non-spherical" initial \(X(0)\), the error \(|1-V_{\infty}/V_{0}|\)\(<\)0.01, where \(V_{\infty}=\lim_{t\to\infty}V(t)<V_{0}\) in all our tests, i.e., \(V_{\infty}\) is always slightly smaller than \(V_{0}\).25 Footnote 25: This “volume–accuracy” of geomflow for VPMCF depends weakly on the Euler–stepsize dt, and more strongly on the fineness of the discretization of \(X\). This can be checked by changing sw in spheres/cmds1.m for initializing \(X\). 4. During continuation, the position of X is not fixed, i.e., we have the threefold (in the discrete setting approximate) translational invariance in \(x,y,z\), and hence always a three-dimensional (approximate) kernel. This could be removed by suitable translational PCs (see the demo hemispheres). However, the approximate kernel is not a problem for the continuation here since in the Newton loops the right hand sides are orthogonal to this kernel. 
As here we are mostly interested in the VPMCF, for which the translational invariances are irrelevant, we refrain from these PCs to keep the demo s ### Hemispheres In the demo hemispheres we continue in volume \(V\) hemispheres \(X\) sitting orthogonally on the \(z=0\)-plane, i.e., \[\partial_{r}X_{3}=0\text{ where }r=\sqrt{x^{2}+y^{2}}, \tag{76}\] and test VPMCF for perturbations of such hemispheres, see Fig. 30. Additionally we use the PCs \[\int_{X}N_{1}\,\mathrm{d}S=\int_{X}N_{2}\,\mathrm{d}S=0,\quad X=X_{0}+uN_{0}, \tag{77}\] to fix the translational invariance. Compared to the spheres in SSA.1 this requires a few more files, listed in Table 11. The PCs (77) (and \(u\)-derivatives) are implemented in qf2, qjac2, and the rhs \(G(u)=H-H_{0}\) in sGhs is augmented to \(\widetilde{G}(u)=H-H_{0}+s_{x}N_{1}+s_{y}N_{2}\) with multipliers \(s_{x},s_{y}\). These stay \(\mathcal{O}(10^{-6})\) during continuation, and the only effect of the construction is that the 2D kernel of translational invariance of \(\partial_{u}G\) is removed \begin{table} \begin{tabular}{l|l} cmds1.m & continuation in \(V\), and VPMCF flow test. \\ hsinit.m, sGhs.m & init and rhs \\ qf2.m, qjac2.m & PC (77), and derivative \\ getN.m & mod of getN, correction at \(z=0\) \\ diskpdeo2.m & mod of (pde2path–)default diskpdeo2.m to have a finer mesh at \(r=1\). \\ \end{tabular} \end{table} Table 11: Files in pde2path/demos/geomtut/hemispheres. Figure 29: Results from sphere/cmds1.m. (a) Continuation of a sphere in \(V\), with comparison of \(A\) and \(V\). (b) An IC for VPMCF from a perturbation of S3/pt30, with (c) solutions at \(t=1\) and \(t=5\), and a time–series for \(V,A\); \(V\) is conserved to within \(0.5\%\). from the linearization of the extended system \((\widetilde{G},q)\). See the end of cmds1.m for further comments.26 Footnote 26: In cmds1.m we also monitor the “positions” \((x_{0},y_{0})\) of \(X\) defined as \(x_{0}=\int_{X}X_{1}\,\mathrm{d}S\), \(y_{0}=\int_{X}X_{2}\,\mathrm{d}S\). These behave as expected, namely: very small drifts for continuation without PCs, but no drifts for continuation with the PCs (77). The BCs (76) allow motion of \(\partial X\) in the "support-plane" \(z=0\), and are implemented as \[\text{r(idx)=grXz*(X(idx,1).^{\mbox{\small-}2+X(idx,2).^{\mbox{\small-}2})} (\stackrel{{!}}{{=}}0)}\] in sGhs, where as usual idx are the boundary indices, and grXz is the \(z\)-derivative (operator) on \(X\), as before obtained from grX=grad(X,p.tri) and interpolation to the nodes. This forces the \(x,y\)-coordinates of the points on \(X\) directly above the \(z=0\) layer to also fulfill \(x^{2}+y^{2}=1\), i.e., we obtain a "cylindrical socket" for the hemispheres. To mitigate this effect, in hsinit we initialize with a somewhat specialized mesh over the unit disk \(D\) with higher density towards \(\partial D\), which is then mapped to the unit hemisphere via \(z=\sqrt{1-x^{2}-y^{2}}\). Nevertheless, after continuation to larger \(V\), which is combined with some mesh-adaptation by area, we obtain a mismatch between \(rA/3\) and \(V\), see Fig. 30(a,b), and compare Fig. 29(b). In Fig. 30(c,d) we give an example of VPMCF from a perturbation of hs1r/pt50, here of the form \(X|_{t=0}=X+0.4(\cos(\vartheta)(\max(z)-z+0.1(\operatorname{rand}-0.5))N\), where \(\vartheta\) is the angle in the \(z=0\) plane. We _do not use any BCs_ in the rhs vpmcff.m. Instead, we use the correction \[\text{N(p.idx,3)=0; N=normalizerow(N);} \tag{78}\] in getN.m, to let \(N\) lie in the \(x\)-\(y\)-plane, cf. 
Remark 3.8(b) for a similar trick. Without (78), the _continuation_ of hemispheres works as before, but the solutions of the VPMCF start to (slowly) lift off the \(z=0\) plane, and then (quickly) evolve towards a planar \(X\) such that also \(V\to 0\). With (78), the solutions flow back to hemispheres, and \(V\) is again conserved up to \(0.5\%\), and "after convergence" Figure 30: Results from hemisphere/cmds1.m. (a) Continuation of hemisphere in \(V\), with sample at end. (b) Comparison of \(A\) and \(V\) for (a). (c) A perturbation of hsr1/pt50 as IC for VPMCF. (d) Solutions at \(t=1\) and \(t=2\), and time–series for \(V,A\) from (c,d); \(V\) conserved to within \(0.8\%\). (e.g., from \(X|_{t=2}\)) we can start continuation again, showing consistency. ### Spherical caps via 2D finite elements For the sake of completeness and possible generalization, we show how the spherical caps with DBCs can be treated in a classical FEM setting. Let \(\Omega\subset\mathbb{R}^{2}\) be a bounded domain and \(X_{0}\) be a surface with parametrization \(\phi_{0}:\Omega\to\mathbb{R}^{3}\), and as before define a new surface via \(X=X_{0}+u\,N_{0}\), \(u:\Omega\to\mathbb{R}\). Then (1) reads \[G(u,H,V)=\begin{pmatrix}H(X)-H\\ V(X)-V\end{pmatrix}=0,\text{ with }u|_{\partial\Omega}=0. \tag{79}\] This is a quasilinear elliptic equation for \(u:\Omega\to\mathbb{R}\), and after solving (79) we can again update \(X_{0}=X_{0}+uN_{0}\) and repeat. To discretize (79) we now use the FEM in \(\Omega\). The main issue is how to compute \(H\) (and \(A\) and \(V\) and similar quantities) without the gptoolbox, and Table 12 lists the pertinent files. To implement \(H(X)\), \(X=X_{0}+uN\), we here directly use the definition \[H=\frac{1}{2}\frac{h_{11}g_{22}-2h_{12}g_{12}+h_{22}g_{11}}{g_{11}g_{22}-g_{12 }^{2}}\] of the mean curvature based on the fundamental forms of \(X\). This is brute force and in particular neither confirms to a weak (FEM) formulation nor allows simple Jacobians, see Remark A.1. **Remark A.1**: a) In getN and getiff we compute the _nodal_ values for \(X_{x}\) and \(X_{y}\) (via weighted averages of the adjacent triangles), and the corresponding nodal values of \(N\) and \(g_{ij}\); this is needed here as \(X_{x}\) and \(X_{y}\) appear nonlinearly in \(N\), and similarly \(g_{ij}\) appear nonlinearly in \(H\). On the other hand, the second derivatives \(h_{ij}\) only appear linearly in \(H\), and hence we take the second derivatives in get2ff using the _element_ differentiation matrices Kx and Ky. Thus, H in r=-2*H+p.mat.M*(H0*ones(p.nu,1)) in sG is (an approximation of) the element wise mean curvature, and hence H0 must also be multiplied by the mass matrix p.mat.M. b) Since \(G\) as implemented uses products of differentiation matrices, the associated Jacobian \(\partial_{u}G\) has more bandwidth than before, i.e., the sparsity pattern \(S\) of \(\partial_{u}G\) is that of \(M^{2}\), rather than that of \(M\), and thus we use \(\mathtt{S=M^{2}>0}\) in getGupde. \(\rfloor\) ``` functionp=spcapinit(nx,par)%sphericalcap,init,legacysetup p=stanparam();p.sw.spcalc=0;p.sw.bifcheck=0;%setstanparam,overwritesome ``` \begin{table} \begin{tabular}{l|l} cmds1 & Main script; initialization, continuation, plotting. \\ spcapinit & Initialization, rather standard, except for p.X=[x,y,0*x] for the initialization of \(X\). \\ oosetfemops & Setting mass matrix \(M\), and first order differentiation matrices which are used to compute \(H\) in sG (via getmeancurv). \\ sG, qV & rhs and volume constraint. 
\\ getN & compute normal vector. \\ getA, getV & compute \(A(X)\) and \(V(X)\), overload of Xcont functions. \\ getmeancurv & \(H\) from (8), see also getiff, get2ff for 1st and 2nd fundamental forms. \\ e2rs & ElementToRefineSelector function, based on triangle areas. \\ cmcbra, pplot & like Xcont/cmcbra and Xcont/pplot, but overloaded since p.tri not present here. \\ getGupde & overload of library function to deal with larger bandwidth. \\ \end{tabular} \end{table} Table 12: Overview of files in geomtut/spcap2. * pde=diskpdeo2(1,nx,round(nx/2));%diskpreimagediscretization p.pdeo=pde;p.np=pde.grid.nPoints;p.nu=p.np;p.nt=pde.grid.nElements; p.sol.xi=1/p.nu;p.nc.neq=1;p.sw.sfsem=-1;%storedimensions p.fuha.outfu=@cmcbra;p.fuha.e2rs=@e2rs;%branchdata,refinementselector p.sw.Xcont=1;p.plot.style=-1;%calluserplotforplotng * pop_getpute(p);x=p(0,1,;);y=p0(2,;);u=0*ones(p.np,1);%initialsoln p.u=[u;par];p.X=[x,y,0*x];%setIC,includingX p=oostfemops(p);%hereconstantmassandstiffnessmatrices(untilmeshchanges) p.plot.auxdict={'H','V','alpha','A'};p.u(p.nu+4)=getA(p,p.u);%initialarea ``` functionp=oostfemops(p)%legacysetting,precomputeFEMmatrices gr=p.pdeo.grid;fem=p.pdeo.fem;[~,p.mat.M,~]=fem.assema(gr,1,1,1);%mass E=center2PointMatrix(gr);%tompelemdifferentiationmatricestonodalones * p.mat.p2c=point2CenterMatrix(gr);%tinterpolatefromnodestoelemcenters * [Dx,Dy]=fem.gradientMatrices(gr);p.mat.Dx=E*Dx;p.mat.Dy=E*Dy; p.mat.Kx=fem.convection(gr,[0,1]); p.idx=unique(p.pdeo.grid.e(1:2,;));%storebdry-indicesforDBCs functionr=sG(p,u)%PDErhs par=u(p.nu+1:end);H0=par(1);u=u(1:p.nu);al=par(3);%splitinparandPDEu * N0=getN(p,p.X);X=p.X+u.*N0;H=getmeancurv(p,X);%meancurv.basedonFEM r=-2*H+p.mat.M*(H0*ones(p.nu,1));r(p.idx)=u(p.idx)-al;%residual,andDBCs * functionH=getmeancurv(p,X)%meancurvbasedon1std2ndfundamentalform * [E,F,G]=get1ff(p,X);[L,M,N]=get2ff(p,X);H=0.25*(L.*G-2*M.*F+N.*E)./(E.*G-F.-2); functionV=getV(p,u) u=u(1:p.nu);N0=getN(p,p.X);X=p.X+u.*N0; N=cross(p.mat.Dx*X,p.mat.Dy*X,2);%normalatX,NOTnormalized V=sum(p.mat.M*(dot(X,N,2)))/3; ``` Listing 16: spcapinit, oosetfemops, sG, getmeancurv and getV from spcap1. See sources for, e.g., get1ff, get2ff, and the Element2RefineSelector function e2rs used for mesh adaptation. Despite the caveats in Remark A.1, we can now set up a simple script (Listing 17) and produce a continuation diagram and sample plots fully analogous to SS3.1. For mesh-refinement, to select triangles to refine we use the areas on \(X\), see e2rs, but otherwise the adaptive mesh refinement works as usual in the legacy setting of pde2path via oomeshada. ``` 1%%cmcsphericalcaps,init;parswillbecoverwritteninspcapinit nx=10;al=0;h0=0;v0=0;a0=0;par=[h0;v0;al;a0];%initialpars p=spcapinit(nx,par);plotsol(p);p=setfn(p,'cap1');p.sol.ds=0.01; p.nc.dsmax=0.2;p.nc.uslram=[2 4];p.plot.bpcmp=1; p.nc.ilam=[2 1];p.nc.nq=1;p.fuha.qf=@qV;p.fuha.qfder=@qVjac;%contV,freeH * p.sw.jac=0;p.sw.jac=1;%usingnumericalJacsforG,andapproximateq_u p=cont(p,10);%go *%exampleofmesh-adaptation;loadingsolnusefulfortestingparameters %suchasngen(numberofadaptionloops)andsig(fracoftrianglestorefine) p=loadp('cap1','pt10','cap1r');p.nc.ngen=1;p.nc.sig=0.2;p=oomeshada(p); 11p=cont(p,20);%continuerefinedsolution ``` Listing 17: Short script cmds1.m from geomtut/spcap. As already said, the main advantage of this setup is simplicity in the sense that the function getmeancurv (based on get1ff and get2ff) is a direct translation of the differential geometric definition. 
Moreover, we can work with fixed preassembled differentiation matrices (independent of \(X\)) as long as the mesh (in \(\Omega\)) is fixed. The main disadvantage is that this implementation of (8) is not a weak form but mixes FEM and FD differentiation matrices.
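As a sanity check of the fundamental-form definition of \(H\) used in getmeancurv (cf. (8)), the following standalone sympy sketch (independent of the FEM setup; the symbol names are ours) evaluates the first and second fundamental forms for an explicitly parameterized sphere of radius \(R=2\) and recovers \(H=1/R\); note that the sign of \(H\) flips with the orientation of the normal, and here we take the inner normal.

```
# Standalone sketch (not the spcap2 FEM code; names are ours): mean curvature
# from the first/second fundamental forms, checked on a sphere of radius R = 2,
# where it must give H = 1/R = 0.5 with the inner normal.
import sympy as sp

u, v = sp.symbols("u v")
R = 2
X = R * sp.Matrix([sp.sin(u) * sp.cos(v), sp.sin(u) * sp.sin(v), sp.cos(u)])

Xu, Xv = X.diff(u), X.diff(v)
N = -Xu.cross(Xv)                 # inner normal (for 0 < u < pi)
N = N / N.norm()

E, F, G = Xu.dot(Xu), Xu.dot(Xv), Xv.dot(Xv)   # first fundamental form
L = X.diff(u, 2).dot(N)                        # second fundamental form
M = X.diff(u).diff(v).dot(N)
Nn = X.diff(v, 2).dot(N)

H = (L * G - 2 * M * F + Nn * E) / (2 * (E * G - F**2))
print(float(H.subs({u: 1.0, v: 0.5})))         # -> 0.5 (= 1/R, up to rounding)
```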
2308.00066
Signal Synchronization Strategies and Time Domain SETI with Gaia DR3
Spatiotemporal techniques for signal coordination with actively transmitting extraterrestrial civilizations, without the need for prior communication, can constrain technosignature searches to a significantly smaller coordinate space. With the variable star catalog from Gaia Data Release 3, we explore two related signaling strategies: the SETI Ellipsoid, and that proposed by Seto, which are both based on the synchronization of transmissions with a conspicuous astrophysical event. This dataset contains more than 10 million variable star candidates with light curves from the first three years of Gaia's operational phase, between 2014 and 2017. Using four different historical supernovae as source events, we find that less than 0.01% of stars in the sample have crossing times, the times at which we would expect to receive synchronized signals on Earth, within the date range of available Gaia observations. For these stars, we present a framework for technosignature analysis that searches for modulations in the variability parameters by splitting the stellar light curve at the crossing time.
Andy Nilipour, James R. A. Davenport, Steve Croft, Andrew P. V. Siemion
2023-07-31T18:37:20Z
http://arxiv.org/abs/2308.00066v1
# Signal Synchronization Strategies and Time Domain SETI with Gaia DR3 ###### Abstract Spatiotemporal techniques for signal coordination with actively transmitting extraterrestrial civilizations, without the need for prior communication, can constrain technosignature searches to a significantly smaller coordinate space. With the variable star catalog from Gaia Data Release 3, we explore two related signaling strategies: the SETI Ellipsoid, and that proposed by Seto, which are both based on the synchronization of transmissions with a conspicuous astrophysical event. This dataset contains more than 10 million variable star candidates with light curves from the first three years of Gaia's operational phase, between 2014 and 2017. Using four different historical supernovae as source events, we find that less than 0.01% of stars in the sample have crossing times, the times at which we would expect to receive synchronized signals on Earth, within the date range of available Gaia observations. For these stars, we present a framework for technosignature analysis that searches for modulations in the variability parameters by splitting the stellar light curve at the crossing time. Astrobiology, Astrometry, Search for extraterrestrial intelligence, Technosignatures 0000-0002-4000]Andy Nilipour 0000-0002-4882-7885]James R. A. Davenport 0000-0002-4882-7885]Steve Croft 0000-0002-4882-7885]Andrew P. V. Siemion ## 1 Introduction Searches for technosignatures, the detection of which would be indicative of extraterrestrial intelligence, must grapple with a vast, multi-dimensional parameter space that leaves unknown the nature of the signal, and the spatial and temporal locations of the transmission (Wright et al., 2018). Because we are unable to constantly monitor across all possible remote sensing modalities, we must select when, where, and how to conduct our observations. An advanced extraterrestrial civilization could infer this searching difficulty, and may synchronize its signal with some conspicuous astrophysical event. These source events, which should be easily visible and uncommon, act as "Schelling" or focal points (Schelling, 1960), allowing for coordination despite a lack of communication between the two parties. The distribution of stars from which a transmission synchronized with an event that would be observable on Earth forms a so-called search for extraterrestrial intelligence (SETI) Ellipsoid, and it can be used to greatly constrain the number of technosignature observation candidates. Though this framework has been described in the past (e.g., Makovetskii, 1977; Lemarchand, 1994), a precise application of the SETI Ellipsoid requires precise stellar positions and distances, which has only recently been made possible by Gaia. Gaia Early Data Release 3 (Gaia EDR3; Gaia Collaboration et al., 2021) contains the full astrometric solution of nearly 1.5 billion stars, which has allowed for the calculation of precise photogeometric distances (Bailer-Jones et al., 2021). Decreased uncertainties in stellar distances reduce the timing uncertainty of observations using the SETI Ellipsoid scheme. However, the Gaia catalog contains stars up to tens of kpc away, and at these large distances, the errors in distance translate to impractically high timing uncertainties. 
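To put rough numbers on this, a distance error \(\sigma_{d}\) translates, up to a geometric factor of order unity, into a timing error of about \(\sigma_{d}/c\), i.e., \(\sigma_{d}\) expressed as a light-travel time; even a 10 pc uncertainty already corresponds to decades. The following back-of-the-envelope sketch (our own illustration, not taken from any released code) tabulates this conversion:

```
# Our own order-of-magnitude illustration: a stellar distance error sigma_d
# corresponds, up to a geometric factor of order unity, to a signal-timing
# error of about sigma_d / c, i.e. sigma_d expressed in light-years.
PC_TO_LYR = 3.26156  # 1 parsec in light-years

for sigma_d_pc in (1, 10, 100, 1000):
    print(f"sigma_d = {sigma_d_pc:5d} pc  ->  ~{sigma_d_pc * PC_TO_LYR:8.1f} yr")
```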
To mitigate the effect of stellar distance uncertainties, Seto (2021) considers a framework in which an extraterrestrial civilization follows a certain geometric signaling scheme with a time-dependent directionality, rather than sending out an isotropic transmission at one point in time, and observers on Earth follow a closely linked receiving scheme. In this approach, shown in Figure 1, rather than observing stars lying on an ellipsoid in Galactic Cartesian coordinates at a given point in time, we would instead observe along two concentric rings on the celestial sphere, up to a certain depth. Using this, we do not need to precisely know the distance to the candidate star, only that it is less than some upper bound, giving an advantage to the Seto scheme. With the SETI Ellipsoid and the Seto scheme, we can assign each star an 'Ellipsoid' and a 'Seto' crossing time that indicates when that star should be observed according to the respective coordination framework. Scheduling future observations is possible with this methodology, as is looking through archival and publicly available data. Gaia Data Release 3 (Gaia DR3; Gaia Collaboration et al., 2023) contains variability analysis and type classification, along with epoch photometry, for approximately 10.5 million stars. Though these photometry data are limited, one way we can search for extraterrestrial intelligence signals is to perform independent variability analysis on the stellar light curves before and after the stars' crossing time. Such an analysis would be primarily sensitive to modulations in a periodic star's frequency, phase, and amplitude, as well as any other statistically significant differences in light-curve properties before and after a star's crossing time, which may be a form of information transmitted by a sufficiently advanced civilization. Figure 1: The angles of observation for the Seto scheme. Synchronized signals from stars along the four blue lines, which correspond to two conical shells when rotated about the horizontal axis, will be observable on Earth now; these systems are the current technosignature candidates for this signaling framework. The path from the source event through the potentially transmitting civilization to Earth, through a pair of red and blue lines, forms a right angle at a point on the SETI Ellipsoid, where the technosignature candidate is located. In this paper, we use Gaia to explore the SETI Ellipsoid and the Seto scheme as techniques for analyzing past observations in addition to prioritizing future technosignature searches.1 In Section 2 we present the sample of variable stars identified in Gaia DR3. In Section 3 we discuss the geometry of, and our methods of target selection relative to, the SETI Ellipsoid, and in Section 4 we do the same for the Seto scheme. In Section 5 we explore our candidate sample derived from both methodologies. In Section 6 we present a novel time domain SETI approach that we apply to a subset of the periodic variables in the sample. Finally, in Section 7 we conclude with a summary of our work, its limitations, and possibilities for future work. Footnote 1: Source code available at Nilipour et al. 
(2023) ## 2 Gaia Data Release 3 Gaia DR3 (Gaia Collaboration et al., 2023), the most recent data release of the Gaia mission, complements the astrometric and photometric data of the previous EDR3 (Gaia Collaboration et al., 2021) with spectra, radial velocities, astrophysical parameters, and rotational velocities for subsets of the full catalog, as well as epoch photometry and variability analysis for 10.5 million stars, which is of particular interest for this work. Each of these stars is placed into one of 24 variable classifications. Although stars that have not been classified as a known variable type by Gaia could still harbor a signal in their light curve, such as a single flux measurement with a significant deviation observed at the crossing time, the epoch photometry for these stars is not yet available, and so not considered for this analysis. However, our approach described in Section 6 can be easily applied to such stars when their light-curve data are released. With a prior that factors in Gaia's magnitude limit and interstellar extinction, Bailer-Jones et al. (2021) produced photogeometric distance estimates (using the Gaia parallax, color, and apparent magnitude) for 1.35 billion stars. Bailer-Jones et al. give the median, \(r_{\rm med}\), and the 16th and 84th percentiles, \(r_{\rm lo}\) and \(r_{\rm hi}\), respectively, of the photogeometric posterior. In this work, the distances used are \(r_{\rm med}\) and the distance uncertainties used are the symmetrized uncertainty \(\sigma_{r}=(r_{\rm hi}-r_{\rm lo})/2\). Of the 10.5 million total variable stars, 9.3 million have Bailer-Jones distances; henceforth, we refer to this set as the Gaia catalog of variable stars. As seen in Figure 2, the distance uncertainties in these measurements are generally large, which makes timing uncertainty for signal synchronization via the SETI Ellipsoid, which is directly related to the stellar distance uncertainty, unfeasible. Around 0.5% of Gaia variable stars have a distance uncertainty less than 1 ly, with the distances to all of these stars less than 2000 ly. Gaia epoch photometry has excellent long-term stability, which is useful when considering the potentially large timing uncertainties on the Figure 2: Two-dimensional histogram of the distance uncertainties for the variable stars in Gaia DR3, with distances less than 5000 lyr. The median and mode distance uncertainties are shown in the dashed and solid lines, respectively. Stars with a distance uncertainty less than 1 lyr are the most ideal; there are 43,237 such stars total, mostly at low distances. stellar crossing times. However, the light curves for individual stars are sparse, which makes searching for transient signals from extraterrestrial civilizations challenging. Instead, our search for signals, described in Section 6, relies on longer-term signals involving the variability of the host star. Additionally, the epoch photometry in Gaia DR3 only includes data from a three-year period between 2014 and 2017, and thus contains less than half of the data collected to date, which limits our target selection accordingly. ## 3 Target Selection via the Seti Ellipsoid The SETI Ellipsoid framework is based on the simplest synchronized communication system, in which an extraterrestrial civilization transmits an isotropic signal when it observes the source event, and has been previously proposed as a method to constrain the technosignature search space (Lemarchand, 1994). 
The geometry of the SETI Ellipsoid is shown in Figure 3 and described here. All stars that have observed a source event are contained within the sphere centered at the event with radius \[\mathcal{R}=\mathcal{D}+\mathscr{T}\] where \(\mathcal{D}\) is the distance from the event to Earth and \(\mathscr{T}=cT\) is the current time elapsed since Earth observation of the source event (\(T\)) times the speed of light (c). Suppose each of these stars hosts an extraterrestrial civilization that Figure 3: Diagram of the SETI Ellipsoid, which has its foci at the source event (purple dot) and Earth (green dot). The blue dashed line represents the information front of the source event; stars outside this sphere (blue dot 1) have not yet observed the event. Stars outside the ellipsoid (orange dot 2) have observed the event, and if they transmitted a signal upon observation, such a signal would arrive at Earth in the future. For stars on the ellipsoid (pink dot 3), their synchronized transmission would be observable on Earth now; at any time, these are the current technosignature candidates. Signals from stars inside the ellipsoid (red dot 4) would have been received on Earth in the past. has transmitted a signal upon its observation of the source event. The stars from which these emitted signals reach Earth at any given time lie on the SETI Ellipsoid. Formally, the ellipsoid, which has foci at Earth and the source event, is defined by \[d_{1}+d_{2}=\mathcal{D}+\mathscr{T}\] where \(d_{1}\) is the distance from the event to the star and \(d_{2}\) is the distance from the star to Earth. As can be seen in Figure 3, the linear eccentricity of the ellipse is \[\mathscr{C}=\frac{\mathcal{D}}{2}\] and the semi-major axis is \[\mathscr{A}=\frac{\mathcal{D}+\mathscr{T}}{2}\] So, the semi-major axis grows at half the speed of light. Using this schematic, we can categorize all stars into four interest levels, which correspond to the labels 1-4 in Figure 3: 1. Stars outside the sphere of radius \(\mathcal{R}\), which have not yet seen the source event. Synchronized transmissions from these stars will not be observable until a minimum of \(T\) years from now, which, for most events, is unreasonably far in the future. 2. Stars within the radius \(\mathcal{R}\) sphere but outside the SETI Ellipsoid, from which a synchronized signal would have already been sent but not received on Earth. Signals from these stars will not be observable for a maximum of \(T\) years; the stars in this category that are closer to Earth may be potential candidates for scheduling future technosignature searches. 3. Stars on the SETI Ellipsoid. Synchronized signals from these stars would be arriving at Earth now, making these ideal candidates for immediate observations. 4. Stars inside the SETI Ellipsoid, from which synchronized signals would have already been received in the past. Archival data Figure 4: Current SETI Ellipsoid with SN 1987A as the source event, in galactocentric \(Y\) and \(Z\) coordinates. The red dots are stars that have not yet seen SN 1987A; the blue dots have seen it, but are outside the SETI Ellipsoid. The pink dots are stars inside the SETI Ellipsoid, and the black dots are those within 0.1 ly of the SETI Ellipsoid. or previous observations can be used to explore these stars. It may be unreasonable to assume that an extraterrestrial civilization will send a signal immediately upon observing a conspicuous event. More realistically, there will be some delay between reception and transmission. 
Another possible reason for delay is that the civilization may wait until a time-resolved light curve of the event is complete before broadcasting a mimicked copy of the signal. For example, if the source event is a supernova (SN), they may send a transmission that will produce a light curve of the same shape and width of the supernova light curve, once the supernova luminosity has been reduced to some fraction of its maximum value. In this case, stars inside, but close to, the SETI Ellipsoid are also strong candidates for technosignature searches. Davenport et al. (2022) focus on the SETI Ellipsoid with SN 1987A as the source event utilizing the Gaia Catalog of Nearby Stars (GCNS), which consists of 331k stars within the 100 pc neighborhood. SN 1987A was chosen because of its recency and proximity, but for a Galactic signaling scheme, it may be preferable to use source events within the Milky Way. We therefore add SETI Ellipsoids with respect to SNe 1604, 1572, and 1054, which have well-known dates of observation. Figure 3 shows that the closest point on the SETI Ellipsoid to Earth is at distance \(\frac{\mathscr{T}}{2}=\frac{cT}{2}\), so for source events with an age greater than about 650 yr, the SETI Ellipsoid will be entirely outside the 100 pc neighborhood, and even for those with age of a few hundred years, the SETI Ellipsoid will be largely incomplete in the 100 pc neighborhood. We thus expand our search to fully utilize the data available in Gaia DR3. Furthermore, Davenport et al. (2022) primarily consider the _present_ SETI Ellipsoid of SN 1987A, which tells us only which category each star falls in, as described above. To calculate which stars are currently on the SETI Ellipsoid, we determine the distances \(d_{1}\) and \(d_{2}\) for each star, then use the inequality \[|d_{1}+d_{2}-2\mathscr{A}|\leq\tau\] where \(\tau\) is a 0.1 lyr distance tolerance of being on the Ellipsoid. The current SETI Ellipsoid for SN1987A is presented in Figure 4, expanded to include most of the Gaia variable star catalog. Davenport et al. (2022) also calculate the crossing time for each star, \[T_{x}=\frac{d_{1}+d_{2}-\mathcal{D}}{c}\] which tells us the time at which the star will be on the SETI Ellipsoid, and they search through the Gaia Science Alerts archive (Hodgkin et al., 2021) for any variability alerts from any stars with crossing times within Gaia's operational phase. The crossing time diagram for SN 1987A is shown in Figure 5. We further develop this idea by selecting only stars that have crossing times, including the error bounds, between mid-2014 and mid-2017, and then performing a variability analysis that compares the light curve before and after the crossing time. Our analysis is described in more detail in Section 6. The crossing time is dependent on distances to both the source event and stars, as well as the time of Earth observation of the source event, so uncertainties in these directly affect uncertainties in the time of arrival of signals to Earth. For all the SNe considered, the uncertainty in the date of observation is negligible. For small angles, the timing uncertainty is dominated by the distance uncertainty to the star, because the light travel time from the source event to Earth and the candidate star is nearly identical. This is an additional reason why Davenport et al. (2022) use SN 1987A as a source event, because its distance is large relative to nearby stars. 
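The same geometry yields the crossing time directly, and the effect of the stellar distance uncertainty can be gauged by simply re-evaluating the crossing time at \(r_{\rm med}\pm\sigma_{r}\). The sketch below does this for a toy star; the plain Cartesian coordinates, the window endpoints, and all numbers are simplifying assumptions for illustration only.

```python
import numpy as np

def crossing_time(star_xyz, event_xyz, D_ly):
    """SETI Ellipsoid crossing time T_x = (d1 + d2 - D)/c, in years after
    Earth's observation of the event.  Earth sits at the origin and all
    distances are in light-years, so c = 1 ly/yr."""
    star = np.asarray(star_xyz, dtype=float)
    event = np.asarray(event_xyz, dtype=float)
    d1 = np.linalg.norm(star - event)   # event -> star
    d2 = np.linalg.norm(star)           # star  -> Earth
    return d1 + d2 - D_ly

def crossing_time_with_error(direction_unit, r_med, sigma_r, event_xyz, D_ly):
    """Evaluate T_x at r_med and at r_med +/- sigma_r along the star's
    line of sight to bracket the timing uncertainty."""
    u = np.asarray(direction_unit, dtype=float)
    times = [crossing_time(u * r, event_xyz, D_ly)
             for r in (r_med - sigma_r, r_med, r_med + sigma_r)]
    return times[1], min(times), max(times)

# Toy star almost along the line of sight to an event ~168,000 ly away,
# checked against a Gaia-like window ~27.5-30.5 yr after the event was seen.
event = [168_000.0, 0.0, 0.0]
t, t_lo, t_hi = crossing_time_with_error([0.9695, 0.2452, 0.0], 950.0, 0.4, event, 168_000.0)
print(round(t, 2), (27.5 <= t_lo) and (t_hi <= 30.5))
```

In this nearly aligned configuration only the stellar distance uncertainty enters the error bracket, mirroring the small-angle argument above.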
However, this is not generally the case for Galactic SNe; the timing uncertainties for these are significantly affected by the distance uncertainties to the SNe, which are listed in Table 1. For many stars, these large SNe distance errors will translate to timing uncertainties that make their observation impractical, so we disregard these errors and echo the sentiment of Seto (2021) that we, along with several other fields of astrophysics, await high-precision distances to these objects. These are projected to become available through future projects such as the Square Kilometre Array, which will have many small dish antennas with a wider field of view that will improve parallax measurements of very radio-bright sources such as the Crab pulsar (Kaplan et al., 2008), which corresponds to SN 1054. In total, we find 465 targets with crossing times and error bounds within the date range of available Gaia epoch photometry using the SETI Ellipsoid for the four SNe. Their spatial distribution is shown in Figure 6. \begin{table} \begin{tabular}{c c c c} \hline SN & Distance (kpc) & Distance Uncertainty (kpc) & Source \\ \hline \hline SN 1987A & 51.5 & 1.2 & Panagia (1999) \\ SN 1604 & 5.1 & +0.8/-0.7 & Sankrit et al. (2016) \\ SN 1572 & 2.75 & 0.25 & Tian \& Leahy (2011) \\ SN 1054 & 1.93 & 0.11 & Trimble (1973) \\ \hline \end{tabular} \end{table} Table 1: Distances and Uncertainties to the Four SNe Used for Our Signaling Methods. Figure 5: Crossing time diagram for the SETI Ellipsoid with SN 1987A as the source event, in galactocentric \(Y\) and \(Z\) coordinates, with \(|X|<150\) pc. The vast majority of stars will not be on the SETI Ellipsoid for thousands of years. The contour lines show the growth of the Ellipsoid. 
The line segment Ex in Figure 1, with length \(d=l\cdot\cos(\theta)\) represents the depth we can search to, because any extraterrestrial civilizations on the line beyond \(x\) would need to be able to predict the source event in order to send a signal that will reach Earth in synchronization with a signal from the datum point. The time on Earth to observe each angle is given by the time of arrival of a signal sent from the datum point, \[t_{E}(\theta)=(t_{0}+t(\theta))+\frac{l\cdot\cos(\theta)}{c}-(t_{0}+\frac{l}{ c})\] \[=\frac{l}{c}(\sin(\theta)+\cos(\theta)-1)\] where \(t_{E}\) is the time on Earth since the observation of the source event (which occurs at time \(t_{0}+\frac{l}{c}\)). To simplify, Seto (2021) introduces the normalized time, \[\tau_{\rm E}\triangleq\frac{c}{l}t_{E}\] in which case the normalized times of observation are \[\tau_{\rm E}(\theta)=\sin(\theta)+\cos(\theta)-1\] The above equation tells us the time to observe a specific angle in the sky; this can give us the crossing times of stars, directly analogous to the SETI Ellipsoid crossing times, because we can trivially calculate the angle between two sets of celestial coordinates (i.e., that of the source event and that of each star). The crossing time diagram via the Seto scheme for SN 1054 is shown in Figure 7. To find what angles to observe at given times, which is useful for forecasting targets of interest, we can invert the above equation. For \(0\leq\tau_{\rm E}\leq\sqrt{2}-1\), there exist two solutions: \[\theta_{\pm}=\frac{\pi}{4}\pm\cos^{-1}\bigl{(}\frac{1+\tau_{\rm E}}{\sqrt{2}} \bigr{)}\] So, for normalized times \(\tau_{\rm E}<\sqrt{2}-1\), the directions to observe form two concentric rings on the celestial sphere centered at the direction of the source event. The \(\theta_{-}\) ring begins as a point in the event direction and expands over time, while the \(\theta_{+}\) ring begins as a great circle perpendicular to the event direction and shrinks over time; at \(\tau_{\rm E}=\sqrt{2}-1\), the rings merge, and beyond that, the search window of the source event has closed, and half the sky has been covered. As noted previously, this search framework only requires that the distance to the star be less than the search depth of the respective angle, \(d=l\cdot\cos(\theta)\). At \(\tau_{\rm E}=0\), the \(\theta_{-}\) ring has a search depth of \(l\) and the \(\theta_{+}\) ring has a search depth of \(0\); these values decrease and increase, respectively, to a final value of \(\frac{\sqrt{2}l}{2}\). Though not essential for a civilization that is only searching, the signaling schematic is similarly time-dependent. The directions that an extraterrestrial civilization following the Seto framework must transmit in are identical to the searching directions, but reflected across the plane perpendicular to the source event direction. In other words, the two signaling rings have the same angles as the receiving rings, but are concentric about the direction antipodal to the event direction. A strong connection exists between the SETI Ellipsoid and the Seto scheme. In particular, the two search directions in the Seto scheme correspond to the angles at which the lines \(d_{1}\) and \(d_{2}\) in Figure 3 are perpendicular, and the search depth is the length \(d_{2}\). This can be seen through the equation \[t_{E}(\theta)=\frac{l}{c}(\sin(\theta)+\cos(\theta)-1)=\frac{d_{1}+d_{2}-l}{c}\] Figure 8: Stars with Seto scheme crossing times within dates of available Gaia light-curve data for SNe 1604, 1572, and 1054. 
Each supernova has two rings of candidates, corresponding to the \(\theta_{+}\) and \(\theta_{-}\) rings. There are 403 candidates total. Figure 7: Seto scheme crossing time diagram with SN 1054 as the source event. This scheme covers half the sky, centered at SN 1054, with two rings, one expanding with time and one contracting. The contour lines show the progression of these two rings. For SN 1054, the search window closes at around the year 3620, corresponding to a normal time of \(\sqrt{2}-1\). which is identical to the crossing time equation for the SETI Ellipsoid. As seen in Figure 1, two such angles exist, corresponding to \(\theta_{+}\) and \(\theta_{-}\), which form two rings in the celestial sphere when rotated about the Earth-source event axis. The normalized time \(\tau_{\rm E}=\sqrt{2}-1\) corresponds to the point at which no such perpendicular lines exist in the SETI Ellipsoid. Although stellar distance uncertainties do not factor into the timing uncertainties using the Seto scheme, distance errors to the SNe will have an effect. The timing uncertainty is \[dt=\partial t_{\theta}+\partial t_{l}\] This is strongly dominated by the \(\partial t_{l}\) term, because the angular positions of the stars and SNe have extremely precise values, so \(d\theta\) is small. So, \[dt=\frac{dt_{E}}{dl}dl=\frac{dl(\sin(\theta)+\cos(\theta)-1)}{c}\] which has a maximum value of \(dl\frac{\sqrt{2}-1}{c}\) in the domain of \(\theta\). Hence, precise measurements of SNe distances are necessary for constraining crossing times to within a reasonable interval. However, as noted in Section 3, the distances to supernova remnants have large uncertainties corresponding to unfeasibly large timing uncertainties for most angles. Thus, we again disregard these errors. For our Seto scheme target selection, we again choose stars with crossing times within the dates of Gaia epoch photometry, excluding error bounds as noted above, and with distances less than the search depths. We use SNe 1604, 1572, and 1054 as source events. SN 1987A is excluded because its normalized time is very small, so the \(\theta_{+}\) ring has a very low depth and the \(\theta_{-}\) ring is very thin; when calculated, there are zero stars for the Seto scheme with respect to SN 1987A within the desired date range. In total, we find 403 targets with crossing times within the date range of available Gaia epoch photometry using the Seto method for the three SNe. Their spatial distribution is shown in Figure 8. ## 5 Candidate Exploration We find a total of 868 candidates with SETI Ellipsoid or Seto crossing times falling in the time range of Gaia DR3 epoch photometry. Their astrometric properties, crossing times, and corresponding source supernovae are listed in Table 2, which is available in full in the machine-readable format. The spatial distribution and color-magnitude diagram, with Gaia variability classification, of the target stars is shown in Figure 9. The majority of candidates are classified as solar-like variables, which have rotational modulation, flares, and spots. These are more difficult to analyze, because they have a less well-defined model that involves the many parameters of the star's magnetic active regions. 
However, a possible advantage is their similarity to the Sun, which may indicate a greater chance at hosting life compared with other variable types, many of which clearly lie above the main sequence, but we do not eliminate the possibility of extraterrestrial civilizations with stars of different evolutionary states or with more extreme variability. Although variable stars are astrophysically interesting, and we expect a form of artificial variability when searching for extraterrestrial intelligence, it is unclear whether or not any type of technosignature signal in a stellar light curve would be classified as a variable star by Gaia's machine-learning classification algorithm (Eyer et al., 2023). Future Gaia data releases will include epoch photometry for all sources in the catalog, allowing for a more complete analysis. However, we note that the classifications from Gaia are only the most likely variable type for each star, outputted from the machine-learning algorithm, and so the stars may be classified incorrectly, may not fall within any of Gaia's predefined variability types, or may not even be Figure 9: The top panel shows the spatial distribution of all 868 candidates selected via the SETI Ellipsoid and Seto methods for all four SNe, with crossing times within the date range of available Gaia light curves, combining the targets from Figures 8 and 6. The bottom panel displays the color-magnitude diagram of these targets, each marked with its Gaia variable type classification. variable stars at all, and so a technosignature search in the light curves of these objects is not futile. Our light-curve analysis described in Section 6 is sensitive to both changes in variability parameters and flux, and so can be applied to both variable and nonvariable stars, regardless of their Gaia classification. Furthermore, it can be modified given prior information about the sample of stars to be analyzed; for example, we perform a more detailed analysis for the sample of eclipsing binaries, which comprise a significant fraction of the candidates, as seen in Figure 9. The distance errors for the 465 SETI Ellipsoid candidates must be less than 1.5 ly, because otherwise the timing uncertainties would necessarily extend outside the Gaia data window, which spans three years. The actual maximum error is 0.46 ly, and 68% of candidates have errors less than 0.19 ly, indicating that for the majority of these targets, the crossing times are accurate to approximately two months. Predictably, the distance errors for the 403 Seto method candidates are much greater, as they have no constraints apart from the upper bound distance being less than the search depth. For these targets, the maximum error is 2000 ly, and 66% have errors less than 55 ly. ## 6 Variability Analysis To fully utilize the available epoch photometry data from Gaia (see Figure 10), which is limited by sparsity and incompleteness, we use a novel technosignature search approach that splits the light curves at the crossing time and looks for variations between the two halves. We focus on periodic variables, for which the sparsity issue can be alleviated by period folding. In particular, eclipsing binaries are the most ideal variable type for our analysis, because they have easily parametrized light curves as double Gaussians. This allows for a numerical measure of the dissimilarity of the left and right halves of the light curve using an error-weighted difference for each parameter. 
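A minimal version of the eclipsing-binary parametrization and of the error-weighted comparison just described is sketched below. The constant-baseline-minus-two-Gaussians form and the parameter names are our reading of this section, offered as an illustration under stated assumptions rather than the authors' implementation.

```python
import numpy as np

def double_gaussian(phase, baseline, d1, p1, w1, d2, p2, w2):
    """Folded eclipsing-binary model: a constant baseline minus two
    Gaussian dips with depths d, central phases p and widths w."""
    eclipse1 = d1 * np.exp(-0.5 * ((phase - p1) / w1) ** 2)
    eclipse2 = d2 * np.exp(-0.5 * ((phase - p2) / w2) ** 2)
    return baseline - eclipse1 - eclipse2

def error_weighted_difference(x_left, x_right, sig_left, sig_right):
    """|X_right - X_left| / sqrt(sig_right^2 + sig_left^2), computed per
    fitted parameter for the pre- and post-crossing light curves."""
    return np.abs(x_right - x_left) / np.sqrt(sig_right ** 2 + sig_left ** 2)

# Toy comparison of one parameter (primary eclipse depth) before and after
# the crossing time: roughly a 0.9-sigma, i.e. unremarkable, change.
print(error_weighted_difference(0.30, 0.33, 0.02, 0.025))
```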
Ranking these measures can then give the most anomalous light curve, which would correspond to the light curve with the greatest change at the crossing time and thus the highest potential to be a technosignature system. Here we point out that rather than a traditional SETI approach, which generally looks for direct electromagnetic emission from technology with spectrotemporal properties consistent with known or hypothesized behaviors of technology, our method is sensitive to civilizations that can modulate their host star's period, amplitude, or phase. While this may be possible for a sufficiently advanced civilization, we ideally would like to search for several types of potential transmissions, such as a signal mimicking the light curve of the source event, across all stars, not just those classified as periodic or variable. We attempted a search for such signals by cross-correlating a normalized supernova light curve, as well as a simpler top-hat function derived from the same parameters of the supernova light curve, with each candidate's light curve; however, the Gaia epoch photometry is too sparse for this analysis, as the timescale of a supernova is on the same order as the spacing between photometry measurements, making any correlation spurious. This can be seen in Figure 11. Of the 868 candidate targets, 73 are classified as eclipsing binaries. For each of these eclipsing binaries, we first split the normalized \(G\)-band light curve at the crossing time. We refer to the light curves before and after the crossing time as the left and right light curves, respectively. Then, with a Lomb-Scargle Periodogram, we extract the peak frequency of the full, left, and right light curves. Each of these frequencies also carries a false alarm probability, which is the probability that, assuming the light curve has no periodic signal, we will observe a periodogram power at least as high; we take these values to be the error, although they do not directly represent a statistical uncertainty. All three light curves are subsequently folded with the same frequency, namely the peak frequency with the lowest false alarm probability, and in reference to the same epoch. A fit to a double Gaussian with constant baseline is then performed for each folded light curve, and seven relevant parameters are extracted: the depth, phase, and width of the two Gaussians, and the baseline flux. For these parameters, plus the median flux and the peak frequency, we calculate the error-weighted difference, \[\frac{|X_{\rm right}-X_{\rm left}|}{\sqrt{\sigma_{\rm right}^{2}+\sigma_{\rm left }^{2}}}\] where \(\sigma\) is the square root of the variance for the seven double Gaussian parameters, the MAD for the median flux, and the false alarm probability for the peak frequency. For many of these targets, the crossing time is close to either the start or the end of available observations, which causes either the left or right light curve to contain very few data points. In these cases, we are unable to perform a double Gaussian fit at all. Thus, from Figure 10: Sample light curve for one of the selected targets, with solar-like variable type classification, in the three Gaia bands: \(G\), \(BP\), and \(RP\). The solid vertical line marks the crossing time, via either the SETI Ellipsoid or the Seto scheme. The solid horizontal lines denote the median flux, calculated separately before and after the crossing time, and the dashed horizontal lines are the \(\pm 3\times\) median absolute deviation (MAD) levels. 
This target is a candidate from the SETI Ellipsoid method, so we include the crossing time error due to the stellar distance uncertainty, which is denoted by the vertical dashed lines. The data are noticeably sparse and incomplete, consisting of only three years of Gaia observations, but have long-term stability. the 73 eclipsing binaries, we ultimately perform a full analysis on 45 of them. The analysis was performed solely on \(G\)-band data, because we find that trying to fit all three bands simultaneously further reduces the number of successful fits; however, if a promising signal is found from the \(G\)-band data, it would be valuable to analyze the \(RP\) and \(BP\) bands to confirm that the signal is present in them as well. For each parameter, we rank the targets in order of increasing error-weighted difference, so the \(0^{\rm th}\) target has the most similar left and right light curves with respect to a given parameter, and the \(44^{\rm th}\) target has the most dissimilar light curves. We then take the sum of the rankings for each target and divide by the maximum, leaving us with a single interest index ranging from 0 to 1, where 1 represents the most interesting target (i.e., the target with the greatest difference between its left and right light curves). The periodograms and light curves for four eclipsing binaries, including the target with the highest-interest index, are shown in Figure 12. There is not a large difference between targets of high interest, as seen in the top two panels, but this is expected given the null hypothesis of no signals. In the most interesting candidate (top left panel), the greatest difference between the left and right light curves is due to a change in the relative depths of the two Gaussians. However, upon inspection, we see that this difference, and likely most other changes between the left and right light curves for these systems, is caused by the sparsity of the light curve, even when folded. However, our variability analysis and ranking algorithm are successful in separating eclipsing binaries with clearly distinctive light curves from those without. The lowest-ranked light curves, in the bottom panels of Figure 12, clearly have much worse Lomb-Scargle Periodogram frequency extractions and double Gaussian fits, largely because there are very few points in the light-curve dips. Although a full periodicity analysis cannot be implemented on the entire sample of variable Figure 11: Sample candidate light curve with SN 1987A light curve overlain in green. The red dots show the SN light-curve interpolated to match the Gaia light curve data points; the orange dots are the same for a top-hat function with width equal to the FWHM of the SN light curve, a baseline equal to a normalized flux of 1, and a peak equal to the maximum normalized flux of the stellar light curve. The vertical blue line indicates the crossing time of the star, which is where the supernova light curve and top-hat function begin. The timescale of the SN, on the order of hundreds of days, is close to that of the Gaia epoch photometry sampling and is also longer than the periods of most stars that are classified as variable by Gaia. This approach would be better suited to a dataset that has denser time sampling and is not limited to only variable stars. Figure 12: (a) Full, left, and right light curves and their Lomb-Scargle Periodograms for two eclipsing binaries. The left panel has the highest-interest index and the right panel is near the middle of the interest rankings. 
The analysis seems to successfully fold the data both before and after the crossing time, and there is a small but noticeable change in the double Gaussian eclipsing binary fit between the left and right light curves, though this is most likely due to sparse sampling. It is difficult to distinguish between eclipsing binaries with high interest, as we do not normally expect the variability parameters for an eclipsing binary to change, so the left and right light curves should be similar. (b) Lomb-Scargle Periodograms and light curves for two eclipsing binaries with some of the lowest-interest indices. The periodograms are unable to find strong peak frequencies and the double Gaussian fit fails in these two panels, indicating that the analysis and ranking system successfully assigns low interest to targets that do not have clean data. stars, because many are not periodic, we perform a similar but simpler analysis on the remaining candidates. Rather than comparing the double Gaussian fit parameters before and after the crossing time, we compare the median flux, the standard deviation, and the best-fit slope for these targets. We follow the same process of calculating the error-weighted differences for the median flux and best-fit slope, but for the standard deviation, we instead normalize by the mean flux and calculate a simple difference, because there is no well-defined measure of the error of a standard deviation value. An advantage of this simpler procedure is that we can incorporate all three Gaia filter bands, because a linear fit will fail only if there are fewer than two points on either side of the crossing time, so we ultimately compute nine parameters, the same number as for the eclipsing binary analysis, for a total of 734 out of the 795 variable stars not classified as eclipsing binaries. We also repeat the same target ranking process, and show the most and least interesting candidates in Figure 13. It again appears that our analysis successfully finds uninteresting candidates with seemingly well-understood or periodic light curves, giving them low rankings and separating them from more interesting candidates, which appear to have more significant changes in light curves before and after the crossing time. However, it remains difficult to separate any possible technosignature signal Figure 12: (Continued.) from random variation and noise given the limits of the Gaia epoch photometry. ## 7 Conclusions We have presented both a spatiotemporal technosignature candidate search framework, which combines the SETI Ellipsoid and Seto methods, and a novel SETI approach that is sensitive to changes in a periodic star's variability parameters. With precise astrometry and distance from Gaia and Bailer-Jones et al. (2021), we explore this framework with several nearby SNe as source events for signal synchronization to select candidates with crossing times within the time range of available Gaia epoch photometry, and perform our variability analysis on the well-parametrized subset of eclipsing binary candidate systems. Given the lack of any statistically significant changes in the light curves of the selected eclipsing binaries, we can place constraints on the prevalence of extraterrestrial civilizations using either of these spatiotemporal signaling schemes and behaving in a manner detectable by our time domain analysis, as has been done by Price et al. (2020), Franz et al. (2022), and others for previous SETI observations. 
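As a compact recap of the Section 6 pipeline that produced this null result, the sketch below splits a light curve at the crossing time, extracts the peak Lomb-Scargle frequency with its false alarm probability, and turns per-parameter error-weighted differences into a single interest index. It assumes astropy is available; all names and the toy numbers are illustrative, not the authors' code.

```python
import numpy as np
from astropy.timeseries import LombScargle

def split_at_crossing(t, flux, t_cross):
    """Left (pre-crossing) and right (post-crossing) halves of a light curve."""
    left = t < t_cross
    return (t[left], flux[left]), (t[~left], flux[~left])

def peak_frequency(t, flux):
    """Best Lomb-Scargle frequency and its false alarm probability."""
    ls = LombScargle(t, flux)
    freq, power = ls.autopower()
    best = int(np.argmax(power))
    return freq[best], ls.false_alarm_probability(power[best])

def interest_index(diff_table):
    """diff_table: (n_targets, n_params) error-weighted differences.
    Rank each parameter column, sum the ranks per target and scale to [0, 1],
    so 1 marks the target whose two halves differ the most."""
    ranks = np.argsort(np.argsort(diff_table, axis=0), axis=0)
    score = ranks.sum(axis=1).astype(float)
    return score / score.max()

# Toy usage: three targets with two compared parameters each.
print(interest_index(np.array([[0.2, 0.1], [1.5, 2.0], [0.7, 0.4]])))  # [0. 1. 0.5]
```

With no statistically significant change found in any fitted parameter, this null result is what the prevalence constraint is derived from.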
This limit can be calculated with a one-sided 95% Poisson confidence interval, assuming a conservative 50% probability of detecting such a signal if present (Gehrels, 1986). We performed our analysis on a total of 779 variable star systems, and so we place an upper limit of 0.47% on the percentage of such systems hosting civilizations that are behaving in the hypothesized manner. Although we use only the error associated with the distance to the stars to calculate crossing time uncertainties, the errors in the SNe distances are also significant, and will have a non-negligible effect on the timing uncertainty for most targets via both the SETI Ellipsoid and the Seto method. We, like Seto (2021), again emphasize that while precise distances to SNe will help us more accurately constrain our technosignature searches, the advent of these measurements is beneficial not just to SETI but to many areas of astrophysics. With tighter SNe distance errors, we will also be able to expand our framework to other historic Galactic SNe. Our geometric framework constrains the SETI parameter space by telling us where and when to search, for both planned observations and archival data, but the nature of the synchronized transmission is still unknown. To best utilize the light curves from Gaia, we look for modulations in frequency, amplitude, or phase of the candidate systems, which is reasonably feasible for an advanced civilization. Another possible intentional signal in optical bands is a peak in the light curve that mimics the shape and width of the source event. Although this is not yet feasible given the limitations of Gaia data and photometry, the situation may change in the near future. This method may also be better suited for a telescope like TESS, which has more densely sampled data at the cost of some long-term stability and discontinuous observations for most stars. Furthermore, there is the possibility of using other datasets and observatories to search for synchronized bright radio flashes or other types of transient radio signals. In particular, the SETI Ellipsoid and Seto schemes could be used to select targets for radio SETI observations. This work is also limited by the incompleteness of Gaia DR3 light curve data, which consist of a curated set of variable objects that have passed through a classification algorithm and may not include other potentially interesting candidates, such as stars that may have a short, small increase in flux. Moreover, only three years of photometric data have been released for the stars with light curves. Gaia has made this work and other geometric approaches to SETI possible, and we expect that the next big revolution will arrive with Gaia Data Release 4, which will include 66 months of epoch photometry for all 2 billion sources. Figure 13: Light curves in all three Gaia bands for the most (top panel) and least (bottom panel) interesting targets not classified as variable stars, formatted in the same manner as in Figure 10. The most interesting candidate displays a nonsignificant but noticeable shift in the median flux before and after the crossing time, as well as a possible increase in the scatter. The best-fit slope also appears to be strongly negative before the crossing time but near-horizontal afterward. On the other hand, the least interesting candidate shows almost zero change in the median flux, and a seemingly small change in the scatter and best-fit slope. 
The authors thank the anonymous referee for their helpful comments, which substantially improved this work. The authors wish to thank Sofia Sheikh and Barbara Cabrales for helpful conversations about the SETI Ellipsoid. The Breakthrough Prize Foundation funds the Breakthrough Initiatives which manages Breakthrough Listen. A.N. was funded as a participant in the Berkeley SETI Research Center Research Experience for Undergraduates Site, supported by the National Science Foundation under grant No. 1950897. J.R.A.D. acknowledges support from the DiRAC Institute in the Department of Astronomy at the University of Washington. The DiRAC Institute is supported through generous gifts from the Charles and Lisa Simonyi Fund for Arts and Sciences, and the Washington Research Foundation.
2309.11242
Photoemission orbital tomography based on tight-binding approach: method and application to $π$-conjugated molecules
Conventional photoemission orbital tomography based on the Fourier iterative method enables us to extract a projected two-dimensional (2D) molecular orbital from a 2D photoelectron momentum map (PMM) of planar $\pi$-conjugated molecules in a single-orientation system, but not in a multi-orientation system. In this work, we demonstrate photoemission orbital tomography for $\pi$-conjugated molecules with a tight-binding ansatz (linear combination of atomic orbitals). We analyze 2D PMMs of single-orientation pentacene/Ag(110) and multi-orientation 3,4,9,10-perylenetetracarboxylic dianhydride/Ag(110) and reproduce their three-dimensional highest occupied molecular orbitals. We demonstrate that the PhaseLift algorithm can be used to analyze PMMs that include experimental or theoretical uncertainties. With the 2D PMM for pentacene, we simultaneously optimized the structure and the molecular orbital. The present approach enables us to extract the three-dimensional orbitals and structures of existing materials.
Misa Nozaki, Takehisa Konishi
2023-09-20T12:10:51Z
http://arxiv.org/abs/2309.11242v2
Photoemission orbital tomography based on tight-binding approach: method and application to \(\pi\)-conjugated molecules ###### Abstract Conventional photoemission orbital tomography based on Fourier iterative method enables us to extract a projected two-dimensional (2D) molecular orbital from a 2D photoelectron momentum map (PMM) of planar \(\pi\)-conjugated molecules in a single-orientation system, while not in a multi-orientation system. In this work, we demonstrate photoemission orbital tomography for \(\pi\)-conjugated molecules with a tight-binding ansatz (linear combination of atomic orbitals). We analyze 2D PMMs of single-orientation pentacene/Ag(110) and multi-orientation 3,4,9,10-perylenetracarboxylic dianhydride/Ag(110) and reproduce their three-dimensional highest occupied molecular orbitals. We demonstrate that the PhaseLift algorithm can be used to analyze PMM including experimental or theoretical uncertainties. With the 2D PMM for pentacene, we simultaneously optimized the structure and the molecular orbital. The present approach enables us to extract the three-dimensional orbitals and structures of existing materials. + Footnote †: preprint: APS/123-QED ## I Introduction Angle-resolved photoemission spectroscopy (ARPES) is an indispensable experimental method to obtain information on the electronic states. ARPES has been intensively used to address the electronic states of organic thin films adsorbed on solid surfaces [1; 2; 3; 4; 5]. The recently developed time- and angle-resolved photoemission spectroscopy [6; 7] made it possible to record real-time non-equilibrium electronic processes in organic thin films [8]. Puschnig and others [2; 9] developed a method to reproduce molecular orbitals from photoelectron momentum map (PMM) by recovering the phase of the Fourier-transformed orbital. This method is called photoemission orbital tomography (POT). PMM is proportional to the absolute square of the Fourier-transformed molecular orbital within the framework of the plane-wave final state approximation and does not contain information on the Fourier phases. The currently proposed POT [10; 11; 12; 13] uses the Fourier iterative method [14; 15] and its variants [16; 13] as phase retrieval algorithms. These POTs have been mainly used to reconstruct two-dimensional (2D) projections of molecular orbitals from 2D PMMs of single-orientated planar \(\pi\)-conjugated molecules adsorbed on metallic substrates [10; 11; 12; 13]. Luftner _et al_. also accomplished 3D reconstruction of frontier orbitals of single-orientation 3,4,9,10-perylenetetracarboxylic dianhydride (PTCDA) by varying the photon energy [17]. One should, however, note that preparing such a 3D PMM requires prudent calibration of the intensities of light for different energies [17; 18]. For microscopic insights into various organic semiconductors, further developments are desired. First, reconstructing a reliable 3D molecular orbital from a single set of 2D PMM is desired. From a 2D PMM, we can reproduce at most a 2D projection of molecular orbital with the conventional POT based on the Fourier iterative method. Kluuev _et al_. discussed the possibility to reconstruct a full 3D orbital from a 2D PMM by using an ansatz, but did not show concrete examples [12]. Second, even in epitaxial molecular films, it is quite common to find not one but several symmetry-related molecular orientations. Therefore, it is important to extend POT to multi-orientation systems. 
The conventional POT based on the Fourier iterative method cannot treat incoherent superposition of photoelectron intensities from molecules with various orientations [19]. In this work, we propose a POT based on a tight-binding picture and apply it to reproducing molecular orbitals of \(\pi\)-conjugated molecules on metal substrate. We formulate the POT as a fitting of experimental PMM to theoretical photoelectron intensity from molecular orbitals. By describing the molecular orbitals with localized atomic orbitals centered on fixed atomic positions, the problem becomes simpler. To solve the problem, we use the least squares fitting and the PhaseLift algorithm [20; 21; 22]. We reproduce the 3D highest occupied molecular orbitals (HOMOs) of single-orientation pentacene and multi-orientation PTCDA [Fig. 1(a) and 1(c)] from their 2D PMMs. In the case of pentacene, we also remove the constraint of fixed atomic positions by simultaneously optimizing the molecular orbital coefficients and the molecular structure with the experimental 2D PMM. ## II Theory ### Photoelectron intensity of molecules In the plane wave final state approximation, we can describe the photoelectron intensity from an adsorbed molecule on metal substrate as \[W_{\mathbf{k}}(|\psi_{i}\rangle) \propto |\langle\mathbf{k}|\mathbf{\varepsilon}\cdot\mathbf{p}|\psi_{i} \rangle|^{2}\delta\left(h\nu-E_{i}-E_{k}-\Phi\right) \tag{1}\] \[\propto |\mathbf{\varepsilon}\cdot\mathbf{k}|^{2}|\left\langle\mathbf{k} |\psi_{i}\right\rangle|^{2}\delta\left(h\nu-E_{i}-E_{k}-\Phi\right)\]
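For a tight-binding (LCAO) orbital \(\psi_i(\mathbf{r})=\sum_j c_j\,\phi(\mathbf{r}-\mathbf{R}_j)\) built from one atomic orbital type, the plane-wave expression above factorizes into \(|\boldsymbol{\varepsilon}\cdot\mathbf{k}|^{2}\,|\tilde{\phi}(\mathbf{k})|^{2}\,|\sum_j c_j e^{-i\mathbf{k}\cdot\mathbf{R}_j}|^{2}\) at fixed kinetic energy. The sketch below evaluates this forward model for an illustrative planar chain; the Gaussian stand-in for the atomic factor \(|\tilde{\phi}(\mathbf{k})|^{2}\), the geometry, and the coefficients are assumptions for illustration, not the parameters used in this work.

```python
import numpy as np

def pmm_intensity(kx, ky, kz, coeffs, positions, eps=(1.0, 0.0, 0.0), alpha=1.0):
    """|eps.k|^2 |phi~(k)|^2 |sum_j c_j exp(-i k.R_j)|^2 on a (kx, ky) grid.

    coeffs    : LCAO coefficients c_j, one per atomic site
    positions : atomic positions R_j, shape (n_atoms, 3), in units of 1/k
    alpha     : width of a Gaussian used as a stand-in for |phi~(k)|^2
    """
    k = np.stack([kx, ky, np.full_like(kx, kz)], axis=-1)
    pol = (k @ np.asarray(eps)) ** 2                      # |eps.k|^2
    atomic = np.exp(-alpha * np.sum(k ** 2, axis=-1))     # |phi~(k)|^2 (toy)
    phases = np.exp(-1j * k @ np.asarray(positions).T)    # e^{-i k.R_j}
    structure = np.abs(phases @ np.asarray(coeffs, complex)) ** 2
    return pol * atomic * structure

# Toy chain of 5 sites, 2.8 apart, with an alternating-sign orbital ansatz.
pos = np.array([[2.8 * j, 0.0, 0.0] for j in range(5)])
c = np.array([1, -1, 1, -1, 1], float) / np.sqrt(5)
kx, ky = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))
pmm = pmm_intensity(kx, ky, kz=2.0, coeffs=c, positions=pos)
print(pmm.shape, float(pmm.max()))
```

Fitting measured momentum maps then amounts to optimizing the coefficients (and, if desired, the positions) in such a forward model, which is the kind of problem the least squares and PhaseLift approaches mentioned above address.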
2309.12763
Reduce, Reuse, Recycle: Is Perturbed Data better than Other Language augmentation for Low Resource Self-Supervised Speech Models
Self-supervised representation learning (SSRL) has demonstrated superior performance to supervised models for tasks including phoneme recognition. Training SSRL models poses a challenge for low-resource languages where sufficient pre-training data may not be available. A common approach is cross-lingual pre-training. Instead, we propose to use audio augmentation techniques, namely pitch variation, noise addition, accented target-language speech, and other-language speech, to pre-train SSRL models in a low resource condition and evaluate phoneme recognition. Our comparisons found that a combined synthetic augmentation (noise/pitch) strategy outperformed accent and language knowledge transfer. Furthermore, we examined the scaling factor of augmented data needed to achieve performance equivalent to a model pre-trained with target-domain speech. Our findings suggest that for resource-constrained languages, combined augmentations can be a more viable option than other augmentations.
Asad Ullah, Alessandro Ragano, Andrew Hines
2023-09-22T10:09:09Z
http://arxiv.org/abs/2309.12763v2
Reduce, reuse, recycle: is perturbed data better than other language augmentation for low resource self-supervised speech models ###### Abstract Self-supervised representation learning (SSRL) has improved the performance on downstream phoneme recognition versus supervised models. Training SSRL models requires a large amount of pre-training data and this poses a challenge for low resource languages. A common approach is transferring knowledge from other languages. Instead, we propose to use audio augmentation to pre-train SSRL models in a low resource condition and evaluate phoneme recognition as downstream task. We performed a systematic comparison of augmentation techniques, namely: pitch variation, noise addition, accented target-language speech and other language speech. We found combined augmentations (noise/pitch) was the best augmentation strategy outperforming accent and language knowledge transfer. We compared the performance with various quantities and types of pre-training data. We examined the scaling factor of augmented data to achieve equivalent performance to models pre-trained with target domain speech. Our findings suggest that for resource constrained languages, in-domain synthetic augmentation can outperform knowledge transfer from accented or other language speech. Asad Ullah, Alessandro Ragano, Andrew Hines+School of Computer Science, University College Dublin, Ireland self-supervised learning, low resource pre-training, phoneme recognition, audio augmentation, autoregressive predictive coding Footnote †: This paper emanated from research funded by Science Foundation Ireland to the Insight Centre for Data Analytics (1/2RC/2289P2) and SFI Centre for Research Training in Machine Learning (1/RC/1683). For the purpose of Open Access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission. ## 1 Introduction Self-supervised representation learning (SSRL) has gained popularity in speech processing [1, 2, 3] thanks to its ability to learn general-purpose feature representations by using large datasets without human labels [4]. These models learn hierarchical representations of the data by solving auxiliary tasks where the labels are obtained from the data itself. Tasks such as phoneme recognition and speech recognition are solved by finetuning SSRL models using small annotated datasets. Figure 1 shows a typical SSRL model and its application to downstream phoneme recognition task. As we can see the model learns representations from the raw audio. The learned representations are then used in the downstream task and fine-tuned with small amount of labelled data. Training SSRL models requires large amount of pre-training data [5]. State of the art SSRL models use cross-lingual pre-training data, with the assumptions that all languages share a common phoneme set and that having more examples of pre-training data will help for low resource conditions [6, 7]. The XLS-R model [6] is pre-trained with data from 128 languages. However, some languages such as Irish, Oriya and Manx have contributed only a small fraction of the pre-training data in [6]. The performance of XLS-R on these low resource languages are negatively affected due to the high imbalance of languages data in pre-training. Improving the performance of these low resource languages has not been explored systematically with respect to pre-training SSRL models. 
Data augmentation is a widely used approach for artificially increasing training data in order to improve the performance of models trained on small datasets. This approach has been used for image classification [8], text classification [9] and speech recognition [10]. The approach proposed in [10] used speed perturbation to increase the amount of training data. In speed perturbation, the speed of the original audio is increased or decreased to generate synthetic audio that captures more variability in the underlying training data. This approach is also useful for low resource training conditions or domain mismatch conditions. For example, Ko et al. [11] proposed applying different noise augmentations (such as additive noise sources, reverberations, and speed perturbations) to the training data in order to train a noise-robust speech recognition model. Recently, audio augmentation has been applied to pre-training data for building robust pre-trained models [12]. In [13], the contrastive predictive coding (CPC) model is pre-trained with augmented data. Kharitonov et al. [13] used different augmentation effects, such as additive noise, reverberation and pitch modification, on the training data to improve the performance of a CPC-based pre-trained model on various downstream tasks. Further noise-augmentation techniques have been applied to the wav2vec 2.0 model pre-training data in [14, 15, 16]. Recent work [17] pre-trained and evaluated the wav2vec 2.0 model in low resource training conditions. Our work is similar to [17] in that we target low resource training conditions depending upon the availability of the pre-training data. In this work we systematically test different augmentation strategies for scenarios where the amount of pre-training data for training SSRL models is limited. We simulate our experiment using the English language as a proxy for low resource languages, as it allows us to compare to pre-training with larger amounts of data; the approach itself can be applied to low resource languages. We first added African accented English speech to the pre-training data to explore the effect of the same language but a different accent relative to the target data. Then we added Chinese language speech to assess the benefits of cross-lingual data. Finally, we evaluated synthetic data augmentation of the pre-training data by using noise augmentation, pitch modification, and both techniques combined. We arbitrarily chose a simple and effective SSRL model from [1] called autoregressive predictive coding (APC). There are other variants of APC, such as [18, 19, 20, 21]. By definition, the APC model is a generative approach which predicts future frames based on the previous history of frames. In this work, we have trained the APC model in a low resource training environment (i.e. with a small quantity of unlabelled data). We considered the availability of low resource training data conditions ranging from 25-hours to 100-hours of Librispeech data. Then we fine-tuned the pre-trained APC model with 10-hours of Librispeech phoneme labelled data. The fine-tuned model is evaluated on Librispeech test-clean. Figure 1: Pre-training and fine-tuning of the SSRL model. ## 2 Experimental Setup ### Experiments In this work, we experimented with different augmentation strategies in pre-training of the SSRL model in order to increase the number of training data samples. We added distinct accent speech and other language speech to the pre-training data of the SSRL model. This will enable us to examine the effect of knowledge transfer in pre-training the SSRL model. 
These are examples of cross-lingual transfer in pre-training data. Next, we added noise-augmentation to perturb the training data so that the pre-trained model learns robust representations. For this purpose, we mixed training data with additive noise samples in the time domain with 5dB, 10dB and 15dB signal to noise (SNR) ratio. We applied pitch-modification to the training data so that the model learns pitch-invariant representations. Then we applied combined pitch-modification and noise-augmentation effects to the pre-training data and pre-trained the SSRL model. The resultant pre-trained model learns pitch-invariant and robust representations. In all these experiments, we used 80-dimension mel-spectrogram features as input to the pre-trained model and phonemes as the target labels in downstream task. We pre-trained the APC model with the following clean and augmented data. **Clean Data**: We first partitioned Librispeech train-clean-100 data into 25-hrs, 50-hrs, 75-hrs, 100-hrs and then used these subsets as clean augmented pre-training data. **Clean25 + [African accented English Data]**: We mixed baseline Librispeech 25-hrs with African accented English speech data as pre-training augmented data. **Clean25 + [Chinese speech Data]**: We added real Chinese speech data with the baseline Librispeech 25-hrs in pre-training data. **Clean25 + [Pitch-aug]**: We first applied synthetic pitch-modification effects to baseline Librispeech 25-hrs and then combined the baseline and pitch-augmented in pre-training data. **Clean25 + [Noise-aug]**: We added noise augmentations from freesound database to baseline 25-hrs data and then combined baseline 25-hrs and noise-augmented in pre-training data. **Clean25 + [Noise-aug, Pitch-aug]**: We added pitch-modification and noise-augmentation effects to the baseline Librispeech 25-hrs and combined them in the pre-training data. We increased the pre-trained augmented data with more augmentation factors. The pre-trained models were fine-tuned with 10-hrs of data labelled with phonemes for the downstream phoneme recognition task. The fine-tuned models were then evaluated on the Librispeech test-clean dataset in downstream task. Note that in all these experiments we kept the fine-tuning and evaluation data fixed. We only experimented and applied different augmentation strategies to the pre-training data. For simplicity, all these experiments are shown in Table 1. ### Datasets We used the Librispeech dataset [22], African accented English speech dataset1, Common Voice Chinese speech dataset2 and MUSAN database [23] for pre-training the SSRL models. Librispeech dataset is an American English reading speech widely used in the literature for research benchmarks. In this work, we used Librispeech train-clean-100 dataset for pre-training which is then partitioned into 25-hrs, 50-hrs, 75-hrs and 100-hrs subsets for data availability purposes. We extracted 75-hrs training data from African accent English speech challenge corpus. This 75-hrs of data is then mixed with baseline Librispeech 25-hrs in pre-training stage. Similarly, we extracted 75-hrs from Common Voice Chinese speech data and mixed with baseline Librispeech 25-hrs for pre-training purposes. For noise-augmentation, we used MUSAN database [23]. MUSAN database contains different kind of noise sources such as background music, freesound, sound-bible and overlap noise speech. 
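A minimal numpy sketch of the two synthetic operations described in the Experiments subsection above, additive noise mixed at a chosen SNR and a crude pitch shift by resampling, is given below. The actual experiments drew noise from MUSAN and used WavAugment for pitch modification, so this illustrates only the operations, not the toolchain; all names and numbers are ours.

```python
import numpy as np

def add_noise_at_snr(speech, noise, snr_db):
    """Mix a noise clip into speech at a target signal-to-noise ratio (dB)."""
    noise = np.resize(noise, speech.shape)                 # loop/trim the noise
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2) + 1e-12
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

def pitch_shift(speech, semitones):
    """Very crude pitch shift: resample by 2^(semitones/12) and keep length.
    (This also changes tempo; real pipelines use better methods.)"""
    factor = 2 ** (semitones / 12)
    idx = np.arange(0, len(speech), factor)
    shifted = np.interp(idx, np.arange(len(speech)), speech)
    return np.resize(shifted, speech.shape)

rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)   # 1 s toy "speech"
noisy = add_noise_at_snr(clean, rng.normal(size=8000), snr_db=10)
augmented = pitch_shift(noisy, semitones=2)
print(clean.shape, noisy.shape, augmented.shape)
```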
In this work, we extracted freesound and sound-bible noise sources only and added to the available pre-training data for noise augmentation. We also added pitch modification to the pre-training data using WavAugment [24]. Note all these speech datasets are used for pre-training the SSRL models. Footnote 1: [https://zindi.africa/competitions/intron-afrispeech-200-automatic-speech-recognition-challenge](https://zindi.africa/competitions/intron-afrispeech-200-automatic-speech-recognition-challenge) Footnote 2: [https://commonvoice.mozilla.org/en/datasets](https://commonvoice.mozilla.org/en/datasets) For fine-tuning, we extracted a fixed 10-hrs of speech from Librispeech train-clean-360 to test a downstream phoneme recognition task. The pre-trained model was then evaluated on a phoneme recognition task. Librispeech standard test-clean dataset is used for evaluation. The fine-tuned 10-hrs and test-clean dataset used phoneme labels from [25]. \begin{table} \begin{tabular}{l l l} \hline **Experiments** & **Pre-training Data (hrs)** & **Aug. ratio** \\ \hline Clean Data & 25,50,75,100 & N/A \\ Clean25 + [African accented English Data] & 50, 75, 100 & 1, 2, 3 \\ Clean25 + [Chinese speech Data] & 50, 75, 100 & 1, 2, 3 \\ Clean25 + [Noise-aug] & 50, 75, 100 & 1, 2, 3 \\ Clean25 + [Pitch-aug] & 50, 75, 100 & 1, 2, 3 \\ Clean25 + [Noise-aug, Pitch-aug] & 50, 75, 100,175,325,425,525 & 1,2,3,4,6,12,16,20 \\ \end{tabular} \end{table} Table 1: Augmentation strategy on pre-training ### APC Architecture The architecture of the APC model is a Long Short-term Memory (LSTM) structure with skip connections [26]. LSTM network consists of 3 layers. Each layer size is 512. The input to the APC model is a sequence of mel-spectrogram features while the output is 512 size feature representations of these spectrogram. The training criteria of the APC model is the 3-step prediction of the future frames. Mean square error (MSE) loss is computed between prediction and ground truth frames. The APC model is pre-trained for 100 epochs with learning rate 0.0001 and batch size 32. Adam optimizer is used for neural network optimization. In the downstream phoneme recognition task, feed-forward linear layers are attached to the last layer of the APC model. The input dimension of the linear layer is 512 while the output dimension is the total number of phonemes. Log softmax is used to convert the output of the final layer to probabilities. Finally, the cross-entropy loss is computed between probabilities and ground truth phoneme classes. Remaining hyper-parameters are same as [1]. ## 3 Results ### Comparison of Augmentation Strategies Figure 2 show the amount of available pre-training data and the augmentation strategy (accent-aug, language-aug, noise-aug, pitch-aug, mix [pitch, noise]-aug) and the phoneme evaluation performance on the standard Librispeech test-clean dataset. We first extracted 25 hrs from train-clean-100 as available pre-training data and pre-trained the SSRL model. We consider the phoneme classification accuracy for Librispeech 25-hrs as the baseline accuracy: 51.5%. Then we pre-trained SSRL model with Librispeech 100-hrs clean and 100-hrs augmented data for a fair comparison in figure 2. We added Librispeech 75-hrs, African accented English speech 75-hrs, Chinese speech 75-hrs, noise-augmented 75-hrs, synthetic pitch-modified 75-hrs and combined noise-pitch-augmented-75-hrs to the baseline 25-hrs as augmented pre-trained data. 
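For reference, the APC model pre-trained in each of these runs can be written down in a few lines. The sketch below follows the sizes given in Section 2.3 (3 LSTM layers of width 512, 80-dimensional mel input, 3-step-ahead MSE objective, linear phoneme probe) but omits the skip connections and other details; the class names and the phoneme count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class APC(nn.Module):
    """Minimal APC-style model: stacked LSTM over log-mel frames,
    trained to predict the frame n_steps ahead with an MSE loss."""
    def __init__(self, n_mels=80, hidden=512, layers=3):
        super().__init__()
        self.lstm = nn.LSTM(n_mels, hidden, num_layers=layers, batch_first=True)
        self.proj = nn.Linear(hidden, n_mels)    # back to feature space

    def forward(self, mels):                     # mels: (batch, time, n_mels)
        hidden_states, _ = self.lstm(mels)
        return self.proj(hidden_states), hidden_states

def apc_loss(model, mels, n_steps=3):
    """Predict frame t + n_steps from frames up to t, scored with MSE."""
    pred, _ = model(mels[:, :-n_steps])
    target = mels[:, n_steps:]
    return nn.functional.mse_loss(pred, target)

class PhonemeProbe(nn.Module):
    """Linear classifier on top of the APC hidden states (downstream task)."""
    def __init__(self, hidden=512, n_phonemes=40):
        super().__init__()
        self.linear = nn.Linear(hidden, n_phonemes)

    def forward(self, hidden_states):
        return self.linear(hidden_states)        # per-frame phoneme logits

model = APC()
print(float(apc_loss(model, torch.randn(4, 200, 80))))
```

A cross-entropy loss on the probe's per-frame logits corresponds to the log-softmax plus cross-entropy training described in Section 2.3.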
We pre-trained the SSRL model on these augmented data and reported the delta phoneme classification accuracy as compared to baseline in figure 2. Figure 2 shows the augmentation strategies we adopted (x-axis), and the the delta (improvement percentage) of phoneme accuracy compared to the baseline 51.5%. Figure 2 shows that adding augmentation to the pre-training data improves the performance in phoneme classification. Adding African accented English speech data show the smallest improvement in the phoneme classification. Pitch-augmentation performed better than augmenting with accented or other language data. The noise-augmentation also improves phoneme accuracy but less than the other strategies tested. However, Figure 2 highlights that combining synthetic pitch-modification and noise-augmentation performs better than the transfer strategies (i.e. accent and language) or the single augmentation strategies. ### Augmentation Size Analysis A second analysis is conducted to understand the influence of the number of augmentation hours. In Figure 3 we plot the phoneme classification (%) against the total number of hours in pre-training data for each augmentation strategy. We report the phoneme classification accuracy for baseline at 25-hrs. We first added Librispeech 25-hrs to the baseline 25-hrs and pre-trained the model on 50-hrs combined data. We then added Librispeech 50-hrs to the baseline 25-hrs and pre-trained the SSRL model on 75-hrs pre-training data. In the next experiment, we added Librispeech 75-hrs to the baseline 25-hrs and pre-trained the model on combined 100-hrs data. We repeated the same procedure with augmentation data as well. Overall, we performed 4 experiments for individual (combined) augmentations for a fair comparison. We compared the performance of phoneme classification for all augmentations. Figure 3 shows that adding audio augmentations improved the performance. This improvement in performance is visible from the steep slopes of all augmentation approaches in Figure 3. However, some augmentation slopes are steeper than others. e.g: the slope for adding target Librispeech augmented data is higher than others (blue plot). Other augmentations such as accent, language, noise and pitch-modification also improves the performance. However the performance of combined pitch and noise augmentation is better than individual augmentations and cross-lingual transfer (accent, language). We noticed that the phoneme classification accuracy can be further improved by increasing the augmentation factors for our best augmentation strategy. We extended our experiments by adding 150-hrs, 300-hrs, 400-hrs and 500-hrs [noise, pitch] augmentations to the pre-training data and performed the pre-training process as shown in Figure 3. Using 525-hrs mix of augmentation in pre-training data matched the performance of SSRL model that was pre-trained using data matching the target, i.e. more data from Librispeech train-clean-100 as shown in red and blue plots in figure 3. From our results in Figure 3, we observed that adding augmentation to the baseline 25-hrs pre-training data improves the performance on downstream phoneme classification. To better understand how much augmentation data in pre-training improves the phoneme accuracy, the inset in Figure 3 shows a graph for achieving a certain level accuracy using our augmentation approach. 
The y-axis shows the delta phoneme accuracy (%) between the standard Librispeech train-clean-100 and the baseline 25-hrs, while the x-axis shows the additional augmentation multiplier data used in pre-training data. With 75-hrs (3-times more data) added to baseline (clean25), we observed delta phoneme classification improvement 5.6%. In order to achieve that level phoneme accuracy, we need to add 17-times more augmentation data to the pre-training data. We also noted that adding 3-times synthetic augmentation achieved a 3.3% improvement which is 60% of the improvement compared to pre-training with the same amount of Librispeech data as shown in subplot Figure 3. For the same accuracy, we require 1.6\(\times\) more augmented data than using more pre-training data from Librispeech - an Figure 2: Performance of 100-hrs augmented pre-training data (25-clean-hrs + 75-aug-hrs) compared to the baseline 25-clean-hrs (baseline acc. 51.5%) option that might not be available in resource constrained scenarios. ### Misclassification Analysis From Figures 2 and 3, we observe that adding a mix of augmentation to pre-training data improves the phoneme classification accuracy in downstream task. However, the above figures do not show the impact of augmentation on the misclassification of individual phonemes. In Figure 4, we show the impact of augmentation on the phonemes that were most frequently misclassified. The predicted phonemes are on the x-axis with the y-axis showing the ground truth phonemes. For clarity of illustration we only present the three most frequently misclassifications and hide both the least frequent misclassifications and correct classifications. Figure 4(a) shows results for the SSRL model pre-trained with Librispeech baseline 25-hrs. Phonemes /Y1, /Z2, /ZH/ are the most misclassified phonemes. The misclassification of these phonemes can be due to the fact that the APC model is not robust to the closer phonemes (e.g., /S/ and /Z/) or due to noisy labels in the fine-tuned data. The confusion matrix shows that data augmentation is able to reduce misclassified phonemes (Figure 4(c)) similarly to adding clean data (Figure 4(b)). This further demonstrates the ability of data augmentation to mitigate issues such as noisy labels and improve APC feature robustness in resource constrained scenarios. A further analysis should be conducted to understand whether some augmentation techniques are better at specific phoneme classes which could suggest applying a class-conditional data augmentation strategy, if phoneme labels are available in the pre-training data. ## 4 Conclusions This study proposes various augmentation strategies when the amount of pre-training data is limited. Our systematic analysis demonstrates that the following augmentations applied to the pre-training data lead to better performance in the SSRL model for the downstream phoneme recognition task: inclusion of African accented English speech data, Chinese speech data, noise augmentation, and pitch modification. In particular, we found that combining noise and pitch augmentation yields more promising results than incorporating data from different languages or accents. We scaled the augmentation for the strategy with the best results and achieved the performance equivalent to the performance of model pre-trained with target speech data with 17\(\times\) augmentation. 
This paper highlights the potential of artificial data augmentation as an effective technique for resource-constrained SSRL pre-training, proving more beneficial than transferring an equivalent amount of knowledge from alternative languages or accents to the low-resource target. In future work, we will consider adding synthetic speech generated from text-to-speech (TTS) systems to the SSRL pre-training process and extend our work to both phoneme and speech recognition downstream tasks. Figure 4: Improvement of the most frequently misclassified phonemes from (a), (b) and (c); the x-axis shows the predicted phonemes, the y-axis shows the ground truth phonemes. Figure 3: Performance of augmented vs. clean data in pre-training.
2310.00204
Finding Pragmatic Differences Between Disciplines
Scholarly documents have a great degree of variation, both in terms of content (semantics) and structure (pragmatics). Prior work in scholarly document understanding emphasizes semantics through document summarization and corpus topic modeling but tends to omit pragmatics such as document organization and flow. Using a corpus of scholarly documents across 19 disciplines and state-of-the-art language modeling techniques, we learn a fixed set of domain-agnostic descriptors for document sections and "retrofit" the corpus to these descriptors (also referred to as "normalization"). Then, we analyze the position and ordering of these descriptors across documents to understand the relationship between discipline and structure. We report within-discipline structural archetypes, variability, and between-discipline comparisons, supporting the hypothesis that scholarly communities, despite their size, diversity, and breadth, share similar avenues for expressing their work. Our findings lay the foundation for future work in assessing research quality, domain style transfer, and further pragmatic analysis.
Lee Kezar, Jay Pujara
2023-09-30T00:46:14Z
http://arxiv.org/abs/2310.00204v1
# Finding Pragmatic Differences Between Disciplines ###### Abstract Scholarly documents have a great degree of variation, both in terms of content (semantics) and structure (pragmatics). Prior work in scholarly document understanding emphasizes semantics through document summarization and corpus topic modeling but tends to omit pragmatics such as document organization and flow. Using a corpus of scholarly documents across 19 disciplines and state-of-the-art language modeling techniques, we learn a fixed set of domain-agnostic descriptors for document sections and "retrofit" the corpus to these descriptors (also referred to as "normalization"). Then, we analyze the position and ordering of these descriptors across documents to understand the relationship between discipline and structure. We report within-discipline structural archetypes, variability, and between-discipline comparisons, supporting the hypothesis that scholarly communities, despite their size, diversity, and breadth, share similar avenues for expressing their work. Our findings lay the foundation for future work in assessing research quality, domain style transfer, and further pragmatic analysis. ## 1 Introduction Disciplines such as art, physics, and political science contain a wide array of ideas, from specific hypotheses to wide-reaching theories. In scholarly research, authors are faced with the challenge of clearly articulating a set of those ideas and relating them to each other, with the ultimate goal of expanding our collective knowledge. In order to understand this work, human readers situate meaning in context Justin Garten and Deghani (2019). Similarly, methods for scholarly document processing (SDP) have semantic and pragmatic orientations. The semantic orientation seeks to understand and evaluate the ideas themselves through information extraction Singh et al. (2016), summarization Chandrasekaran et al. (2020), automatic fact-checking Sathe et al. (2020), etc. The pragmatic orientation, on the other hand, seeks to understand the context around those ideas through rhetorical and style analysis August et al. (2020), corpus topic modeling Paul and Girju (2009), quality prediction Maillette de Buy Wenniger et al. (2020), etc. Although both orientations are essential for understanding, the pragmatics of disciplinary writing are very weakly understood. In this paper, we investigate the structures of disciplinary writing. We claim that a "structural archetype" (defined in Section 3) can succinctly capture how a community of authors choose to organize their ideas for maximum comprehension and persuasion. Analogous to how syntactic analysis deepens our understanding of a given sentence and document structure analysis deepens our understanding of a given document, structural archetypes, we argue, deepen our understanding of domains themselves. In order to perform this analysis, we classify sections according to their pragmatic intent. We contribute a data-driven method for deriving the types of pragmatic intent, called a "structural vocabulary", alongside a robust method for this classification. Then, we apply these methods to 19k scholarly documents and analyze the resulting structures. ## 2 Related Work We draw from two areas of related work in SDP: interdisciplinary analysis and rhetorical structure prediction. 
In interdisciplinary analysis, we are interested in comparing different disciplines, whether by topic modeling between select corpora/disciplines Paul and Girju (2009) or by domain-agnostic language modeling Wang et al. (2020). These comparisons are more than simply interesting; they allow for models that can adapt to different disciplines, improving generalizability for downstream tasks like information extraction and summarization. In rhetorical structure prediction, we are interested in the process of implicature, whether by describing textual patterns in an unsupervised way (O Seaghdha and Teufel, 2014) or by classifying text as having a particular strategy like "statistics" (Al-Khatib et al., 2017) or "analogy" (August et al., 2020). These works descend from argumentative zoning (Lawrence and Reed, 2020) and the closely related rhetorical structure theory (Mann and Thompson, 1988), which argue that many rhetorical strategies can be described in terms of _units_ and their relations. These works are motivated by downstream applications such as predicting the popularity of a topic (Prabhakaran et al., 2016) and classifying the quality of a paper (Maillette de Buy Wenniger et al., 2020). Most similar to our work is Arnold et al. (2019). Here, the authors provide a method of describing Wikipedia articles as a series of section-like topics (e.g. disease.symptom) by clustering section headings into topics and then labeling words and sentences with these topics. We build on this work by using domain-agnostic descriptors instead of domain-specific ones and by comparing structures across disciplines. ## 3 Methods In this section, we define structural archetypes (3.1) and methods for classifying pragmatic intent through a structural vocabulary (3.2). ### Structural Archetypes We coin the term "structural archetype" to focus and operationalize our pragmatic analysis. Here, a "structure" is defined as _a sequence of domain-agnostic indicators of pragmatic intent_, while an "archetype" refers to _a strong pattern across documents_. In the following paragraphs, we discuss the components of this concept in depth. Pragmatic IntentIn contrast to verifiable propositions, "indicators of pragmatic intent" refer to instances of meta-discourse, comments on the document itself (Ifantidou, 2005). There are many examples, including background (comments on what the reader needs in order to understand the content), discussions (comments on how results should be interpreted), and summaries (comments on what is important). These indicators of pragmatic intent serve the critical role of helping readers "digest" material; without them, scholarly documents would only contain isolated facts. We note that the boundary between pragmatic intent and argumentative zones (Lawrence and Reed, 2020) is not clear. Some argumentative zones are more suitable for the sentence- and paragraph-level (e.g. "own claim" vs. "background claim") while others are interpretative (e.g. "challenge"). This work does not attempt to draw this boundary, and the reader might find overlap between argumentative zoning work and our section types. SequencesAs a sequence, these indicators reflect how the author believes their ideas should best be received in order to remain coherent. For example, many _background_ indicators reflect a belief that the framing of the work is very important.
Domain-agnostic archetypesFinally, the specification that indicators must be domain-agnostic and that the structures should be widely-held are included to allow for cross-disciplinary comparisons. We found that the most straightforward way to implement structural archetypes is through classifying section headings according to their pragmatic intent. With this comes a few challenges: (1) defining a set of domain-agnostic indicators, which we refer to as a "structural vocabulary"; (2) parsing a document to obtain its structure; and (3) finding archetypes from document-level structures. In the proceeding section, we address (1) and (2), and in Section 4 we address (3). ### Deriving a Structural Vocabulary Although indicators of pragmatic intent can exist on the sentence level, we follow Arnold et al. (2019) and create a small set of types that are loosely related to common section headings (e.g. "Methods"). We call this set a "structural vocabulary" because it functions in an analogous way to a vocabulary of words; any document can be described as a sequence of items that are taken from this vocabulary. There are three properties that the types should satisfy: 1. **domain independence**: types should be used by different disciplines 2. **high coverage**: unlabeled instances should be able to be classified as a particular type. 3. **internal consistency**: types should accurately reflect their instances Domain IndependenceAs pointed out by Arnold et al. (2019), there exists a "vocabulary mismatch problem" where different disciplines talk about their work in different ways. Indeed, 62% of the sampled headings only appear once and are not good choices for section types. On the other hand, the most frequent headings are a much better choice, especially those that appear in all domains. After merging a few popular variations among the top 20 section headings (e.g. _conclusion_ and _summary_, _background_ and _related work_), we yield the following types1: _introduction_ (a section which introduces the reader to potentially new concepts; \(n=10916\)), _methods_ (a section which details how a hypothesis will be tested; \(n=2116\)), _results_ (a section which presents findings of the method; \(n=3119\)), _discussion_ (a section which interprets and summarizes the results; \(n=3118\)), _conclusion_ (a section which summarizes the entire paper; \(n=7738\)), _analysis_ (a section which adds additional depth and nuance to the results; \(n=951\)), and _background_ (a section which connects ongoing work to previous related work; \(n=800\)). Figure 2 contains discipline-level counts. Footnote 1: Although _abstract_ is extremely common we found it redundant as a section type as it only exists once per paper and in a predictable location. High CoverageWe can achieve high coverage by classifying any section as one of these section types through language modeling. Specifically, the hidden representation of a neural language model \(h(\cdot)\) can act as an embedding of its input. We use the [CLS] tag of SciBERT's hidden layer, selected for its robust representations of scientific literature (Beltagy et al., 2019). To classify, we define a distance score \(d(\cdot)\) for a section \(s\) and a type \(T\) as the distance between \(h(s)\) and the average embedding across all instances of a type, i.e. \[d(s,T)=\left|h(s)-\frac{\sum_{t\in T}h(t)}{\|T\|}\right|\] Note that since the embedding is a vector, addition and division are elementwise. 
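A minimal sketch of this embedding and distance computation is given below (it assumes the HuggingFace transformers library and the public allenai/scibert_scivocab_uncased checkpoint, and reads \(|\cdot|\) as the Euclidean norm; the truncation length and pooling details are simplifications rather than our exact setup).

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("allenai/scibert_scivocab_uncased")
model = AutoModel.from_pretrained("allenai/scibert_scivocab_uncased")
model.eval()

def h(text: str) -> torch.Tensor:
    """Embed a section (heading plus leading body tokens) as SciBERT's [CLS] vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=32)
    with torch.no_grad():
        out = model(**inputs)
    return out.last_hidden_state[0, 0]            # hidden state at the [CLS] position

def centroid(instances):
    """Average embedding over all labeled instances of one section type T."""
    return torch.stack([h(t) for t in instances]).mean(dim=0)

def d(section: str, type_centroid: torch.Tensor) -> float:
    """Distance score d(s, T) from a section to a type's average embedding."""
    return torch.norm(h(section) - type_centroid).item()
```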
Then, we compute the distance for all types in the vocabulary \(V\) and select the minimum, i.e. \[s_{type}=\operatorname*{arg\,min}_{T\in V}(d(s,T))\] Internal ConsistencySome sections do not adequately fit any section type, so nearest-neighbor classification will result in very inconsistent clusters. We address this problem by imposing a threshold on the maximum distance for \(d(\cdot)\). Further, since the types have unequal variance (that is, the ground truth instances for some types are more consistent than for others), we define a type-specific threshold as half of the distance from the center of \(T\) to the furthest member of \(T\), i.e. \[\text{threshold}_{T}=0.5\cdot\max_{t\in T}(d(t,T))\] The weight of 0.5 was found to remove outliers appropriately and maximize retrofitting performance (Section 4.2). We also note that some headings, especially brief ones, leave much room for interpretation and make retrofitting challenging. We address this problem by concatenating tokens of each section's heading and body, up to 25 tokens, as input to the language model. This ensures that brief headings contain enough information to make an accurate representation without including too many details from the body text. ## 4 Results and Discussion ### Data We use the Semantic Scholar Open Research Corpus (S2ORC) for all analysis (Lo et al., 2020). This corpus, which is freely available, contains approximately 7.5M PDF-parsed documents from 19 disciplines, including natural sciences, social sciences, arts, and humanities. For our experiments, we randomly sample 1k documents for each discipline, yielding a total of 19k documents. ### Retrofitting Performance Retrofitting (or normalizing) section headers refers to re-labeling sections with the structural vocabulary. We evaluate retrofitting performance by manually tagging 30 of each section type and comparing the true labels to the predicted values. Our method yields an average F1 performance of 0.76. The breakdown per section type, shown in Table 1, reveals that _conclusion_, _background_, and _analysis_ sections were the most difficult to predict. We attribute this to a lack of textual clues in the heading and body, and also a semantic overlap with _introduction_ sections. Future work can improve the classifier with more nuanced signals, such as position, length, number of references, etc. ### Analyzing Position with Aggregate Frequency A simple yet expressive way of showing the structural archetypes of a discipline is to consider the frequency of a particular type at any point in the article (normalized by length). This analysis reveals general trends throughout a discipline's documents, such as where a section type is most frequent or where there is homogeneity. To illustrate the practicality of this analysis, consider the hypothesis that Physics articles are more empirically-motivated while Political Science articles are more conceptually-motivated, i.e. that they are on opposing ends of the _concrete_ versus _abstract_ spectrum. We operationalize this by claiming that Physics articles have more _methods_, _results_, and _analysis_ sections than Political Science. Figure 1 shows the difference between Physics and Political Science at each point in the article. It reveals that not only do Physics articles contain more _methods_ and _results_, but also that Physics articles introduce _methods_ earlier than Political Science, and that both contain the same number of _analysis_ sections.
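The position analysis above reduces to a small amount of bookkeeping. The sketch below (plain Python/numpy; `docs` is a hypothetical list of documents, each given as its sequence of retrofitted section types) bins each section's normalized position and counts how often a given type occurs in each bin, which is the quantity Figure 1 compares across disciplines.

```python
import numpy as np

def position_frequencies(docs, section_type, n_bins=10):
    """Relative frequency of `section_type` at each normalized position (0..1).

    docs : list of documents; each document is a list of section-type strings
           in reading order, e.g. ["introduction", "methods", "results", ...].
    """
    counts = np.zeros(n_bins)
    totals = np.zeros(n_bins)
    for doc in docs:
        n = len(doc)
        for i, sec in enumerate(doc):
            b = min(int(n_bins * i / n), n_bins - 1)   # position normalized by length
            totals[b] += 1
            counts[b] += (sec == section_type)
    return counts / np.maximum(totals, 1)

# Difference curve of the kind plotted in Figure 1 (hypothetical corpora):
# delta = position_frequencies(physics_docs, "methods") \
#         - position_frequencies(polisci_docs, "methods")
```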
### Analyzing Ordering with State Transitions A more structural analysis of a discipline is to look at the frequency of sequence fragments through computing transition probabilities. As a second example, suppose we have a more nuanced hypothesis: that Psychology papers tend to separate claims and evaluate them sequentially (_methods_, _results_, _discussion_, repeat) whereas Sociology papers tend to evaluate all claims at once. We can operationalize these hypotheses by calculating the transition probability between section \(s_{i}\) and \(s_{i-1}\) conditioned on some discipline. In Table 2, we see evidence that _methods_ sections are more likely to be preceded by _results_ sections in Psychology than Sociology, implying a new iteration of a cycle. We might conclude that Psychology papers are more likely to have cyclical experiments, but not that Sociology papers conduct multiple experiments in a linear fashion. \begin{table} \begin{tabular}{|c|c|c|c|} \hline Type & Precision & Recall & F1 \\ \hline introduction & 0.77 & 0.97 & 0.85 \\ \hline conclusion & 0.67 & 0.72 & 0.69 \\ \hline discussion & 0.88 & 0.88 & 0.88 \\ \hline results & 0.80 & 0.85 & 0.83 \\ \hline methods & 0.83 & 0.91 & 0.87 \\ \hline background & 0.63 & 0.77 & 0.69 \\ \hline analysis & 0.50 & 0.61 & 0.55 \\ \hline **overall** & **0.72** & **0.88** & **0.76** \\ \hline \end{tabular} \end{table} Table 1: Type-level and overall performance for section type retrofitting. Figure 1: A comparison between the positions (normalized by document length; \(x\) axis) and frequencies (\(y\) axis) of section types in Physics and Political Science. Comparable distributions of _introduction_, _methods_, _analysis_, _discussion_, and _conclusion_, but different distributions of background and results. ## 5 Conclusion and Future Work In this paper, we have shown a simple method for constructing and comparing structural archetypes across different disciplines. By classifying the pragmatic intent of section headings, we can visualize structural trends across disciplines. In addition to utilizing a more complex classifier, future directions for this work include (1) further distinguishing between subdisciplines (e.g. abnormal psychology vs. developmental psychology) and document type (e.g. technical report vs. article); (2) learning relationships between structures and measures of research quality, such as reproducibility; (3) learning how to convert one structure into another, with the ultimate goal of normalizing them for easier comprehension or better models; (4) deeper investigations into the selection of a structural vocabulary, such as including common argumentative zoning types or adjusting the scale to the sentence-level; and (5) drawing comparisons, such as by clustering, between different documents based strictly on their structure. ## 6 Acknowledgements This work was funded by the Defense Advanced Research Projects Agency with award W911NF-19-20271. The authors would like to thank the reviewers of this paper for their detailed and constructive feedback, and in particular their ideas for future directions.
2309.14312
Chow Rings of Matroids as Permutation Representations
Given a matroid and a group of its matroid automorphisms, we study the induced group action on the Chow ring of the matroid. This turns out to always be a permutation action. Work of Adiprasito, Huh and Katz showed that the Chow ring satisfies Poincar\'e duality and the Hard Lefschetz theorem. We lift these to statements about this permutation action, and suggest further conjectures in this vein.
Robert Angarone, Anastasia Nathanson, Victor Reiner
2023-09-25T17:31:24Z
http://arxiv.org/abs/2309.14312v4
# Chow rings of matroids as permutation representations ###### Abstract. Given a matroid and a group of its matroid automorphisms, we study the induced group action on the Chow ring of the matroid. This turns out to always be a permutation action. Work of Adiprasito, Huh and Katz showed that the Chow ring satisfies Poincare duality and the Hard Lefschetz theorem. We lift these to statements about this permutation action, and suggest further conjectures in this vein. Key words and phrases:matroid, Chow ring, Koszul, log-concave, unimodal, Kahler package, Burnside ring, equivariant, Polya frequency, real-rooted. 2010 Mathematics Subject Classification: 05B35, 05E18, 05E14 ## 1. Introduction A _matroid_\(\mathcal{M}\) is a combinatorial abstraction of lists of vectors \(v_{1},v_{2},\ldots,v_{n}\) in a vector space, recording only the information about which subsets of the vectors are linearly independent or dependent, forgetting their coordinates; see Section 2.1 for definitions and references. In groundbreaking work, Adiprasito, Huh and Katz [1] affirmed long-standing conjectures of Rota-Heron-Welsh and Mason about vectors and matroids via a new methodology. Their work employed a certain graded \(\mathbb{Z}\)-algebra \(A(\mathcal{M})\) called the _Chow ring_ of \(\mathcal{M}\), introduced by Feichtner and Yuzvinsky [11] as a generalization of the Chow ring of DeConcini and Procesi's _wonderful compactifications_ for hyperplane arrangement complements. A remarkable integral Grobner basis result proven by Feichtner and Yuzvinsky [11, Thm. 2] shows that for a matroid \(\mathcal{M}\) of rank \(r+1\) with Chow ring \(A(\mathcal{M})=\bigoplus_{k=0}^{r}A^{k}\), each homogeneous component is free abelian: \(A^{k}\cong\mathbb{Z}^{a_{k}}\) for some Hilbert function \((a_{0},a_{1},\ldots,a_{r})\). A key step in the work of Adiprasito, Huh and Katz shows not only _symmetry_ and _unimodality_ for the Hilbert function \[a_{k}=a_{r-k}\text{ for }k\leq r/2 \tag{1}\] \[a_{0}\leq a_{1}\leq\cdots\leq a_{\lfloor\frac{r}{2}\rfloor}=a_{\lceil\frac{r}{2}\rceil}\geq\cdots\geq a_{r-1}\geq a_{r}, \tag{2}\] but in fact proves that \(A(\mathcal{M})\) enjoys a trio of properties referred to as the _Kahler package_, reviewed in Section 2.4 below. The first of these properties is _Poincare duality_, proving (1) via a natural \(\mathbb{Z}\)-module isomorphism \(A^{r-k}\cong\operatorname{Hom}_{\mathbb{Z}}(A^{k},\mathbb{Z})\). The second property, called the _Hard Lefschetz Theorem_, shows that after tensoring over \(\mathbb{Z}\) with \(\mathbb{R}\) to obtain \(A(\mathcal{M})_{\mathbb{R}}=\bigoplus_{k=0}^{r}A^{k}_{\mathbb{R}}\), one can find _Lefschetz elements_\(\omega\) in \(A^{1}_{\mathbb{R}}\) such that multiplication by \(\omega^{r-2k}\) gives \(\mathbb{R}\)-linear isomorphisms \(A^{k}_{\mathbb{R}}\to A^{r-k}_{\mathbb{R}}\) for \(k\leq\frac{r}{2}\). In particular, multiplication by \(\omega\) mapping \(A^{k}_{\mathbb{R}}\to A^{k+1}_{\mathbb{R}}\) is _injective_ for \(k<\frac{r}{2}\), strengthening the unimodality (2). We are interested here in how these _Poincare duality_ and _Hard Lefschetz_ properties interact with the group \(G:=\operatorname{Aut}(\mathcal{M})\) of symmetries of the matroid \(\mathcal{M}\). It is not hard to check (see Section 2.1 below) that \(G\) acts via graded \(\mathbb{Z}\)-algebra automorphisms on \(A(\mathcal{M})\), giving \(\mathbb{Z}G\)-module structures on each \(A^{k}\), and \(\mathbb{R}G\)-module structures on each \(A^{k}_{\mathbb{R}}\).
One can also check (see the proof of Corollary 2.14 below) that \(A^{r}\cong\mathbb{Z}\) with trivial \(G\)-action. From this, the Poincare duality pairing immediately gives rise to a \(\mathbb{Z}G\)-module isomorphism \[A^{r-k}\cong\operatorname{Hom}_{\mathbb{Z}}(A^{k},\mathbb{Z}) \tag{3}\] where \(g\) in \(G\) acts on \(\varphi\) in \(\operatorname{Hom}_{\mathbb{Z}}(A^{k},\mathbb{Z})\) via \(\varphi\mapsto\varphi\circ g^{-1}\); similarly \(A^{r-k}\cong\operatorname{Hom}_{\mathbb{R}}(A^{k},\mathbb{R})\) as \(\mathbb{R}G\)-modules. Furthermore, it is not hard to check (see Corollary 2.15 below) that one can pick an explicit Lefschetz element \(\omega\) as in [1] which is \(G\)-fixed, giving \(\mathbb{R}G\)-module isomorphisms and injections \[A^{k}_{\mathbb{R}}\overset{\sim}{\longrightarrow}A^{r-k}_{\mathbb{R}}\quad\text{ for }k\leq\frac{r}{2}\] \[a\longmapsto a\cdot\omega^{r-2k} \tag{4}\] \[A_{\mathbb{R}}^{k}\hookrightarrow A_{\mathbb{R}}^{k+1}\quad\text{ for }k<\frac{r}{2}\] \[a\longmapsto a\cdot\omega. \tag{5}\] Our goal in this paper is to use Feichtner and Yuzvinsky's Grobner basis result to prove a combinatorial strengthening of the isomorphisms and injections (3), (4), (5). To this end, recall (or see Section 2.1 below) that the matroid \(\mathcal{M}\) can be specified by its family \(\mathfrak{F}\) of _flats_; in the case where \(\mathcal{M}\) is realized by a list of vectors \(v_{1},v_{2},\ldots,v_{n}\) in a vector space, a subset \(F\subseteq\{1,2,\ldots,n\}=:E\) is a flat when \(\{v_{j}\}_{j\in F}\) is linearly closed, meaning that every vector \(v_{i}\) for \(i\) in \(E\) that lies in the linear span of \(\{v_{j}\}_{j\in F}\) already has \(i\) in \(F\). Then the _Chow ring_\(A(\mathcal{M})\) is presented as a quotient of the polynomial ring \(S:=\mathbb{Z}[x_{F}]\) having one variable \(x_{F}\) for each nonempty flat \(F\) in \(\mathfrak{F}\setminus\{\varnothing\}\). The presentation takes the form \(A(\mathcal{M}):=S/(I+J)\) where \(I,J\) are certain ideals of \(S\) defined more precisely in Definition 2.2 below. Feichtner and Yuzvinsky exhibited (see Theorem 2.8, Corollary 2.9 below) a Grobner basis for \(I+J\) that leads to the following standard monomial \(\mathbb{Z}\)-basis for \(A(\mathcal{M})\), which we call the _FY-monomials_ of \(\mathcal{M}\): \[\text{FY}:=\{x_{F_{1}}^{m_{1}}x_{F_{2}}^{m_{2}}\cdots x_{F_{\ell}}^{m_{\ell}}:(\varnothing=:F_{0})\subsetneq F_{1}\subsetneq F_{2}\subsetneq\cdots\subsetneq F_{\ell},\text{ and }m_{i}\leq\text{rk}(F_{i})-\text{rk}(F_{i-1})-1\}\] Here \(\text{rk}(F)\) denotes the matroid rank of the flat \(F\) (= the dimension of the linear span of \(\{v_{j}\}_{j\in F}\) when \(\mathcal{M}\) is realized by a list of vectors). The subset \(\text{FY}^{k}\) of \(\text{FY}\)-monomials \(x_{F_{1}}^{m_{1}}\cdots x_{F_{\ell}}^{m_{\ell}}\) of total degree \(m_{1}+\cdots+m_{\ell}=k\) then gives a \(\mathbb{Z}\)-basis for \(A^{k}\). One can readily check (see Corollary 2.10 below) that the group \(G=\text{Aut}(\mathcal{M})\) permutes the \(\mathbb{Z}\)-basis \(\text{FY}^{k}\) for \(A^{k}\), endowing \(A^{k}\) with the structure of a _permutation representation_, or \(G\)_-set_. Our main result, proven in Section 3, is this strengthening of the isomorphisms and injections (3), (4), (5). **Theorem 1.1**.: _For every matroid \(\mathcal{M}\) of rank \(r+1\), there exist_ 1. \(G\)_-equivariant bijections_ \(\pi:\text{FY}^{k}\xrightarrow{\sim}\text{FY}^{r-k}\) _for_ \(k\leq\frac{r}{2}\)_, and_ 2.
\(G\)_-equivariant injections_ \(\lambda:\text{FY}^{k}\hookrightarrow\text{FY}^{k+1}\) _for_ \(k<\frac{r}{2}\)_._ **Example 1.2**.: Let \(\mathcal{M}=U_{4,5}\) be the uniform matroid of rank \(4\) on \(E=\{1,2,3,4,5\}\), associated to a list of \(5\)_generic_ vectors \(v_{1},v_{2},v_{3},v_{4},v_{5}\) in a \(4\)-dimensional vector space, so that any quadruple \(v_{i},v_{j},v_{k},v_{\ell}\) is linearly independent. One has these flats of various ranks: \begin{tabular}{|c|c|} \hline rank & flats \(F\in\mathfrak{F}\) \\ \hline \hline \(0\) & \(\varnothing\) \\ \hline \(1\) & \(1,2,3,4,5\) \\ \hline \(2\) & \(12,13,14,15,23,24,25,34,35,45\) \\ \hline \(3\) & \(123,124,125,134,135,145,234,235,245,345\) \\ \hline \(4\) & \(E=12345\) \\ \hline \end{tabular} The Chow ring \(A(\mathcal{M})=S/(I+J)\), where \(S=\mathbb{Z}[x_{i},x_{jk},x_{\ell mn},x_{E}]\) with \(\{i\},\{j,k\},\{\ell,m,n\}\) running through all one, two and three-element subsets of \(E=\{1,2,3,4,5\}\), and \[I=\Big{(}x_{F}x_{F^{\prime}}\Big{)}_{F\not\subset F^{\prime},F^{\prime}\not \subset F},\qquad J=\bigg{(}x_{i}+\sum_{\begin{subarray}{c}1\leq j<k\leq 5\\ i\in\{j,k\}\end{subarray}}x_{jk}+\sum_{\begin{subarray}{c}1\leq\ell<m<n \leq 5\\ i\in\{\ell,m,n\}\end{subarray}}x_{\ell mn}\ +x_{E}\bigg{)}_{i=1,2,3,4,5}.\] The FY-monomial bases for \(A^{0},A^{1},A^{2},A^{3}\) are shown here, together with the \(G\)-equivariant maps \(\lambda\): \[\begin{array}{ccccc}\text{FY}^{\mathbf{0}}&&\text{FY}^{\mathbf{1}}&&\text{FY }^{\mathbf{2}}&&\text{FY}^{\mathbf{3}}\\ \\ 1&\overset{\lambda}{\longmapsto}&x_{E}&&\overset{\lambda}{\longmapsto}&x_{E}^{2} &x_{E}^{3}\\ &&&&\begin{array}{ccccc}x_{ijk}&\overset{\lambda}{\longmapsto}&x_{ijk}^{2} \\ 1\leq i<j<k\leq 5&&&&\\ \end{array}\\ &&&&\begin{array}{ccccc}x_{ij}&\overset{\lambda}{\longmapsto}&x_{ij}\cdot x_{E} \\ 1\leq i<j\leq 5&&&&\\ \end{array}\\ \end{array}\] Thus \(A(\mathcal{M})\) has Hilbert function \((a_{0},a_{1},a_{2},a_{3})=(1,21,21,1)\). Here the bijection \(\pi:\text{FY}^{0}\to\text{FY}^{3}\) necessarily maps \(1\longmapsto x_{E}^{3}\), and the bijection \(\pi:\text{FY}^{1}\to\text{FY}^{2}\) coincides with the map \(\lambda:\text{FY}^{1}\to\text{FY}^{2}\) above. Before proving Theorem 1.1 in Section 3, the background Section 2 reviews matroids and Chow rings, and collects a few simple observations. Section 4 recasts the maps \(\lambda\) from Theorem 1.1(ii) in the language of unimodality within the _Burnside ring_ of \(G\)-permutation representations. This motivates some analogous conjectures about Burnside rings that would extend recent log-concavity and total positivity conjectures for Hilbert functions of Chow rings \(A(\mathcal{M})\). Section 5 poses some additional questions and conjectures. ## Acknowledgements The authors thank Alessio D'Ali, Chris Eur, Luis Ferroni, Matt Larson, Hsin-Chieh Liao, Diane Maclagan, Matt Maestroni, Lorenzo Vecchi and Peter Webb for helpful conversations and references. They thank Trevor Karn for his wonderful Sage/Cocalc code that checks whether a symmetric group representation is a permutation representation. The authors received partial support from NSF grants DMS-1949896, DMS-1745638, DMS-2053288. Second author received partial support from PROM project no. POWR.03.03.00-00-PN13/18. ## 2. Background ### Matroids There are many equivalent definitions of a matroid; the one most useful here uses their _flats_. For matroid basics and background, good references are Oxley [11] and Ardila [1]. 
**Definition 2.1**.: A matroid \(\mathcal{M}=(E,\mathfrak{F})\) consists of a (finite) ground set \(E\) and a collection of subsets \(\mathfrak{F}=\{F\}\subseteq 2^{E}\) called _flats_, satisfying these axioms: 1. \(E\in\mathfrak{F}\). 2. If \(F,F^{\prime}\in\mathfrak{F}\), then \(F\cap F^{\prime}\in\mathfrak{F}\). 3. For any \(F\in\mathfrak{F}\), and any \(i\in E\setminus F\), there is a unique \(F^{\prime}\in\mathfrak{F}\) containing \(i\) which _covers_\(F\) in this sense: \(F\subsetneq F^{\prime}\), and no other \(F^{\prime\prime}\) has \(F\subsetneq F^{\prime\prime}\subsetneq F^{\prime}\). These axioms combinatorially abstract properties from the case of a _realizable matroid_\(\mathcal{M}\) associated to a list of vectors \(v_{1},v_{2},\ldots,v_{n}\) in a vector space over some field \(\mathbb{F}\). This realizable matroid \(\mathcal{M}\) has ground set \(E=\{1,2,\ldots,n\}\), and a subset \(F\subseteq E\) is a flat in \(\mathfrak{F}\) if and only if this inclusion is an equality: \[F\subseteq\{i\in E:v_{i}\in\operatorname{span}_{\mathbb{F}}\{v_{j}\}_{j\in F}\}.\] Axioms (F1),(F2) imply that the inclusion order \((\mathfrak{F},\subseteq)\) will be a lattice, with _meets_\(F\wedge F^{\prime}=F\cap F^{\prime}\) and _joins_\(F\lor F^{\prime}=\bigcap_{F^{\prime\prime}\supseteq F,F^{\prime}}F^{\prime\prime}\). Axiom (F3) further implies that the lattice \(\mathfrak{F}\) will be _geometric_, that is, both _atomic_ (every \(F\) is the join of the atoms below it) and _upper-semimodular_: it has a rank function \(\operatorname{rk}:\mathfrak{F}\to\{0,1,2,\ldots\}\) satisfying \[\operatorname{rk}(F\lor F^{\prime})+\operatorname{rk}(F\wedge F^{\prime})\leq\operatorname{rk}(F)+\operatorname{rk}(F^{\prime}).\] The _rank_ of the matroid \(\mathcal{M}\) itself is defined to be \(\operatorname{rk}(E)\), and we assume throughout that \(\operatorname{rk}(E)=r+1\). An _automorphism_ of the matroid \(\mathcal{M}\) is any permutation \(g:E\to E\) of the ground set \(E\) that carries flats to flats: for all \(F\) in \(\mathfrak{F}\) one has \(g(F)\) in \(\mathfrak{F}\). Let \(G=\operatorname{Aut}(\mathcal{M})\) denote the group of all automorphisms of \(\mathcal{M}\). Since such automorphisms respect the partial order via inclusion on \(\mathfrak{F}\), they also preserve the rank function: \(\operatorname{rk}(g(F))=\operatorname{rk}(F)\) for all \(g\) in \(G\) and \(F\) in \(\mathfrak{F}\). ### Chow Rings As defined in the Introduction, Feichtner and Yuzvinsky [13] introduced the Chow ring \(A(\mathcal{M})\) of a matroid \(\mathcal{M}\). Their goal was to model the Chow ring of DeConcini and Procesi's _wonderful compactification_ of the complement \(V\setminus\bigcup_{i=1}^{n}H_{i}\) of an arrangement of hyperplanes \(H_{1},\ldots,H_{n}\), inside a vector space \(V\). Here \(\mathcal{M}\) is the matroid in the dual space \(V^{*}\) realized by linear forms \(f_{1},\ldots,f_{n}\) in \(V^{*}\) with \(H_{i}=\ker(f_{i})\) for \(i=1,2,\ldots,n\).
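To make the flats, ranks, and FY-monomials above concrete, here is a small self-contained sketch (plain Python; the helper names are ours and not taken from any matroid library) for the uniform matroid \(U_{4,5}\) of Example 1.2. It lists the nonempty flats, enumerates the FY-monomials degree by degree following the definition from the Introduction, and recovers the Hilbert function \((1,21,21,1)\).

```python
from itertools import combinations

E = frozenset(range(1, 6))                  # ground set {1,...,5} of U_{4,5}
# Nonempty flats of U_{4,5}: all subsets of size <= 3, together with E itself.
flats = [frozenset(c) for k in (1, 2, 3) for c in combinations(sorted(E), k)] + [E]
rank = {F: min(len(F), 4) for F in flats}   # rk(F) = |F| for proper flats, rk(E) = 4

def fy_monomials(top=frozenset(), top_rank=0):
    """Yield FY-monomials as tuples ((F_1, m_1), ..., (F_l, m_l)) of nested flats
    with 1 <= m_i <= rk(F_i) - rk(F_{i-1}) - 1, extending the chain above `top`."""
    yield ()                                # the empty product, i.e. the monomial 1
    for F in flats:
        if top < F:                         # grow the chain by a strictly larger flat
            for m in range(1, rank[F] - top_rank):
                for rest in fy_monomials(F, rank[F]):
                    yield ((F, m),) + rest

hilbert = [0, 0, 0, 0]
for mono in fy_monomials():
    hilbert[sum(m for _, m in mono)] += 1
print(hilbert)                              # [1, 21, 21, 1], matching Example 1.2
```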
**Definition 2.2**.: The _Chow ring_\(A(\mathcal{M})\) of a matroid \(\mathcal{M}\) is the quotient \(Z\)-algebra \[A(\mathcal{M}):=S/(I+J)\] where \(S=\mathbb{Z}[x_{F}]\) is a polynomial ring having one variable \(x_{F}\) for each nonempty flat \(\varnothing\neq F\in\mathfrak{F}\), and where \(I,J\) are the following ideals of \(S\): * \(I\) is generated by products \(x_{F}x_{F^{\prime}}\) for non-nested flats \(F,F^{\prime}\) (neither \(F\subseteq F^{\prime}\) nor \(F^{\prime}\subseteq F\)), * \(J\) is the ideal of \(S\) generated by the linear elements \(\sum_{a\in F\in\mathfrak{F}}x_{F}\) for each atom \(a\) (= flat of rank 1) in \(\mathfrak{F}\). The fact that the presentation of the Chow ring \(A(\mathcal{M})\) only uses the information about the partial order on the lattice of flats \(\mathfrak{F}\) has some consequences. * \(A(\mathcal{M})\) depends only upon the associated _simple matroid_ of \(\mathcal{M}\). That is, one may assume \(\mathcal{M}\) has * no _loops_, meaning that one has empty intersection \(\varnothing=\bigcap_{F\in\mathfrak{F}}F\), and * no elements \(i\neq j\) in \(E\) which are _parallel_ in the sense \(\{F\in\mathfrak{F}:i\in F\}=\{F\in\mathfrak{F}:j\in F\}\). Thus without loss of generality, one may assume that \(\mathcal{M}\) is a simple matroid throughout. * Any element \(g\) in \(G=\operatorname{Aut}(\mathcal{M})\) will send the generators of the ideals \(I,J\) to other such generators. Thus \(I+J\) is a \(G\)-stable ideal, and \(G\) acts on \(A(\mathcal{M})\). **Remark 2.3**.: A few remarks are in order, comparing the above definition of the Chow ring \(A(\mathcal{M})\) to that given by Feichtner-Yuzvinsky [10], as well as the one used in Adiprasito-Huh-Katz [1]. Feichtner and Yuzvinsky in [10] consider Chow rings which are more general in two ways: they are associated to the more general class of atomic lattices (not necessarily geometric lattices), and they incorporate the choice of a subset of the lattice called a _building set_. Here we are both assuming that the lattice is geometric, so that it corresponds to a (simple) matroid \(\mathcal{M}\), and we are also considering only the case of the _maximal building set_, which is the set of all nonempty flats in \(\mathfrak{F}\). A reason for this choice is so that the entire group \(G=\operatorname{Aut}(\mathcal{M})\) acts on the Chow ring, which requires the building set to be stable under \(G\), which does not always occur. In [1], they do consider certain other choices of nonmaximal building sets which we are ignoring. However, they also alter the presentation of \(A(\mathcal{M})\) to eliminate the variable \(x_{E}\) from the polynomial ring \(S=\mathbb{Z}[x_{F}]_{\varnothing\subsetneq\,F\subseteq E}\). Specifically they write \(A(\mathcal{M})=\hat{S}/(\hat{I}+\hat{J})\) where \(\hat{S}=\mathbb{Z}[x_{F}]\) has a variable \(x_{F}\) for each nonempty, _proper_ flat \(F\) with \(\varnothing\subsetneq\,F\subsetneq\,E\). Then \(\hat{I}\) is again the ideal generated by the monomials \(x_{F}x_{F^{\prime}}\) where \(F,F^{\prime}\) are not-nested, but now \(\hat{J}\) is generated by the linear elements \[\sum_{a\in F\neq E\in\mathfrak{F}}x_{F}-\sum_{a^{\prime}\in F\neq E\in \mathfrak{F}}x_{F}\] for each pair of distinct atoms \(a\neq a^{\prime}\) in \(\mathfrak{F}\). 
It is not hard to check that this presentation of \(A(\mathcal{M})\) is equivalent to Definition 2.2: mutually inverse isomorphisms are induced by the map \(\hat{S}\to S\) sending \(x_{F}\longmapsto x_{F}\), and the backward map \(S\to\hat{S}\) sending \(x_{F}\longmapsto x_{F}\) for \(F\neq E\) and \(x_{E}\longmapsto-\sum_{a\in F\neq E\in\mathfrak{F}}x_{F}\) for any atom \(a\) in \(\mathfrak{F}\). Note if one considers \(S=\mathbb{Z}[x_{F}]\) as a graded \(\mathbb{Z}\)-algebra in which \(\deg(x_{F})=1\) for all nonempty flats \(F\), then the ideals \(I,J\) are generated by homogeneous elements: all quadratic generators for \(I\), all linear generators for \(J\). Hence the quotient \(A(\mathcal{M})=S/(I+J)\) inherits the structure of a graded \(\mathbb{Z}\)-algebra \(A(\mathcal{M})=\bigoplus_{k=0}^{\infty}A^{k}\). Since the action of \(G=\operatorname{Aut}(\mathcal{M})\) on \(A(\mathcal{M})\) preserves degrees, both \(A(\mathcal{M})\) and each homogeneous component \(A^{k}\) become \(\mathbb{Z}G\)-modules. It is not yet clear that \(A(\mathcal{M})\) has only finitely many non-vanishing components \(A^{k}\), nor that it vanishes beyond \(A^{r}\) where \(r=\operatorname{rk}(\mathcal{M})-1\). For this and other purposes, we next consider Feichtner and Yuzvinsky's remarkable Grobner basis for \(I+J\) mentioned in the Introduction. ### In praise of the Feichtner-Yuzvinsky Grobner basis We recall here one version of Grobner basis theory with respect to a monomial order on a polynomial ring \(\mathbb{Z}[x_{1},\ldots,x_{n}]\), as used in [10]. **Definition 2.4**.: A linear order \(\prec\) on the set of all monomials \(\{\mathbf{x}^{\alpha}:=x_{1}^{\alpha_{1}}\cdots x_{n}^{\alpha_{n}}\}\) in \(\mathbb{Z}[x_{1},\ldots,x_{n}]\) is called a _monomial ordering_ if it is a _well-ordering_ (every subset of monomials has a \(\prec\)-minimum element) and \(\mathbf{x}^{\alpha}\prec\mathbf{x}^{\beta}\) implies \(\mathbf{x}^{\alpha}\cdot\mathbf{x}^{\gamma}\prec\mathbf{x}^{\beta}\cdot \mathbf{x}^{\gamma}\) for all \(\alpha,\beta,\gamma\) in \(\{0,1,2,\ldots\}^{n}\). **Example 2.5**.: After fixing a linear order on the variables \(x_{1}<\cdots<x_{n}\), the associated _lexicographic order_\(\prec\) has \(\mathbf{x}^{\alpha}\prec\mathbf{x}^{\beta}\) if there exists some \(k=1,2,\ldots,n\) with \(\alpha_{1}=\beta_{1},\alpha_{2}=\beta_{2},\ldots,\alpha_{k-1}=\beta_{k-1}\), but \(\alpha_{k}<\beta_{k}\). **Definition 2.6**.: For \(f=\sum_{\alpha}c_{\alpha}\mathbf{x}^{\alpha}\) in \(\mathbb{Z}[x_{1},\ldots,x_{n}]\), let \(\mathrm{in}_{\prec}(f)=\mathbf{x}^{\beta}\) denote the \(\prec\)-largest monomial in \(f\) having \(c_{\beta}\neq 0\). Say \(f\) is \(\prec\)_-monic_ if the initial monomial \(\mathrm{in}_{\prec}(f)=\mathbf{x}^{\beta}\) has its coefficient \(c_{\beta}=\pm 1\). Having fixed a monomial order \(\prec\), given an ideal \(I\subset\mathbb{Z}[x_{1},\ldots,x_{n}]\), say that a subset \(\{g_{1},\ldots,g_{t}\}\subset I\) is a _monic Grobner basis_ for \(I\) (with respect to \(\prec\)) if each \(g_{i}\) is \(\prec\)-monic, and every \(f\) in \(I\) has \(\mathrm{in}_{\prec}(f)\) divisible by at least one of the intial monomials \(\{\mathrm{in}_{\prec}(g_{1}),\ldots,\mathrm{in}_{\prec}(g_{t})\}\). Call \(\mathbf{x}^{\alpha}\) a _standard monomial_ for \(\{g_{1},\ldots,g_{t}\}\) with respect to \(\prec\) if \(\mathbf{x}^{\alpha}\) is divisible by none of \(\{\mathrm{in}_{\prec}(g_{1}),\ldots,\mathrm{in}_{\prec}(g_{t})\}\). 
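As a toy illustration of Definitions 2.4-2.6, the following sketch (plain Python, representing monomials by their exponent vectors; the naming is ours and not tied to any computer algebra system) implements the lexicographic comparison of Example 2.5, extracts the initial monomial of a polynomial stored as a dictionary of exponent-vector/coefficient pairs, and tests whether a monomial is standard with respect to a list of initial monomials.

```python
def initial_monomial(poly):
    """in_<(f): the lex-largest exponent vector appearing with nonzero coefficient.
    `poly` maps exponent vectors (tuples) to integer coefficients; Python's tuple
    comparison is exactly the lexicographic order of Example 2.5."""
    return max(alpha for alpha, c in poly.items() if c != 0)

def divides(alpha, beta):
    """Does x^alpha divide x^beta?  Componentwise comparison of exponents."""
    return all(a <= b for a, b in zip(alpha, beta))

def is_standard(alpha, initial_monomials):
    """x^alpha is standard iff no initial monomial in_<(g_i) divides it."""
    return not any(divides(m, alpha) for m in initial_monomials)

# Two variables x_1 < x_2:  f = x_1^2 - 3 x_1 x_2 + x_2^2
f = {(2, 0): 1, (1, 1): -3, (0, 2): 1}
assert initial_monomial(f) == (2, 0)
assert is_standard((0, 3), [(2, 0), (1, 1)])   # x_2^3 avoids x_1^2 and x_1 x_2
```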
It should be noted that some ideals \(I\subset\mathbb{Z}[x_{1},\ldots,x_{n}]\) have _no_ monic Grobner basis, e.g. \(I=(2)\subset\mathbb{Z}[x]\). However, whenever \(I\)_does_ have a monic Grobner basis, it has the following strong consequence, proven exactly as for Grobner bases over field coefficients; see, e.g., Cox, Little, O'Shea [15, SS2.5, 5.3]. **Proposition 2.7**.: _If \(\{g_{1},\ldots,g_{t}\}\) is a monic Grobner basis for \(I\subset S=\mathbb{Z}[x_{1},\ldots,x_{n}]\) with respect to \(\prec\), then_ * \(I=(g_{1},\ldots,g_{t})\)_, that is,_ \(\{g_{1},\ldots,g_{t}\}\) _generate_ \(I\) _as an ideal._ * _The quotient ring_ \(S/I\) _is a free_ \(\mathbb{Z}\)_-module, with a_ \(\mathbb{Z}\)_-basis given by the standard monomials_ \(\{x^{\alpha}\}\)_._ The following crucial result appears as [14, Thm. 2]. To state it, define an _FY-monomial order_ on \(S=\mathbb{Z}[x_{F}]_{\varnothing\neq F\in\mathfrak{F}}\) to be any monomial order based on a linear order of the variables with \(x_{F}>x_{F^{\prime}}\) if \(F\subsetneq F^{\prime}\). **Theorem 2.8**.: _Given a matroid \(\mathcal{M}\) and any FY-monomial order on \(S=\mathbb{Z}[x_{F}]_{\varnothing\neq F\in\mathfrak{F}}\), the ideal \(I+J\) presenting \(A(\mathcal{M})=S/(I+J)\) has a monic Grobner basis \(\{g_{F,F^{\prime}}\}\) indexed by \(F\neq F^{\prime}\) in \(\mathfrak{F}\), with \(g_{F,F^{\prime}}\) and their initial terms \(\mathrm{in}_{\prec}(g_{F,F^{\prime}})\) as shown here:_ \begin{tabular}{|c|c|c|} \hline _condition on_ \(F\neq F^{\prime}\) _in_ \(\mathfrak{F}\) & \(g_{F,F^{\prime}}\) & \(\mathrm{in}_{\prec}(g_{F,F^{\prime}})\) \\ \hline \hline \(F,F^{\prime}\) _non-nested_ & \(x_{F}x_{F^{\prime}}\) & \(x_{F}x_{F^{\prime}}\) \\ \hline \(\varnothing\neq F\subsetneq F^{\prime}\) & \(x_{F}\left(\sum_{\begin{subarray}{c}F^{\prime\prime}\in\mathfrak{F}:\\ F^{\prime\prime}\supseteq F^{\prime}\end{subarray}}x_{F^{\prime\prime}}\right)^{\mathrm{rk}(F^{\prime})-\mathrm{rk}(F)}\) & \(x_{F}\cdot x_{F^{\prime}}^{\mathrm{rk}(F^{\prime})-\mathrm{rk}(F)}\) \\ \hline \(\varnothing=F\subsetneq F^{\prime}\) & \(\left(\sum_{\begin{subarray}{c}F^{\prime\prime}\in\mathfrak{F}:\\ F^{\prime\prime}\supseteq F^{\prime}\end{subarray}}x_{F^{\prime\prime}}\right)^{\mathrm{rk}(F^{\prime})}\) & \(x_{F^{\prime}}^{\mathrm{rk}(F^{\prime})}\) \\ \hline \end{tabular} **Corollary 2.9**.: _([14, Cor.
1]) For a matroid \(\mathcal{M}\) of rank \(r+1\), the Chow ring \(A(\mathcal{M})\) has these properties:_ * \(A(\mathcal{M})\) _is free as a_ \(\mathbb{Z}\)_-module, with_ \(\mathbb{Z}\)_-basis given by the set of FY-monomials_ * \(\mathrm{FY}:=\{x_{F_{1}}^{m_{1}}x_{F_{2}}^{m_{2}}\cdots x_{F_{\ell}}^{m_{\ell} }:(\varnothing=:F_{0})\subsetneq F_{1}\subsetneq F_{2}\subsetneq\cdots\subsetneq F _{\ell},\text{ in }\mathfrak{F},\text{ and }m_{i}\leq\mathrm{rk}(F_{i})-\mathrm{rk}(F_{i-1})-1\}\)_._ * \(A(\mathcal{M})\) _vanishes in degrees strictly above_ \(r\)_, that is,_ \(A(\mathcal{M})=\bigoplus_{k=0}^{r}A^{k}\)_._ * \(A^{r}\) _has_ \(\mathbb{Z}\)_-basis_ \(\{x_{E}^{r}\}\)_, and hence one has a_ \(\mathbb{Z}\)_-module isomorphism_ \(\deg:A^{r}\longrightarrow\mathbb{Z}\) _sending_ \(x_{E}^{r}\longmapsto 1\)_._ Proof.: Assertion (i) follows from Theorem 2.8 after checking that the FY-monomials in (6) are exactly the standard monomials for the \(\prec\)-monic Grobner basis \(\{g_{F,F^{\prime}}\}\), that is, the monomials divisible by no \(\mathrm{in}_{\prec}(g_{F,F^{\prime}})\). For assertion (ii), note that the typical FY-monomial \(x_{F_{1}}^{m_{1}}x_{F_{2}}^{m_{2}}\cdots x_{F_{\ell}}^{m_{\ell}}\), has total degree \[\sum_{i=1}^{\ell}m_{i}\leq\sum_{i=1}^{\ell}(\mathrm{rk}(F_{i})-\mathrm{rk}(F_{i -1})-1)=\mathrm{rk}(F_{\ell})-\ell\leq(r+1)-1=r.\] For assertion (iii), note equality occurs only if \(\ell=1\) and \(F_{\ell}=E\), in which case the FY-monomial is \(x_{E}^{r}\). Note that for any matroid automorphism \(g\), the fact that \(\mathrm{rk}(g(F))=\mathrm{rk}(F)\) for every flat \(F\) in \(\mathfrak{F}\) implies that \(g\) sends any FY-monomial to another FY-monomial: \[x_{F_{1}}^{m_{1}}x_{F_{2}}^{m_{2}}\cdots x_{F_{\ell}}^{m_{\ell}}\ \stackrel{{ g}}{{\longmapsto}}\ x_{g_{(F_{1})}}^{m_{1}}x_{g_{(F_{2})}}^{m_{2}} \cdots x_{g_{(F_{\ell})}}^{m_{\ell}}.\] This has an immediate corollary, inspired by work of H.-C. Liao on Boolean matroids [12, Thm. 2.5]. **Corollary 2.10**.: _For any matroid \(\mathcal{M}\), the group \(G=\operatorname{Aut}(\mathcal{M})\) permutes the set \(\operatorname{FY}\), as well as its subset of degree \(k\) monomials \(\operatorname{FY}^{k}\subset\operatorname{FY}\). Consequently, the \(\mathbb{Z}G\)-modules on the Chow ring \(A(\mathcal{M})\) and each of its homogeneous components \(A^{k}\) lift to \(G\)-permutation representations on \(\operatorname{FY}\) and each \(\operatorname{FY}^{k}\)._ **Remark 2.11**.: It is rare to find families of ideals \(I\) inside polynomial rings \(S\) that are stable under a finite group \(G\) acting on \(S\), and which _also have a \(G\)-stable Grobner basis_\(\{g_{i}\}\) with respect to some monomial order \(\prec\). This occurs, for example, with the _Hibi rings_ studied in [10], and the more general \(P\)-partition rings studied by Feray and the third author in [10, Thm. 6.3]. More often, when starting with a \(G\)-stable ideal \(I\), passing to an intial ideal destroys the \(G\)-symmetry. One then usually needs alternative techniques to work with the quotient \(S/I\), as discussed, e.g., by Faugere [11] and Sturmfels [12, SS2.6]. **Remark 2.12**.: Although the \(\mathbb{Z}G\)-module structure on \(A(\mathcal{M})\) and \(A^{k}\) are canonical, their lifts to permutation representations on the sets \(\operatorname{FY}\) and \(\operatorname{FY}^{k}\) are not. 
In general, one can have two different \(G\)-permutation representations on sets \(X,X^{\prime}\) (that is, with no \(G\)-equivariant set bijection \(X\xrightarrow{\sim}X^{\prime}\)) but with a \(\mathbb{Z}G\)-module isomorphism \(\mathbb{Z}[X]\cong\mathbb{Z}[X^{\prime}]\); see, e.g., Conlon [14] and Scott [15]. ### The Kahler package The following theorem on the Kahler package for \(A(\mathcal{M})\) compiles some of the main results of the work of Adiprasito, Huh and Katz [1]. **Theorem 2.13**.: _For a matroid \(\mathcal{M}\) of rank \(r+1\), the Chow ring \(A(\mathcal{M})\) satisfies the Kahler package:_ * _(Poincare duality)_ _For every_ \(k\leq\frac{r}{2}\)_, one has a perfect_ \(\mathbb{Z}\)_-bilinear pairing_ \[A^{k}\times A^{r-k} \longrightarrow\mathbb{Z}\] \[(a,b) \longmapsto\deg(a\cdot b)\] _that is,_ \(b\longmapsto\varphi_{b}(-)\) _defined by_ \(\varphi_{b}(a)=\deg(a\cdot b)\) _is a_ \(\mathbb{Z}\)_-linear isomorphism_ \(A^{r-k}\cong\operatorname{Hom}_{\mathbb{Z}}(A^{k},\mathbb{Z})\)_._ * _(Hard Lefschetz)_ _Tensoring over_ \(\mathbb{Z}\) _with_ \(\mathbb{R}\)_, the (real) Chow ring_ \(A_{\mathbb{R}}(\mathcal{M})=\sum_{k=0}^{r}A_{\mathbb{R}}^{k}\) _contains_ _Lefschetz elements_ \(\omega\) _in_ \(A_{\mathbb{R}}^{1}\)_, meaning that_ \(a\mapsto a\cdot\omega^{r-2k}\) _is an_ \(\mathbb{R}\)_-linear isomorphism_ \(A_{\mathbb{R}}^{k}\to A_{\mathbb{R}}^{r-k}\) _for_ \(k\leq\frac{r}{2}\)_._ _In particular, multiplication by_ \(\omega\) _is an injection_ \(A_{\mathbb{R}}^{k}\to A_{\mathbb{R}}^{k+1}\) _for_ \(k<\frac{r}{2}\)_._ * _(Hodge-Riemann-Minkowski inequalities)_ _The Lefschetz elements_ \(\omega\) _define quadratic forms_ \(a\longmapsto(-1)^{k}\deg(a\cdot\omega^{r-2k}\cdot a)\) _on_ \(A_{\mathbb{R}}^{k}\) _that become positive definite upon restriction to the kernel of the map_ \(A_{\mathbb{R}}^{k}\longrightarrow A_{\mathbb{R}}^{r-k+1}\) _that sends_ \(a\longmapsto a\cdot\omega^{r-2k+1}\)_._ In fact, they show that one obtains a Lefschetz element \(\omega\) whenever \(\omega=\sum_{\varnothing\neq F\in\mathfrak{F}}c_{F}x_{F}\) has coefficients \(c_{F}\) coming from restricting to \(\mathfrak{F}\) any function \(A\mapsto c_{A}\) that maps \(2^{E}\to\mathbb{R}\) and satisfies these two properties: 1. the _strict submodular inequality_ \(c_{A}+c_{B}>c_{A\cap B}+c_{A\cup B}\) for all \(A\neq B\), and 2. \(c_{\varnothing}=c_{E}=0\). This has consequences for \(G\) acting on \(A(\mathcal{M})\) and each \(A^{k}\). The first will be refined by Theorem 1.1(i). **Corollary 2.14**.: _For any matroid \(\mathcal{M}\), one has an isomorphism of \(\mathbb{Z}G\)-modules \(A^{r-k}\to A^{k}\) for each \(k\leq\frac{r}{2}\)._ Proof.: Corollary 2.9(iii) shows that \(A^{r}\) has only one \(\mathbb{Z}\)-basis element \(x_{E}^{r}\), fixed by every \(g\) in \(G\), so the degree map \(\deg:A^{r}\longrightarrow\mathbb{Z}\) is \(G\)-equivariant for the trivial \(G\)-action on the target \(\mathbb{Z}\). Thus the Poincare duality isomorphism \(A_{r-k}\longrightarrow\operatorname{Hom}_{\mathbb{Z}}(A_{k},\mathbb{Z})\), sending \(b\longmapsto\varphi_{b}(-)\) with \(\varphi_{b}(a)=\deg(a\cdot b)\), is also \(G\)-equivariant. It only remains to exhibit a \(G\)-equivariant isomorphism \(\operatorname{Hom}_{\mathbb{Z}}(A_{k},\mathbb{Z})\to A_{k}\). To this end, use Corollary 2.10 to pick a \(\mathbb{Z}\)-basis \(\{e_{i}\}\) permuted by \(G\), so that each element \(g\) in \(G\) acts by a permutation matrix \(P(g)\) in this basis.
Letting \(\{f_{i}\}\) be the dual \(\mathbb{Z}\)-basis for \(\operatorname{Hom}_{\mathbb{Z}}(A_{k},\mathbb{Z})\), one finds that \(g\) acts via the matrix \(P(g^{-1})^{T}=(P(g)^{-1})^{T}=P(g)\), since \(P(g)\) is a permutation matrix. Hence the map \(e_{i}\longmapsto f_{i}\) is such a \(G\)-equivariant isomorphism \(\operatorname{Hom}_{\mathbb{Z}}(A_{k},\mathbb{Z})\to A_{k}\). The next consequence will be refined by Theorem 1.1(ii). **Corollary 2.15**.: _One has \(\mathbb{R}G\)-module maps \(A_{\mathbb{R}}^{k}\to A_{\mathbb{R}}^{k+1}\) which are injective for \(k<\frac{r}{2}\)._ Proof.: There exist Lefschetz elements \(\omega\in A^{1}_{\mathbb{R}}\) which are \(G\)-fixed, such as those exhibited on [1, p. 384] having \(\omega=\sum_{F}c_{F}x_{F}\) with \(c_{F}=|F|\cdot|E\setminus F|\). Multiplication by such \(\omega\) give the asserted \(\mathbb{R}G\)-module injections. ## 3. Proof of Theorem 1.1 We recall the statement of the theorem, involving the FY-monomial \(\mathbb{Z}\)-basis for \(A(\mathcal{M})\) in Corollary 2.9: \[\mathrm{FY}:=\{x_{F_{1}}^{m_{1}}x_{F_{2}}^{m_{2}}\cdots x_{F_{\ell}}^{m_{\ell}} :(\varnothing=:F_{0})\subsetneq F_{1}\subsetneq F_{2}\subsetneq\cdots\subsetneq F _{\ell}\ \mathrm{in}\ \mathfrak{F},\ \mathrm{and}\ m_{i}\leq\mathrm{rk}(F_{i})-\mathrm{rk}(F_{i-1})-1\}\] This also means that the FY-monomials \(\mathrm{FY}^{k}\) of degree \(k\) form a \(\mathbb{Z}\)-basis for \(A^{k}\) for each \(k=0,1,2,\ldots,r\). **Theorem 1.1**: _For every matroid \(\mathcal{M}\) of rank \(r+1\), there exist_ * \(G\)_-equivariant bijections_ \(\pi:\mathrm{FY}^{k}\xrightarrow{\sim}\mathrm{FY}^{r-k}\) _for_ \(k\leq\frac{r}{2}\)_, and_ * \(G\)_-equivariant injections_ \(\lambda:\mathrm{FY}^{k}\hookrightarrow\mathrm{FY}^{k+1}\) _for_ \(k<\frac{r}{2}\)_._ We offer two (related) proofs, the first slightly more conceptual, the second with more explicit maps. ### First proof This proof organizes the FY-monomials according to the fibers of a map \(\mathrm{supp}_{+}:\mathrm{FY}\to 2^{\mathfrak{F}}\). **Definition 3.1**.: For an FY-monomial \(a=x_{F_{1}}^{m_{1}}x_{F_{2}}^{m_{2}}\cdots x_{F_{\ell}}^{m_{\ell}}\), define its _extended support_\(\mathrm{supp}_{+}(a)\subset\mathfrak{F}\) by \[\mathrm{supp}_{+}(a):=\{F_{1},\ldots,F_{\ell}\}\cup\{E\}=\begin{cases}\{F_{1}, \ldots,F_{\ell}\}\cup\{E\}&\text{ if }F_{\ell}\subsetneq E,\\ \{F_{1},\ldots,F_{\ell}\}&\text{ if }F_{\ell}=E.\end{cases}\] Define a partial order \(<_{+}\) on the FY-monomials in which \(a<_{+}b\) if \(a\) divides \(b\) and \(\mathrm{supp}_{+}(a)=\mathrm{supp}_{+}(b)\). For integers \(p<q\), let \([p,q]\) denote the usual linear order on the integers \(\{p,p+1,\ldots,q-1,q\}\). Given a sequence of such pairs \(p_{i}<q_{i}\) for \(i=1,2,\ldots,m\), let \[\prod_{i=1}^{n}[p_{i},q_{i}]=[p_{1},q_{1}]\times[p_{2},q_{2}]\times\cdots \times[p_{m},q_{m}] \tag{7}\] denote their Cartesian product, partially ordered componentwise. 
**Proposition 3.2**.: _For any nested flag \(\{F_{1}\subsetneq\cdots\subsetneq F_{\ell}\subsetneq E\}\) in \(\mathfrak{F}\) containing \(E\), with convention_ \[F_{0} :=\varnothing,\] \[F_{\ell+1} :=E\] _the fiber \(\mathrm{supp}_{+}^{-1}\{F_{1},\ldots,F_{\ell},E\}\) is the set of monomials \(\{x_{F_{1}}^{m_{1}}x_{F_{2}}^{m_{2}}\cdots x_{F_{\ell}}^{m_{\ell}}x_{E}^{m_{ \ell+1}}\}\) satisfying these inequalities:_ \[1\leq m_{i}\leq\mathrm{rk}(F_{i})-\mathrm{rk}(F_{i-1})-1 \text{for }i=1,2,\ldots,\ell\] \[0\leq m_{\ell+1}\leq\mathrm{rk}(E)-\mathrm{rk}(F_{\ell})-1=r- \mathrm{rk}(F_{\ell}).\] _Consequently, the minimum and maximum degree of monomials in \(\mathrm{supp}_{+}^{-1}\{F_{1},\ldots,F_{\ell},E\}\) are \(\ell\) and \(r-\ell\), and one has a poset isomorphism_ \[(\mathrm{supp}_{+}^{-1}\{F_{1},\ldots,F_{\ell},E\},<_{+}) \longrightarrow \prod_{i=1}^{\ell}[1,\mathrm{rk}(F_{i})-\mathrm{rk}(F_{i-1})-1] \ \times\ [0,r-\mathrm{rk}(F_{\ell})]\] \[x_{F_{1}}^{m_{1}}x_{F_{2}}^{m_{2}}\cdots x_{F_{\ell}}^{m_{\ell+1}} x_{E}^{m_{\ell+1}} \longmapsto (m_{1},m_{2},\ldots,m_{\ell},m_{\ell+1}).\] Proof.: Most assertions of the proposition are immediate from the definition of \(<_{+}\) and the map \(\mathrm{supp}_{+}\). The minimum and maximum degrees of monomials in \(\mathrm{supp}_{+}^{-1}\{F_{1},\ldots,F_{\ell},E\}\)) are achieved by \[\deg(x_{F_{1}}^{1}x_{F_{2}}^{1}\cdots x_{F_{\ell}}^{1}x_{E}^{0}) =\ell\] \[\deg\left(\prod_{i=1}^{\ell}x_{F_{i}}^{\mathrm{rk}(F_{i})-\mathrm{ rk}(F_{i})-1}\cdot x_{E}^{\mathrm{rk}(E)-\mathrm{rk}(F_{\ell})-1}\right) =\sum_{i=1}^{\ell+1}(\mathrm{rk}(F_{i})-\mathrm{rk}(F_{i-1}-1)= \mathrm{rk}(E)-(\ell+1)\ =r-\ell.\qed\] The first proof of Theorem 1.1 stems from the observation that all products of chains, as in (7), have _symmetric chain decompositions_, which can then be pulled back to each fiber \(\operatorname{supp}_{+}^{-1}\{F_{1},\ldots,F_{\ell},E\}\). **Definition 3.3**.: A _symmetric chain decomposition (SCD)_ of a finite ranked poset \(P\) of rank \(r\) is a disjoint decomposition \(P=\bigsqcup_{i=1}^{\ell}C_{i}\) in which each \(C_{i}\) is a totally ordered subset containing one element of each rank \(\{\rho_{i},\rho_{i}+1,\ldots,r-\rho_{i}-1,r-\rho_{i}\}\) for some \(\rho_{i}\in\{0,1,2,\ldots,\lfloor\frac{r}{2}\rfloor\}\). It is not hard to check that when posets \(P_{1},P_{2}\) each have an SCD, then so does their Cartesian product. In particular, all products of chains have an SCD; see, e.g., Anderson [1, SS3.1]. _Fix one such SCD for each product poset in (7)_, _once and for all,_ and use the isomorphisms from Proposition 3.2 to induce an SCD on each fiber \(\operatorname{supp}_{+}^{-1}\{F_{1},\ldots,F_{\ell},E\}\). An important point for the proof is that the structure of these symmetric chains will depend only upon the rank sets \(\{\operatorname{rk}(F_{i})\}_{i=1}^{\ell}\). **Example 3.4**.: Assume \(\mathcal{M}\) has \(\operatorname{rk}(E)=10=r+1\) with \(r=9\), and one has a pair of nested flats \(F\subset F^{\prime}\) with \(\operatorname{rk}(F)=3,\operatorname{rk}(F^{\prime})=7\). Then the poset \(\operatorname{supp}_{+}^{-1}\{F,F^{\prime},E\}\) and one choice of SCD for it look as follows: _First proof of Theorem 1.1._ Let \(a\) be any FY-monomial \(a\) with \(k:=\deg(a)\). Then \(a\) lies in a unique fiber \(\operatorname{supp}_{+}^{-1}\{F_{1},\ldots,F_{\ell},E\}\) of the map \(\operatorname{supp}_{+}\), and on a unique chain \(C_{i}\) in our fixed SCD of this fiber. 
Writing \[C_{i}=\{a_{\rho}\lessdot a_{\rho+1}\lessdot\cdots\lessdot a_{r-\rho-1} \lessdot a_{r-\rho}\}, \tag{8}\] where \(\deg(a_{j})=j\) for \(j=\rho,\rho+1,\ldots,r-\rho\), so that \(a=a_{k}\), then define the bijection \(\pi\) and maps \(\lambda\) via \[\pi(a) :=a_{r-k},\] \[\lambda(a) :=a_{k+1}\text{ if }k<\frac{r}{2}.\] To see that \(\pi,\lambda\) are equivariant, note that having fixed \(\{F_{1},\ldots,F_{\ell},F_{\ell+1}=E\}\), both maps \(f=\pi,\lambda\) will send a monomial of the form \(a=\prod_{i=1}^{\ell+1}x_{F_{i}}^{m_{i}}\) with \(m_{i}\geq 1\) to one of the form \(f(a)=\prod_{i=1}^{\ell+1}x_{F_{i}}^{m_{i}^{\prime}}\) in such a way that the exponents \((m_{1}^{\prime},\ldots,m_{\ell}^{\prime},m_{\ell+1}^{\prime})\) are determined uniquely as a function of the pair of data \[\left(\ (m_{1},\ldots,m_{\ell},m_{\ell+1})\,\ \{\operatorname{rk}(F_{i})\}_{i =1}^{\ell}\ \right).\] Thus for any \(g\) in \(G\), since \(\{\operatorname{rk}(g(F_{i}))\}_{i=1}^{\ell}=\{\operatorname{rk}(F_{i})\}_{i =1}^{\ell}\), either of the maps \(f=\pi,\lambda\) will send \[g(a)=\prod_{i=1}^{\ell+1}x_{g(F_{i})}^{m_{i}}\quad\longmapsto\quad f(g(a))= \prod_{i=1}^{\ell+1}x_{g(F_{i})}^{m_{i}^{\prime}}=g(f(a)).\qed\] ### Proof via parenthesis pairing This proof borrows an idea from the famous _parenthesis-pairing_ SCD of Boolean algebras due to Greene and Kleitman [6]. We begin with a two-step encoding of FY-monomials within a fiber of the map \(\operatorname{supp}_{+}\). **Definition 3.5**.: Consider all FY-monomials \(a=x_{F_{1}}^{m_{1}}\cdots x_{F_{\ell}}^{m_{\ell}}x_{E}^{m_{\ell+1}}\) having a fixed extended support set \(\operatorname{supp}_{+}(a)=\{F_{1}\subsetneq\cdots\subsetneq F_{\ell} \subsetneq F_{\ell+1}=E\}\), so \(m_{1},\ldots,m_{\ell}\geq 1\) and \(m_{\ell+1}\geq 0\). In the first step, encode such monomials \(a\) via a sequence \(\mathfrak{D}(a)\) of length \(r\) in three symbols \(\times,\bullet\), and a blank space, defined as follows: * \(\mathfrak{D}(a)\) has \(\bullet\) in the positions \(\{\operatorname{rk}(F_{1}),\ldots,\operatorname{rk}(F_{\ell})\}\). * \(\mathfrak{D}(a)\) has \(\times\) in the first consecutive \(m_{i}\) positions to the left of \(\operatorname{rk}(F_{i})\) for each \(i=1,2,\ldots,\ell,\ell+1\). * \(\mathfrak{D}(a)\) has a blank space in the remaining positions. **Example 3.6**.: Continuing with the matroid \(\mathcal{M}\) and its flats \(F\subset F^{\prime}\) as discussed in Example 3.4. Here the monomials lie in the fiber \(\operatorname{supp}_{+}^{-1}\{F,F^{\prime},E\}\) where \(\{\operatorname{rk}(F),\operatorname{rk}(F^{\prime}),\operatorname{rk}(E)\}= \{3,7,10\}\), so \(r=9\), and the positions \(\{\operatorname{rk}(F),\operatorname{rk}(F^{\prime})\}=\{3,7\}\) are shown in green. The monomial \(x_{F}x_{F^{\prime}}^{2}\) gets encoded as \[\begin{array}{ccccccccc}1&2&3&4&5&6&7&8&9\\ &\times&\bullet&&\times&\times&\bullet\end{array}\] Note that one can recover \(a\) from \(\operatorname{supp}_{+}(a)=\{F_{1},\ldots,F_{\ell},E\}\) and \(\mathfrak{D}(a)\), since \(m_{i}\) can be read off as the number of \(\times\) in \(\mathfrak{D}(a)\) between positions \(\operatorname{rk}(F_{i-1})\) and \(\operatorname{rk}(F_{i})\), with usual conventions \(F_{0}=\varnothing,F_{\ell+1}:=E\). **Definition 3.7**.: The second step encodes \(\mathfrak{D}(a)\) as a length \(r\) parenthesis sequence in \(\{(,)\}^{r}\), having * a right parenthesis "\()\)" in the positions of each \(\bullet\) and each blank space, and * a left parenthesis "\((\)" in the positions of the \(\times\). 
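To make the two-step encoding of Definitions 3.5-3.7 concrete, here is a short sketch (plain Python; the function names are ours) that turns a monomial, specified by the ranks of its flats and their exponents, into the \(\times\)/\(\bullet\)/blank diagram and the corresponding parenthesis word, following the definitions literally; the matching of parentheses used in the second proof of Theorem 1.1 below can then be read off with a stack.

```python
def encode(ranks, exps, r):
    """Diagram D(a) and parenthesis word for a = x_{F_1}^{m_1} ... x_{F_l}^{m_l} x_E^{m_{l+1}}.

    ranks : (rk(F_1), ..., rk(F_l), rk(E)) strictly increasing, with rk(E) = r + 1
    exps  : (m_1, ..., m_l, m_{l+1}) with m_1, ..., m_l >= 1 and m_{l+1} >= 0
    """
    diagram = [" "] * (r + 1)                    # positions 1..r (index 0 unused)
    for rk, m in zip(ranks, exps):
        if rk <= r:
            diagram[rk] = "o"                    # a bullet at position rk(F_i)
        for j in range(rk - 1, rk - 1 - m, -1):  # m crosses just to the left of rk(F_i)
            diagram[j] = "x"
    word = "".join("(" if diagram[p] == "x" else ")" for p in range(1, r + 1))
    return diagram[1:], word

def paired_positions(word):
    """Matched '(' , ')' pairs found with a stack; this reproduces the recursive
    removal of consecutive '()' pairs used in the second proof below."""
    stack, pairs = [], []
    for i, ch in enumerate(word):
        if ch == "(":
            stack.append(i)
        elif stack:
            pairs.append((stack.pop(), i))
    return pairs       # the unpaired symbols always form a block of ')'s followed by '('s

# The running example x_F * x_{F'}^2 with rk(F) = 3, rk(F') = 7, rk(E) = 10, so r = 9:
diagram, word = encode((3, 7, 10), (1, 2, 0), r=9)
# word has '(' exactly at the cross positions 2, 5, 6 and ')' elsewhere.
```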
**Example 3.8**.: Continuing Example 3.6, the diagram \(\mathfrak{D}(x_{F}x_{F^{\prime}}^{2})\) gets encoded as the following sequence of parentheses: \[\begin{array}{ccccccccc}1&2&3&4&5&6&7&8&9\\ &\times&\bullet&&\times&\times&\bullet&&\\ )&(&)&)&(&(&)&)&)\end{array}\] Note that one can recover \(\mathfrak{D}(a)\) from this \(\{(,)\}^{r}\) sequence as follows: * \(\mathfrak{D}(a)\) has \(\times\) occurring in the positions of the left parentheses, and * \(\mathfrak{D}(a)\) has the \(\bullet\) occurring exactly in the positions of the right parenthesis in a consecutive pair \(()\), while blank spaces occur in the position of all other right parentheses. Second proof of Theorem 1.1.: Given an FY-monomial \(a\) in \(\operatorname{FY}^{k}\) with \(k\leq\frac{r}{2}\), we will use its parenthesis sequence in \(\{(,)\}^{r}\) to place \(a\) within a chain of monomials as in (8), of the form \[C_{i}=\{a_{\rho}\lessdot a_{\rho+1}\lessdot\cdots\lessdot a_{r-\rho-1}\lessdot a_{r-\rho}\},\] where \(\deg(a_{j})=j\) for \(j=\rho,\rho+1,\ldots,r-\rho\), so that \(a=a_{k}\). To this end, define the set of _paired parentheses_ in \(a\) (see Example 3.9) by including all consecutive pairs \(()\), and after removing these consecutive pairs, including the new \(()\) pairs which have become consecutive, repeating this recursively. After removing some number \(\rho\) of pairs \(()\) via this pairing process, the process ends when one reaches a sequence of \(r-2\rho\) remaining unpaired parentheses of this form: \[\underbrace{))\cdots))}_{r-\rho-k}\ \underbrace{((\cdots((}_{k-\rho}\,. \tag{9}\] The monomials in the chain \(C_{i}\) are defined to be those whose set of paired parentheses agree exactly with those of \(a\), both in their positions and left-right pairing structure; see Example 3.9. As in the first proof of Theorem 1.1, one then defines the bijections \(\pi\) and maps \(\lambda\) via \[\pi(a) :=a_{r-k},\] \[\lambda(a) :=a_{k+1}\text{ if }k<\frac{r}{2}.\] In other words, both maps \(\lambda\) and \(\pi\) when applied to \(a\) will keep all of the paired parentheses fixed, but * \(\lambda\) changes the rightmost unpaired right parenthesis "\()\)" into an unpaired left parenthesis "\((\)", and * \(\pi\) swaps the numbers \(r-\rho-k\) and \(k-\rho\) of unpaired right and left parentheses in (9). The equivariance of these two maps \(\pi,\lambda\) is argued exactly as in the first proof of the theorem. **Example 3.9**.: Here is an example of a symmetric chain from Example 3.4, explained via this two-step encoding with \(\mathfrak{D}(a)\) and \(\{(,)\}^{r}\)-sequences, as we have seen preceding Examples 3.6 and 3.8. Paired parentheses are fixed throughout the chain.
Moving up the chain, unpaired right parentheses change one-by-one to left parentheses, in order from right to left. With bullets at positions \(\{3,7\}\) as in Example 3.6, the chain and the parenthesis sequences of its monomials (the \(\rho=3\) paired parentheses occupy positions \(\{2,3\},\{6,7\},\{5,8\}\) throughout, while the unpaired parentheses sit in positions \(1,4,9\)) are \[\begin{array}{llc} \text{degree }6: & x_{F}^{2}x_{F^{\prime}}^{3}x_{E} & (\,(\,)\,(\,(\,(\,)\,)\,(\\ \text{degree }5: & x_{F}x_{F^{\prime}}^{3}x_{E} & )\,(\,)\,(\,(\,(\,)\,)\,(\\ \text{degree }4: & x_{F}x_{F^{\prime}}^{2}x_{E} & )\,(\,)\,)\,(\,(\,)\,)\,(\\ \text{degree }3: & x_{F}x_{F^{\prime}}^{2} & )\,(\,)\,)\,(\,(\,)\,)\,) \end{array}\] Here \(r=9\) and the number of parenthesis pairs is \(\rho=3\), so that this chain \(C_{i}\) is symmetrically placed within the degrees of \(A(\mathcal{M})\), containing monomials of degrees \([\rho,r-\rho]=[3,6]=\{3,4,5,6\}\) out of the list of possible degrees \([0,r]=[0,9]=\{0,1,2,\mathbf{3},\mathbf{4},\mathbf{5},\mathbf{6},7,8,9\}\). ## 4. Conjectures on equivariant and Burnside ring inequalities We have mentioned that the unimodality statement (2), asserting for \(k<\frac{r}{2}\) that one has \[a_{k}\leq a_{k+1},\] is weaker than the statement in Corollary 2.15 asserting that there are injective \(\mathbb{R}G\)-module maps \[A_{\mathbb{R}}^{k}\to A_{\mathbb{R}}^{k+1},\] which is weaker than Theorem 1.1(ii) asserting that there are injective \(G\)-equivariant maps of the \(G\)-sets \[\mathrm{FY}^{k}\hookrightarrow\mathrm{FY}^{k+1}.\] In this section, we wish to consider not only unimodality for \((a_{0},a_{1},\ldots,a_{r})\), but other properties like _log-concavity_, _the Polya frequency property_, and how to similarly lift them to statements regarding \(\mathbb{R}G\)-modules and \(G\)-permutation representations. In phrasing this, it helps to consider certain algebraic objects. ### Virtual character rings and Burnside rings **Definition 4.1**.: _(Virtual character ring) For a finite group \(G\), its virtual (complex) character ring \(R_{\mathbb{C}}(G)\) is the free \(\mathbb{Z}\)-submodule of the ring of (conjugacy) class functions \(\{f:G\to\mathbb{C}\}\) with pointwise addition and multiplication, having as a \(\mathbb{Z}\)-basis the irreducible complex characters \(\{\chi_{1},\ldots,\chi_{M}\}\), where \(M\) is the number of conjugacy classes of \(G\). Thus every virtual character \(\chi\) in \(R_{\mathbb{C}}(G)\) has a unique expansion \(\chi=\sum_{i=1}^{M}a_{i}\chi_{i}\) for some \(a_{i}\in\mathbb{Z}\).
If \(a_{i}\geq 0\) for \(i=1,2,\ldots,M\), call \(\chi\) a genuine character, and write \(\chi\geq_{R_{\mathbb{C}}(G)}0\). Similarly, we write \(\chi\geq_{R_{\mathbb{C}}(G)}\chi^{\prime}\) when \(\chi-\chi^{\prime}\geq_{R_{\mathbb{C}}(G)}0\)._ **Definition 4.2**.: _(Burnside ring)_ For a finite group \(G\), to define its _Burnside ring_\(B(G)\) one starts with a free \(\mathbb{Z}\)-module having as basis the isomorphism classes \([X]\) of finite \(G\)-sets \(X\). Then \(B(G)\) is the quotient \(\mathbb{Z}\)-module that mods out by the span of all elements \([X\sqcup Y]-([X]+[Y])\). Multiplication in \(B(G)\) is induced from the rule \([X]\cdot[Y]=[X\times Y]\). It turns out that \(B(G)\) has a \(\mathbb{Z}\)-basis \(\{[G/G_{i}]\}_{i=1}^{N}\) as \(G_{1},\ldots,G_{N}\) run through representatives of the \(G\)-conjugacy classes of subgroups of \(G\). Thus every element \(b\) of \(B(G)\) has a unique expansion \(b=\sum_{i=1}^{N}a_{i}[G/G_{i}]\) for some \(a_{i}\in\mathbb{Z}\). If \(a_{i}\geq 0\) for \(i=1,2,\ldots,N\), call \(b\) a _genuine permutation representation_, and write \(b\geq_{B(G)}0\). Similarly, write \(b\geq_{B(G)}0\) when \(b-b^{\prime}\geq_{B(G)}0\). Note that there are natural ring maps \[B(G) \longrightarrow R_{\mathbb{C}}(G),\] \[R_{\mathbb{C}}(G) \longrightarrow\mathbb{Z}.\] The first map sends the class \([X]\) of a \(G\)-set \(X\) to the character \(\chi_{\mathbb{C}[X]}\) of its \(G\)-permutation representation \(\mathbb{C}[X]\), having character values \(\chi_{\mathbb{C}[X]}(g)=\#\{x\in X:g(x)=x\}\). The second map sends a virtual character \(\chi\) to its value \(\chi(e)\) on the identity \(e\) of \(G\). These maps carry genuine elements \(b\geq_{B(G)}0\) in \(B(G)\) to genuine characters \(\chi\geq_{R_{\mathbb{C}}(G)}0\) which are then carried to nonnegative integers \(\mathbb{Z}_{\geq 0}\). In this way, inequalities in \(B(G)\) lift inequalities in \(R_{\mathbb{C}}(G)\), which lift inequalities in \(\mathbb{Z}\). **Example 4.3**.: We saw that the unimodality inequality (2) \(a_{k}\leq_{\mathbb{Z}}a_{k+1}\) lifts to the inequality \(A_{\mathbb{R}}^{k}\leq_{R_{\mathbb{C}}(G)}A_{\mathbb{R}}^{k+1}\) in Corollary 2.15, which lifts to the inequality \([\mathrm{FY}^{k}]\leq_{B(G)}[\mathrm{FY}^{k+1}]\) in Theorem 1.1(ii). It should also be noted that, just as one can multiply inequalities in \(\mathbb{Z}\) like \(a<b\) and \(c<d\) to get new inequalities \(ac<bd\), the same works in \(R_{\mathbb{C}}(G)\) and in \(B(G)\). This is because \(\chi,\chi^{\prime}\geq_{R_{\mathbb{C}}(G)}0\) implies \(\chi\cdot\chi^{\prime}\geq_{R_{\mathbb{C}}(G)}0\), and similarly \(b,b^{\prime}\geq_{B(G)}0\) implies \(b\cdot b^{\prime}\geq_{B(G)}0\). ### PF sequences and log-concavity For a sequence of _positive_ real numbers \((a_{0},a_{1},\ldots,a_{r})\), the property of _unimodality_ lies at the bottom of a hierarchy of concepts \[\begin{array}{ccccccccccccc}\text{unimodal}&\Leftarrow&PF_{2}&\Leftarrow& PF_{3}&\Leftarrow&PF_{4}&\Leftarrow&\cdots&\Leftarrow&PF_{\infty}\\ &&\parallel&&&&&&&&\parallel\\ &&\text{(strongly) log-concave}&&&&&&&&PF\end{array} \tag{10}\] which we next review, along with their equivariant and Burnside ring extensions. For background on the non-equivariant versions, see Brenti [10] and Stanley [11]. For the equivariant versions, see Gedeon, Proudfoot and Young [12], Matherne, Miyata, Proudfoot and Ramos [13], Gui [14], Gui and Xiong [15], and Li [16]. 
**Definition 4.4**.: Say a sequence of positive reals \((a_{0},a_{1},\ldots,a_{r})\) is _unimodal_ if there is some index \(m\) with \[a_{0}\leq a_{1}\leq\cdots\leq a_{m-1}\leq a_{m}\geq a_{m+1}\geq\cdots\geq a_{r- 1}\geq a_{r}.\] Say the sequence is _strongly1 log-concave_ (or \(PF_{2}\)) if \(0\leq i\leq j\leq k\leq\ell\leq r\) and \(i+\ell=j+k\) implies Footnote 1: The word “strongly” here is superfluous, since we assumed each \(a_{k}>0\), so they are strongly log-concave if and only they are weakly log-concave: \(a_{k}^{2}\geq a_{k-1}a_{k+1}\). The distinction becomes important for the equivariant analogue; see [13, §2]. \[a_{i}a_{\ell}\leq a_{j}a_{k},\text{ or equivalently, }\det\begin{bmatrix}a_{j}&a_{ \ell}\\ a_{i}&a_{k}\end{bmatrix}\geq 0.\] For \(\ell=2,3,4,\ldots\), say that the sequence is \(PF_{\ell}\) if the associated (infinite) _Toeplitz matrix_ \[T(a_{0},\ldots,a_{r}):=\begin{bmatrix}a_{0}&a_{1}&a_{2}&\cdots&a_{r-1}&a_{r}& 0&0&\cdots\\ 0&a_{0}&a_{1}&\cdots&a_{r-2}&a_{r-1}&a_{r}&0&\cdots\\ 0&0&a_{0}&\cdots&a_{r-3}&a_{r-2}&a_{r-1}&a_{r}&\cdots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{bmatrix}\] has all _nonnegative_ square minor subdeterminants of size \(m\times m\) for \(1\leq m\leq\ell\). Say that the sequence is a _Polya frequency sequence_ (or \(PF_{\infty}\), or just \(PF\)) if it is \(PF_{\ell}\) for all \(\ell=2,3,\ldots\). One can check the implication (\(PF_{2}\) implies unimodality) from (10) using the assumption \(a_{k}>0\) for all \(k\). It also turns out that \((a_{0},a_{1},\dots,a_{r})\) is \(PF\) if and only if the polynomial \(a_{0}+a_{1}t+a_{2}t^{2}+\dots+a_{r}t^{r}\) has only (negative) real roots; see [1, SS2.2, 4.5]. **Definition 4.5**.: For a finite group \(G\) and (genuine, nonzero) \(\mathbb{C}G\)-modules \((A^{0},A^{1},\dots,A^{r})\), define the analogous notions of _equivariant unimodality_, _equivariant strong log-concavity_, _equivariant \(PF_{r}\) or \(PF_{\infty}\)_ by replacing the numerical inequalities in Definition 4.4 by inequalities in the representation ring \(R_{\mathbb{C}}(G)\). Similarly, for (nonempty) \(G\)-sets \((X_{0},X_{1},\dots,X_{r})\), define the notions of _Burnside unimodality_, _Burnside strong log-concavity_, _Burnside \(PF_{r}\) or \(PF_{\infty}\)_ by replacing them with inequalities in the Burnside ring \(B(G)\). **Example 4.6**.: We've seen for Chow rings \(A(\mathcal{M})=\bigoplus_{k=0}^{r}A^{k}\) of rank \(r+1\) matroids \(\mathcal{M}\), and \(G=\operatorname{Aut}(\mathcal{M})\), * the sequence \((a_{0},a_{1},\dots,a_{r})\) with \(a_{k}:=\operatorname{rk}_{2}A_{k}\) is _unimodal_, * after tensoring with \(\mathbb{C}\), the sequence of \(\mathbb{C}G\)-modules \((A^{0}_{\mathbb{C}},A^{1}_{\mathbb{C}},\dots,A^{r}_{\mathbb{C}})\) is _equivariantly unimodal_, and * the sequence of \(G\)-sets \((\operatorname{FY}^{0},\operatorname{FY}^{1},\dots,\operatorname{FY}^{r})\) is _Burnside unimodal_. **Conjecture 4.7**.: _In the same Chow ring context as Example 4.6, one has that_ 1. _(Ferroni-Schroter [11, Conj. 10.19])_ \((a_{0},\dots,a_{r})\) _is_ \(PF_{\infty}\)_._ 2. \((A^{0}_{\mathbb{C}},\dots,A^{r}_{\mathbb{C}})\) _is equivariantly_ \(PF_{\infty}\)_._ 3. 
\((\operatorname{FY}^{0},\dots,\operatorname{FY}^{r})\) _is Burnside_ \(PF_{2}\) _(Burnside log-concave), that is,_ \[[\operatorname{FY}^{i}][\operatorname{FY}^{\ell}]\leq_{B(G)}[\operatorname{ FY}^{j}][\operatorname{FY}^{k}]\qquad\text{for for }\ i\leq j\leq k\leq\ell\ \text{ with }\ i+\ell=j+k.\] Of course, in Conjecture 4.7, assertion (ii) implies assertion (i). However assertion (iii) would only imply the weaker \(PF_{2}\) part of the conjectural assertion (ii), and only imply the \(PF_{2}\) part of Ferroni and Schroter's assertion (i), but not their \(PF_{\infty}\) assertions. Even the \(PF_{2}\) property for \((a_{0},\dots,a_{r})\) is still conjectural; see [11, SS10.3] and [12, SS3.7] for a discussion of the current evidence for Conjecture 4.7(i). **Example 4.8**.: We explain here why Conjecture 4.7_does not_ assert that \((\operatorname{FY}^{0},\dots,\operatorname{FY}^{r})\) is Burnside \(PF_{\infty}\). In fact, \((\operatorname{FY}^{0},\dots,\operatorname{FY}^{r})\)_fails even to be Burnside \(PF_{3}\)_, already when \(\mathcal{M}\) is a rank 4 Boolean matroid. Its Chow ring \(A(\mathcal{M})=A^{0}\oplus A^{1}\oplus A^{2}\oplus A^{3}\) has \(A_{0},A_{3}\) carrying the trivial \(\mathbb{C}\mathfrak{S}_{4}\)-module, and \(A^{1},A^{2}\) carrying isomorphic permutation representations, each having three orbits, whose three \(\mathfrak{S}_{4}\)-stabilizer groups are the Young subgroups \(\mathfrak{S}_{4},\mathfrak{S}_{3}\times\mathfrak{S}_{1},\mathfrak{S}_{2} \times\mathfrak{S}_{2}\). The red \(3\times 3\) minor of the Toeplitz matrix shown here \[\begin{bmatrix}a_{0}&a_{1}&a_{2}&a_{3}&0&0&\dots\\ 0&a_{0}&a_{1}&a_{2}&a_{3}&0&\dots\\ 0&0&a_{0}&a_{1}&a_{2}&a_{3}&\dots\\ 0&0&0&a_{0}&a_{1}&a_{2}&\dots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{bmatrix}\] has determinant \(a_{1}^{2}a_{2}-a_{1}a_{3}-a_{2}^{2}\). Hence the Burnside \(PF_{3}\) condition would require that the following genuine \(\mathfrak{S}_{4}\)-character should come from a genuine permutation representation \[(\chi_{A^{1}})^{2}\cdot\chi_{A^{2}}-\chi_{A^{1}}\chi_{A^{3}}-(\chi_{A^{2}})^{2 }=29\chi^{(1,1,1,1)}+124\chi^{(2,1,1)}+103\chi^{(2,2)}+172\chi^{(3,1)}+76\chi ^{(4)},\] where here \(\chi^{\lambda}\) denotes the irreducible \(\mathfrak{S}_{n}\)-representation [10], [11, SS7.18] indexed by the partition \(\lambda\) of \(n\); this expansion was computed using Sage/Cocalc. But one can check that this is _not_ a permutation representation, as its character value on the conjugacy class of 4-cycles in \(\mathfrak{S}_{4}\) is \(+76-172+124-29=-1<0\). **Remark 4.9**.: Although Example 4.8 shows that even the Boolean matroids contradict strengthening Conjecture 4.7(iii) to Burnside \(PF_{3}\), they seem to satisfy a _different_ strengthening of strong log-concavity: **Conjecture 4.10**.: _For a Boolean matroid \(\mathcal{M}\) of rank \(n\) and \(i\leq j\leq k\leq\ell\) with \(i+\ell=j+k\), not only is the element \([\operatorname{FY}^{j}][\operatorname{FY}^{k}]-[\operatorname{FY}^{i}][ \operatorname{FY}^{\ell}]\geq_{B(\mathfrak{S}_{n})}0,\) so that it is a genuine permutation representation, but furthermore one whose orbit-stabilizers are all Young subgroups \(\mathfrak{S}_{\lambda}:=\mathfrak{S}_{\lambda_{1}}\times\mathfrak{S}_{\lambda_{2 }}\times\dots\times\mathfrak{S}_{\lambda_{\ell}}\)._ We note here a small amount of evidence for Conjecture 4.7(ii), (iii), namely their strong log-concavity assertions hold for the case \(i=0\). 
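The negative character value found in Example 4.8 can also be checked directly from fixed-point counts, without expanding into irreducibles: \(A^{1}\cong A^{2}\) is the permutation character of the \(\mathfrak{S}_{4}\)-set with orbits \(\{x_{E}\}\), the four rank \(3\) flats, and the six rank \(2\) flats (orbit-stabilizers \(\mathfrak{S}_{4},\mathfrak{S}_{3}\times\mathfrak{S}_{1},\mathfrak{S}_{2}\times\mathfrak{S}_{2}\)), while \(A^{0},A^{3}\) carry the trivial character. A short Python sketch of this check (ours, not from the text):

```python
from itertools import combinations

def chi_A1(perm):
    """Value at perm (a tuple giving the images of 0,1,2,3) of the permutation
    character of A^1 (= A^2) for the rank 4 Boolean matroid, whose orbits,
    per Example 4.8, are {x_E}, the four rank 3 flats (their complements are
    single elements, so fixed rank 3 flats correspond to fixed points) and the
    six rank 2 flats (the 2-element subsets)."""
    fixed_points = sum(1 for i in range(4) if perm[i] == i)
    fixed_pairs = sum(1 for s in combinations(range(4), 2)
                      if {perm[i] for i in s} == set(s))
    return 1 + fixed_points + fixed_pairs

# Evaluate the 3x3 Toeplitz minor  a_1^2 a_2 - a_1 a_3 - a_2^2  of Example 4.8
# as a virtual character at a 4-cycle (A^0 and A^3 carry the trivial character):
four_cycle = (1, 2, 3, 0)            # the cycle 0 -> 1 -> 2 -> 3 -> 0
c = chi_A1(four_cycle)               # equals 1: only the x_E orbit point is fixed
print(c * c * c - c * 1 - c * c)     # prints -1
```

Since a genuine permutation character takes only nonnegative values, the value \(-1\) at a \(4\)-cycle confirms, as in Example 4.8, that this Toeplitz minor cannot come from a genuine permutation representation.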
For assertion (ii), this is an easy consequence of the fact that the Chow ring \(A(\mathcal{M})\) is generated by the variables \(\{y_{F}\}\) spanning its degree one component \(A^{1}\), which shows that this \(G\)-equivariant multiplication map surjects: \[A^{j}\otimes A^{k}\twoheadrightarrow A^{j+k}\left(\cong A^{0}\otimes A^{j+k} \right). \tag{11}\] We next check that the stronger assertion of Conjecture 4.7(iii) also holds in the special case \(i=0\). **Proposition 4.11**.: _For matroids \(\mathcal{M}\) and \(j,k\geq 0\), one has a \(G\)-equivariant injection \(\mathrm{FY}^{j+k}\hookrightarrow\mathrm{FY}^{j}\times\mathrm{FY}^{k}\)._ Proof.: Given \(a=x_{F_{1}}^{m_{1}}\cdots x_{F_{\ell}}^{m_{\ell}}\) in \(\mathrm{FY}^{j+k}\), so that \(j+k=\sum_{i=1}^{\ell}m_{i}\), let \(p\) be the smallest index such that \[\sum_{i=1}^{p-1}m_{i}<j\leq\sum_{i=1}^{p}m_{i} \tag{12}\] and factor the monomial \(a=b\cdot c\) where \[a=\underbrace{x_{F_{1}}^{m_{1}}\cdots x_{F_{p-1}}^{m_{p-1}}x_{F_{p}}^{\delta}} _{b:=}\quad\underbrace{x_{F_{p}}^{m_{p}-\delta}x_{F_{p+1}}^{m_{p+1}}\cdots x_ {F_{\ell}}^{m_{\ell}}}_{c:=}\] with \(\delta:=j-\sum_{i=1}^{p-1}m_{i}\) (so \(\delta>0\) by (12)), and \(m_{p}-\delta\geq 0\). One can check that, since \(a\) lies in \(\mathrm{FY}^{j+k}\), one will also have \(b,c\) lying in \(\mathrm{FY}^{j},\mathrm{FY}^{k}\), respectively. It is easily seen that the map \(a\longmapsto(b,c)\) is injective, since its inverse sends \((b,c)\longmapsto bc\). It is also not hard to check that it is \(G\)-equivariant. **Remark 4.12**.: Note that one can iterate the map in the previous proof to construct \(G\)-equivariant injections \(\prod_{i=1}^{q}\mathrm{FY}^{\alpha_{i}}\hookrightarrow\prod_{j=1}^{p}\mathrm{ FY}^{\beta_{j}}\) whenever \(\beta=(\beta_{1},\beta_{2},\ldots,\beta_{p})\) is a composition refining \(\alpha=(\alpha_{1},\alpha_{2},\ldots,\alpha_{q})\). As another small piece of evidence for Conjecture 4.7 (ii), (iii), we show that \((\mathrm{FY}^{0},\ldots,\mathrm{FY}^{r})\) is Burnside \(PF_{2}\) for \(r\leq 5\), that is, for matroids of rank at most \(6\). **Proposition 4.13**.: _For any matroid \(\mathcal{M}\) with \(\mathrm{rk}(\mathcal{M})\leq 6\), the sequence \((\mathrm{FY}^{0},\ldots,\mathrm{FY}^{r})\) is Burnside \(PF_{2}\)._ Proof sketch.: We check it for \(\mathrm{rk}(\mathcal{M})=6\), and \(\mathrm{rk}(\mathcal{M})\leq 5\) is similar. Theorem 1.1(i) shows that in \(B(G)\), \[\left([\mathrm{FY}^{0}],\,[\mathrm{FY}^{1}],\,[\mathrm{FY}^{2}],\,[\mathrm{FY} ^{3}],\,[\mathrm{FY}^{4}],\,[\mathrm{FY}^{5}]\right)=\left(1,\,[\mathrm{FY}^{1}],\,[\mathrm{FY}^{2}],\,[\mathrm{FY}^{2}],\,[\mathrm{FY}^{1}],\,1\right).\] Hence one must check nonnegativity in \(B(G)\) for all \(2\) x \(2\) minors in this infinite Toeplitz matrix: \[\begin{bmatrix}1&[\mathrm{FY}^{1}]&[\mathrm{FY}^{2}]&[\mathrm{FY}^{2}]&[ \mathrm{FY}^{1}]&1&0&0&\ldots\\ 0&1&[\mathrm{FY}^{1}]&[\mathrm{FY}^{2}]&[\mathrm{FY}^{2}]&[\mathrm{FY}^{1}]&1&0& \ldots\\ 0&0&1&[\mathrm{FY}^{1}]&[\mathrm{FY}^{2}]&[\mathrm{FY}^{2}]&[\mathrm{FY}^{1}]&1& \ldots\\ 0&0&0&1&[\mathrm{FY}^{1}]&[\mathrm{FY}^{2}]&[\mathrm{FY}^{2}]&[\mathrm{FY}^{1}] &\ldots\\ 0&0&0&0&1&[\mathrm{FY}^{1}]&[\mathrm{FY}^{2}]&[\mathrm{FY}^{2}]&\ldots\\ \vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\vdots&\ddots\end{bmatrix}\] From periodicity of the matrix, one may assume without loss of generality that the \(2\times 2\) minor has its top-left entry in the first row. 
If the minor has a \(0\) as either its lower-left or upper-right entry, then its determinant is a product \([\mathrm{FY}^{i}][\mathrm{FY}^{j}]=[\mathrm{FY}^{i}\times\mathrm{FY}^{j}]\geq_ {B(G)}0\). This already leaves only finitely many \(2\times 2\) minors to check. Additionally, if it has \(1\) as its lower left entry, then it was shown to be Burnside-nonnegative in Theorem 4.11. All of the remaining \(2\times 2\) minors we claim are Burnside-nonnegative because they compare two (possibly non-consecutive) terms in this chain of inequalities: \[1\stackrel{{(a)}}{{\leq}}_{B(G)}[\mathrm{FY}^{1}]\stackrel{{ (b)}}{{\leq}}_{B(G)}[\mathrm{FY}^{2}]\stackrel{{(c)}}{{\leq}}_{B (G)}[\mathrm{FY}^{1}][\mathrm{FY}^{1}]\stackrel{{(d)}}{{\leq}}_{B (G)}[\mathrm{FY}^{1}][\mathrm{FY}^{2}]\stackrel{{(e)}}{{\leq}}_{B (G)}[\mathrm{FY}^{2}][\mathrm{FY}^{2}]\] where inequalities (a),(b) follow from Theorem 1.1(ii), inequality (c) follows from Theorem 4.11, and inequality (d),(e) come from multiplying inequality (b) by \([\mathrm{FY}^{1}]\) and multiplying inequality (a) by \([\mathrm{FY}^{2}]\). **Remark 4.14**.: When \(\mathrm{rk}(\mathcal{M})\geq 7\), one encounters the first \(2\times 2\) minor in \(B(G)\) for \(G=\mathrm{Aut}(\mathcal{M})\) \[\det\begin{bmatrix}[\mathrm{FY}^{2}]&[\mathrm{FY}^{3}]\\ [\mathrm{FY}^{1}]&[\mathrm{FY}^{2}]\end{bmatrix}=[\mathrm{FY}^{2}][\mathrm{FY}^{ 2}]-[\mathrm{FY}^{1}][\mathrm{FY}^{3}]=[\mathrm{FY}^{2}\times\mathrm{FY}^{2}]-[ \mathrm{FY}^{1}\times\mathrm{FY}^{3}]\] whose Burnside nonnegativity does not already follow from our previous results. ### Koszulity The surjection in (11) that proved a special case of Conjecture 4.7(ii) turns out to be the \(2\times 2\) special case of more general equivariant \(\ell\times\ell\) Toeplitz minor inequalities for Chow rings \(A(\mathcal{M})\). These inequalities follow from general theory of _Koszul algebras_, along with a recent result of Maestroni and McCullough [13] showing \(A(\mathcal{M})\) is Koszul. After reviewing these results, we state a conjecture that would generalize Proposition 4.11 and upgrade these Toeplitz minor inequalities from the representation ring \(R_{\mathbb{C}}(G)\) to the Burnside ring \(B(G)\). A good reference on Koszul algebras is Polishchuk and Positselski [14]. **Definition 4.15**.: Let \(\mathbb{F}\) be a field, and \(A\) a _finitely generated standard graded associative \(\mathbb{F}\)-algebra_. This means \(A\) is a quotient \(A=T/I\) where \(T=\mathbb{F}\langle x_{1},\ldots,x_{n}\rangle\) is the free associative algebra on \(n\) noncommuting variables \(x_{1},\ldots,x_{n}\), considered to all have \(\deg(x_{i})=1\), and \(I\) is a homogeneous two-sided ideal in \(T\). Writing2\(A=\bigoplus_{k=0}^{\infty}A_{k}\), let \(A_{+}:=\bigoplus_{k=1}^{\infty}A_{k}\) be the maximal graded two-sided ideal of \(A\). Regard the field \(\mathbb{F}\) as an \(A\)-module via the quotient surjection \(A\twoheadrightarrow A/A_{+}\cong\mathbb{F}\). In other words, each \(x_{i}\) acts as \(0\) on \(\mathbb{F}\). Footnote 2: Apologies to the reader that we are writing \(A_{k}\) here rather than \(A^{k}\) as we did for the Chow rings. 
Say that \(A\) is a _Koszul algebra_ if the above surjection \(A\twoheadrightarrow\mathbb{F}\) extends to a _linear_ graded free \(A\)-resolution of \(\mathbb{F}\) as an \(A\)-module, meaning that the \(i^{th}\) resolvent \(F_{i}=A(-i)^{\beta_{i}}\) for some \(\beta_{i}\geq 0\): \[0\leftarrow\mathbb{F}\gets A\gets A(-1)^{\beta_{1}}\gets A(-2)^{ \beta_{2}}\gets A(-3)^{\beta_{3}}\leftarrow\cdots\] There are several equivalent ways to say when \(A\) is Koszul, such as requiring that the polynomial grading of \(\operatorname{Tor}_{i}^{A}(\mathbb{F},\mathbb{F})\) is concentrated in degree \(i\). Equivalently, this means that if one starts with the _bar complex_\(\mathcal{B}_{A}\) as an \(A\)-resolution of \(\mathbb{F}\), and then tensors over \(A\) with \(\mathbb{F}\), one obtains a complex \(\mathbb{F}\otimes_{A}\mathcal{B}_{A}\) of graded \(\mathbb{F}\)-vector spaces whose \(i^{th}\) homology is concentrated in degree \(i\). The latter characterization leads to the following result of Polishchuk and Positselski. **Theorem 4.16**.: _[_14_, Chap. 2, Prop. 8.3]_ _For any Koszul algebra \(A\), and any composition \((\alpha_{1},\ldots,\alpha_{r})\) of \(m=\sum_{i}\alpha_{i}\), there exists a subcomplex \((C_{*},d)\) of \(\mathbb{F}\otimes_{A}\mathcal{B}_{A}\) of the form \(0\to C_{\ell}\to C_{\ell-1}\to\cdots\to C_{1}\to 0\) starting with \(C_{\ell}=A_{\alpha_{1}}\otimes\cdots\otimes A_{\alpha_{\ell}}\) at left, ending with \(C_{1}=A_{\alpha_{1}+\cdots+\alpha_{\ell}}=A_{m}\) at right, and with \(i^{th}\) term_ \[C_{i}=\bigoplus_{\beta}A_{\beta_{1}}\otimes\cdots\otimes A_{\beta_{i}}\] _where \(\beta\) in the direct sum runs over all compositions with \(i\) parts that coarsen \(\alpha\). This complex \((C_{*},d)\) is exact except at the left end \(C_{\ell}\), meaning that this complex is exact:_ \[0\to\ker(d_{\ell})\to C_{\ell}\to C_{\ell-1}\to\cdots\to C_{1}\to 0 \tag{13}\] _The complex \((C_{*},d)\) is also \(G\)-equivariant for any group \(G\) of graded \(\mathbb{F}\)-algebra automorphisms of \(A\)._ Taking the alternating sum of the Euler characteristics term-by-term in (13) yields the following, where here we conflate a \(\mathbb{F}G\)-module \(A_{k}\) with its character \(\chi_{A_{k}}\). **Corollary 4.17**.: _In the above setting, the character of the \(\mathbb{F}G\)-module \(\ker(d_{\ell}:C_{\ell}\to C_{\ell-1})\) has this expression_ \[\chi_{\ker(d_{\ell})}=\sum_{i=1}^{\ell}(-1)^{\ell-i}\chi_{C_{i}} =\sum_{i=1}^{\ell}(-1)^{\ell-i}\sum_{\begin{subarray}{c}\beta:\, \ell(\beta)=i\\ \beta\,\,\text{\tiny coarsens }\alpha\end{subarray}}A_{\beta_{1}}\otimes\cdots\otimes A_{\beta_{i}}\] \[=\det\begin{bmatrix}A_{\alpha_{1}}&A_{\alpha_{1}+\alpha_{2}}&A_{ \alpha_{1}+\alpha_{2}+\alpha_{3}}&\cdots&A_{m}\\ A_{0}&A_{\alpha_{2}}&A_{\alpha_{2}+\alpha_{3}}&\cdots&A_{m-\alpha_{1}}\\ 0&A_{0}&A_{\alpha_{3}}&\cdots&A_{m-(\alpha_{1}+\alpha_{2})}\\ 0&0&&&\vdots\\ \vdots&\vdots&&&A_{\alpha_{\ell-1}+\alpha_{\ell}}\\ 0&0&\cdots&A_{0}&A_{\alpha_{\ell}}\end{bmatrix}\] _as an \(\ell\times\ell\) Toeplitz matrix minor for the sequence of \(\mathbb{F}G\)-modules \((A_{0},A_{1},A_{2},\ldots)\). 
In particular, when \(\mathbb{F}=\mathbb{C}\), then all Toeplitz minors of this form are genuine characters in \(R_{\mathbb{C}}(G)\)._ **Example 4.18**.: When \(\ell=2\) so that \(\alpha=(j,k)\), the exact sequence (13) looks like \[0\to\ker(d_{2})\to A_{j}\otimes A_{k}\to A_{j+k}\to 0\] giving this character identity \[\det\begin{bmatrix}A_{j}&A_{j+k}\\ A_{0}&A_{k}\end{bmatrix}=\chi_{\ker(d_{2})}\quad(\geq_{R_{\mathbb{C}}(G)}0\text{ if }\mathbb{F}=\mathbb{C}).\] When \(\ell=3\) so that \(\alpha=(a,b,c)\), the exact sequence (13) looks like \[0\to\ker(d_{3})\to A_{a}\otimes A_{b}\otimes A_{c}\to\begin{array}{c}A_{a+b} \otimes A_{c}\\ \oplus\\ A_{a}\otimes A_{b+c}\end{array}\to A_{a+b+c}\to 0\] giving this character identity \[\det\begin{bmatrix}A_{a}&A_{a+b}&A_{a+b+c}\\ A_{0}&A_{b+c}\\ 0&A_{0}&A_{c}\end{bmatrix}=\chi_{\ker(d_{3})}\quad(\geq_{R_{\mathbb{C}}(G)}0 \text{ if }\mathbb{F}=\mathbb{C}).\] When \(\ell=4\) so that \(\alpha=(a,b,c,d)\), the exact sequence (13) looks like \[A_{a+b}\otimes A_{c}\otimes A_{d}\quad\quad A_{a+b+c}\otimes A_{d}\] \[0\to\ker(d_{4})\to A_{a}\otimes A_{b}\otimes A_{c}\otimes A_{d} \to A_{a}\otimes A_{b+c}\otimes A_{d}\to A_{a+b}\otimes A_{c+d}\to A_{a+b+c+d}\to 0\] \[\oplus\] \[A_{a}\otimes A_{b}\otimes A_{c+d}\quad\quad A_{a}\otimes A_{b+c+d}\] giving this character identity \[\det\begin{bmatrix}A_{a}&A_{a+b}&A_{a+b+c}&A_{a+b+c+d}\\ A_{0}&A_{b}&A_{b+c}&A_{b+c+d}\\ 0&A_{0}&A_{c}&A_{c+d}\\ 0&0&A_{0}&A_{d}\end{bmatrix}=\chi_{\ker(d_{4})}\quad(\geq_{R_{\mathbb{C}}(G)}0 \text{ if }\mathbb{F}=\mathbb{C}).\] One can now apply this to the case of Chow rings of matroids, via this result. **Theorem 4.19**.: _(Maestroni and McCullough [14]) For any matroid \(\mathcal{M}\), the Chow ring \(A(\mathcal{M})\) is Koszul._ This gives the following promised generalization of (11). **Corollary 4.20**.: _For a matroid \(\mathcal{M}\) of rank \(r+1\) with Chow ring \(A(\mathcal{M})=\bigoplus_{k=0}^{r}A_{k}\), and any composition \(\alpha=(\alpha_{1},\dots,\alpha_{\ell})\) with \(m:=\sum_{i}\alpha\leq r\), the \(\ell\times\ell\) Toeplitz minor determinant as shown in in Corollary 4.17 is a genuine character in \(R_{\mathbb{C}}(G)\) for \(G=\operatorname{Aut}(\mathcal{M})\)._ Here is the conjectural lift of the previous corollary to Burnside rings, whose \(2\times 2\)-case is Proposition 4.11. **Conjecture 4.21**.: _In the same context as Corollary 4.20, the analogous Toeplitz minors of \(G\)-sets have_ \[\det\begin{bmatrix}[\mathrm{FY}^{\alpha_{1}}]&[\mathrm{FY}^{\alpha_{1}+ \alpha_{2}}]&[\mathrm{FY}^{\alpha_{1}+\alpha_{2}+\alpha_{3}}]&\cdots&[\mathrm{ FY}^{m}]\\ [\mathrm{FY}^{0}]&[\mathrm{FY}^{\alpha_{2}}]&[\mathrm{FY}^{\alpha_{2}+ \alpha_{3}}]&\cdots&[\mathrm{FY}^{m-\alpha_{1}}]\\ 0&[\mathrm{FY}^{0}]&[\mathrm{FY}^{\alpha_{3}}]&\cdots&[\mathrm{FY}^{m-(\alpha _{1}+\alpha_{2})}]\\ 0&0&&\vdots\\ \vdots&\vdots&&\vdots&[\mathrm{FY}^{\alpha_{\ell-1}+\alpha_{\ell}}]\\ 0&0&\cdots&[\mathrm{FY}^{0}]&[\mathrm{FY}^{\alpha_{\ell}}]\end{bmatrix}\geq_ {B(G)}0.\] As a bit of further evidence for Conjecture 4.21, we check it for the Toeplitz minors with \(\alpha=(1,1,1)\). 
**Theorem 4.22**.: _For any matroid \(\mathcal{M}\), the Chow ring \(A(\mathcal{M})\) has_ \[\det\begin{vmatrix}[\mathrm{FY}^{1}]&[\mathrm{FY}^{2}]&[\mathrm{FY}^{3}]\\ [\mathrm{FY}^{0}]&[\mathrm{FY}^{1}]&[\mathrm{FY}^{2}]\\ 0&[\mathrm{FY}^{0}]&[\mathrm{FY}^{1}]\end{vmatrix}\geq_{B(G)}0.\] Proof.: Multiplying out the determinant, one needs to prove the following inequality in \(B(G)\): \[[\operatorname{FY}^{1}\times\operatorname{FY}^{1}\times\operatorname{FY}^{1}]- \begin{pmatrix}[\operatorname{FY}^{2}\times\operatorname{FY}^{1}]\\ +\\ [\operatorname{FY}^{1}\times\operatorname{FY}^{2}]\end{pmatrix}+[ \operatorname{FY}^{3}]\ \geq_{B(G)}0,\] or equivalently, one must show the inequality \[[\quad(\operatorname{FY}^{2}\times\operatorname{FY}^{1})\ \sqcup\ (\operatorname{FY}^{1} \times\operatorname{FY}^{2})\quad]\ \leq_{B(G)}\ [\quad(\operatorname{FY}^{1}\times \operatorname{FY}^{1}\times\operatorname{FY}^{1})\ \sqcup\ \operatorname{FY}^{3}\quad].\] For this, it suffices to provide an injective \(G\)-equivariant map \[(\operatorname{FY}^{1}\times\operatorname{FY}^{2})\sqcup(\operatorname{FY}^{ 2}\times\operatorname{FY}^{1})\ \hookrightarrow\ (\operatorname{FY}^{1}\times \operatorname{FY}^{1}\times\operatorname{FY}^{1})\sqcup\operatorname{FY}^{3}.\] Such a map is summarized schematically in Figures 1 and 2, with certain abbreviation conventions: the variables \(x,y,z\) always abbreviate the variables \(x_{F_{1}},x_{F_{2}},x_{F_{3}}\) for a generic nested flag of flats \(F_{1}\subset F_{2}\subset F_{3}\), while the variable \(w\) abbreviates \(x_{F}\) for a flat \(F\) incomparable to any of \(F_{1},F_{2},F_{3}\). The Figures 1 and 2 describe for each type of element in \(\operatorname{FY}^{1}\times\operatorname{FY}^{2}\) and \(\operatorname{FY}^{2}\times\operatorname{FY}^{1}\) an appropriate image in either \(\operatorname{FY}^{1}\times\operatorname{FY}^{1}\times\operatorname{FY}^{1}\) or \(\operatorname{FY}^{3}\). Loosely speaking, the maps try to send elements to \(\operatorname{FY}^{3}\) whenever possible, that is, whenever their product is a valid element of \(\operatorname{FY}^{3}\). When this fails, we find an image in \(\operatorname{FY}^{1}\times\operatorname{FY}^{1}\times\operatorname{FY}^{1}\), carefully trying to keep track of which images have been used by noting various conditions on the \(x,y,z\), and \(w\), involving their ranks and sometimes their _coranks_, denoted \(\operatorname{cork}(F):=\operatorname{rk}(E)-\operatorname{rk}(F)\). Conditions in gray are forced by the form of the given tuple, while conditions in black are assumed to separate the map into disjoint cases. ## 5. Further questions and conjectures In addition to Conjectures 4.7, 4.10, 4.21 above, we collect here are a few more questions and conjectures. ### Other building sets with symmetry? Here we have focused on the Chow ring of a matroid \(\mathcal{M}\) using its _maximal_ building set. However, Feichtner and Yuzvinsky [13] give a presentation for their Chow ring with respect to _any_ building set. Their result [13, Thm. 2] also provide a Grobner basis for the ideal presenting the Chow ring that again has the pleasant properties of Theorem 2.8 and Corollary 2.9: whenever the building set is stable under some subgroup \(G\) of \(\operatorname{Aut}(\mathcal{M})\), the initial ideal for their Grobner basis is \(G\)-stable, as is the standard monomial basis for the Chow ring. 
One relevant example of such a building set is the _minimal_ building set, which is stable under the full automorphism group \(\operatorname{Aut}(\mathcal{M})\), and which arises, for example, in the study of the moduli space \(\overline{M}_{0,n}\) of genus \(0\) curves with \(n\) marked points; see, e.g., Dotsenko [12], Gibney and Maclagan [10], and Keel [14]. Furthermore, the Chow ring of \(\mathcal{M}\) with respect to any building set still satisfies the Kähler package. This follows3 by combining the results of [1] for the maximal building set with a theorem of Ardila, Denham and Huh [1, Thm. 1.6] asserting that having the Kähler package depends only on the support of the Bergman fan of \(\mathcal{M}\), not on how it is subdivided according to the building set. By the same arguments as in Corollaries 2.14, 2.15, the equivariant versions of Poincaré duality and the Hard Lefschetz theorem will also hold. This raises the following question. Footnote 3: The authors thank Chris Eur for pointing them to this result. **Question 5.1**.: _Does the analogue of Theorem 1.1 hold for the Chow ring of a matroid \(\mathcal{M}\) with respect to any \(G\)-stable building set? In particular, what about the minimal building set?_ ### Explicit formulas for Chow rings as permutation representations? In [14, Lem. 3.1], Stembridge provides a generating function for the symmetric group representations on each graded component of the Chow ring for all Boolean matroids; see also Liao [15]. Furthermore, Stembridge's expression exhibits them as _permutation representations_, whose orbit-stabilizers are all _Young subgroups_ in the symmetric group. **Question 5.2**.: _Can one provide such explicit expressions as permutation representations for other families of matroids with symmetry?_ ### Equivariant \(\gamma\)-positivity? Hilbert functions \((a_{0},a_{1},\dots,a_{r})\) for Chow rings of rank \(r+1\) matroids are not only symmetric and unimodal, but satisfy the stronger condition of _\(\gamma\)-positivity_: one has _nonnegativity_ for all coefficients \(\gamma=(\gamma_{0},\gamma_{1},\dots,\gamma_{\lfloor\frac{r}{2}\rfloor})\) appearing in the unique expansion \[\sum_{i=0}^{r}a_{i}t^{i}=\sum_{i=0}^{\lfloor\frac{r}{2}\rfloor}\gamma_{i}\ t^{i}(1+t)^{r-2i}. \tag{14}\] See Athanasiadis [1] for a nice survey on \(\gamma\)-positivity. It has been shown, independently by Ferroni, Matherne, Schroter and Vecchi [12, Thm. 3.25] and by Wang (see [12, p. 29]), that the \(\gamma\)-positivity for Hilbert series of Chow rings of matroids follows from results of Braden, Huh, Matherne, Proudfoot and Wang [13] on _semismall decompositions_. One also has the notion of _equivariant \(\gamma\)-positivity_ for a sequence of \(G\)-representations \((A_{0},A_{1},\dots,A_{r})\), due originally to Shareshian and Wachs [11, §5] (see also [1, §5.2], [13]): upon replacing each \(a_{i}\) in (14) with the element \([A_{i}]\) of \(R_{\mathbb{C}}(G)\), one asks that the uniquely defined coefficients \(\gamma_{i}\) in \(R_{\mathbb{C}}(G)\) have \(\gamma_{i}\geq_{R_{\mathbb{C}}(G)}0\). Computations suggest the following. **Conjecture 5.3**.: _For any matroid \(\mathcal{M}\) of rank \(r+1\) and its Chow ring \(A(\mathcal{M})=\bigoplus_{i}A^{i}\), the sequence of \(G\)-representations \((A_{\mathbb{C}}^{0},A_{\mathbb{C}}^{1},\dots,A_{\mathbb{C}}^{r})\) is equivariantly \(\gamma\)-positive._ For example, [11, Cor. 5.4] verifies Conjecture 5.3 for Boolean matroids.
However, one can check that the stronger conjecture of _Burnside \(\gamma\)-nonnegativity_ for \((\mathrm{FY}^{0},\mathrm{FY}^{1},\dots,\mathrm{FY}^{r})\) would _fail_ already for the Boolean matroid of rank \(3\): here \(\mathrm{FY}^{0},\mathrm{FY}^{2}\) carry the trivial \(\mathfrak{S}_{3}\) permutation representation \(\mathbf{1}\), while \(\mathrm{FY}^{1}=\{x_{E}\}\sqcup\{x_{F}:\mathrm{rk}(F)=2\}\) carries the direct sum of \(\mathbf{1}\) (spanned by \(x_{E}\)) with the defining \(\mathfrak{S}_{3}\)-permutation representation on the set \(X=\{1,2,3\}\), which permutes the three rank \(2\) flats. Hence \(\gamma_{0}=[\mathbf{1}]\), but \(\gamma_{1}=[\mathrm{FY}^{1}]-2[\mathbf{1}]=[X]-[\mathbf{1}]\not\geq_{B(\mathfrak{S}_{3})}0\).
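To experiment with these conjectures in small cases, one can enumerate the FY-basis directly. The following Python sketch (ours, not from the paper; it treats only Boolean matroids, whose flats are simply the subsets of the ground set) computes the Hilbert function \((a_{0},\ldots,a_{r})\) of Proposition 3.2's basis, checks the non-equivariant log-concavity of Conjecture 4.7(i) at the \(PF_{2}\) level, and extracts the \(\gamma\)-vector of (14).

```python
from itertools import combinations
from math import comb

def boolean_fy_hilbert(n):
    """Hilbert function (a_0, ..., a_{n-1}) of the Chow ring of the rank n
    Boolean matroid, by enumerating the FY-monomial basis of Proposition 3.2:
    a chain of nonempty proper flats F_1 < ... < F_l (here: subsets of
    {0,...,n-1}) with exponents 1 <= m_i <= |F_i| - |F_{i-1}| - 1, times a
    factor x_E^m with 0 <= m <= (n-1) - |F_l|."""
    r = n - 1
    counts = [0] * (r + 1)
    flats = [frozenset(c) for k in range(1, n)
             for c in combinations(range(n), k)]

    def grow(last_flat, degree):
        last_rank = len(last_flat)
        for m in range(0, r - last_rank + 1):            # exponent of x_E
            counts[degree + m] += 1
        for F in flats:
            if last_flat < F:                            # extend the chain of flats
                for m in range(1, len(F) - last_rank):   # 1 <= m <= |F| - |last| - 1
                    grow(F, degree + m)

    grow(frozenset(), 0)
    return counts

def gamma_vector(a):
    """Coefficients (gamma_0, ..., gamma_{floor(r/2)}) in the expansion
    sum_i a_i t^i = sum_i gamma_i t^i (1+t)^(r-2i) of (14); assumes the input
    sequence is symmetric, a_i = a_{r-i}."""
    a, r, gammas = list(a), len(a) - 1, []
    for i in range(r // 2 + 1):
        g = a[i]
        gammas.append(g)
        for j in range(r - 2 * i + 1):                   # subtract g * t^i (1+t)^(r-2i)
            a[i + j] -= g * comb(r - 2 * i, j)
    return gammas

for n in range(2, 6):
    a = boolean_fy_hilbert(n)
    log_concave = all(a[k] * a[k] >= a[k - 1] * a[k + 1]
                      for k in range(1, len(a) - 1))
    print(n, a, log_concave, gamma_vector(a))
```

For \(n=3,4\) this prints the Hilbert functions \((1,4,1)\) and \((1,11,11,1)\), the Eulerian numbers, with \(\gamma\)-vectors \((1,2)\) and \((1,8)\), and reports that each sequence is log-concave.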
2309.07408
An Explicit Method for Fast Monocular Depth Recovery in Corridor Environments
Monocular cameras are extensively employed in indoor robotics, but their performance is limited in visual odometry, depth estimation, and related applications due to the absence of scale information.Depth estimation refers to the process of estimating a dense depth map from the corresponding input image, existing researchers mostly address this issue through deep learning-based approaches, yet their inference speed is slow, leading to poor real-time capabilities. To tackle this challenge, we propose an explicit method for rapid monocular depth recovery specifically designed for corridor environments, leveraging the principles of nonlinear optimization. We adopt the virtual camera assumption to make full use of the prior geometric features of the scene. The depth estimation problem is transformed into an optimization problem by minimizing the geometric residual. Furthermore, a novel depth plane construction technique is introduced to categorize spatial points based on their possible depths, facilitating swift depth estimation in enclosed structural scenarios, such as corridors. We also propose a new corridor dataset, named Corr\_EH\_z, which contains images as captured by the UGV camera of a variety of corridors. An exhaustive set of experiments in different corridors reveal the efficacy of the proposed algorithm.
Yehao Liu, Ruoyan Xia, Xiaosu Xu, Zijian Wang, Yiqing Ya, Mingze Fan
2023-09-14T03:24:03Z
http://arxiv.org/abs/2309.07408v1
# An Explicit Method for Fast Monocular Depth Recovery in Corridor Environments ###### Abstract Monocular cameras are extensively employed in indoor robotics, but their performance is limited in visual odometry, depth estimation, and related applications due to the absence of scale information.Depth estimation refers to the process of estimating a dense depth map from the corresponding input image, existing researchers mostly address this issue through deep learning-based approaches, yet their inference speed is slow, leading to poor real-time capabilities. To tackle this challenge, we propose an explicit method for rapid monocular depth recovery specifically designed for corridor environments, leveraging the principles of nonlinear optimization. We adopt the virtual camera assumption to make full use of the prior geometric features of the scene. The depth estimation problem is transformed into an optimization problem by minimizing the geometric residual. Furthermore, a novel depth plane construction technique is introduced to categorize spatial points based on their possible depths, facilitating swift depth estimation in enclosed structural scenarios, such as corridors. We also propose a new corridor dataset, named Corr_EH_2, which contains images as captured by the UGV camera of a variety of corridors. An exhaustive set of experiments in different corridors reveal the efficacy of the proposed algorithm. ## I Introduction Monocular cameras play a pivotal role in indoor robotics[1, 2, 3];nevertheless, their performance is constrained in certain application fields, such as visual odometry and 3D object detection, due to the absence of scale information [1, 4, 5, 6]. To address this limitation and obtain scale information from images, researchers often employ supplementary auxiliary methods. While RGBD cameras offer the advantage of obtaining relatively highprecision scene depth images, their resolution remains comparably low, rendering them susceptible to the influence of deep black objects, translucent materials, specular reflections, and parallax effects, thereby leading to reduced accuracy. Conversely, stereo cameras can calculate pixel depth through triangulation; however, this advantage comes at the expense of increased computational overhead, and their distance estimation is subject to limitations imposed by the baseline length. Depth estimation refers to the process of estimating a dense depth map from the corresponding input image[2]. Monocular depth estimation holds significant research value [5, 3]. When combined with object detection, it can achieve the effect of 3D reconstruction of general Lidar detection[7]. Furthermore, through integration with semantic segmentation, the approach can be extended from 2D to 3D, allowing the acquisition of both semantic and depth information for pixels. Monocular depth estimation is an ill-posed problem that requires the introduction of sufficient prior information for its resolution. Monocular depth estimation methods can be categorized into structure from motion (SFM) based methods[8, 9, 10, 11], hand-crafted feature based methods, and deep learning-based methods[13, 14, 15, 16]. Each approach explores different strategies to address the challenge of recovering depth from a single camera input. With the rapid advancement of deep neural networks, monocular depth estimation based on deep learning has attracted considerable interest and demonstrated remarkable accuracy[17, 5]. 
The impressive performance of deep learning methods relies on thorough training on extensive datasets, and the accuracy heavily hinges on the quality of precisely annotated data. Acquiring highquality data for depth/parallax reconstruction involves substantial time and labor costs [5]. Furthermore, deep learning-based methods exhibit limited generalization capacity in depth estimation due to the influence of image size and scene characteristics present in the training data[4]. Additionally, the majority of deep learning approaches demonstrate slow inference speeds and insufficient realtime performance. SfM-based methods perform 3D scene reconstruction using multiple image sequences from different perspectives. They extract feature points from the images for feature matching, estimate camera motion and 3D positions of pixels, and construct sparse depth maps by assembling point cloud information of 3D space points[8, 9, 10, 11]. However, SfM-based methods require matching alignment between multiple frames with continuous motion, and their accuracy is highly dependent on the results of inter-frame registration, thereby limiting their application in certain scenarios. Long corridors/hallways are characteristic of challenging scenarios with limited texture features, and a high degree of similarity between frames hinders reliable inter-frame alignment. Thus, SfM-based methods are susceptible to failure and reduced accuracy in such degraded scenes. Nonetheless, long corridors/hallways manifest strong structured characteristics, encompassing abundant geometric information, such as parallel walls on both sides and maintaining parallel lines at the junction of the floor and walls. By fully leveraging these structured features, Inverse Projective IPM (Inverse Projective IPM)[18, 19, 20, 21] can be employed to derive scale information for these characteristics, enabling depth plane construction and subsequent scene depth recov ery. In this context, we propose a novel display method for rapid monocular depth estimation. Our approach differs from existing methods as it eliminates the need for inter-frame matching assistance and avoids extensive training on large datasets, thereby saving on model training and transfer costs. By leveraging the virtual camera assumption and minimizing geometric residuals, we transform the depth estimation problem into an optimization task. Furthermore, we introduce a depth plane construction method, which categorizes spatial points based on their possible depths, enabling fast depth estimation in enclosed structural scenes such as long corridors/hallways. Our proposed method achieves state-of-the-art depth estimation accuracy in long corridor/hallway scenarios while significantly accelerating the depth recovery process. Moreover, it can achieve realtime monocular depth recovery on mid-to-low-performance processors. ## II Related Work ### _Explicit Method for Depth Estimation_ The explicit method for depth estimation refers to an approach in which the entire process of depth estimation, from feature extraction and feature transformation to the output of prediction results, can be explained using mathematical formulas. This method is commonly employed in depth estimation techniques based on SFM. On the other hand, implicit methods achieve the same task through techniques such as convolutional neural networks (CNNs), where the processes of feature extraction, feature space transformation, and depth prediction are encapsulated within an end-to-end deep network model. 
The SFM algorithm receives input image sequences taken from different viewing angles and first extracts features such as Harris, SIFT or SURF from all images. Feature matching is then performed to estimate the 3D coordinates of the features and generate a point cloud that can be converted into a depth map. In 2014, Prakash et al. [2]proposed a sparse depth estimation method based on SFM. Based on monocular image sequences from 5 to 8 different perspectives, the method used a multi-scale fast detector for feature detection and 3D position solution based on geometric view relations to obtain a sparse depth map. In 2016, Ha et al. [8]proposed a Structure From Small Motion (SFM) recovery method, which uses planar scanning technology to estimate depth maps. By using Harris corner detection and optical flow tracking method to solve the 3D position of the feature points, a relatively dense depth map can be obtained, but this algorithm cannot run in real time in terms of speed. In recent years, researchers have attempted to combine the strengths of explicit and implicit methods. In 2022, Zhong et al. [22] introduced a method that simultaneously conducts implicit reconstruction and extracts 3D feature points, while others usually use explicit method to get 3D points. It replaces manual feature extraction with an implicit description for 3D keypoint detection. In 2023, Wu et al. [23] demonstrated the equivalence of depth and height in the 2D-to-3D mapping transformation and proposed an explicit height description method applied to deep network models for transforming Bird's Eye View (BEV) space. ### _Real-Time Monocular Depth Estimation_ The overall development trend of monocular depth estimation is to push the increase of accuracy using extremely deep CNNs or by designing a complex network architecture, which are computationally expensive for current mobile computational devices which have limited memory and computational capability. Therefore, it is difficult for these networks to be deployed in small sized robots which depend on mobile computational devices. Under this context, researchers have begun to develop real-time monocular depth estiamtion methods[2]. In 2018,Poggi et al. [24] stack a simple encoder and multiple small decoders working in a pyramidal structure, which is capable to quickly infer an accurate depth map on a CPU, even of an embedded system, using a pyramid of features extracted from a single input image.The network was trained in an unsupervised manner casting depth estimation as an image reconstruction problem. The designed network only has 1.9M parameters and requires. 0.12s to produce a depth map on a i7-6700K CPU, which isclose to a real-time speed. In 2019, Wofk et al. [25] develop a lightweight encoderdecoder network for monocular depth estimation.A low latency, high throughput, high accuracy depth estimation algorithm running on embedded systems was designed. In addition,a network pruning algorithm is applied to further reduce the amount of parameters, which enables real-time depth estimation on embedded platforms with an Nvidia-TX2 GPU. In 2020, Wang et al. [14] design a highly compact network named DepthNet Nano. DepthNet Nano applies densely connected projection batchnorm expansion projection (PBEP) modules to reduce network architecture and computation complexity while maintaining the representative ability. In 2020, Liu et al. [26] introduce a lightweight model (named MiniNet) trained on monocular video sequences for unsupervised depth estimation. 
The core part of MiniNet is DepthNet, which iteratively utilizes the recurrent module-based encoder to extract multi-scale feature maps. The obtained feature maps are passed to the decoder to generate multi-scale disparity maps. MiniNet achieves real-time speed about 54fps with 640 192 sized images on a single Nvidia 1080Ti GPU. However, the accuracy of above is inferior to state-of-the-art methods. Therefore, developing real-time monocular depth estimation network is assumed to achieve the trade-off between accuracy and efficiency. ### _Corridor Environments Perception and Localization_ Long corridor is a typical degraded scene with a lack of texture features, which brings new challenges to visual perception and localization tasks[27]. It is necessary to understand the characteristics of this scene, so as to make full use of prior features and achieve high-precision perception and localization. However, long corridor scenes are inevitably faced by mobile robots, in recent years, some researchers have begun to focus on solving this problem. In 2021, Padhy et al.[28] introduce a localization method of Unmanned Aerial Vehicles(UAV) in Corridor Environments, a Deep Neural Network(DNN) was trained to understand corridor environmental information, andpredict the position of the UAV as either on the left or center or right side of the corridor.Depending upon the divergence of the UAV with respect to an imaginary central line, known as the central bisector line (CBL) of the corridor, a suitable command is generated to bring the UAV to the center, making UAV fly safely in Corridors. In 2023, Ge et al.[29] proposed a visual-feature-assisted localization methods in long corridor environments.A novel coarse-to-fine paradigm was presented that uses visual features to assist mobile robot localization in long corridors.Sufficient keyframes are obtained through the camera, and a visual camera map was created while the grid map built by the laser-based SLAM method with a low accuacy in corridors, and the mobile robot captures images in a proper perspective according to the moving strategy and matches them with the image map to achieve a coarse localization. ## III Materials and Methods ### _System Overview_ The proposed method consists of two main threads: the edge extraction thread,which main function is to extract ground edges line Sets, and the depth recovery thread, witch mainly completes the depth estimation of the scene. As shown in Figure \(1\). Images acquired from the visual sensor are first input to the feature extraction thread. In this thread, line feature information is extracted from the scene using Hough transform-based line feature detection. The line features of the sense are then filtered based on the distribution of line segment angles, leading to the construction of a set of ground edge lines. The information about the edge line set is then sent to the depth recovery thread. In the depth recovery thread, the edge line set is first projected into the virtual camera imaging space. The current camera-to-virtual-camera pose transformation is estimated by minimizing symmetry geometric residuals. Subsequently, the edge line set is transformed to the virtual camera imaging plane using the computed transform-mation matrix. 
Distance geometric residuals are then constructed, and a non-linear optimization process is iteratively performed to estimate the camera's pitch angle and the depths of the edge points.Based on the estimated depths of the edge points, depth planes are constructed and transformed back to the original image plane. Pixel points in the original image plane are classified based on the depth information, and finally, depth estimation values for the pixel points are obtained through approximate inter-polation. ### _Edge Extraction Thread_ Thread 1 primarily engages in the extraction of structured scene features through ground edge detection, thereby furnishing the depth recovery thread with essential prior information about the scene. In this study, it is assumed that the width of the long corridor/hallway remains constant, leading to the representation of ground and side wall edges as two straight lines within the image. #### Iii-B1 Construction of ROI In the context of long corridor/hallway scenarios, aside from ground edges, there often exist additional linear features such as door frames and objects, which can introduce interference with the detection of ground edge lines. Consequently, it becomes imperative to establish a Region of Interest (ROI) within the scene. This selection process is based on empirical observations. Due to geometric perspective, ground edge lines typically manifest in the lower-to-middle section of the image, converging from the bottom sides towards the center. Relying on this a priori knowledge, an ROI with a height of H within the image is designated, encompassing rows from the middle to the bottom of the image, as shown in Figure \(2(\textbf{a})\). #### Iii-B2 Hough Transform Based Line Feature Extraction For the extraction process, this study employs a line feature detection algorithm based on the Hough transform. The Hough transform is a methodology designed to extract linear features from images. It leverages the duality between points and lines, mapping the discrete pixel points along a straight line in image space to curves in Hough space through parameter equations. Subsequently, the intersection points of Fig. 1: System framework.There are two threads.Thread 1 main function is to extract ground edges combined along the line. Thread 2 mainly completes the depth estimation of the scene. Fig. 2: (**a**)The region of interesting. (**b**) The results of Canny edge detection. (**c**) The results of Hough line transform(after NMS). (**d**) The line defintion in the scene. multiple curves in Hough space are mapped back to straight line equations in image space, thereby forming the detected straight lines. Before performing the Hough transform, the image is initially binarized and subjected to Canny edge detection thus expediting the process of Hough transformations shown in Figure \(2(\textbf{b})\). Subsequently, linear features are extracted through the Hough transform,as shown in Figure \(2(\textbf{c})\), and the extracted features undergo linear fitting to yield the set of linear features, denoted as \(S_{l}\). \[S_{l}=\left\{l_{k}\left(\theta_{k},pt_{k}^{0}\right),k=1,2,...,n\right\} \tag{1}\] Where,\(\theta_{k},pt_{k}^{0}\)are the inclination angle of \(l_{k}\),and the the pixel coordinates of the starting point of \(l_{k}\), respectively, as illustrated in Figure \(2(\textbf{d})\). 
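A minimal OpenCV sketch of this edge-extraction thread is given below; the ROI choice follows the description above, while the Canny and Hough parameter values are illustrative assumptions on our part rather than the paper's settings.

```python
import cv2
import numpy as np

def extract_candidate_lines(image_bgr):
    """Edge-extraction thread sketch: ROI -> Canny -> Hough -> line set S_l.
    Returns a list of (theta_k, pt_k0, pt_k1) with the inclination angle and
    the two segment endpoints in full-image pixel coordinates.  Thresholds
    below are illustrative, not the paper's settings."""
    h, w = image_bgr.shape[:2]
    roi_top = h // 2                      # ROI: from the middle row to the bottom
    roi = image_bgr[roi_top:, :]

    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)      # binarized edge map fed to the Hough step

    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                               minLineLength=40, maxLineGap=10)
    lines = []
    if segments is None:
        return lines
    for x0, y0, x1, y1 in segments[:, 0]:
        theta = np.arctan2(float(y1 - y0), float(x1 - x0))   # inclination of l_k
        lines.append((theta, (int(x0), int(y0) + roi_top),
                             (int(x1), int(y1) + roi_top)))
    return lines
```

Since the probabilistic Hough transform returns finite segments, each fitted segment directly supplies the inclination angle \(\theta_{k}\) and a starting point \(pt_{k}^{0}\) for a candidate line \(l_{k}\) in the set \(S_{l}\) of (1).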
#### Ii-B3 Scene-Prior Based Line Feature Extraction Based on scene priors, a reasonable range for setting the length and inclination angle of the lines is established, which is used to filter the lines within set \(S_{l}\). If the image is strict symmetry of axis, the length of the edge lines \(L\left(l_{k}\right)\) and the angle \(\theta_{k}\) should satisfy the following conditions: \[\frac{H}{4}\leq L\left(l_{k}\right)\leq\frac{1}{2}\sqrt{H^{2}+W^{2}} \tag{2}\] \[\arctan\frac{H}{W}\leq\theta_{k}<\frac{\pi}{2}\ \vee\ \frac{\pi}{2}<\theta_{k}\leq\arctan-\frac{H}{W}\] According to (2), the left and right line are picked out. And the edge line set is \(S_{le}=\left\{l_{l}\left(\theta_{l},pt_{l}^{0}\right),l_{r}\left(\theta_{r}, pt_{r}^{0}\right)\right\}\), where \(l_{l}\) and \(l_{r}\) are left and right line, respectively. ### _Visual Camera_ #### Ii-C1 Visual Camera Model The primary function of the virtual camera is to ensure consistency in the imaging process across different scenes, currently predominantly employed within deep learning-based computer vision methodologies. In 2023, BEV-LaneDet [30] introduced the concept of the virtual camera in the context of 3D lane detection tasks on the Bird's Eye View (BEV) plane. Due to variations in camera intrinsic parameters, installation positions, and camera poses, images captured by the same scene may have different dimensions and scaling ratios. The BEV-LaneDet method employs a deep neural network that requires image parameters to be as consistent as possible with those in the training dataset. This necessitates aligning the camera position, pose, and height above the ground during imaging to avoid substantial disparities between the predictive accuracy of the deployed model and that observed during offline training. The virtual camera is manually defined, with its intrinsic parameters, installation position, and orientation preconfigured. Prior to training and inference with deep neural networks, images are initially projected onto the imaging space of the virtual camera through perspective transformation. This process ensures that the input images to the model exhibit consistency. In this study, the concept of the virtual camera is introduced to achieve algorithmic consistency across various scenes. The intrinsic coordinate system for both the real and virtual cameras is established as right-down-forward. The coordinate definitions for the two types of cameras and their respective images are depicted in Figure \(3\). Within this hypothesis, the width of the long corridor remains constant, and the virtual camera is positioned at the center of the scene, equidistant from the left and right walls. Its optical axis is directed straight ahead along the length of the corridor, while maintaining consistent pitch angles, camera intrinsic parameters, and mounting height as the current real camera. To achieve the pose transformation from the real camera to the virtual camera, this study begins by projecting ground edge feature points onto the imaging plane of the virtual camera. Subsequently, a geometric error model is constructed, and the maximum likelihood estimation of the pose transformation is computed through an iterative optimization process. #### Ii-C2 Virtual Camera Pose Estimation In the assumption of this study, the virtual camera is positioned at the center of the scene, ensuring that the edge lines in the image maintain its symmetry in the virtual camera. 
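Returning briefly to the scene-prior filter of (2): a minimal sketch of selecting \(l_{l}\) and \(l_{r}\) from \(S_{l}\) is given below. The length and angle bounds follow (2) with \(H\) the ROI height and \(W\) the image width; the sign convention for left versus right and the longest-segment tie-break are our own assumptions, as are the function names.

```python
import numpy as np

def select_edge_lines(lines, W, H):
    """Scene-prior filter of (2): keep segments whose length lies in
    [H/4, sqrt(H^2 + W^2)/2] and whose inclination magnitude lies in
    [arctan(H/W), pi/2).  Assigning positive angles to the right edge and
    negative angles to the left edge (image y-axis pointing down), and
    breaking ties by segment length, are assumptions of this sketch."""
    min_len, max_len = H / 4.0, 0.5 * np.hypot(H, W)
    min_ang = np.arctan2(H, W)
    left, right = [], []
    for theta, p0, p1 in lines:
        # wrap the inclination into (-pi/2, pi/2], independent of endpoint order
        if theta > np.pi / 2:
            theta -= np.pi
        elif theta <= -np.pi / 2:
            theta += np.pi
        length = np.hypot(p1[0] - p0[0], p1[1] - p0[1])
        if not (min_len <= length <= max_len):
            continue
        if min_ang <= theta < np.pi / 2:
            right.append((length, theta, p0, p1))
        elif -np.pi / 2 < theta <= -min_ang:
            left.append((length, theta, p0, p1))
    longest = lambda cands: max(cands)[1:] if cands else None
    return longest(left), longest(right)      # the edge line set S_le = {l_l, l_r}
```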
To fully utilize this structural feature and ensure algorithm consistency across different input images, the original image is initially transformed into the imaging space of the virtual camera. After obtaining the reference depth plane of the virtual camera, the depth plane information is then reprojected onto the current image. This process ultimately yields the depth distribution of the current image. Therefore, prior to conducting depth estimation, it is necessary to acquire the pose transformation from the current camera to the virtual camera. Let \(P^{W}\) represent a spatial point within the current scene, the pixel coordinate \(P^{C}\left(u,v\right)\) of space point \(P^{W}\left(x,y,z\right)\) is calculated as \[\left[\begin{array}{c}P^{C}\\ 1\end{array}\right]=\left[\begin{array}{c}u\\ v\\ 1\end{array}\right]=\frac{1}{z}\left[\begin{array}{ccc}f_{x}&0&c_{x}\\ 0&f_{y}&c_{y}\\ 0&0&1\end{array}\right]\left[\begin{array}{c}x\\ y\\ z\end{array}\right]=\frac{1}{z}KP^{W} \tag{3}\] Where, K is the camera internal parameter matrix, \(f_{x}\) and \(f_{y}\) are the pixel focal lengths in the x and y directions, respectively, corresponding to the physical focal length f of the camera. And the cx and cy represent the pixel offsets of the optical center in the image of the camera. In general, the horizontal and vertical dimensions of the camera sensor have equal pixel sizes, in which case \(f_{x}\)=\(f_{y}\). The rotation Fig. 3: The visual camera defination and coordinate difination. matrix R and displacement t are used to represent the pose transformation from the current camera to the virtual camera, and the current frame is projected onto the plane of the virtual camera. The projection result is calculated according to (3) \[s^{V}P^{C}=K\left(RP^{W}+t\right) \tag{4}\] Where \(s^{V}\) is the scale in the transformed virtual camera. In the assumption of the virtual camera, the disparity in pose between the virtual camera and the real camera arises from the yaw angle \(\phi\) and the displacement \(\tau\) in the x-direction. In which case, R and t are calculated as \[R=\left[\begin{array}{ccc}\cos\phi&0&\sin\phi\\ 0&1&0\\ -\sin\phi&0&\cos\phi\end{array}\right],t=\left[\begin{array}{c}\tau\\ 0\\ 0\end{array}\right] \tag{5}\] As depth has not been recovered yet, the images lack scale information. In order to unify the scales in both cameras, a assumption is accepted of that for every pixel in the original image, its corresponding original spatial point lies in front of the camera at a distance of 1 meter. It means, for any spatial point \(P^{W}\left(x,y,z\right)\), \(z\equiv 1\). Therefore, the scale factor \(s^{V}\) for the virtual camera is calculated as: \[s^{V}=\cos\phi-\frac{u-c_{x}}{f}sin\phi \tag{6}\] Based on equations (2) to (5), the transformation from the current frame image pixel point \(pt^{C}(u,v)\) to the virtual camera image pixel point \(pt^{V}(u^{V},v^{V})\) can be obtained as follows: \[\left[\begin{array}{c}u^{V}\\ v^{V}\end{array}\right]=\xi\left[\begin{array}{cc}cos\phi&0\\ 0&1\end{array}\right]\left[\begin{array}{c}u\\ v\end{array}\right] \tag{7}\] Where,\(\xi=1/\left(f_{t}\cos\phi-u\sin\phi-c_{x}\cos\phi\right)\),\(f_{t}=f_{x}=f_{y}\).Each pixel of \(l_{l}\left(\theta_{l},pt_{l}^{q}\right)\),\(l_{r}\left(\theta_{r},pt_{r}^{q}\right)\)along the edge can be transformed into the virtual camera according (7). Within the assumption, the optical center of the virtual camera oincides with the symmetry axis of the corridor. 
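A minimal sketch of the real-to-virtual warp implied by Eqs. (3)–(6): each pixel is back-projected under the unit-depth assumption (\(z\equiv 1\)), rotated by the yaw \(\phi\) and shifted by \(\tau\), and re-projected with the same intrinsics \(K\). The symmetry-based estimation of \(\phi\) and \(\tau\) described in this subsection can then be driven by this warp (the corresponding residual is written out in the following equations); variable names and the brute-force grid search are illustrative choices.

```python
import numpy as np

def warp_to_virtual(pts_uv, K, phi, tau):
    """Map real-image pixels to the virtual-camera image plane (Eqs. (3)-(6)):
    back-project at unit depth, apply R(phi) and t = (tau, 0, 0), re-project with K."""
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    R = np.array([[ np.cos(phi), 0.0, np.sin(phi)],
                  [ 0.0,         1.0, 0.0        ],
                  [-np.sin(phi), 0.0, np.cos(phi)]])
    t = np.array([tau, 0.0, 0.0])

    out = []
    for u, v in pts_uv:
        P = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # unit-depth back-projection
        Q = R @ P + t                                      # point seen by the virtual camera
        s = Q[2]                                           # scale s^V, cf. Eq. (6)
        out.append((fx * Q[0] / s + cx, fy * Q[1] / s + cy))
    return np.asarray(out)

def estimate_yaw_and_shift(left_pts, right_pts, K, W,
                           phis=np.linspace(-0.3, 0.3, 61),
                           taus=np.linspace(-1.0, 1.0, 41)):
    """Grid search for (phi, tau) that makes the warped edges symmetric about u = W/2."""
    best = (0.0, 0.0, np.inf)
    for phi in phis:
        for tau in taus:
            lv = warp_to_virtual(left_pts, K, phi, tau)
            rv = warp_to_virtual(right_pts, K, phi, tau)
            err = 0.0
            for u_l, v_l in lv:                            # mirror left points about the axis
                k = np.argmin(np.abs(rv[:, 1] - v_l))      # right-edge point on the closest row
                err += abs((W - u_l) - rv[k, 0])
            if err < best[2]:
                best = (phi, tau, err)
    return best  # (phi_hat, tau_hat, residual)
```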
As a result, the symmetry of the road edge lines is preserved in the virtual camera imaging space. The axis of symmetry is located at position \(u=\frac{W}{2}\) on the image plane of virtual camera, where W represents the image width. Based on this structured feature, the geometric residual is calculated. As shown in Figure 4., Set \(pt_{k}^{lt}=\left(u_{k}^{l}.v_{k}^{l}\right)\) as a point in \(l_{l}\), and its symmetry point about \(u=\frac{W}{2}\) is \(pt_{k}^{lt}\). Substituting \(v=v_{k}^{l}\) into the equation of the right line, \(pt_{k}^{lt1}\) can be obtained, and the distance D between two points can be calculated. According to (7), D is a function of both the yaw angle \(\phi\) and the displacement \(\tau\) in the x-direction. By uniformly selecting N feature points at intervals of \(l_{l}\), the sum of D calculated for these N feature points yields symmetry error the left line \(E_{L}\). Similarly, the symmetry error of the right line \(E_{R}\) can be obtained. Ultimately, this process yields the symmetry geometric error \(E_{G}\). \[E_{G}=E_{L}+E_{R}=\sum_{k}^{N}D\left(pt_{k}^{lt},pt_{k}^{lt1}\right)+\sum_{k} ^{N}D\left(pt_{k}^{rt},pt_{k}^{rt1}\right) \tag{8}\] Once the mathematical model for the symmetry-based geometric error \(E_{G}\) is established, leveraging the prior features of the scene, a range of variability for the yaw angle \(\phi\) and the displacement \(\tau\) is defined. Nonlinear optimization is employed to compute the maximum likelihood estimation values \(\widehat{\phi}\) and \(\widehat{\tau}\) for the yaw angle \(\phi\) and displacement \(\tau\). Iterative calculations are performed within the range of variability for both parameters to minimize \(E_{G}\). The resulting yaw angle and displacement that yield the smallest \(E_{G}\) are considered as the maximum likelihood estimation results. \[\widehat{\phi},\widehat{\tau}=argmin\left\{\sum_{k}^{N}D\left(pt_{k}^{lt},pt_{ k}^{lt1}\right)+\sum_{k}^{N}D\left(pt_{k}^{rt},pt_{k}^{rt1}\right)\right\} \tag{9}\] After obtaining the estimated value of the heading Angle and displacement of the heading Angle, the edge points are projected to the virtual camera space through the formula(4)(5), and \(S^{V}=\left(pt_{k}^{VI},pt_{k}^{Vr}\right),k=0,1,2...,n\). ### _Fast Monocular Depth Recovery_ #### Iii-D1 3D Coordinates Estimation of Ground Edge Points In the virtual camera space, there are now pixel coordinates along two ground edges. Using the ground plane hypothesis, the points along each edge are assumed to be coplanar, and the 3D spatial coordinates of each point meet \(\forall P_{k}^{W}\left(x_{k},y_{k},z_{k}\right)\in S^{W},y_{k}=0\). According to the installation position and aperture Angle information of the camera, the 3D spatial coordinates of each edge point corresponding to the virtual camera can be solved by geometric method. The depth plane can then be constructed. The space coordinate system is selected as the lower left front, the camera height is known as h, the vertical aperture Angle is \(\theta_{v}\). Assume that the pitch Angle of the virtual camera in the current scene is \(\theta_{p}\), and for a pixel point \(pt_{0}\) at the bottom of the image, its pixel coordinate is \(\left(u_{t},v_{t}\right)\) meet \(0\leq u_{t}\leq W\),\(v_{t}=H\), W,H are the image height and image width respectively, and its depth is calculated as Fig. 4: The symmetry geometric error. 
\[z_{0}=\frac{h}{\tan\left(\theta_{v}+\theta_{p}\right)} \tag{10}\] The theoretical method of Inverse Perspective Transform (IPM) is used to calculate the depth of each edge point, as shown in Figure 5. For any point in the set of edge points \(S^{V}=\left(pt_{k}^{V1},pt_{k}^{Vr}\right),k=0,1,2...,n\), its pixel coordinate is \(\left(u_{k}^{V},v_{k}^{V}\right)\), and its depth is calculated according to the inverse perspective transform. \[z=z_{0}+\Delta z \tag{11}\] The equation is constructed to solve according to the perspective geometry as (12) \[\frac{\eta_{k}\cos\left(\theta_{k}-\theta_{p}\right)}{\Delta z\sin\theta_{p}} =\frac{z_{1}}{z_{0}} \tag{12}\] Where, \(\eta_{k}=f_{y}\Delta v_{k},\Delta v_{k}=H-v_{k}^{V}\)is the depth in space at the bottom of the imaging plane 1m away from the optical center of the camera, \[z_{1}=cos\theta_{v}-sin\theta_{v}sin\theta_{p} \tag{13}\] Solving (9),\(\Delta z\) is calculated as \[\Delta z=z_{0}\frac{z_{0}\cos\theta_{p}+h\sin\theta_{p}}{z_{1}h-\eta_{k}z_{0} \cos\theta_{p}}\eta_{k} \tag{14}\] #### Iii-B2 3D Coordinates Optimization \(\&\) Pitch Angle Estimation As shown in (14), the calculation of edge point depth \(\Delta z\) is related to camera elevation Angle \(\theta_{p}\). In this paper, \(\Delta z\) and \(\theta_{p}\) are estimated simultaneously by nonlinear optimization. In the corridor, the road surface in the scene is flat and the pitch Angle variation range is small. The iterative optimization algorithm is designed to iteratively calculate the geometric residual of 3D space points within the interval \([\lambda_{1},\lambda_{2}]\), taking \(\alpha\) as the step length, and calculate the optimal \(\theta_{p}\),to estimate the result. Set the initial value of \(\theta_{p}\), according to (14) for the edge point pixel set \(S^{V}\), in which all pixels recover the depth distribution, according to camera imaging model (3), the 3D spatial coordinates of edge point \(pt_{k}^{VI}\) are calculated as \[\begin{cases}z_{k}^{W}=z_{0}+\Delta z_{k}\\ x_{k}^{W}=z_{k}\left(u_{k}^{V}-c_{x}\right)/f_{t}\\ y_{k}^{W}=z_{k}\left(v_{k}^{V}-c_{y}\right)/f_{t}\end{cases} \tag{15}\] And the geometry residual is calculated as \[E_{G}^{W}=\sum_{k}^{L}\left\|P_{k}^{Wl},P_{k}^{Wr}\right\|_{2} \tag{16}\] \[\iff\sum_{k}^{L}\left|x_{k}^{Wl},x_{k}^{Wr}\right|_{y_{k}^{Wi}=y _{k}^{Wr}}\] \[=\sum_{k}^{L}\frac{z_{k}}{f_{t}}\left(1+\frac{z_{0}\cos\theta_{p} +h\sin\theta_{p}}{z_{1}h-\eta_{k}z_{0}\cos\theta_{p}}\eta_{k}\right)\left(u_{k }^{VI}-u_{k}^{Vr}\right)\] The approximate optimal estimate of \(\theta_{p}\) can be obtained by iterative optimization \[\hat{\theta}_{p}=\arg\min\left\{\sum_{k}^{L}\frac{z_{k}}{f_{t}}\left(1+\frac{ z_{0}\cos\theta_{p}+h\sin\theta_{p}}{z_{1}h-\eta_{k}z_{0}\cos\theta_{p}}\eta_{k} \right)\left(u_{k}^{VI}-u_{k}^{Vr}\right)\right\} \tag{17}\] #### Iii-B3 Depth Plane Construction \(\&\) Spatial Point Depth Recovery In the image space of the virtual camera, the spatial points in the same depth plane retain the parallel characteristics, and the contour points of the same depth are connected to build a depth plane. The depth plane \(\Gamma_{k}^{V}\) is guided by the contour lines. The depth plane \(\Gamma_{k}^{V}\) is determined by the left and right edge points \(\left\{p_{k}^{VI}\left(u_{k}^{VI},v_{k}^{VI}\right),pt_{k}^{Vr}\left(u_{k}^{Vr },v_{k}^{Vr}\right)\right\}\), and is uniquely determined, \(Dp\left(\Gamma_{k}^{V}\right)\) is the depth of \(\Gamma_{k}^{V}\), \(Dp\left(\Gamma_{k}^{V}\right)=z_{k}\). 
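The edge-point depth recovery of Eqs. (10)–(14) and the pitch-angle search of Eqs. (15)–(17) can be sketched as follows. The residual used here penalises variation of the recovered corridor width along the corridor, which is one way to realise the geometric residual given the constant-width assumption, not necessarily the authors' exact criterion; the interval \([\lambda_{1},\lambda_{2}]\) and step \(\alpha\) are given placeholder values.

```python
import numpy as np

def edge_point_depth(v_pix, H, h, f, theta_v, theta_p):
    """Depth z = z0 + dz of a ground-edge pixel at image row v_pix (Eqs. (10)-(14))."""
    z0 = h / np.tan(theta_v + theta_p)                          # Eq. (10)
    eta = f * (H - v_pix)                                       # eta_k = f_y * (H - v_k^V), as written in the text
    z1 = np.cos(theta_v) - np.sin(theta_v) * np.sin(theta_p)    # Eq. (13)
    dz = z0 * (z0 * np.cos(theta_p) + h * np.sin(theta_p)) \
            / (z1 * h - eta * z0 * np.cos(theta_p)) * eta       # Eq. (14)
    return z0 + dz

def estimate_pitch(left_v, right_v, K, H, h, theta_v,
                   lam1=-0.05, lam2=0.05, alpha=0.002):
    """Iterate theta_p over [lam1, lam2] with step alpha and keep the value that
    makes the recovered corridor width most nearly constant along the corridor."""
    f, cx = K[0, 0], K[0, 2]
    best_theta, best_res = None, np.inf
    for theta_p in np.arange(lam1, lam2 + alpha, alpha):
        widths = []
        for (u_l, v_l), (u_r, v_r) in zip(left_v, right_v):     # edge points paired by row
            z = edge_point_depth(v_l, H, h, f, theta_v, theta_p)
            widths.append(z * (u_r - u_l) / f)                  # x_r - x_l via Eq. (15)
        res = float(np.std(widths))
        if res < best_res:
            best_theta, best_res = theta_p, res
    return best_theta
```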
The depth plane \(\Gamma_{k}^{V}\), obtained in the virtual camera imaging space, is inversely transformed according to (4-5), and the set of depth planes in the real camera image can be obtained: \(S_{k}^{C}=\left\{\Gamma_{k}^{C}\right\},k=1,2,3,...,L\). In a real camera image, if the pixel \(pt_{k}^{V}\left(u_{k}^{C},v_{k}^{C}\right)\) belongs to the depth plane \(\Gamma_{k}^{C}\) then equation (18) is a necessary condition. \[\forall pt_{k}^{C}\in\Gamma_{k}^{C}\Rightarrow\begin{cases}f\left(u_{k}^{C} \right)\leq u_{i}^{C}\leq u_{k}^{C}\\ v_{k}^{C1}\leq v_{i}^{C}\leq v_{k}^{Cr}\\ u_{k-1}^{C}\leq u_{i}^{C}\ \vee\ v_{i}^{C}\leq v_{k-1}^{Cl}\ \vee\ v_{i}^{C}\geq v_{k-1}^{Cr} \end{cases} \tag{18}\] Where \(f\left(u_{k}^{Cl}\right)=\alpha u_{k}^{Cl}-\beta\) is a linear transform of \(u_{k}^{Cl}\), and \(\alpha\), \(\beta\) are empirical coefficients. In the corridor scene, the spatial point distribution is relatively ideal, ignoring the influence of dynamic objects, occlusion, etc., and taking (18) as sufficient and necessary conditions of \(pt_{k}^{Cl}\in\Gamma_{k}^{C}\), each pixel in the image is classified. After pixel classification is completed, it is determined whether it is a ground point according to the coordinates of each pixel. The condition that \(pt_{i}^{V}\left(u_{i}^{V},v_{i}^{V}\right)\) is a ground point is \[pt_{i}^{C}\in\Gamma_{k}^{C}\ \wedge\ u_{k-1}^{C}\leq u_{i}^{C}\leq u_{k-1}^{C} \tag{19}\] For the pixel point \(pt_{k}^{C}\left(u_{k}^{C},v_{k}^{C}\right)\in\Gamma_{k}^{C}\), its depth is calculated by interpolation method (20). Fig. 5: The geometry of the \(\Delta z\) computation. \[Dp\left(pt_{i}^{C}\right)=Dp\left(\Gamma_{k}^{C}\right)+\sigma\left(Dp\left( \Gamma_{k-1}^{C}\right)-Dp\left(\Gamma_{k}^{C}\right)\right) \tag{20}\] In the formula, \(\sigma\) is a nonlinear coefficient. When the depth plane is sufficiently dense, linear interpolation method is adopted. If the point is a ground point, then \[\sigma=\frac{u_{i}^{C}-u_{k-1}^{C}}{u_{k}^{C}-u_{k-1}^{C}} \tag{21}\] Otherwise, \[\sigma=\left\{\begin{aligned} &\frac{v_{i}^{C}-v_{k-1}^{Cl}}{v_{k}^{Cl}-v_{k-1}^{Cl}} \text{ },\text{ }v_{i}^{C}\leq v_{k-1}^{Cl}\\ &\frac{v_{i}^{C}-v_{k-1}^{Cl}}{v_{k}^{Cr}-v_{k-1}^{Cr}}\text{ }, \text{ }v_{i}^{C}\geq v_{k-1}^{Cr}\end{aligned}\right. \tag{22}\] ## IV Results ### _Experiment Overview_ The algorithm proposed in this paper is validated by collecting data in real scenarios. The ZED2 camera sensors mounted on an Unmanned Ground Vehicle(UGV) were used to collect images in different scenes to build data sets, and the RGB images and 16-bit depth images output by the cameras were recorded by ROSbag. The size of the output RGB image and depth image is 420x360 for the ZED2 camera. Camera mounting height are 0.66m, 0.62m respectively. The experimental equipment is shown Figure 11 ZED2 camera is used to collect 135 images of 9 kinds corridor with length range \([0-100]m\) and width range \([2-4]m\) and different lighting conditions, from which a corridor dataset, named Corr_EH_z were constructed. There are two parts of Corr_EH_z, \(Cord\_Exx\_z\), \(Cord\_Hxx\_z\). The \(Cord\_Exx\_z\) subset is a simple condition while \(Cord\_Hxx\_z\) is a complex condition. In Figure 12, scene images and corresponding depth images in two types of subsets are shown respectively. 
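The experiments below evaluate depth accuracy with four standard indices, whose definitions are recalled in the next subsection. For reference, a straightforward numpy transcription of those definitions, where the depth-range cap (e.g. 5 m or 40 m) is a parameter:

```python
import numpy as np

def depth_metrics(pred, gt, max_depth=5.0):
    """AbsRel, Log10, RMSE and RMSElog over pixels with valid ground truth below max_depth."""
    mask = (gt > 0) & (gt < max_depth)
    y, y_hat = gt[mask], pred[mask]

    abs_rel  = np.mean(np.abs(y - y_hat) / y)
    log10    = np.mean(np.abs(np.log10(y) - np.log10(y_hat)))
    rmse     = np.sqrt(np.mean((y - y_hat) ** 2))
    rmse_log = np.sqrt(np.mean((np.log10(y) - np.log10(y_hat)) ** 2))
    return {"AbsRel": abs_rel, "Log10": log10, "RMSE": rmse, "RMSElog": rmse_log}
```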
### _Experiment 1: Verification of Algorithm Accuracy_ In the two sub-datasets, the heading angle estimation interval is set to \([-0.314,0.314]\) rad (about \(\pm 18^{\circ}\)), and the accuracy of the pitch angle estimation and corridor width estimation of this paper is tested with a step length of 0.05 rad. The estimation results are shown in Table I. The relative error of the corridor width estimate output by the proposed method is less than 0.0427, and the mean relative error is 0.0221. The depth estimation accuracy of the proposed method is tested on the \(Cord\_Exx\_z\) and \(Cord\_Hxx\_z\) datasets, as shown in Tables II and III. Four general accuracy indices of the proposed method are calculated under the two evaluation conditions of depth truth value \(<5\) m and depth truth value \(<40\) m, respectively, and the four accuracy indices are defined as [13] \[AbsRel: \frac{1}{N}\sum_{p}^{N}\left|\frac{y_{p}-\hat{y}_{p}}{y_{p}}\right|\] \[Log10: \frac{1}{N}\sum_{p}^{N}\left|\log_{10}y_{p}-\log_{10}\hat{y}_{p}\right|\] \[RMSE: \sqrt{\frac{1}{N}\sum_{p}^{N}\left(y_{p}-\hat{y}_{p}\right)^{2}}\] \[RMSElog: \sqrt{\frac{1}{N}\sum_{p}^{N}\left(\log_{10}y_{p}-\log_{10}\hat{y}_{p}\right)^{2}}\] As can be seen from Table II, under the condition of depth truth value \(<5\) m, the depth estimates output by the proposed method on the \(Cord\_Exx\_z\) dataset reach an advanced level of accuracy, with an AbsRel index \(<7.106\%\) and an RMSE index \(<0.35607\). At the same time, the proposed method can predict depth over a range of 40 m, as shown in Table III. In the \(Cord\_E03\_z\) scenario, the AbsRel of the proposed method over the 40 m depth range reaches \(8.256\%\). Figure 8 shows the depth recovery results of the proposed method in several scenarios. It should be noted that the ceiling part is eliminated to save computing resources.

Fig. 11: The Unmanned Ground Vehicle (UGV) and ZED2 camera.

Fig. 12: (**a**) Source image of the Cord_E05_z scene. (**b**) Depth truth of the Cord_E05_z scene. (**c**) Source image of the Cord_H02_z scene. (**d**) Depth truth of the Cord_H02_z scene.

### _Experiment 2: Comparison with the State-of-the-art Method_ On the \(Cord\_Exx\_z\) dataset, a precision comparison test was conducted between the proposed method and the deep-learning-based ADABINS [15] method. The ADABINS method is based on an encoder-decoder network plus ViT for depth classification estimation. It was published at CVPR in 2021 and currently ranks 14th on the KITTI [31] leaderboard and 25th on the NYU [32] leaderboard, which is at the advanced level among existing methods. First, the ADABINS method was trained for 25 epochs on the \(Cord\_Exx\_z\) sub-dataset, and then the depth prediction accuracy of the proposed method and the ADABINS method was compared, as shown in Table IV. The AbsRel, Log10 and RMSE accuracy indices of the proposed method are similar to those of ADABINS under the condition of depth truth value \(<5\) m, and its RMSElog is 0.053, which is better than that of ADABINS (0.101). AbsRel and RMSE are \(9.8\%\) and 1.425, respectively, under the condition of depth truth value \(<40\) m, while the effective depth prediction range of ADABINS for indoor depth estimation is only 10 m. The speed of inferring/recovering a single image is also compared between ADABINS and the proposed method on different computing platforms. Computing platform 1 is an AMD Threadripper PRO 5975W 32-core high-performance processor with a single-core frequency of 3.6 GHz, and the GPU is an Nvidia RTX 4090 graphics processing unit, which is used to assist deep neural network inference.
Computing platform 2 is a medium-to-low performance Intel Core i5-7300HQ CPU with a main frequency of 2.5 GHz. Repeated experiments were used to calculate the average execution time of the two methods, and the results are shown in Table V. The average execution time of the proposed method for depth recovery of a single image on computing platform 1 is 0.0097 s, which is less than 1/5 of the average execution time of the ADABINS method under GPU acceleration, and the average execution time of the proposed method on platform 2 is 0.048 s. The results show that the proposed method can achieve a real-time processing speed of 20 FPS on low- and medium-performance processors.

## V Discussion

In this paper, an explicit method for fast monocular depth recovery in long corridor scenes is proposed. By extracting key information from long corridor scenes, the method estimates the pitch angle and the depths of scene edge points by minimizing geometric residuals, and classifies spatial points by constructing depth planes; in this way, the monocular depth estimation problem is transformed into a solvable optimization problem, and fast monocular depth estimation for long corridor scenarios is achieved. We collected corridor images to construct datasets under various scenarios, tested the performance of the method experimentally, and conducted a precision comparison test with a deep-learning-based method. The test results show that: 1. The accuracy of the explicit method for fast monocular depth recovery in long corridor scenarios reaches the advanced level of existing monocular depth estimation algorithms; 2. The fast monocular depth recovery method greatly accelerates the depth recovery process for a single image; on low- and medium-performance processors, it can perform real-time depth estimation of corridor scenes at a speed of 20 FPS. Although the method works well in the experimental scenarios, there are still some limitations: 1. Because a considerable amount of scene prior information is used, the applicable scenarios of the proposed method are limited; it is most suitable for corridors that are closed and straight; 2. Due to the ground plane assumption, the performance of the method decreases when a small slope exists in the corridor, although this scenario is rare in reality; 3. Because of perspective geometry, the accuracy is low at farther distances, where there is a cumulative drift in height and lateral position, as shown in Figure 8, and the drift needs to be compensated empirically. We are trying to replace straight lines with curve detection and to simply model the height of the ground to tackle the above limitations, so that the proposed method can also perform well in scenes with curvature. The proposed algorithm can be used as an auxiliary module in the autonomous navigation and positioning systems of Unmanned Aerial Vehicles (UAVs) and Unmanned Ground Vehicles (UGVs), such as Simultaneous Localization and Mapping (SLAM) systems, and in other applications of monocular camera systems. Several possible applications that we can think of are: 1. Safe UAV flight inside corridors; 2. 3D positioning of surveillance cameras in corridors; 3. SLAM for delivery robots in corridors. In the future, we will try to apply this method to the above applications effectively. In addition, a small deep neural network for segmentation and classification could be trained and combined with this method to address the problem of limited applicable scenarios.
Finally, the theory of our method is suitable for unsupervised training of depth estimation models.
2309.06070
QED at NNLO and beyond for precision experiments
Low-energy experiments allow for some of the most precise measurements in particle physics, such as $g-2$. To make the most of these experiments, theory needs to match the experimental precision. Over the last decade, this meant that even in QED next-to-next-to-leading order calculations (or even more in some cases) became necessary. McMule (Monte Carlo for MUons and other LEptons) is a framework that we have developed to obtain NNLO predictions for a number of processes, such as $e\mu \to e\mu$, $ee\to ee$, and $\mu\to e\nu\bar\nu$. I will discuss some of the challenges faced when dealing with QED corrections and some possible solutions we have implemented in McMule, namely the subtraction scheme FKS$^\ell$, massification, and next-to-soft stabilisation. I will also demonstrate how to calculate the three-loop massification constant that will be required at N$^3$LO.
Yannick Ulrich
2023-09-12T09:09:22Z
http://arxiv.org/abs/2309.06070v1
# QED at NNLO and beyond for precision experiments ###### Abstract: Low-energy experiments allow for some of the most precise measurements in particle physics, such as \(g-2\). To make the most of these experiments, theory needs to match the experimental precision. Over the last decade, this meant that even in QED next-to-next-to-leading order calculations (or even more in some cases) became necessary. McMule (Monte Carlo for MUons and other LEptons) is a framework that we have developed to obtain NNLO predictions for a number of processes, such as \(e\mu\to e\mu\), \(ee\to ee\), and \(\mu\to e\nu\bar{\nu}\). I will discuss some of the challenges faced when dealing with QED corrections and some possible solutions we have implemented in McMule, namely the subtraction scheme FKS\({}^{\ell}\), massification, and next-to-soft stabilisation. I will also demonstrate how to calculate the three-loop massification constant that will be required at N\({}^{3}\)LO.

## 1 Introduction

Current and future low-energy precision experiments such as MUonE [1] (\(e\mu\to e\mu\)), PRad [2, 3] and MUSE [4, 5] (\(\ell p\to\ell p\)) will require unprecedented precision from the theory side to reach their full potential. The dominant corrections in these experiments come from QED rather than QCD, which only enters non-perturbatively in the description of protons and pions. The level of precision reached by these experiments necessitates going beyond next-to-leading order (NLO) to at least next-to-next-to-leading order (NNLO). McMule is a framework to perform these calculations. In Table 1, we list the processes currently implemented in McMule as well as the order at which they are known. In these proceedings, we will briefly review the tools used in McMule to make these calculations possible and efficient. These include the subtraction scheme (Section 2), massification (Section 3), and next-to-soft (NTS) stabilisation (Section 4). Next, we will discuss steps towards N\({}^{3}\)LO in Section 5 before concluding in Section 6.

## 2 FKS\({}^{\ell}\) subtraction

As part of the calculation of higher-order corrections, we naturally have to include divergent real corrections. Since we want to perform this integration numerically, we need a prescription to deal with these singularities. In QED, one typically treats fermions as massive, rather than massless as is common in QCD calculations. This means that we only have to treat soft singularities, i.e. \(E_{\gamma}=\sqrt{s}/2\times\xi\to 0\), with the collinear ones being regulated by the fermion mass. The simple structure of soft singularities was demonstrated by Yennie, Frautschi, and Suura (YFS) in their seminal paper [17]. When considering a process with a real photon emission, the matrix element (squared) can be approximated for soft photons as \[\mathcal{M}_{n+1}^{(\ell)}=\mathcal{E}\,\mathcal{M}_{n}^{(\ell)}+\mathcal{O}( \xi)\,. \tag{1}\] \begin{table} \begin{tabular}{l|l|l|l} **Process** & **order** & **comment** & **reference** \\ \hline \(\mu\to e\nu\bar{\nu}\) & NNLO & polarised & [6] \\ \(\mu\to e\nu\bar{\nu}\gamma\) & NLO & polarised & [7] \\ \(\mu\to e\nu\bar{\nu}ee\) & NLO & polarised & [8] \\ \(\mu\to eX\) & NLO & polarised & [9] \\ \hline \(\ell p\to\ell p\) & NNLO & FFs at NLO & [10] \\ \(e\mu\to e\mu\) & NNLO & massified & [11] \\ \(ee\to\tau\tau\) & dom. NNLO + & polarised & [12] \\ & NLO EW & & \\ \hline \(e^{+}e^{-}\to e^{+}e^{-}\) & ph.
NNLO & massified & [13] \\ \(e^{-}e^{-}\to e^{-}e^{-}\) & NNLO & massified & [14] \\ \hline \(ee\to\gamma\gamma\) & ph. NNLO & massified & [15] \\ \end{tabular} \end{table} Table 1: Processes implemented in McMule. “dom.” implies that only the dominant corrections are implemented (see eg. [16] for a precise definition). “ph.” means only photonic corrections (i.e. those without closed fermion loops) are implemented. \(\ell p\to\ell p\) is implemented with form factors at NLO as indicated in the comments columns. Here, we use \({\cal M}_{n}^{(\ell)}\) to denote the \(\ell\)-loop \(n\)-particle matrix element (squared) and defined the rescaled photon energy \(\xi=2E_{\gamma}/\sqrt{s}\). We have further defined the eikonal factor \({\cal E}\) that encodes the angular distribution of the emitted photon which can be constructed trivially as \[{\cal E}=-\sum_{ij}Q_{i}Q_{j}\frac{p_{i}\cdot p_{j}}{(p_{i}\cdot p_{\gamma})(p _{j}\cdot p_{\gamma})}\,, \tag{2}\] with fermion momenta (charges) \(p_{i}\) (\(Q_{i}\)). For simplicity, we have assumed all fermion momenta are incoming; the realistic case can be obtained by setting \(p_{i}\to-p_{i}\) as required. Integrating \({\cal E}\) over the one-particle phase space, we get the integrated eikonal \(\hat{\cal E}\) which contains the (dimensionally regulated) soft singularity. Naturally, the singularity in the \(\hat{\cal E}\) will exactly match the one in the one-loop virtual contribution \({\cal M}_{n}^{(1)}\). However, the statement of YFS is even stronger. All virtual singularities are subtracted to all loop orders by \(\hat{\cal E}\) \[e^{\hat{\cal E}}\sum_{\ell=0}^{\infty}{\cal M}_{n}^{(\ell)}=\sum_{\ell=0}^{ \infty}{\cal M}_{n}^{(\ell)f}\,, \tag{3}\] where we have introduced the finite, eikonal-subtracted matrix element \({\cal M}_{n}^{(\ell)f}\). We make use of this to construct the FKS\({}^{\ell}\) scheme [6] that is based on the original FKS scheme [18, 19]. The resulting subtraction scheme works at all orders in perturbation theory. We begin by reviewing the basics of the FKS scheme at NLO before discussing the all-order statement. For details of the derivation, we refer the reader to [6]. At NLO, we have two contributions \[\sigma^{(1)}=\int{\rm d}\Phi_{n}\ {\cal M}_{n}^{(1)}+\int{\rm d}\Phi_{n+1}\ {\cal M}_{n+1}^{(0)}\,. \tag{4}\] Both the flux factor and the measurement function are implicit for simplicity. In the type of slicing scheme that is commonly used in QED calculations, one introduces a cut-off \(\xi_{s}=2E_{s}/\sqrt{s}\) to split the second integral \[\sigma^{(1)}=\int{\rm d}\Phi_{n}\ {\cal M}_{n}^{(1)}+\int_{0}^{\xi_{s}}{\rm d }\Phi_{n+1}\ {\cal M}_{n+1}^{(0)}+\int_{\xi_{s}}{\rm d}\Phi_{n+1}\ {\cal M}_{n+1}^{(0)}\,. \tag{5}\] In the second integral, \({\cal M}_{n+1}^{(0)}\) is approximated as \({\cal EM}_{n}^{(0)}\) so that the integrated eikonal can be used. This results in \[\sigma^{(1)}=\int{\rm d}\Phi_{n}\ \underbrace{\left({\cal M}_{n}^{(1)}+\hat{ \cal E}{\cal M}_{n}^{(0)}\right)}_{{\cal M}_{n}^{(1)f}}+\int_{\xi_{s}}{\rm d} \Phi_{n+1}\ {\cal M}_{n+1}^{(0)}\,, \tag{6}\] which is manifestly finite. However, since \(\xi_{s}\) needs to be chosen small enough for the eikonal approximation to be valid, the numerical integration over \({\rm d}\Phi_{n+1}\) can be challenging. 
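As a concrete aside before turning to the FKS construction, the eikonal factor of Eq. (2) is straightforward to evaluate numerically. The sketch below uses the mostly-minus metric and treats all fermion momenta as incoming, as in the text; the momenta and charges are placeholder inputs.

```python
import numpy as np

def mdot(p, q):
    """Minkowski product with metric (+,-,-,-)."""
    return p[0] * q[0] - np.dot(p[1:], q[1:])

def eikonal(momenta, charges, k):
    """Eikonal factor of Eq. (2) for a soft photon with momentum k:
    E = - sum_{i,j} Q_i Q_j (p_i . p_j) / ((p_i . k)(p_j . k)),
    with all fermion momenta taken as incoming (flip signs for outgoing legs)."""
    E = 0.0
    for p_i, Q_i in zip(momenta, charges):
        for p_j, Q_j in zip(momenta, charges):
            E -= Q_i * Q_j * mdot(p_i, p_j) / (mdot(p_i, k) * mdot(p_j, k))
    return E
```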
In FKS, we instead subtract and add back a counter term that is constructed from \({\cal E}\) as follows \[\sigma^{(1)}=\int{\rm d}\Phi_{n}\ \underbrace{\left({\cal M}_{n}^{(1)}+\hat{ \cal E}(\xi_{c}){\cal M}_{n}^{(0)}\right)}_{{\cal M}_{n}^{(1)f}}+\int{\rm d} \Phi_{n+1}\ \Big{(}\frac{1}{\xi}\Big{)}_{c}(\xi{\cal M}_{n+1}^{(0)})\,, \tag{7}\] where we have defined the \(c\)-distribution acting on \(\xi\mathcal{M}_{n+1}^{(0)}\) \[\int_{0}^{1}\mathrm{d}\xi\left(\frac{1}{\xi}\right)_{c}f(\xi)\equiv \int_{0}^{1}\mathrm{d}\xi\,\frac{f(\xi)-f(0)\theta(\xi_{c}-\xi)}{\xi}\,. \tag{8}\] Here, we have introduced an unphysical parameter \(\xi_{c}\) that can easily be confused with the slicing parameter \(\xi_{s}\). However, \(\xi_{c}\) does not need to be small since it merely controls when subtraction takes place. Varying it serves as a useful check for the implementation, but it can be chosen \(\xi\sim 0.3\) which alleviates the numerical issues of the slicing approach. The prescription in (7) is finite as the distribution regulates the singular behaviour of \(\mathcal{M}_{n+1}^{(0)}\) in the limit \(\xi\to 0\). We can extend this scheme fairly easily to all-orders (cf. [6] for the explicit construction at NNLO and N\({}^{3}\)LO) \[\sigma^{(\ell)} =\sum_{j=0}^{\ell}\int\mathrm{d}\Phi_{n+j}\frac{1}{j!}\Bigg{[} \prod_{i=1}^{j}\left(\frac{1}{\xi_{i}}\right)_{c}\Bigg{]}\mathcal{M}_{n+1}^{( \ell-j)f}\,, \tag{9}\] \[\mathcal{M}_{m}^{(\ell)f} =\sum_{j=0}^{\ell}\frac{\hat{\mathcal{E}}^{j}}{j!}\mathcal{M}_{ m}^{(\ell-j)}\,. \tag{10}\] This scheme has been successfully used for all NNLO calculations in McMule (cf. Table 1). We will come back to N\({}^{3}\)LO application in Section 5. ## 3 Massification In the previous section, we have exploited the fact that fermions are massive to simplify the infrared structure. However, this comes at a not insignificant cost when computing the required matrix elements. While much progress has been made, including methods presented at this very conference, the calculation of the two-loop \(\mathcal{M}_{n}^{(2)}\) with full mass dependence is still extremely challenging. Luckily, it is rarely needed since the electron mass \(m_{e}\) is much smaller than most other masses (such as the muon or proton masses \(m_{\mu}\) or \(m_{p}\), respectively) or kinematic variables (such as \(s\), \(t\), and \(u\)), collectively denoted as \(S\). This means that a leading power (LP) approximation in \(m_{e}^{2}/S^{2}\) is often more than sufficient. The error introduced by dropping these polynomially suppressed terms at two-loop is usually \(\mathcal{O}(10^{-3})\) on the NNLO coefficient or \(\mathcal{O}(10^{-5})\) on the total cross section. Massification is a SCET-inspired factorisation of the amplitude into hard, collinear, and soft modes \[\mathcal{A}(m_{e})=\mathcal{S}\times\prod_{i}\sqrt{Z_{i}}\times \mathcal{A}(0)+\mathcal{O}(m_{e})\,. \tag{11}\] The hard contribution corresponds to the amplitude \(\mathcal{A}(0)\) where \(m_{e}=0\) and can usually be obtained from existing calculations. The collinear modes are accounted for by a factor of \(\sqrt{Z_{i}}\) for each light external particle. They are not process dependent and can be re-used between different calculations after obtaining them in a matching calculation. Finally, the soft contribution \(\mathcal{S}\) is process dependent, but one can show that only diagrams with closed electron loops contribute. 
This means that, once \(Z_{i}\) is known, we only need to consider closed fermion loops for \(\mathcal{S}\) which significantly reduces the complexity of obtaining (and eventually evaluating) \(\mathcal{A}(m_{e})\). However, since McMule deals with energy scales below 10 GeV, we also need to consider hadronic loops. These cannot be calculated perturbatively and instead need to be obtained from data and integrated numerically. McMule uses either a dispersive approach [20] or a hyperspherical one [21, 22] for these integrals. Since a numerical integration is required anyway, we also use it to calculate closed electron loops with full dependence on all masses. This means that we never have to analytically calculate \(\mathcal{S}\). As mentioned before, the \(\sqrt{Z_{i}}\) are universal and can be calculated in the method of regions as was done in [23]. However, a simpler approach is to simply solve (11) for \(Z_{i}\) for a simple process where both \(\mathcal{A}(m_{e})\) and \(\mathcal{A}(0)\) are known. The form factors of \(\gamma^{*}(Q^{2})\to ee\) are the ideal environment for this as they were recently calculated semi-numerically to three-loop. By expanding the form factors for \(Q^{2}\gg m_{e}^{2}\) we can obtain the photonic parts of \(Z\) at three-loop accuracy \[Z\Big{|}_{\text{ph.}}=\frac{\mathcal{A}(Q^{2}\gg m_{e}^{2})}{\mathcal{A}(0)} \Big{|}_{\text{ph.}}\,. \tag{12}\] Using the result of [24] for \(\mathcal{A}(0)\) and of [25, 26] for \(\mathcal{A}(Q^{2}\gg m_{e}^{2})\) we can obtain an analytic answer for \(Z\) except for the finite three-loop part which contains a numeric constant which is known to high accuracy. The full expression for \(Z\) in the convention of [23] can be found in Appendix A and attached to this submission in electronic form. Even though (15) contains a numerical constant rather than just transcendental constants, it can still be used for calculations. It will eventually allow us to perform approximate N\({}^{3}\)LO calculations for processes where the full mass dependence of the triple-virtual is firmly out of reach such as \(ee\to ee\). ## 4 Next-to-soft stabilisation For the real-virtual corrections we need the real-virtual matrix element \(\mathcal{M}_{n+1}^{(1)}\). Thanks to the tremendous progress made in the automation of one-loop calculations, obtaining this is fairly straightforward using OpenLoops [27, 28] and Collier [29]. However, the vast majority of phase space points that need to be evaluated will have a soft and/or collinear photon. While OpenLoops can handle these, the numerical stability and speed suffer as the rescue and stability system is not well suited for QED calculations. A way to solve this problem is to expand \(\mathcal{M}_{n+1}^{(1)}\) in the problematic region, namely for soft emission \(\xi\to 0\). At LP, this results in the well-known eikonal we have encountered in Section 2. Extending the expansion to next-to-leading power (NLP), i.e. also include terms that \(\mathcal{O}(\xi^{-1})\), allows us to use the expansion earlier, improving speed and stability. At tree-level, the universal nature of this next-to-soft (NTS) expansion was proven by Low, Burnett, and Kroll [30, 31] (LBK) for unpolarised scattering. 
This was later extended first to one-loop [32] and then to all-orders [33], though still only for QED with massive fermions \[\mathcal{M}_{n+1}=\Bigg{[}\frac{1}{\xi^{2}}\mathcal{E}+\frac{1}{ \xi}\sum_{ij}Q_{i}Q_{j}\frac{p_{i}\cdot\tilde{D}_{j}}{p_{i}\cdot p_{\gamma}}+ \frac{1}{\xi}\sum_{i,j,k\neq j}Q_{k}Q_{j}^{2}Q_{i}\Big{(}\frac{p_{j} \cdot p_{i}}{(p_{\gamma}\cdot p_{j})(p_{\gamma}\cdot p_{i})}\] \[-\frac{p_{k}\cdot p_{i}}{(p_{\gamma}\cdot p_{k})(p_{\gamma}\cdot p _{i})}\Big{)}2S^{(1)}(p_{j},p_{k},p_{\gamma})+\mathcal{O}(\xi^{0})\Bigg{]} \mathcal{M}_{n}\,. \tag{13}\] Here, we have introduced the LBK differential operator \(\tilde{D}_{j}\) \[\tilde{D}_{j}^{\mu}=\sum_{L}\Big{(}\frac{p_{j}^{\mu}}{p_{j}\cdot p _{\gamma}}p_{\gamma}\cdot\frac{\partial s_{L}}{p_{j}}-\frac{\partial s_{L}}{ \partial p_{j}^{\mu}}\Big{)}\frac{\partial}{\partial s_{L}} \tag{14}\] that takes derivatives of the non-radiative matrix element w.r.t. invariants \(s_{L}\) (see [33] for further information on this). We have also introduced the one-loop exact soft function \(S^{(1)}(p_{j},p_{k},p_{\gamma})\) which can be found calculated in [32]. To study these approximations, we consider the process \(e^{+}e^{-}\to e^{+}e^{-}\gamma\) at one-loop and compare an arbitrary precision calculation with OpenLoops in double precision, OpenLoops in quadruple precision, the LP expansion, and the NLP expansion. The results are shown in Figure 1. One can clearly see that double precision is insufficient around \(\xi=10^{-3}\) while quadruple precision is always acceptable. However, the LP approximation only becomes reliable around \(\xi=10^{-7}\). The NLP expansion allows us to construct an implementation Figure 1: The different implementations of the one-loop matrix element for \(ee\to ee\gamma\) of the matrix element that is always stable and much faster than quadruple precision by using double precision OpenLoops up to \(\xi\simeq 10^{-3}\) before switching to the NLP expansion of (13). This was first demonstrated in [13, 14] in the context of \(ee\to ee\) where it allowed us to implement the first NNLO calculation of Bhabha and Moller scattering. It was later used for \(ee\to\tau\tau\)[12] and \(e\mu\to e\mu\)[11]. It is possible to extend (13) to polarised processes at the cost of simplicity. For real-virtual corrections, (13) only depends on the one-loop and tree-level reduced matrix elements \(\mathcal{M}_{n}^{(1)}\) and \(\mathcal{M}_{n}^{(0)}\). It was shown in [12] that when considering this expansion in the method of regions, the hard region gets an additional contribution from the polarisation. ## 5 Towards N\({}^{3}\)Lo While we have developed these tools for NNLO calculation, they can all be taken to N\({}^{3}\)LO. The subtraction scheme works at all orders, as does NTS stabilisation. Of course, the main bottleneck is the availability of the matrix elements which makes true \(2\to 2\) processes at N\({}^{3}\)LO extremely difficult in QED. However, we can consider eg. only the initial-state corrections to \(ee\to\gamma^{*}(\to\mu\mu,\pi\pi,...)\) or similarly the electron-line corrections [16] to \(\mu\)-\(e\) scattering. This is possible because the heavy-quark form factor is known at three-loop accuracy which constitutes the triple-virtual corrections of these (sub)processes. The real-virtual-virtual corrections, i.e. \(ee\to\gamma\gamma^{*}\), have been known in the limit \(m_{e}\to 0\) for many years as part of the NNLO corrections to three-jet production [34, 35] and have recently been recalculated [36, 37]. 
Massification allows us to recover the mass effects in the bulk of the phase space where \(m_{e}^{2}\ll S^{2}\). However, for soft and/or collinear emission, \(p_{i}\cdot p_{\gamma}\) may become comparable or even smaller than \(m_{e}\). For soft emission, the NLP expansion of (13) is sufficient to address this issue. For collinear emission, a similar expansion which we call jettification is needed. This has been demonstrated at LP and one-loop in [32]. Extending this to two-loop is the last missing ingredient for the real-virtual-virtual. The interplay between the different expansion is shown in Figure 2. With NTS stabilisation and OpenLoops, the real-real-virtual corrections should be feasible as well once NTS stabilisation is extended to two soft emissions. The last remaining part of the calculation, the triple-real correction, is unlikely to form a bottleneck either. ## 6 Conclusion We have reviewed the important theoretical underpinnings of the McMule framework. Phenomenological results for \(\mu\)-\(e\) scattering can be found elsewhere in these proceedings [38]. These tools allow for the systematic calculation of NNLO corrections in QED, similar to what happened in QCD some years ago. We can now say with confidence that the NNLO era has arrived, not only for QCD but also for QED with massive fermions. Figure 2: The arrangement of the different approximations for \(\mathcal{M}_{n+1}^{(2)}\). AcknowledgementI acknowledge support by the UK Science and Technology Facilities Council (STFC) under grant ST/T001011/1. I would like to thank Fabian Lange and Kay Schonwald for their help extracting the relevant parts of \(\mathcal{A}(Q^{2}\gg m_{e}^{2})\). Finally, a huge thank you to my collogues in the McMule Team for their support developing and implementing this framework. 
## Appendix A The massification constant at three-loop The equation (12) evaluates to \[\begin{split}\sqrt{Z_{i}}=1&+a\Bigg{[}\frac{1}{ \epsilon^{2}}+\frac{1}{2\epsilon}+\zeta_{2}+2+\Big{(}4+\frac{1}{2}\zeta_{2} \Big{)}\epsilon+\Big{(}8+2\zeta_{2}+\frac{7}{4}\zeta_{4}\Big{)}\epsilon^{2}\\ &\qquad+\Big{(}16+4\zeta_{2}+\frac{7}{8}\zeta_{4}\Big{)} \epsilon^{3}+(32+8\zeta_{2}+\frac{7}{2}\zeta_{4}+\frac{31}{16}\zeta_{6}) \epsilon^{4}+\mathcal{O}(\epsilon^{5})\Bigg{]}\\ &+a^{2}\Bigg{[}\frac{1}{2\epsilon^{4}}+\frac{1}{2\epsilon^{3}}+ \frac{1}{\epsilon^{2}}\Big{(}\frac{51}{24}+\zeta_{2}\Big{)}+\frac{1}{ \epsilon}\Big{(}\frac{43}{8}-2\zeta_{2}+6\zeta_{3}\Big{)}\\ &\qquad+\frac{369}{16}+\frac{61}{4}\zeta_{2}-18\zeta_{4}-24 \zeta_{2}\ \log 2-3\zeta_{3}\\ &\qquad+\Big{(}-\frac{173}{32}+\frac{221}{4}\zeta_{2}-12\zeta_{ 2}\ \log 2+49\zeta_{3}+4\log^{4}2+48\log^{2}2\ \zeta_{2}+96a_{4}\\ &\qquad\qquad-\frac{351}{2}\zeta_{4}-18\zeta_{2}\ \zeta_{3}-3\zeta_{5}\Big{)}\epsilon\\ &\qquad+\Big{(}\frac{2841}{64}+\frac{2751}{8}\zeta_{2}-288\zeta _{2}\ \log 2+161\zeta_{3}+2\log^{4}2+24\zeta_{2}\ \log^{2}2+48a_{4}\\ &\qquad\qquad+\frac{387}{4}\zeta_{4}-\frac{24}{5}\log^{5}2+252 \zeta_{4}\ \log 2-87\zeta_{2}\ \zeta_{3}+576a_{5}-\frac{1431}{2}\zeta_{5}\\ &\qquad\qquad+4\zeta_{2}\ \log^{4}2-96\zeta_{3}\ \log^{3}2-60\zeta_{4}\ \log^{2}2+84\zeta_{2}\ \zeta_{3}\ \log 2+96\zeta_{2}\ a_{4}\\ &\qquad\qquad-\frac{81}{2}\zeta_{3}^{2}-613\zeta_{6}\Big{)} \epsilon^{2}+\mathcal{O}(\epsilon^{3})\Bigg{]}\\ &+a^{3}\Bigg{[}\frac{1}{6\epsilon^{6}}+\frac{1}{4\epsilon^{5}}+ \frac{1}{\epsilon^{4}}\Big{(}\frac{9}{8}+\frac{1}{2}\zeta_{2}\Big{)}+\frac{1}{ \epsilon^{3}}\Big{(}\frac{163}{48}-\frac{9}{4}\zeta_{2}+6\zeta_{3}\Big{)}\\ &\qquad+\frac{1}{\epsilon^{2}}\Big{(}\frac{39}{2}+\frac{103}{8} \zeta_{2}-24\zeta_{2}\ \log 2-\frac{151}{8}\zeta_{4}\Big{)}\\ &\qquad+\frac{1}{\epsilon}\Big{(}-\frac{77}{24}+\frac{915}{16} \zeta_{2}-24\zeta_{2}\ \log 2+\frac{425}{6}\zeta_{3}+4\log^{4}2+48\zeta_{2}\ \log^{2}2+96a_{4}\\ &\qquad\qquad-\frac{2709}{16}\zeta_{4}-\frac{52}{3}\zeta_{2}\ \zeta_{3}-43\zeta_{5}\Big{)}\\ &\qquad-342.0591735940860642547644580773479603199\ +\mathcal{O}(\epsilon)\Bigg{]}+\mathcal{O}(a^{4})\,.\end{split} \tag{15}\] We have used the conventional zeta function as well as \(a_{n}=\text{Li}_{n}(\frac{1}{2})\). This result is also available in electronic form attached to this submission. The numerical constant is exact to forty digits.
2309.16263
Cooperation Dynamics in Multi-Agent Systems: Exploring Game-Theoretic Scenarios with Mean-Field Equilibria
Cooperation is fundamental in Multi-Agent Systems (MAS) and Multi-Agent Reinforcement Learning (MARL), often requiring agents to balance individual gains with collective rewards. In this regard, this paper aims to investigate strategies to invoke cooperation in game-theoretic scenarios, namely the Iterated Prisoner's Dilemma, where agents must optimize both individual and group outcomes. Existing cooperative strategies are analyzed for their effectiveness in promoting group-oriented behavior in repeated games. Modifications are proposed where encouraging group rewards will also result in a higher individual gain, addressing real-world dilemmas seen in distributed systems. The study extends to scenarios with exponentially growing agent populations ($N \longrightarrow +\infty$), where traditional computation and equilibrium determination are challenging. Leveraging mean-field game theory, equilibrium solutions and reward structures are established for infinitely large agent sets in repeated games. Finally, practical insights are offered through simulations using the Multi Agent-Posthumous Credit Assignment trainer, and the paper explores adapting simulation algorithms to create scenarios favoring cooperation for group rewards. These practical implementations bridge theoretical concepts with real-world applications.
Vaigarai Sathi, Sabahat Shaik, Jaswanth Nidamanuri
2023-09-28T08:57:01Z
http://arxiv.org/abs/2309.16263v3
# Cooperation Dynamics in Multi-Agent Systems: Exploring Game-Theoretic Scenarios with Mean-Field Equilibria* ###### Abstract Cooperation is fundamental in Multi-Agent Systems (MAS) and Multi-Agent Reinforcement Learning (MARL), often requiring agents to balance individual gains with collective rewards. In this regard, this paper aims to investigate strategies to invoke cooperation in game-theoretic scenarios, namely the Iterated Prisoner's Dilemma, where agents must optimize both individual and group outcomes. Existing cooperative strategies are analyzed for their effectiveness in promoting group-oriented behavior in repeated games. Modifications are proposed where encouraging group rewards will also result in a higher individual gain, addressing real-world dilemmas seen in distributed systems. The study extends to scenarios with exponentially growing agent populations (\(N\rightarrow\infty\)), where traditional computation and equilibrium determination are challenging. Leveraging mean-field game theory, equilibrium solutions and reward structures are established for infinitely large agent sets in a model-based scenario. Finally, practical insights are offered through simulations using the Multi Agent-Posthumous Credit Assignment trainer, and the paper explores adapting simulation algorithms to create scenarios favoring cooperation for group rewards. These practical implementations bridge theoretical concepts with real-world applications. _Keywords-- Game Theory, Iterated Prisoner's Dilemma, Mean-field Game, Multi-Agent Cooperation, Multi-Agent Reinforcement Learning, Multi-Agent Systems._

## I Introduction

Cooperation is a cornerstone in the realm of Multi-Agent Systems (MAS) and Multi-Agent Reinforcement Learning (MARL), playing a pivotal role in scenarios where agents must strike a delicate balance between individual gains and collective rewards. While off-the-shelf MARL algorithms have historically navigated complex dynamic interactions in multi-agent environments, the rising prominence of model-based MARL is evident due to its increased utility and potential for enhanced real-world tasks [1]. With MARL's ubiquity in real-world use cases like autonomous vehicles [2], healthcare [3], and broader game theory applications [4], delving into these algorithms and exploring novel strategies for refining their efficiency becomes imperative. The curse of dimensionality is a challenge when dealing with scenarios where \(N\gg 2\), but when \(N\rightarrow+\infty\), the learning becomes tractable via mean-field approximation. In parallel, the simulation and validation of such environments, underpinned by theoretical concepts, have spurred the development of various algorithms that cover diverse facets of MARL. One such effort is from Unity Technologies [5], which introduced an intriguing addition to their ML-Agents toolkit--the MA-POCA (Multi-Agent Posthumous Credit Assignment) trainer. This effort aims to address challenges in nurturing agents' comprehension of their group contributions, even in the event of their demise and subsequent low rewards [6]. By encouraging agents to prioritize the group's welfare over individual gain, MA-POCA presents a compelling approach. However, the adaptability of decision-making scenarios to harmonize individual objectives with group rewards warrants exploration, optimizing alignment between the two. This work aims to make the following contributions: (i).
Investigate strategies for institating cooperation within an iterated prisoner's dilemma while optimizing agent interests. (ii). Extend this principle to an NIPD (N-player Iterated Prisoner's Dilemma), formulating optimal reward structures and equilibrium strategies through the lens of mean-field game theory. (iii). Offer practical insights into the simulation of such environments, analyze existing MARL algorithms, and explore the scope for adapting them into the dynamic scenarios as envisioned above. The subsequent sections are organized as follows: Section II surveys the prior works in the literature which address the challenges and goals similar to that of this study. Section III provides a comprehensive exposition of the proposed model-based approaches and highlights the underlying mathematical foundations behind these approaches and their anticipated functionality. Section IV delves into the results, showcasing how these theoretically derived strategies translate into practical applications, and examines the explored algorithms underpinning them. Section V concludes this work, offering insights into the challenges not addressed by this study and charting promising directions for future research. ## II Related Work In recent years, the intersection of game theoretic models and reinforcement learning has garnered attention for optimization purposes. A notable contribution by authors [7] uses the Stackelberg game structure, offering enhancements to the traditional actor-critic-based reinforcement learning. Meanwhile, another study [8] delved into equilibrium scenarios within the iterated prisoner's dilemma using various game-theoretic approaches, resulting in optimal strategies. However, this work primarily focuses on memory-one strategies and confines its analysis to scenarios where the inequality \(2R>T+S\) holds true. In contrast, [9] explored an N-player iterated prisoner's dilemma, proposing dynamic interconnected topologies to foster cooperation within the N-player system. Furthermore, [10] introduced a novel approach employing signed networks to establish negative links between players in the game, effectively promoting cooperation. It's worth noting that this methodology is tailored to the two-player version of the game, rendering it less suitable for the more complex N-player iterative version where invoking cooperative behavior becomes more challenging. Notably, existing literature tends to explore the treatment of repeated game scenarios as mean-field games to a lesser extent. The authors in [11] highlight several topics of interest in MARL from a game theoretical perspective, including learning in zero-sum games, general-sum games, and the inclusion of mean-field when the number of agents is large. Studying equilibrium scenarios within such dynamic environments holds a significant promise for real-time applications. The following section outlines our proposed approach, initially deriving equilibrium strategies in scenarios not requiring a mean-field approach. Subsequently, this approach is extended to a mean-field scenario, thereby scaling the solution to identify optimal strategies crucial for achieving equilibrium. This approach contributes to the burgeoning field of game-theoretic reinforcement learning, while paving way for practical applications in dynamic, multi-agent environments. 
## III Proposed Methodology This section highlights the proposed strategic approaches towards the different scenarios considered--Iterated Prisoner's Dilemma, and the N-Player Iterated Prisoner's Dilemma using a model scenario. A detailed overview of the mathematical foundations behind these strategies, and how they are expected to influence agents' behavior in comparison to current strategies such as Win-Stay Lose-Shift and Grim strategy are explored, and further discusses the approaches which can be taken in the formulation of reinforcement learning algorithmic strategies which would be crucial in the simulation of such environments. ### _Iterated Prisoner's Dilemma_ The normal form of the prisoner's dilemma and the iterated version is provided in Table 1. R is the reward to each player for cooperation, T is the temptation payoff for the defector when the other player cooperates, S is the sucker's payoff for cooperation when the other player defects and P is the punishment when both players defect. To ensure the nature of the game, the inequalities \(T>R\)\(>P>S\) and \(2R>T+S\) should be met. This results in the agents being forced to cooperate, as mutual cooperation is better than defection. But this means that each agent must sacrifice getting the maximum reward in each iteration to ensure the betterment of the group. Here, a new strategy is proposed, by crafting a scenario where the cooperation group reward is lesser than the betrayal scenario where one cooperates and other defects, i.e., \(2R<T+S\), the agents can be encouraged to take turns obtaining the maximum reward for \(i\) iterations, without sacrificing the group reward. In this scenario, if the current choice tuple is (C, D), yielding a reward of (S, T), and the agents agree to alternatively defect, then the strategy for the next iteration can be (D, C), yielding a reward of (T, S). Each agent thus takes turns obtaining the maximum reward while also ensuring that the group reward is maximized. An agent who has chosen to defect in the current iteration will not deviate from the strategy and defect in the next iteration, as it will not only provide it the lowest possible reward, but in the high chance that the other agent sticks to the agreed strategy, the group reward will also be minimum. But an agent who has cooperated in the current iteration might cooperate in the next, where the other agent's and the group's reward will be minimized. If the other agent stuck to the strategy and was denied maximum reward, then it may retaliate by defecting forever to ensure that its reward will always be greater than or equal to the first agent. This would result in both obtaining the punishment reward, until such a time when they go back to their original strategy. For the agents to not deviate from the strategy, the reward obtained from not deviating from the strategy (alternating between maximum and minimum reward) must be higher than the reward obtained for deviating (maximum reward in current iteration, followed by punishment reward in all others). 
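Before deriving the condition analytically, a small numerical sketch of the comparison just described: it accumulates the discounted payoff of sticking to the alternating strategy versus deviating (one round of maximum reward followed by mutual punishment), for illustrative payoff values satisfying \(T>R>P>S\) and \(2R<T+S\). The closed forms of these two sums are written out in Eqs. (1)–(3) below.

```python
def discounted_payoffs(T, R, P, S, delta, horizon=2000):
    """Accumulate the two payoff streams described in the text.

    stick:   T, S, T, S, ...   (the agent keeps alternating with its partner)
    deviate: T, P, P, P, ...   (it defects again, and both defect thereafter)
    """
    stick = sum((T if n % 2 == 0 else S) * delta ** n for n in range(horizon))
    deviate = T + sum(P * delta ** n for n in range(1, horizon))
    return stick, deviate

# Illustrative payoffs with T > R > P > S and 2R < T + S.
T, R, P, S = 6.0, 3.0, 1.0, 0.5
for delta in (0.05, 0.2, 0.5, 0.9):
    stick, deviate = discounted_payoffs(T, R, P, S, delta)
    print(f"delta={delta:.2f}  stick={stick:.3f}  deviate={deviate:.3f}  "
          f"stick_preferred={stick > deviate}")
```

For sufficiently small discount factors the deviation pays, while for larger ones sticking to the alternation dominates, illustrating the existence of the threshold derived next.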
Considering the discount factor \(\delta\) for future iterations:

* Reward if the agent sticks to the strategy: \[T+S\delta+T\delta^{2}+S\delta^{3}+T\delta^{4}+...=\frac{T+S\delta}{1-\delta^{2}} \tag{1}\]
* Reward if the agent deviates from the strategy: \[T+P\delta+P\delta^{2}+P\delta^{3}+P\delta^{4}+...=T+\frac{P\delta}{1-\delta} \tag{2}\]

For the agent not to deviate from the strategy, we require (1) > (2): \[\frac{T+S\delta}{1-\delta^{2}}>T+\frac{P\delta}{1-\delta}\] Multiplying both sides by \((1-\delta)(1+\delta)\) and simplifying yields \(\delta(T-P)>P-S\), which results in the condition: \[\delta>\frac{P-S}{T-P} \tag{3}\] If the discount factor is above this level, cooperation by continually alternating mutual sacrifice of reward can be achieved. In the Grim Trigger strategy, cooperation is maintained until one player defects, at which point both players defect indefinitely. In contrast, the proposed strategy allows for occasional cooperation by taking turns maximizing rewards, potentially leading to more dynamic and variable outcomes. On the other hand, the Win-Stay Lose-Shift strategy involves staying with the chosen action if it leads to a win and shifting to the opposite if it results in a loss. The proposed methodology is essentially a Win-Shift Lose-Shift strategy, as players shift strategy regardless of the outcome, albeit ensuring that neither picks the same choice at any given iteration. This highlights the importance of a dynamic approach to cooperation, where agents adapt their strategies based on recent interactions to ensure fairness and maximize their own gains over time.

### _Mean-Field Equilibria_

**Model Scenario**: Imagine an intersection with a large population of \(N\rightarrow+\infty\) agents, where each agent is a vehicle waiting to cross the intersection. Each agent has two choices: wait or move. However, the dynamics are influenced by the number of agents who choose to move and a threshold \(i\) (\(0<i<N\)) that limits the maximum number of agents allowed to pass ahead. If the number of agents who chose to move is \(j\leq i\), then all \(j\) agents will make it through, and \(N-j\) agents will wait. If \(j>i\), then none of the \(N\) agents move. The scenario then repeats, with the agents being provided the same choices and knowledge of the previous iteration's outcome.

_Game Framework_: The dynamic plays out as a repeated game, a multi-agent extension of the classic N-player iterated prisoner's dilemma where cooperation and defection are not individual choices but a group outcome, infused with the added complexity of mean-field scenarios. Within this backdrop, the primary objective is to craft a reward system and equilibrium scenarios that ensure, in each iteration, a minimum of \(j\) vehicles traverse the intersection, and that \(j\to i\) is achieved. The parameters underpinning this formulation aim to incentivize altruistic behavior, secure individual rewards for each agent's contribution, foster cooperation while motivating agents to pursue individual rewards, and ultimately maximize collective rewards. Approaching this as a discrete-time mean-field game, several mathematical facets come into play.

_States and Actions_: We define the state variable \(S\) as \(j\), representing the number of agents who opted to move in the preceding iteration. It forms a finite set encompassing all feasible values of \(j\), ranging from 0 to \(N\).
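To fix ideas, the dynamics of the model scenario can be written as a tiny step function: given every agent's wait/move choice, either all movers pass (if \(j\leq i\)) or nobody does (if \(j>i\)), and the count \(j\) becomes the next state. This is an illustrative sketch only; the reward values returned here are placeholders consistent with the ordering described in the Reward Structure paragraph below.

```python
import numpy as np

def intersection_step(actions, i, rewards=(3.0, 2.0, 0.0, 0.5)):
    """One iteration of the intersection game.

    actions : array of 0 (wait) / 1 (move), one entry per agent
    i       : threshold on how many agents may pass
    rewards : (move & j<=i, wait & j<=i, move & j>i, wait & j>i) -- placeholder values
    Returns (next state j, per-agent reward vector).
    """
    a = np.asarray(actions)
    j = int(a.sum())                      # number of agents that chose to move
    r_move_ok, r_wait_ok, r_move_bad, r_wait_bad = rewards
    if j <= i:                            # cooperative case: all movers pass
        r = np.where(a == 1, r_move_ok, r_wait_ok)
    else:                                 # defection case: nobody moves
        r = np.where(a == 1, r_move_bad, r_wait_bad)
    return j, r
```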
Agents, equipped with their knowledge of the current state and their past decisions, navigate the current iteration with a binary action from the action space \(a\): 0 for 'wait' and 1 for 'move.'

_Reward Structure_: The reward structure delineates distinct scenarios:

* If \(j\leq i\) (Cooperative Case):
  * Agents who move receive a good reward for moving.
  * Agents who wait receive a good reward for their cooperative choice.
* If \(j>i\) (Defection Case):
  * Agents who move receive the lowest reward for moving.
  * Agents who wait receive a low reward for not contributing to overcrowding.

In essence, agents reap the highest rewards for actions that minimize the number of agents making the same choice. However, they must also strive to maximize the collective reward, achievable through equilibrium scenarios. It is worth noting that deviating from cooperation after equilibrium is reached invariably proves detrimental to the agent. Consequently, the group reward \(R\) is formulated as: \[R=\sum_{k=0}^{N}P(j,t)\cdot\frac{1}{1+e^{(1-2a_{k})(i-j)}}+B\] Here, \(a_{k}\) signifies the action taken by agent \(k\), \(P(j,t)\) denotes the fraction of agents in state \(j\) at time \(t\), and \(B\) serves as a constant offset.

_Utility Function_: To incentivize cooperative choices, the utility function is formulated to incorporate the disparity \(j-i\), signifying the proximity to or divergence from equilibrium. The utility function takes the form: \[U[j]=-a|j-i|+b\] In this utility function:

* \(a\) is a parameter influencing the agent's inclination for consistency when \(j\) approaches the threshold \(i\), signifying the agent's preference for decisions harmonizing with the threshold. Higher \(a\) values denote a stronger inclination for consistency.
* \(b\) is a constant factor representing the agent's preferences, encapsulating additional factors beyond the \(j-i\) difference. \(b\) may embody a baseline preference for one choice, independent of the threshold.

This utility function captures the interplay between individual preferences and the collective pursuit of equilibrium.

_Mean-Field Distribution Evolution_: Agents make decisions considering the average behavior of the entire population. Here, the state variable \(j\) represents the average behavior of all agents in the population. Agents consider the collective impact of their decisions on the overall state, and this influence affects their individual choices. We track the mean-field distribution \(P(j,t)\), which represents the fraction of agents occupying state \(j\) at time \(t\). In this scenario, \(j\) is the key state variable, and the distribution evolves over time as agents make decisions. At each time step \(t\), \(P(j,t)\) should capture the probability that \(j\) agents have chosen to move in the current iteration, based on the previous iterations' outcomes and the agents' strategies. To capture the evolution of the mean-field distribution, we define how \(P(j,t+1)\) depends on \(P(j,t)\) and on the actions chosen by the agents. In this case, the evolution is described as follows: \[P(j,t+1)=\sum_{a=0}^{1}\sum_{j^{\prime}=0}^{N}T(j^{\prime},j,a,t)\cdot P(j^{\prime},t)\cdot\pi(a,j^{\prime},t)\] where \(T(j^{\prime},j,a,t)\) is the transition probability from state \(j^{\prime}\) at time \(t\) to state \(j\) at time \(t+1\) given action \(a\), and \(\pi(a,j^{\prime},t)\) is the probability that an agent chooses action \(a\) at time \(t\) given the current state \(j^{\prime}\).
The double summation accounts for all possible actions and states of agents that could lead to state \(j\) at time \(t+1\). The probability \(\pi(a,j^{\prime},t)\) is computed using a SoftMax function based on the agents' value functions: \[\pi\left(a,j^{\prime},t\right)=\frac{\exp(V(a,j^{\prime},t))}{\sum_{a^{\prime}=0}^{1}\exp\left(V\left(a^{\prime},j^{\prime},t\right)\right)}\] By iteratively applying this update rule for \(P(j,t)\) over time, we can track how the distribution of states evolves as agents make decisions in each iteration. Equilibrium, in this case, is reached when the distribution stabilizes and the agents' actions are unlikely to change further, since they have found strategies that maximize their expected utility given the distribution and the rewards associated with their actions.

_Bellman Equation_: The Bellman optimality equation represents the optimal value of being in a particular state of a Markov Decision Process (MDP), and the general form of the equation for a single agent can be written as: \[V^{*}(s)=\max_{a}\sum_{s^{\prime}}P(s^{\prime}|s,a)\left[R(s,a,s^{\prime})+\delta V^{*}(s^{\prime})\right]\] where \(V^{*}(s)\) is the value function representing the expected cumulative reward of an agent being in state \(s\) and following an optimal strategy; \(a\) is the action taken in state \(s\) to maximize the expected cumulative reward; \(P(s^{\prime}|s,a)\) is the probability that the MDP transitions from state \(s\) to state \(s^{\prime}\) when taking action \(a\); \(R(s,a,s^{\prime})\) is the immediate reward received when transitioning between states; and \(\delta\) is the discount factor, representing the preference for immediate rewards over future rewards. The equation seeks the action \(a\) that maximizes the expected cumulative reward, thereby optimizing the actions of the agent. In a discrete-time mean-field game, this is used to find the optimal strategy for a rational agent in a population where each agent's behavior influences, and is influenced by, the average behavior of the population. In a population of \(N\) agents indexed by \(i\), let \(x_{i}(t)\) denote the state of agent \(i\) at time \(t\) and \(m(t)\) denote the average behavior of the population; the Bellman equation for a representative agent in this framework can be written as: \[V(x_{i}(t),m(t))=\max_{u_{i}(t)}\left\{J(x_{i}(t),u_{i}(t),m(t))+\delta\,\mathbb{E}[V(x_{i}(t+1),m(t+1))]\right\}\] where \(J(x_{i}(t),u_{i}(t),m(t))\) is the immediate utility of agent \(i\) based on its own state, its strategy, and the current mean field, and \(\mathbb{E}[V(x_{i}(t+1),m(t+1))]\) represents the expected future value for agent \(i\) at time \(t+1\), given the transition dynamics and the anticipated mean field at the next time step, \(m(t+1)\), which essentially captures the long-term objectives of the agent. Consequently, for the model described in this section, the Bellman optimality equation can be written as: \[V(j,t)=\max_{a\in\{0,1\}}\left\{U(a,j)+\delta\,\mathbb{E}[V(j^{\prime},t+1)]\right\}\] where \(U(a,j)\) is the utility for an agent who chooses action \(a\) (0 for wait, 1 for move) given the current state \(j\), and \(\mathbb{E}[V(j^{\prime},t+1)]\) is the expected value of the value function at the next time step \(t+1\), given the transition dynamics of \(j\) and the agents' strategies. Overall, this scenario presents a sophisticated framework for studying multi-agent dynamics and equilibrium in the context of a simple model, where agents must balance their individual preferences with the goal of achieving a collective equilibrium.
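The update of \(P(j,t)\) can be illustrated with a small, self-contained Python sketch. This is a toy instance rather than the exact model: the softmax policy below scores the two actions with the utility \(U[j]=-a|j-i|+b\), the transition kernel assumes agents act independently (so the next move count is Binomial), and the population size and the values of \(a\) and \(b\) are arbitrary assumptions.

```python
import numpy as np
from scipy.stats import binom

N, i = 100, 30          # population size and threshold (assumed values)
a, b = 0.2, 0.0         # utility parameters (assumed values)

def pi_move(j_prev):
    """Assumed softmax policy: probability that an agent moves, given the last move count j_prev."""
    v_move = -a * abs(j_prev + 1 - i) + b
    v_wait = -a * abs(j_prev - 1 - i) + b
    return np.exp(v_move) / (np.exp(v_move) + np.exp(v_wait))

# Mean-field distribution P(j, t): start from "everyone moved" in the previous iteration.
P = np.zeros(N + 1)
P[N] = 1.0
for t in range(50):
    P_next = np.zeros(N + 1)
    for j_prev in range(N + 1):
        if P[j_prev] == 0:
            continue
        p = pi_move(j_prev)
        # Transition kernel T(j_prev -> j): with independent agents, j ~ Binomial(N, p).
        P_next += P[j_prev] * binom.pmf(np.arange(N + 1), N, p)
    P = P_next

print("E[j] after 50 iterations:", float(np.arange(N + 1) @ P))
```

Iterating this update tracks how the distribution of move counts evolves as agents respond to the previous iteration's outcome, in the spirit of the evolution equation above.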
The mathematical formulations applied to the model to analyze its dynamics shed light on how agents can optimize their strategies in this complex environment, and on how cooperation can be induced with the help of simple reward and equilibrium strategies.

### _Posthumous Credit Assignment_

Although the Posthumous Credit Assignment trainer (Cohen et al., 2021) was developed specifically for letting agents know when the group has benefited from their actions after those agents have been terminated, this work considers it from the perspective of providing delayed rewards to an agent that is still active but is not performing a task for the benefit of the group. In the scenario described for the N-player iterated prisoner's dilemma, an agent who is waiting to let the other agents move is provided a delayed reward based on how its actions contributed to the performance of the whole group. For consistency, this will still be referred to as posthumous credit assignment. One observation regarding this methodology is that agents' roles stagnate over time, reducing the dynamism of their decisions. While this can be beneficial when all the individual agents exist solely to serve the purposes and goals of the larger collective, in real-world scenarios each agent will be in a position to prioritize its individual goals as much as the group's. In the context of the mean-field scenario provided earlier, this means that applying the current credit assignment policy would eventually reach an equilibrium where \(j\to i\), but in an environment of \(N\) agents, over many iterations, the same \(j\) agents would choose to move while the same \(N-j\) agents choose to wait. Given that the agents receive the maximum reward when they move (\(j\leq i\)), real-world scenarios would involve each agent wanting to obtain the maximum reward. A workaround is to dynamically interchange the roles of the agents once stability is achieved, i.e., an agent who chose to wait can move in the next iteration while an agent who chose to move can wait in the next iteration. This ensures that the group reward is maximized as before, but, in addition, all agents get to experience the maximum reward. If the number of agents is two, as in a traditional iterated prisoner's dilemma, this can be achieved with a simple memory-one strategy for the agents in the environment, where they know the actions taken in the previous iteration and can thus modify their strategies to let the other agent get a shot at the maximum reward. Cooperation is paramount here, as it involves an agent who has obtained the maximum reward sacrificing it for a lower reward for the benefit of another agent, a behavior which can also be rewarded after the completion of successful iterations to encourage agents to develop this practice. Nevertheless, as the number of agents increases, the number of prior iterations to be considered before the choices for the current iteration can be made also increases significantly. Although the memory-one strategy is sufficient for the experiments discussed in the following section, Section V discusses possible ways to handle dynamic role switching when \(N\gg 2\).
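The two-agent memory-one rule can be sketched in a few lines of Python. This is an illustrative toy, not the Unity implementation used in the experiments: the payoff values and the initial (C, D) assignment are assumptions, and each agent simply plays the opposite of its own previous action, which reproduces the alternating (C, D)/(D, C) pattern and lets both agents share the maximum reward over time.

```python
# Payoff matrix entries (assumed values satisfying T > R > P > S and 2R < T + S).
T, R, P, S = 5, 2, 1, 0
PAYOFF = {("C", "C"): (R, R), ("C", "D"): (S, T),
          ("D", "C"): (T, S), ("D", "D"): (P, P)}

def memory_one_shift(own_previous_action):
    """Win-Shift Lose-Shift: always play the opposite of your own last action."""
    return "D" if own_previous_action == "C" else "C"

# Assumed starting assignment: agent 1 cooperates, agent 2 defects.
a1, a2 = "C", "D"
totals = [0, 0]
for iteration in range(6):
    r1, r2 = PAYOFF[(a1, a2)]
    totals[0] += r1
    totals[1] += r2
    print(f"iter {iteration}: ({a1}, {a2}) -> rewards ({r1}, {r2})")
    a1, a2 = memory_one_shift(a1), memory_one_shift(a2)

print("cumulative rewards:", totals)  # over an even number of iterations, both totals match
```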
## IV Results

In this section, we empirically evaluate the performance of MA-POCA, with the modifications proposed earlier for dynamic role switching, on a custom multi-agent environment like the one described in Section III (A), and on an existing environment that was originally used to study the performance of the MA-POCA trainer. All environments were developed using Unity's ML-Agents toolkit. Code is available at [https://github.com/dawnorak/marl-kart-simulator](https://github.com/dawnorak/marl-kart-simulator).

### _Iterated Prisoner's Dilemma_

A simple environment is constructed where the goal of each agent is to go around a track on a kart. The iterated prisoner's dilemma arises when the agents approach an intersection, where only one is permitted to move ahead. The rewards are then provided based on the standard prisoner's dilemma payoff matrix, but with \(2R<T+S\), as proposed in Section III, to encourage the agents to alternately shift strategies and take turns going around the track. This further requires a memory strategy in which the agents know the choice they made in the previous iteration and shift accordingly. Due to the simplicity of the simulation, implementing the discount factor to ensure that agents do not deviate from the strategy was not needed. The implementation of this strategy in the custom simulation is shown in Fig. 1.

### _Dungeon Escape_

The aim of this environment is for the blue agents to kill the green dragon by having any one agent sacrifice itself, then obtain the revealed key and escape the dungeon. While training the environment as provided in the toolkit, it was observed that, over iterations, one agent tends to become the sacrifice most of the time while the other agents escape. While this is efficient given that this environment is primarily designed to prioritize team goals over individual ones, it can be further explored to check the validity of using memory strategies to dynamically facilitate the shifting of agent roles over iterations. The three agents were given the additional task of remembering their actions in the previous iterations, namely, whether they sacrificed themselves to obtain the key or not. This needed to be done only for the past two iterations, as in an \(N\)-agent system the history of \(N-1\) iterations can inform the choice to be made in the current iteration. If an agent has sacrificed itself in any one of the past two iterations, it will not sacrifice itself in the current iteration. If an agent has not sacrificed itself in the past two iterations and escaped, it will look to sacrifice itself in the current iteration. On modifying the agents' behavior to include their actions in the previous iterations--thereby encouraging the agents to take turns sacrificing themselves--a similar level of success in the completion of objectives was achieved, now with dynamic role shifting. Such a memory strategy could also be extended to other environments where dynamic role switching is needed. The implementation of this in the Dungeon Escape environment is shown in Fig. 2.

Fig. 1: **Karts at an Intersection – (a) Agent 1 waits while Agent 2 moves in the first iteration, (b) Agent 2 waits in the next iteration while Agent 1 moves.**
Fig. 2: **Dungeon Escape – (a) Purple headband agent sacrifices itself in iteration 1, (b) Red headband agent sacrifices itself in iteration 2, (c) Yellow headband agent sacrifices itself in iteration 3.**

## V Conclusion

In summary, this work has introduced a few novel strategies for addressing game-theoretic scenarios and has developed corresponding reward policies and equilibrium concepts, particularly for scenarios where \(N\to+\infty\), through a mean-field game perspective. These theoretical approaches have provided valuable insights into the mathematical and algorithmic dimensions of multi-agent reinforcement learning. While our experimentation with MA-POCA in this work has yielded promising results in scenarios involving a small number of agents, the challenges become significantly more intricate when \(N\) is very large. In such cases, the extensive number of iterations required for dynamic role shifting presents a formidable challenge. One potential avenue for tackling this challenge involves introducing stochastic decision-making and employing a probabilistic approach to role switching. Agents could maintain awareness of their cumulative rewards over numerous iterations, allowing them to gauge how long they have adhered to a specific strategy. For instance, if an agent has been sticking to the same strategy for \({}^{N-1}C_{i}\) iterations (all possible combinations of \(i\) agents in the agent space, excluding the agent itself), that would mean that the other agents have had ample opportunities to choose the other strategy. Consequently, the agent may decide to switch strategies based on a calculated probability. This probability can be expressed with any simple function, such as a sigmoid, representing the likelihood that an agent will transition to a different strategy at any given iteration. This stochastic approach offers a promising direction for addressing scenarios with a vast number of agents, promoting adaptability and strategic diversity in multi-agent systems.
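As a rough sketch of this last idea (purely illustrative and not part of the reported experiments; the expected waiting time and steepness parameters are assumptions), the switching probability could be a sigmoid of how long an agent has kept the same role:

```python
import math

def switch_probability(iterations_adhered, expected_wait, steepness=1.0):
    """Sigmoid probability of switching roles: low while the agent has adhered for fewer
    iterations than expected_wait, approaching 1 well beyond it."""
    return 1.0 / (1.0 + math.exp(-steepness * (iterations_adhered - expected_wait)))

# Example: an agent expected to hold a role for roughly 10 iterations before switching.
for k in (0, 5, 10, 15, 20):
    print(k, round(switch_probability(k, expected_wait=10), 3))
```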
2309.10612
An Extendable Python Implementation of Robust Optimisation Monte Carlo
Performing inference in statistical models with an intractable likelihood is challenging, therefore, most likelihood-free inference (LFI) methods encounter accuracy and efficiency limitations. In this paper, we present the implementation of the LFI method Robust Optimisation Monte Carlo (ROMC) in the Python package ELFI. ROMC is a novel and efficient (highly-parallelizable) LFI framework that provides accurate weighted samples from the posterior. Our implementation can be used in two ways. First, a scientist may use it as an out-of-the-box LFI algorithm; we provide an easy-to-use API harmonized with the principles of ELFI, enabling effortless comparisons with the rest of the methods included in the package. Additionally, we have carefully split ROMC into isolated components for supporting extensibility. A researcher may experiment with novel method(s) for solving part(s) of ROMC without reimplementing everything from scratch. In both scenarios, the ROMC parts can run in a fully-parallelized manner, exploiting all CPU cores. We also provide helpful functionalities for (i) inspecting the inference process and (ii) evaluating the obtained samples. Finally, we test the robustness of our implementation on some typical LFI examples.
Vasilis Gkolemis, Michael Gutmann, Henri Pesonen
2023-09-19T13:37:47Z
http://arxiv.org/abs/2309.10612v1
# An Extendable Python Implementation of Robust Optimisation Monte Carlo

###### Abstract

Performing inference in statistical models with an intractable likelihood is challenging; therefore, most likelihood-free inference (LFI) methods encounter accuracy and efficiency limitations. In this paper, we present the implementation of the LFI method Robust Optimisation Monte Carlo (ROMC) in the Python package **ELFI**. ROMC is a novel and efficient (highly-parallelizable) LFI framework that provides accurate weighted samples from the posterior. Our implementation can be used in two ways. First, a scientist may use it as an out-of-the-box LFI algorithm; we provide an easy-to-use API harmonized with the principles of **ELFI**, enabling effortless comparisons with the rest of the methods included in the package. Additionally, we have carefully split ROMC into isolated components to support extensibility. A researcher may experiment with novel method(s) for solving part(s) of ROMC without reimplementing everything from scratch. In both scenarios, the ROMC parts can run in a fully-parallelized manner, exploiting all CPU cores. We also provide helpful functionalities for (i) inspecting the inference process and (ii) evaluating the obtained samples. Finally, we test the robustness of our implementation on some typical LFI examples. _Keywords_: Bayesian inference, implicit models, likelihood-free, Python, **ELFI**.

## 1 Introduction

Simulator-based models are particularly captivating due to the modeling freedom they provide. In essence, any data generating mechanism that can be written as a finite set of algorithmic steps can be programmed as a simulator-based model. Hence, these models are often used to model physical phenomena in the natural sciences such as, e.g., genetics, epidemiology or neuroscience Gutmann and Corander (2016); Lintusaari et al. (2017); Sisson et al. (2018); Cranmer et al. (2020). In simulator-based models, it is feasible to generate samples using the simulator, but it is infeasible to evaluate the likelihood function. The intractability of the likelihood makes the so-called likelihood-free inference (LFI), i.e., the approximation of the posterior distribution without using the likelihood function, particularly challenging. Optimization Monte Carlo (OMC), proposed by Meeds and Welling (2015), is a novel LFI approach. The central idea is to convert the stochastic data-generating mechanism into a set of deterministic optimization processes. Afterwards, Forneron and Ng (2016) described a similar method under the name 'reverse sampler'. In their work, Ikonomov and Gutmann (2019) identified some critical limitations of OMC, so they proposed Robust OMC (ROMC), an improved version of OMC with appropriate modifications. In this paper, we present the implementation of ROMC in the Python package **ELFI** (**Engine for likelihood-free inference**) Lintusaari, Vuollekoski, Kangasraasio, Skyten, Jarvenpaa, Marttinen, Gutmann, Vehtari, Corander, and Kaski (2018). The implementation has been designed to facilitate extensibility. ROMC is an LFI framework; it defines a sequence of algorithmic steps for approximating the posterior without enforcing a specific algorithm for each step. Therefore, a researcher may use ROMC as the backbone method and apply novel algorithms to each separate step. To make it a ready-to-use LFI method, Ikonomov and Gutmann (2019) propose a particular (default) algorithm for each step, but this choice is by no means restrictive.
We have designed our software for facilitating such experimentation. To the best of our knowledge, this is the first implementation of the ROMC inference method to a generic LFI framework. We organize the illustration and the evaluation of our implementation in three steps. First, for securing that our implementation is accurate, we test it against an artificial example with a tractable likelihood. The artificial example also serves as a step-by-step guide for showcasing the various functionalities of our implementation. Second, we use the second-order moving average (MA2) example Marin, Pudlo, Robert, and Ryder (2012) from the **ELFI** package, using as ground truth the samples obtained with Rejection ABC Lintusaari _et al._ (2017) using a very high number of samples. Finally, we present the execution times of ROMC, measuring the speed-up achieved by the parallel version of the implementation. The code of the implementation is available at the official **ELFI** repository. Apart from the examples presented in the paper, there are five **Google Colab**Bisong (2019) notebooks available online, with end-to-end examples illustrating how to: (i) use ROMC on a synthetic \(1D\) example, (ii) use ROMC on a synthetic \(2D\) example, (iii) use ROMC on the Moving Average example, (iv) extend ROMC with a Neural Network as a surrogate model, (v) extend ROMC with a custom proposal region module. ## 2 Background We first give a short introduction to simulator-based models, we then focus on OMC and its robust version, ROMC, and we, finally, introduce **ELFI**, the Python package used for the implementation. ### Simulator-based models and likelihood-free inference An implicit or simulator-based model is a parameterized stochastic data generating mechanism, where we can sample data points but we cannot evaluate the likelihood. Formally, a simulator-based model is a parameterized family of probability density functions \(\{p(\mathbf{y}|\boldsymbol{\theta})\}_{\boldsymbol{\theta}}\) whose closed-form is either unknown or computationally intractable. In these cases, we can only access the simulator \(m_{r}(\boldsymbol{\theta})\), i.e., the black-box mechanism (computer code) that generates samples \(\mathbf{y}\) in a stochastic manner from a set of parameters \(\boldsymbol{\theta}\in\mathbb{R}^{D}\). We denote the process of obtaining samples from the simulator with \(m_{r}(\boldsymbol{\theta})\to\mathbf{y}\). As shown by Meeds and Welling (2015), it is feasible to isolate the randomness of the simulator by introducing a set of nuisance random variables denoted by \(\mathbf{u}\sim p(\mathbf{u})\). Therefore, for a specific tuple \((\boldsymbol{\theta},\mathbf{u})\) the simulator becomes a deterministic mapping \(g\), such that \(\mathbf{y}=g(\boldsymbol{\theta},\mathbf{u})\). In terms of computer code, the randomness of a random process is governed by the global seed. There are some differences on how each scientific package handles the randomness; for example, at **Numpy** Harris, Millman, Van Der Walt, Gommers, Virtanen, Cournapeau, Wieser, Taylor, Berg, Smith _et al._ (2020) the pseudo-random number generation is based on a global state, whereas, at **JAX** Bradbury, Frostig, Hawkins, Johnson, Leary, Maclaurin, Necula, Paszke, VanderPlas, Wanderman-Milne _et al._ (2018) random functions consume a key that is passed as a parameter. However, in all cases, setting the initial seed to a specific integer converts the simulation to a deterministic piece of code. 
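As a minimal illustration of this point (the one-line simulator below is a toy assumption, not one of the examples used later in the paper): once the random seed is fixed, the stochastic simulator behaves as a deterministic function \(g(\boldsymbol{\theta},\mathbf{u})\) of the parameters.

```python
import numpy as np

def simulator(theta, seed=None):
    """Toy stochastic simulator: y = theta + Gaussian noise."""
    rng = np.random.default_rng(seed)
    return theta + rng.normal(0.0, 1.0)

# Without fixing the seed, repeated calls differ ...
print(simulator(1.0), simulator(1.0))
# ... but for a fixed nuisance seed u_i the simulator is a deterministic map g(theta, u_i).
u_i = 42
print(simulator(1.0, seed=u_i) == simulator(1.0, seed=u_i))  # True
```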
The modeling freedom of simulator-based models comes at the price of difficulties in inferring the parameters of interest. Denoting the observed data as \(\mathbf{y_{0}}\), the main difficulty lies at the intractability of the likelihood function \(L(\boldsymbol{\theta})=p(\mathbf{y_{0}}|\boldsymbol{\theta})\). To better see the sources of the intractability, and to address them, we go back to the basic characterization of the likelihood as the (rescaled) probability of generating data \(\mathbf{y}\) that is similar to the observed data \(\mathbf{y_{0}}\), using parameters \(\boldsymbol{\theta}\). Formally, the likelihood \(L(\boldsymbol{\theta})\) is: \[L(\boldsymbol{\theta})=\lim_{\epsilon\to 0}c_{\epsilon}\int_{\mathbf{y} \in B_{d,\epsilon}(\mathbf{y_{0}})}p(\mathbf{y}|\boldsymbol{\theta})d\mathbf{ y}=\lim_{\epsilon\to 0}c_{\epsilon}\Pr(g(\boldsymbol{\theta},\mathbf{u}) \in B_{d,\epsilon}(\mathbf{y_{0}})\mid\boldsymbol{\theta}) \tag{1}\] where \(c_{\epsilon}\) is a proportionality factor that depends on \(\epsilon\) and \(B_{d,\epsilon}(\mathbf{y_{0}})\) is an \(\epsilon\) region around \(\mathbf{y_{0}}\) that is defined through a distance function \(d\), i.e., \(B_{d,\epsilon}(\mathbf{y_{0}}):=\{\mathbf{y}:d(\mathbf{y},\mathbf{y_{0}})\leq\epsilon\}\). In cases where the output \(\mathbf{y}\) belongs to a high dimensional space, it is common to extract summary statistics \(\Phi\) before applying the distance \(d\). In these cases, the \(\epsilon\)-region is defined as \(B_{d,\epsilon}(\mathbf{y_{0}}):=\{\mathbf{y}:d(\Phi(\mathbf{y}),\Phi(\mathbf{ y_{0}}))\leq\epsilon\}\). In our notation, the summary statistics are sometimes omitted for simplicity. Equation 1 highlights the main source of intractability; computing \(\Pr(g(\boldsymbol{\theta},\mathbf{u})\in B_{d,\epsilon}(\mathbf{y_{0}})| \boldsymbol{\theta})\) as the fraction of samples that lie inside the \(\epsilon\) region around \(\mathbf{y_{0}}\) is computationally infeasible in the limit where \(\epsilon\to 0\). Hence, the constraint is relaxed to \(\epsilon>0\), which leads to the approximate likelihood: \[L_{d,\epsilon}(\boldsymbol{\theta})=\Pr(\mathbf{y}\in B_{d,\epsilon}(\mathbf{ y_{0}})\mid\boldsymbol{\theta}),\quad\text{where }\epsilon>0. \tag{2}\] and, in turn, to the approximate posterior: \[p_{d,\epsilon}(\boldsymbol{\theta}|\mathbf{y_{0}})\propto L_{d,\epsilon}( \boldsymbol{\theta})p(\boldsymbol{\theta}) \tag{3}\] Equations 2 and 3 is by no means the only strategy to deal with the intractability of the likelihood function in Equation 1. Other strategies include modeling the (stochastic) relationship between \(\boldsymbol{\theta}\) and \(\mathbf{y}\), and its reverse, or framing likelihood-free inference as a ratio estimation problem, see for example Blum and Francois (2010); Wood (2006); Papamakarios and Murray (2016); Papamakarios, Sterratt, and Murray (2019); Chen and Gutmann (2019); Thomas, Dutta, Corander, Kaski, and Gutmann (2022); Hermans, Begy, and Louppe (2020). However, both OMC and robust OMC, which we introduce next, are based on the approximation in Equation 2. ### Optimization Monte Carlo (OMC) Our description of OMC Meeds and Welling (2015) follows Ikonomov and Gutmann (2019). We define the indicator function (boxcar kernel) that equals one only if \(\mathbf{x}\) lies in \(B_{d,\epsilon}(\mathbf{y})\): \[\mathbbm{1}_{B_{d,\epsilon}(\mathbf{y})}(\mathbf{x})=\left\{\begin{array}{ ll}1&\text{if }\mathbf{x}\in B_{d,\epsilon}(\mathbf{y})\\ 0&\text{otherwise}\end{array}\right. 
\tag{4}\] We, then, rewrite the approximate likelihood function \(L_{d,\epsilon}(\mathbf{\theta})\) of Equation 2 as: \[L_{d,\epsilon}(\mathbf{\theta})=\Pr(\mathbf{y}\in B_{d,\epsilon}(\mathbf{y_{0}})|\bm {\theta})=\int_{\mathbf{u}}\mathbb{1}_{B_{d,\epsilon}(\mathbf{y_{0}})}(g(\mathbf{ \theta},\mathbf{u}))d\mathbf{u} \tag{5}\] which can be approximated using samples from the simulator: \[L_{d,\epsilon}(\mathbf{\theta})\approx\frac{1}{N}\sum_{i=1}^{N}\mathbb{1}_{B_{d, \epsilon}(\mathbf{y_{0}})}(g(\mathbf{\theta},\mathbf{u}_{i}))\quad\text{ where }\mathbf{u}_{i}\sim p(\mathbf{u}). \tag{6}\] In Equation 6, for each \(\mathbf{u}_{i}\), there is a region \(C_{\epsilon}^{i}\) in the parameter space \(\mathbf{\theta}\) where the indicator function returns one, i.e., \(C_{\epsilon}^{i}=\{\mathbf{\theta}:g(\mathbf{\theta},\mathbf{u}_{i})\in B_{d,\epsilon }(\mathbf{y_{0}})\}\). Therefore, we can rewrite the approximate likelihood and posterior as: \[L_{d,\epsilon}(\mathbf{\theta})\approx\frac{1}{N}\sum_{i=1}^{N}\mathbb{1}_{C_{ \epsilon}^{i}}(\mathbf{\theta}) \tag{7}\] \[p_{d,\epsilon}(\mathbf{\theta}|\mathbf{y_{0}})\propto p(\mathbf{\theta})\sum_{i}^{N} \mathbb{1}_{C_{\epsilon}^{i}}(\mathbf{\theta}). \tag{8}\] As argued by Ikonomov and Gutmann (2019), these derivations provide a unique perspective for likelihood-free inference by shifting the focus onto the geometry of the acceptance regions \(C_{\epsilon}^{i}\). Indeed, the task of approximating the likelihood and the posterior becomes a task of characterizing the sets \(C_{\epsilon}^{i}\). OMC by Meeds and Welling (2015) assumes that the distance \(d\) is the Euclidean distance \(||\cdot||_{2}\) between summary statistics \(\Phi\) of the observed and generated data, and that the \(C_{\epsilon}^{i}\) can be well approximated by infinitesimally small ellipses. These assumptions lead to an approximation of the posterior in terms of weighted samples \(\mathbf{\theta}_{i}^{*}\) that achieve the smallest distance between observed and simulated data for each realization \(\mathbf{u}_{i}\sim p(\mathbf{u})\), i.e., \[\mathbf{\theta}_{i}^{*}=\operatorname*{argmin}_{\mathbf{\theta}}||\Phi(\mathbf{y_{0}} )-\Phi(g(\mathbf{\theta},\mathbf{u}_{i}))||_{2},\quad\mathbf{u}_{i}\sim p(\mathbf{ u}). \tag{9}\] The weighting for each \(\mathbf{\theta}_{i}^{*}\) is proportional to the prior density at \(\mathbf{\theta}_{i}^{*}\) and inversely proportional to the determinant of the Jacobian matrix of the summary statistics at \(\mathbf{\theta}_{i}^{*}\). For further details on OMC we refer the reader to Meeds and Welling (2015); Ikonomov and Gutmann (2019). ### Robust optimization Monte Carlo (ROMC) Ikonomov and Gutmann (2019) showed that considering infinitesimally small ellipses can lead to highly overconfident posteriors. We refer the reader to their paper for the technical details and conditions for this issue to occur. Intuitively, it happens because the weights in OMC are only computed from information at \(\mathbf{\theta}_{i}^{*}\), and using only local information can be misleading. For example, if the curvature of \(||\Phi(\mathbf{y_{0}})-\Phi(g(\mathbf{\theta},\mathbf{u}_{i}))||_{2}\) at \(\mathbf{\theta}_{i}^{*}\) is nearly flat, it may wrongly indicate that \(C_{\epsilon}^{i}\) is much larger than it actually is. In our software package we implement the robust generalization of OMC by Ikonomov and Gutmann (2019) that resolves this issue. 
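The building block shared by OMC and ROMC, i.e., solving one deterministic optimization problem per nuisance draw as in Equation 9, can be sketched as follows. The toy simulator, the observed value, the threshold, and the choice of optimizer are assumptions for illustration only; in the actual package this step is handled by ROMC's gradient-based or Bayesian-optimization routines.

```python
import numpy as np
from scipy.optimize import minimize

y_obs = 0.0  # observed summary statistic (assumed toy value)

def g(theta, seed):
    """Deterministic version of a toy simulator: the seed plays the role of u_i."""
    rng = np.random.default_rng(seed)
    return theta[0] + rng.normal(0.0, 1.0)

def solve_problem(seed):
    """Minimize d_i(theta) = ||g(theta, u_i) - y_obs||^2 for one nuisance draw u_i."""
    d_i = lambda theta: (g(theta, seed) - y_obs) ** 2
    res = minimize(d_i, x0=np.zeros(1), method="Nelder-Mead")
    return res.x[0], res.fun

# One optimization problem per nuisance draw; solutions with d_i* <= eps are kept.
eps = 0.1
solutions = [solve_problem(seed) for seed in range(10)]
accepted = [(theta, dist) for theta, dist in solutions if dist <= eps]
print(f"accepted {len(accepted)} of {len(solutions)} problems")
```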
ROMC approximates the acceptance regions \(C_{\epsilon}^{i}\) with \(D\)-dimensional bounding boxes \(\hat{C}_{\epsilon}^{i}\). A uniform distribution, \(q_{i}(\mathbf{\theta})\), is defined on top of each bounding box and serves as a proposal distribution for generating posterior samples \(\mathbf{\theta}_{ij}\sim q_{i}\). The samples get an (importance) weight \(w_{ij}\) that compensate for using the proposal distributions \(q_{i}(\mathbf{\theta})\) instead of the prior \(p(\mathbf{\theta})\): \[w_{ij}=\mathbb{1}_{C_{i}^{i}}(\mathbf{\theta}_{ij})\frac{p(\mathbf{\theta}_{ij})}{q(\bm {\theta}_{ij})}. \tag{10}\] Given the weighted samples, any expectation \(\mathsf{E}_{p(\mathbf{\theta}|\mathbf{y_{0}})}[h(\mathbf{\theta})]\) of some function \(h(\mathbf{\theta})\), can be approximated as \[\mathsf{E}_{p(\mathbf{\theta}|\mathbf{y_{0}})}[h(\mathbf{\theta})]\approx\frac{\sum_{ ij}w_{ij}h(\mathbf{\theta}_{ij})}{\sum_{ij}w_{ij}} \tag{11}\] The approximation of the acceptance regions contains two compulsory and one optional step: (i) solving the optimization problems as in OMC, (ii) constructing bounding boxes around \(C_{\epsilon}^{i}\) and, optionally, (iii) refining the approximation via a surrogate model of the distance. _(i) Solving the deterministic optimization problems_ For each set of nuisance variables \(\mathbf{u}_{i},i=\{1,2,\ldots,n_{1}\}\), we search for a point \(\mathbf{\theta}_{i}^{*}\) such that \(d(g(\mathbf{\theta}_{i}^{*},\mathbf{u}_{i}),\mathbf{y_{0}})\leq\epsilon\). In principle, \(d(\cdot)\) can refer to any valid distance function. For the rest of the paper we consider \(d(\cdot)\) as the squared Euclidean distance, as in Ikonomov and Gutmann (2019). For simplicity, we use \(d_{i}(\mathbf{\theta})\) to refer to \(d(g(\mathbf{\theta},\mathbf{u}_{i}),\mathbf{y_{0}})\). We search for \(\theta_{i}^{*}\) solving: \[\mathbf{\theta}_{i}^{*}= \operatorname*{argmin}_{\mathbf{\theta}}d_{i}(\mathbf{\theta}) \tag{12}\] and we accept the solution only if it satisfies the constraint \(d_{i}(\mathbf{\theta}_{i}^{*})\leq\epsilon\). If \(d_{i}(\mathbf{\theta})\) is differentiable, Equation 12 can be solved using any gradient-based optimizer. The gradients \(\nabla_{\mathbf{\theta}}d_{i}(\mathbf{\theta})\) can be either provided in closed form or approximated by finite differences. If \(d_{i}\) is not differentiable, Bayesian Optimization (Shahriari, Swersky, Wang, Adams, and De Freitas, 2015) can be used instead. In this scenario, apart from obtaining an optimal \(\mathbf{\theta}_{i}^{*}\), we can also automatically build a surrogate model \(\hat{d}_{i}(\mathbf{\theta})\) of the distance function \(d_{i}(\mathbf{\theta})\). The surrogate model \(\hat{d}_{i}\) can then substitute the actual distance function in downstream steps of the algorithms, with possible computational gains especially in cases where evaluating the actual distance \(d_{i}(\mathbf{\theta})\) is expensive. _(ii) Estimating the acceptance regions_ Each acceptance region \(C_{\epsilon}^{i}\) is approximated by a bounding box \(\hat{C}_{\epsilon}^{i}\). The acceptance regions \(C_{\epsilon}^{i}\) can contain any number of disjoint subsets in the \(D\)-dimensional space and any of these subsets can take any arbitrary shape. We should make three important remarks. First, since the bounding boxes are built around \(\mathbf{\theta}_{i}^{*}\), we focus only on the connected subset of \(C_{\epsilon}^{i}\) that contains \(\mathbf{\theta}_{i}^{*}\), which we denote as \(C_{\epsilon,\theta_{i}^{*}}^{i}\). 
Second, we want to ensure that the bounding box \(\hat{C}_{\epsilon}^{i}\) is big enough to contain on its interior all the volume of \(C_{\epsilon,\theta_{i}^{*}}^{i}\). Third, we want \(\hat{C}_{\epsilon}^{i}\) to be as tight as possible to \(C_{\epsilon,\theta_{i}^{*}}^{i}\) to ensure high acceptance rate on the importance sampling step that follows. Therefore, the bounding boxes are built in two steps. First, we compute their axes \(\mathbf{v}_{m}\), for \(m=\{1,\ldots,D\}\) based on the (estimated) curvature of the distance at \(\mathbf{\theta}_{i}^{*}\), and, second, we apply a line-search method along each axis to determine the size of the bounding box. We refer the reader to Algorithm 2 for the details. After the bounding boxes construction, a uniform distribution \(q_{i}\) is defined on each bounding box, and is used as the proposal region for importance sampling. _(iii) Refining the estimate via a local surrogate model (optional)_ For computing the weight \(w_{ij}\) at Equation 10, we need to check whether the samples \(\mathbf{\theta}_{ij}\), drawn from the bounding boxes, are inside the acceptance region \(C^{i}_{\epsilon}\). This can be considered to be a safety-mechanism that corrects for any inaccuracies in the construction of \(\hat{C}^{i}_{\epsilon}\) above. However, this check involves evaluating the distance function \(d_{i}(\mathbf{\theta}_{ij})\), which can be expensive if the model is complex. Ikonomov and Gutmann (2019) proposed fitting a surrogate model \(\hat{d}_{i}(\mathbf{\theta})\) of the distance function \(d_{i}(\mathbf{\theta})\), on data points that lie inside \(\hat{C}^{i}_{\epsilon}\). In principle, any regression model can be used as surrogate model. Ikonomov and Gutmann (2019) used a simple quadratic model because it has ellipsoidal isocontours, which facilitates replacing the bounding box approximation of \(C^{i}_{\epsilon}\) with a tighter-fitting ellipsoidal approximation. The training data for the quadratic model is obtained by sampling \(\mathbf{\theta}_{ij}\sim q_{i}\) and accessing the distances \(d_{i}(\mathbf{\theta}_{ij})\). The generation of the training data adds an extra computational cost, but leads to a significant speed-up when evaluating the weights \(w_{ij}\). Moreover, the extra cost is largely eliminated if Bayesian optimization with a Gaussian process (GP) surrogate model \(\hat{d}_{i}(\mathbf{\theta})\) was used to obtain \(\mathbf{\theta}^{*}_{i}\) in the first step. In this case, we can use \(\hat{d}_{i}(\mathbf{\theta})\) instead of \(d_{i}(\mathbf{\theta})\) to generate the training data. This essentially replaces the global GP model with a simpler local quadratic model which is typically more robust. ### Engine for likelihood-free inference (ELFI) **Engine for Likelihood-Free Inference (ELFI)1**Lintusaari _et al._ (2018) is a Python package for LFI. We selected to implemented ROMC in **ELFI** since it provides convenient modules for all the fundamental components of a probabilistic model (e.g. prior, simulator, summaries etc.). Furthermore, **ELFI** already supports some recently proposed likelihood-free inference methods. **ELFI** handles the probabilistic model as a Directed Acyclic Graph (DAG). This functionality is based on the package **NetworkX**Hagberg, Swart, and S Chult (2008), which supports general-purpose graphs. 
In most cases, the structure of a likelihood-free model follows the pattern of Figure 1; some edges connect the prior distributions to the simulator, the simulator is connected to the summary statistics that, in turn, lead to the output node. Samples can be obtained from all nodes through sequential (ancestral) sampling. **ELFI** automatically considers as parameters of interest, i.e., those we try to infer a posterior distribution, the ones included in the elfi.Prior class. Footnote 1: Extended documentation can be found [https://elfi.readthedocs.io](https://elfi.readthedocs.io) All inference methods of **ELFI** are implemented following two conventions. First, their constructor follows the signature elfi.<Class name>(<output node>, *arg), where <output node> is the output node of the simulator-based model and *arg are the parameters of the method. Second, they provide a method elfi.<Class name>.sample(*args) for drawing samples from the approximate posterior. In this section, we express ROMC as an algorithm and then we present the general implementation principles. ### Algorithmic view of ROMC For designing an extendable implementation, we firstly define ROMC as a sequence of algorithmic steps. At a high level, ROMC can be split into the training and the inference part; the training part covers the steps for estimating the proposal regions and the inference part calculates the weighted samples. In Algorithm 1, that defines ROMC as an algorithm, steps 2-11 (before the horizontal line) refer to the training part and steps 13-18 to the inference part. #### Training part At the training (fitting) part, the goal is the estimation of the proposal regions \(\hat{C}^{i}_{\epsilon}\), which expands to three mandatory tasks; (a) sample the nuisance variables \(\mathbf{u}_{i}\sim p(\mathbf{u})\) for defining the deterministic distance functions \(d_{i}(\boldsymbol{\theta})\) (steps 3-5), (b) solve the optimization problems for obtaining \(\boldsymbol{\theta}^{*}_{i},d^{*}_{i}\) and keep the solutions inside the threshold \(\epsilon\) (steps 6-9), and (c) estimate the bounding boxes \(\hat{C}^{i}_{\epsilon}\) to define uniform distributions \(q_{i}\) on them (step 10). Optionally, a surrogate model \(\tilde{d}_{i}\) can be fitted for a faster inference phase (step 11). If \(d_{i}(\boldsymbol{\theta})\) is differentiable, using a gradient-based method is advised for obtaining \(\boldsymbol{\theta}^{*}_{i}\) faster. In this case, the gradients \(\nabla_{\boldsymbol{\theta}}d_{i}\) gradients are approximated automatically with finite-differences, if they are not provided in closed-form by the user. Finite-differences approximation requires two evaluations of \(d_{i}\) for each parameter \(\theta_{m},m\in\{1,\ldots,D\}\), which scales well only in low-dimensional problems. If \(d_{i}(\boldsymbol{\theta})\) is not differentiable, Bayesian Optimization can be used as an alternative. In this scenario, the training part becomes slower due to fitting of the surrogate model and the blind optimization steps. Figure 1: Baseline example for creating an **ELFI** model. Image taken from Lintusaari _et al._ (2018) After obtaining the optimal points \(\mathbf{\theta}_{i}^{*}\), we estimate the proposal regions. Algorithm 2 describes the line search approach for finding the region boundaries. 
The axes of each bounding box \(\mathbf{v}_{m},m=\{1,\ldots,D\}\) are defined as the directions with the highest curvature at \(\mathbf{\theta}_{i}^{*}\) computed by the eigenvalues of the Hessian matrix \(\mathbf{H}_{i}\) of \(d_{i}\) at \(\mathbf{\theta}_{i}\) (step 1). Depending on the algorithm used in the optimization step, we either use the real distance \(d_{i}\) or the Gaussian Process approximation \(\hat{d}_{i}\). When the distance function is the Euclidean distance (default choice), the Hessian matrix can be also computed as \(\mathbf{H}_{i}=\mathbf{J}_{i}^{T}\mathbf{J}_{i}\), where \(\mathbf{J}_{i}\) is the Jacobian matrix of the summary statistics \(\Phi(g(\mathbf{\theta},\mathbf{u}_{i}))\) at \(\mathbf{\theta}_{i}^{*}\). This approximation has the computational advantage of using only first-order derivatives. After defining the axes, we search for the bounding box limits with a line step algorithm (steps 2-21). The key idea is to take long steps \(\eta\)_start until crossing the boundary and then take small steps backwards to find the exact boundary position. ``` 1:procedureROMC 2:for\(i\gets 1\) to \(n_{1}\)do 3:\(\mathbf{u}_{i}\sim p(\mathbf{u})\)\(\triangleright\) Draw nuisance variables 4: Convert \(M_{r}(\mathbf{\theta})\) to \(g(\mathbf{\theta},\mathbf{u}=\mathbf{u}_{i})\)\(\triangleright\) Define deterministic simulator 5:\(d_{i}(\mathbf{\theta})=d(g(\mathbf{\theta},\mathbf{u}=\mathbf{u}_{i}),\mathbf{y}_{0})\)\(\triangleright\) Define distance function 6:\(\mathbf{\theta}_{i}^{*}=\operatorname*{argmin}_{\mathbf{\theta}}d_{i}\), \(d_{i}^{*}=d_{i}(\mathbf{\theta}_{i}^{*})\)\(\triangleright\) Solve optimization problem 7:if\(d_{i}^{*}>\epsilon\)then 8: Go to 2\(\triangleright\) Filter solution 9:endif 10: Estimate \(\hat{C}_{\epsilon}^{i}\) and define \(q_{i}\)\(\triangleright\) Estimate proposal area 11: (Optional) Fit \(\hat{d}_{i}\) on \(\hat{C}_{\epsilon}^{i}\)\(\triangleright\) Fit surrogate model 12: 13:for\(j\gets 1\) to \(n_{2}\)do 14:\(\mathbf{\theta}_{ij}\sim q_{i}\), compute \(w_{ij}\) as in Algorithm 3\(\triangleright\) Sample 15:endfor 16:endfor 17:\(\mathsf{E}_{p(\mathbf{\theta}|\mathbf{y}_{0})}[h(\mathbf{\theta})]\) as in eq. (11)\(\triangleright\) Estimate an expectation 18:\(p_{d,\epsilon}(\mathbf{\theta})\) as in eq. (8)\(\triangleright\) Evaluate the unnormalized posterior 19:endprocedure ``` **Algorithm 1** ROMC. Requires the prior \(p(\mathbf{\theta})\), the simulator \(M_{r}(\mathbf{\theta})\), number of optimization problems \(n_{1}\), number of samples per region \(n_{2}\), acceptance limit \(\epsilon\) #### Inference Part The inference part includes one or more of the following three tasks; (a) sample from the posterior distribution \(\mathbf{\theta}_{i}\sim p_{d,\epsilon}(\mathbf{\theta}|\mathbf{y}_{0})\) (Equation 10), (b) compute an expectation \(\mathsf{E}_{\mathbf{\theta}|\mathbf{y}_{0}}[h(\mathbf{\theta})]\) (Equation 11) and/or (c) evaluate the unnormalized posterior \(p_{d,\epsilon}(\mathbf{\theta}|\mathbf{y}_{0})\) (Equation 8). Sampling is performed by getting \(n_{2}\) samples from each proposal distribution \(q_{i}\). For each sample \(\mathbf{\theta}_{ij}\), the distance function2 is evaluated for checking if it lies inside the acceptance region. Algorithm 3 defines the steps for computing a weighted sample. After we obtain weighted samples, computing the expectation is straightforward using Equation 11. 
Finally, evaluating the unnormalized posterior at a specific point \(\mathbf{\theta}\) requires access to the distance functions \(d_{i}\) and the prior distribution \(p(\mathbf{\theta})\). Following Equation 8, we simply count for how many deterministic distance functions it holds that \(d_{i}(\mathbf{\theta})<\epsilon\). It is worth noticing that for evaluating the unnormalized posterior, there is no need for solving the optimization problems and building the proposal regions. ``` 1:Compute eigenvectors \(\mathbf{v}_{m}\) of \(\mathbf{H}_{i}\) (\(m=1,\ldots,D\)) 2:for\(m\gets 1\) to \(D\)do 3:\(\tilde{\mathbf{\theta}}\leftarrow\mathbf{\theta}_{i}^{*}\) 4:\(k\gets 0\) 5:\(\eta\leftarrow\eta\_\text{start}\)\(\triangleright\) Initialize \(\eta\) 6:repeat 7:\(j\gets 0\) 8:repeat 9:\(\tilde{\mathbf{\theta}}\leftarrow\tilde{\mathbf{\theta}}+\eta\ \mathbf{v}_{m}\)\(\triangleright\) Large step size \(\eta\). 10:\(j\gets j+1\) 11:until\(d(g(\tilde{\mathbf{\theta}},\mathbf{u}=\mathbf{u}_{i}),\mathbf{y}_{\mathbf{0}})>\epsilon\) or \(j\geq M\)\(\triangleright\) Check distance or maximum iterations 12:\(\tilde{\mathbf{\theta}}\leftarrow\tilde{\mathbf{\theta}}-\eta\ \mathbf{v}_{m}\) 13:\(\eta\leftarrow\eta/2\)\(\triangleright\) More accurate region boundary 14:\(k\gets k+1\) 15:until\(k=K\) 16:if\(\tilde{\mathbf{\theta}}=\mathbf{\theta}_{i}^{*}\)then\(\triangleright\) Check if no step has been done 17:\(\tilde{\mathbf{\theta}}\leftarrow\tilde{\mathbf{\theta}}+\frac{\eta\_\text{start}}{2^{K}} \mathbf{v}_{m}\)\(\triangleright\) Then, make the minimum step 18:endif 19: Set \(\tilde{\mathbf{\theta}}\) as the positive end point along \(\mathbf{v}_{m}\) 20: Run steps 3 - 18 for \(\mathbf{v}_{m}=-\mathbf{v}_{m}\) and set \(\tilde{\mathbf{\theta}}\) as the negative end point along \(\mathbf{v}_{m}\) 21:endfor 22:Fit a rectangular box around the region end points and define \(q_{i}\) as uniform distribution ``` **Algorithm 2** Approximation \(C_{\epsilon}^{i}\) with a bounding box \(\hat{C}_{\epsilon}^{i}\); Requires: a model of distance \(d_{i}(\mathbf{\theta})\), an optimal point \(\mathbf{\theta}_{i}^{*}\), a number of refinements \(K\), a step size \(\eta\_\text{start}\), maximum iterations \(M\) and a curvature matrix \(\mathbf{H}_{i}\) (\(\mathbf{J}_{i}^{T}\mathbf{J}_{i}\) or GP Hessian) ``` 1:\(\mathbf{\theta}_{ij}\sim q_{i}\forall i\)\(\triangleright\) Sample parameters 2:for\(i\gets 1\) to \(n_{1}\)do 3:for\(j\gets 1\) to \(n_{2}\)do 4:if\(d_{i}(\mathbf{\theta}_{ij})\leq\epsilon\)then\(\triangleright\) Accept sample 5:\(w_{ij}=\frac{p(\mathbf{\theta}_{ij})}{q(\mathbf{\theta}_{ij})}\)\(\triangleright\) Compute weight 6: Store (\(w_{ij},\mathbf{\theta}_{ij}\))\(\triangleright\) Store weighted sample 7:endif 8:endfor 9:endfor 10:endfor ``` **Algorithm 3** Sampling. Requires a function of distance \(d_{i}\), the prior distribution \(p(\mathbf{\theta})\), the proposal distribution \(q_{i}\) ### General implementation principles The overview of our implementation is illustrated in Figure 2. Following Python naming principles, the methods starting with an underscore (green rectangles) represent internal (private) functions, whereas the rest (blue rectangles) are the methods exposed at the API. In Figure 2, it can be observed that the implementation follows Algorithm 1. The training part includes all the steps until the computation of the proposal regions, i.e., sampling the nuisance variables, defining the optimization problems, solving them, constructing the regions and fitting local surrogate models. 
The inference part comprises evaluating the unnormalized posterior (and the normalized one, when possible), sampling, and computing expectations. We also provide some utilities for inspecting the training process, such as plotting the histogram of the final distances or visualizing the constructed bounding boxes. Finally, in the evaluation part, we provide two methods for evaluating the inference; (a) computing the Effective Sample Size (ESS) of the samples and (b) measuring the divergence between the approximate posterior and the ground-truth, if the latter is available.3

Figure 2: Overview of the ROMC implementation. On the left side, we depict ROMC as a sequence of algorithmic steps. On the right side, we present the functions that form our implementation; the green rectangles (starting with underscore) are the internal functionalities and the blue rectangles the publicly exposed API. This side-by-side illustration highlights that our implementation follows strictly the algorithmic view of ROMC.

Footnote 3: Normally, the ground-truth posterior is not available. However, this functionality is useful in cases where the posterior can be computed numerically or with an alternative method, e.g., Rejection Sampling, and we want to measure the discrepancy between the two approximations.

#### Parallel version of ROMC

As discussed, ROMC has the significant advantage of being fully parallelisable. We exploit this fact by implementing a parallel version of the major fitting components; (a) solving the optimization problems, and (b) constructing the bounding box regions. We parallelize these processes using the built-in Python package **multiprocessing**. This package enables concurrency using sub-processes instead of threads, side-stepping the Global Interpreter Lock (GIL). To activate the parallel version of the algorithm, the user simply has to set elfi.ROMC(<output_node>, parallelize = True).

#### Simple one-dimensional example

To illustrate the functionalities, we use the following running example introduced by Ikonomov and Gutmann (2019), \[p(\theta)=\mathcal{U}(\theta;-2.5,2.5) \tag{13}\] \[p(y|\theta)=\left\{\begin{array}{ll}\theta^{4}+u&\text{if }\theta\in[-0.5,0.5]\\ |\theta|-c+u&\text{otherwise}\end{array}\right.\] (14) \[u\sim\mathcal{N}(0,1) \tag{15}\] The prior is a uniform distribution in the range \([-2.5,2.5]\) and the likelihood is defined in Equation 14. The constant \(c=0.5-0.5^{4}\) ensures that the PDF is continuous. There is only one observation, \(y_{0}=0\). The inference in this particular example can be performed quite easily without using a likelihood-free inference approach. We can exploit this fact for validating the accuracy of our implementation.
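Since the likelihood in Equation 14 is available in closed form, the ground-truth posterior for this example can be computed numerically, for instance with the grid-based sketch below (the grid resolution and the trapezoidal normalization are our own illustrative choices, not functionality of the package).

```python
import numpy as np
from scipy import stats

c = 0.5 - 0.5**4
y0 = 0.0

def likelihood(theta, y):
    """Exact likelihood p(y | theta) implied by Equations 13-15."""
    loc = np.where(np.abs(theta) <= 0.5, theta**4, np.abs(theta) - c)
    return stats.norm(loc=loc, scale=1.0).pdf(y)

theta_grid = np.linspace(-2.5, 2.5, 2001)
prior = stats.uniform(loc=-2.5, scale=5.0).pdf(theta_grid)
unnorm = prior * likelihood(theta_grid, y0)
posterior = unnorm / np.trapz(unnorm, theta_grid)  # ground-truth posterior on the grid
print("posterior mass integrates to:", round(np.trapz(posterior, theta_grid), 3))
```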
In the following code snippet, we define the model in **ELFI**:

    import elfi
    import scipy.stats as ss
    import numpy as np

    def simulator(t1, batch_size = 1, random_state = None):
        c = 0.5 - 0.5**4
        if t1 < -0.5:
            y = ss.norm(loc = -t1-c, scale = 1).rvs(random_state = random_state)
        elif t1 <= 0.5:
            y = ss.norm(loc = t1**4, scale = 1).rvs(random_state = random_state)
        else:
            y = ss.norm(loc = t1-c, scale = 1).rvs(random_state = random_state)
        return y

    # observation
    y = 0

    # Elfi graph
    t1 = elfi.Prior('uniform', -2.5, 5)
    sim = elfi.Simulator(simulator, t1, observed = y)
    d = elfi.Distance('euclidean', sim)

    # Initialize the ROMC inference method
    bounds = [(-2.5, 2.5)]  # bounds of the prior
    parallelize = False     # True activates parallel execution
    romc = elfi.ROMC(d, bounds = bounds, parallelize = parallelize)

## 4 Implemented functionalities

In this section, we analyze the functionalities of the training, the inference and the evaluation part. Extended documentation for each method can be found in **ELFI**'s official documentation. Finally, we describe how a user may extend ROMC with custom modules.

### Training part

In this section, we describe the six functions of the training part: >>> romc.solve_problems(n1, use_bo = False, optimizer_args = None) This method (a) draws integers for setting the seed, (b) defines the optimization problems and (c) solves them using either a gradient-based optimizer (default choice) or Bayesian optimization (BO), if use_bo = True. The tasks are completed sequentially, as shown in Figure 2. The optimization problems are defined after drawing n1 integer numbers from a discrete uniform distribution \(u_{i}\sim\mathcal{U}\{1,2^{32}-1\}\), where each integer \(u_{i}\) is the seed passed to **ELFI**'s random simulator. The user can pass a Dict with custom parameters to the optimizer through optimizer_args. For example, in the gradient-based case, the user may pass optimizer_args = {"method": "L-BFGS-B", "jac": jac}, to select the "L-BFGS-B" optimizer and use the callable jac to compute the gradients in closed-form. >>> romc.distance_hist(**kwargs) This function helps the user decide which threshold \(\epsilon\) to use by plotting a histogram of the distances at the optimal points \(d_{i}(\boldsymbol{\theta}_{i}^{*}):\{i=1,2,\ldots,n_{1}\}\), or of \(\tilde{d}_{i}^{*}\) in case use_bo = True. The function forwards the keyword arguments to the underlying pyplot.hist() of the matplotlib package. In this way the user may customize some properties of the histogram, e.g., the number of bins. >>> romc.estimate_regions(eps_filter, use_surrogate = None, fit_models = False) This method estimates the proposal regions around the optimal points, following Algorithm 2. The choice of distance function follows the previous optimization step; if a gradient-based optimizer has been used, then estimating the proposal region is based on the real distance \(d_{i}\). If BO is used, then the surrogate distance \(\hat{d}\) is chosen. Setting use_surrogate=False enforces the use of the real distance \(d\) even after BO. Finally, the parameter fit_models selects whether to fit local surrogate models \(\tilde{d}\) after estimating the proposal regions. The training part includes three more functions. The function romc.fit_posterior(*args) is syntactic sugar for applying .solve_problems() and .estimate_regions() in a single step. The function romc.visualize_region(i) plots the bounding box around the optimal point of the \(i\)-th optimization problem, when the parameter space is up to \(2D\).
Finally, romc.compute_eps(quantile) returns the appropriate distance value \(d_{i=\kappa}^{*}\), where \(\kappa=\lfloor quantile\cdot n\rfloor\), from the collection \(\{d_{i}^{*}\},i=\{1,\dots,n\}\), where \(n\) is the number of accepted solutions. It can be used to automate the selection of the threshold \(\epsilon\), e.g., eps = romc.compute_eps(quantile = 0.9).

### Example - Training part

In the following snippet, we put together the routines that make up the training part of the running example.

    # Training (fitting) part
    n1 = 500        # number of optimization problems
    seed = 21       # seed for solving the optimization problems
    use_bo = False  # set to True for switching to Bayesian optimization

    # Training step-by-step
    romc.solve_problems(n1 = n1, seed = seed, use_bo = use_bo)
    romc.distance_hist(bins = 100)    # plot hist to decide which eps to use
    eps = .75                         # threshold for the bounding box
    romc.estimate_regions(eps = eps)  # build the bounding boxes
    romc.visualize_region(i = 2)      # for inspecting visually the bounding box

    # Equivalent one-line command
    # romc.fit_posterior(n1 = n1, eps = eps, use_bo = use_bo, seed = seed)

### Inference part

In this section, we describe the four functions of the inference part: >>> romc.sample(n2) This is the basic functionality of the inference, drawing \(n_{2}\) samples for each bounding box region, giving a total of \(k\cdot n_{2}\) samples, where \(k<n_{1}\) is the number of optimal points that remain after filtering. The samples are drawn from a uniform distribution \(q_{i}\) defined over the corresponding bounding box, and the weight \(w_{i}\) is computed as in Algorithm 3. The inference part includes three more functions. The function romc.compute_expectation(h) computes the expectation \(\mathsf{E}_{p(\boldsymbol{\theta}|\mathbf{y_{0}})}[h(\boldsymbol{\theta})]\) as in Equation 11. The argument h can be any Python Callable. The method romc.eval_unnorm_posterior(theta, eps_cutoff = False) computes the unnormalized posterior approximation as in Equation 3. The method romc.eval_posterior(theta, eps_cutoff = False) evaluates the normalized posterior, estimating the partition function \(Z=\int p_{d,\epsilon}(\boldsymbol{\theta}|\mathbf{y_{0}})d\boldsymbol{\theta}\) using a Riemann integral approximation. The approximation is computationally feasible only in a low-dimensional parameter space.

### Example - Inference part

In the following code snippet, we use the inference utilities to (a) get weighted samples from the approximate posterior, (b) compute an expectation and (c) evaluate the approximate posterior. We also use some of **ELFI**'s built-in tools to get a summary of the obtained samples. For romc.compute_expectation(), we demonstrate its use to compute the sample mean and the sample variance. Finally, we evaluate romc.eval_posterior() at multiple points to plot the approximate posterior of Figure 3. We observe that the approximation is quite close to the ground-truth.

Figure 3: Histogram of distances and visualization of a specific region.

    # Inference part
    seed = 21
    n2 = 5
    romc.sample(n2 = n2, seed = seed)

    # visualize region, adding the samples now
    romc.visualize_region(i = 1)

    # Visualize marginal (built-in ELFI tool)
    weights = romc.result.weights
    romc.result.plot_marginals(weights = weights, bins = 100, range = (-3,3))

    # Summarize the samples (built-in ELFI tool)
```python
# Summarize the samples (built-in ELFI tool)
romc.result.summary()
# Method: ROMC
# Number of samples: 19300
# Parameter                Mean             2.5%            97.5%
# theta:                 -0.012           -1.985            1.987

# compute expectation
exp_val = romc.compute_expectation(h = lambda x: np.squeeze(x))
print("Expected value   : %.3f" % exp_val)
# Expected value: -0.012

exp_var = romc.compute_expectation(h = lambda x: np.squeeze(x)**2)
print("Expected variance: %.3f" % exp_var)
# Expected variance: 1.120

# eval unnorm posterior
print("%.3f" % romc.eval_unnorm_posterior(theta = np.array([[0]])))

# eval posterior
print("%.3f" % romc.eval_posterior(theta = np.array([[0]])))
```

### Evaluation part

The method romc.compute_ess() computes the Effective Sample Size (ESS) as \(\frac{(\sum_{i}w_{i})^{2}}{\sum_{i}w_{i}^{2}}\), which is a useful quantity to measure how many samples actually contribute to an expectation. For example, in the extreme case of a large population of samples where only one carries a large weight, the ESS is much smaller than the population size.

The method romc.compute_divergence(gt_posterior, bounds, step, distance) estimates the divergence between the ROMC approximation and the ground-truth posterior. Since the estimation is performed using Riemann's approximation, the method can only work in low-dimensional spaces. The method can be used for evaluation in synthetic examples where the ground truth is accessible. In a real-case scenario, where access to the ground-truth posterior is not expected, the user may instead pass the approximate posterior obtained with any other inference approach in order to compare the two methods. The argument step defines the step used in Riemann's approximation and the argument distance can be either "Jensen-Shannon" or "KL-divergence".

```python
# Evaluation part
res = romc.compute_divergence(wrapper, distance = "Jensen-Shannon")
print("Jensen-Shannon divergence: %.3f" % res)
# Jensen-Shannon divergence: 0.035

nof_samples = len(romc.result.weights)
ess = romc.compute_ess()
print("NofSamples: %d, ESS: %.3f" % (nof_samples, ess))
# NofSamples: 19300, ESS: 16196.214
```

### Extend the implementation with custom modules

ROMC is a generic LFI framework as it describes a sequence of steps for approximating the posterior distribution without explicitly enforcing a specific algorithm for each step. For completeness, Ikonomov and Gutmann (2019) propose a method for each step but, in general, a user can experiment with alternative methods. Considering that, we designed the implementation to support flexibility. We have specified four critical parts where a user may intervene using custom methods; (a) gradient-based optimization, (b) Bayesian optimization, (c) proposal region construction and (d) surrogate model fitting. Each of these parts corresponds to an internal function inside the romc.OptimisationProblem class; (a) solve_gradients(), (b) solve_bo(), (c) build_region() and (d) fit_local_surrogate(), respectively. To replace any of these parts, the user must create a custom class that inherits OptimisationProblem and overwrite the appropriate function(s).

To illustrate this in practice, suppose a user wants to fit Deep Neural Networks instead of the default quadratic models as local surrogates \(\tilde{d}_{i}\). Therefore, the user must create a new class that inherits OptimisationProblem and overwrite the fit_local_surrogate(**kwargs) function with one that fits neural networks as local surrogates. We illustrate that in the following snippet using the neural_network.MLPRegressor class of the **scikit-learn** package.
The reader can find the end-to-end example here as an online Colab notebook.

```python
from functools import partial
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import Pipeline

class CustomOptim(OptimisationProblem):
    def __init__(self, **kwargs):
        super(CustomOptim, self).__init__(**kwargs)

    def fit_local_surrogate(self, **kwargs):
        nof_samples = 500
        objective = self.objective  # the distance function

        # helper function
        def local_surrogate(theta, model_scikit):
            assert theta.ndim == 1
            theta = np.expand_dims(theta, 0)
            return float(model_scikit.predict(theta))

        # create local surrogate model as a function of theta
        def create_local_surrogate(model):
            return partial(local_surrogate, model_scikit = model)

        local_surrogates = []
        for i in range(len(self.regions)):
            # prepare dataset
            x = self.regions[i].sample(nof_samples)
            y = np.array([objective(ii) for ii in x])

            # train Neural Network
            mlp = MLPRegressor(hidden_layer_sizes = (10, 10), solver = 'adam')
            model = Pipeline([('linear', mlp)])
            model = model.fit(x, y)
            local_surrogates.append(create_local_surrogate(model))

        self.local_surrogates = local_surrogates
        self.state["local_surrogates"] = True
```

In a similar way, the user can replace any of the other three functions. In each case, the custom function must update some class-level variables that hold the state of the training phase. Below, we present these variables for each function. Furthermore, when implementing custom functions, the user may use two helper classes; (a) RomcOptimisationResult, which stores the result of the optimization, and (b) NDimBoundingBox, which stores the bounding box. We present their definitions in the following snippets and we illustrate how to use them in the next sections. Both classes can be imported from the module elfi.methods.inference.romc.

```python
class RomcOptimisationResult:
    def __init__(self, x_min, f_min, hess_appr):
        """
        Parameters
        ----------
        x_min: np.ndarray (D,), minimum
        f_min: float, distance at x_min
        hess_appr: np.ndarray (D,D), Hessian approximation at x_min
        """
```

```python
class NDimBoundingBox:
    def __init__(self, rotation, center, limits):
        """
        Parameters
        ----------
        rotation: np.ndarray (D,D), rotation matrix for the bounding box
        center: np.ndarray (D,), center of the bounding box
        limits: np.ndarray (D,2), the limits of the bounding box
        """
```

_(a) Extending gradient-based optimization_

For replacing the default gradient-based optimization method, the user must overwrite the function solve_gradients(). Using the objective function \(d_{i}\) (self.objective), the custom method must store the result of the optimization as a RomcOptimisationResult instance. In the following snippet, after the comment # state variables, we present the class-level variables that must be set by the method.

```python
def solve_gradients(self, **kwargs):
    # useful variables
    # self.objective: Callable, the distance function

    # code custom solution here
    result = RomcOptimisationResult(x_min = ..., f_min = ..., hess_appr = ...)
    success: bool = ...  # whether optimization was successful

    # state variables
    self.state["attempted"] = True
    if success:
        self.result = result
        self.state["solved"] = True
        return True
    else:
        return False
```

_(b) Extending Bayesian Optimization_

For replacing the default Bayesian optimization function, the procedure is similar to the gradient-based case. As presented in the following snippet, the additional class-level variables that must be set are: (a) self.surrogate = custom_surrogate, where custom_surrogate is a Callable, and (b) self.state["has_fit_surrogate"] = True if the optimization is successful.
```python
def solve_bo(self, **kwargs):
    # useful variables
    # self.objective: Callable, the distance function

    # code custom solution here
    result = RomcOptimisationResult(x_min = ..., f_min = ..., hess_appr = ...)
    custom_surrogate = ...  # store a Callable here
    success: bool = ...     # whether optimization was successful

    # state variables
    self.state["attempted"] = True
    if success:
        self.result = result
        self.surrogate = custom_surrogate
        self.state["solved"] = True
        self.state["has_fit_surrogate"] = True
        return True
    else:
        return False
```

_(c) Extending the proposal region construction_

For replacing the construction of the proposal region, the user must overwrite the build_region method. Using the objective function \(d_{i}\) (self.objective), the method must estimate a list of bounding boxes as a List with NDimBoundingBox instances and set the state variables presented below. An end-to-end example for using a custom region construction module can be found here.

```python
def build_region(self, **kwargs):
    # useful variables
    # self.objective: Callable, the distance function

    # custom build_region method
    eps: float = ...                           # epsilon used in region estimation
    bounding_box: List[NDimBoundingBox] = ...  # the estimated bounding boxes
    success: bool = ...                        # whether region built successfully

    # state variables
    self.eps_region = eps
    if success:
        # construct region
        self.regions = bounding_box
        self.state["region"] = True
        return True
    else:
        return False
```

_(d) Extending the surrogate model fitting_

For replacing the surrogate model fitting, the user must overwrite the fit_local_surrogate method. Using the objective function \(d_{i}\) (self.objective) and the estimated bounding boxes (self.regions), the method must create a list of local surrogates, one for each region, as a List with Callables and set the state variables as presented in the following snippet.

```python
def fit_local_surrogate(self, **kwargs):
    # useful variables
    # self.objective: Callable, the distance function
    # self.regions: List[NDimBoundingBox], the bounding boxes

    # custom local surrogates
    local_surrogates: List[Callable] = ...  # the surrogate models
    success: bool = ...                     # whether surrogates fit successfully

    # state variables
    if success:
        self.local_surrogates = local_surrogates
        self.state["local_surrogates"] = True
        return True
    else:
        return False
```

## 5 Use-case illustration

In this section, we test the implementation using the second-order moving average (MA2) example, which is one of the standard models of **ELFI**. We perform the inference using three different versions of ROMC; (i) with a gradient-based optimizer, (ii) with Bayesian Optimization and (iii) fitting a Neural Network as a surrogate model. The latter illustrates how to extend the implementation, replacing part of ROMC with a user-defined component. Finally, we measure the execution speed-up using the parallel version of ROMC.

### Model Definition

MA2 is a probabilistic model for time series analysis. The observation at time \(t\) is given by,

\[y_{t}=w_{t}+\theta_{1}w_{t-1}+\theta_{2}w_{t-2},\quad t=1,\dots,T \tag{16}\]

\[\theta_{1},\theta_{2}\in\mathbb{R},\quad w_{k}\sim\mathcal{N}(0,1),k\in\mathbb{Z} \tag{17}\]

The random variable \(w_{k}\sim\mathcal{N}(0,1)\) is white noise and the two parameters of interest, \(\theta_{1},\theta_{2}\), model the dependence on the previous observations. The parameter \(T\) is the number of sequential observations, which is set to \(T=100\).
To ensure that the inference problem is identifiable, i.e., the likelihood has only one mode, we use the prior proposed by Marin _et al._ (2012),

\[p(\boldsymbol{\theta})=p(\theta_{1})p(\theta_{2}|\theta_{1})=\mathcal{U}(\theta_{1};-2,2)\mathcal{U}(\theta_{2};\theta_{1}-1,\theta_{1}+1) \tag{18}\]

The observation vector \(\mathbf{y}_{0}=(y_{1},\dots,y_{100})\) is generated with \(\boldsymbol{\theta}^{*}=(0.6,0.2)\). The dimensionality of the output \(\mathbf{y}\) is high, therefore we use summary statistics. Considering that the output vector represents a time-series signal, we select the autocovariances with \(\text{lag}=1\) and \(\text{lag}=2\), as shown in Equations 19 and 20. The distance between the observation and the simulator output is measured with the squared Euclidean distance, as shown in Equation 22.

\[s_{1}(\mathbf{y})=\frac{1}{T-1}\sum_{t=2}^{T}y_{t}y_{t-1} \tag{19}\]
\[s_{2}(\mathbf{y})=\frac{1}{T-2}\sum_{t=3}^{T}y_{t}y_{t-2} \tag{20}\]
\[s(\mathbf{y})=(s_{1}(\mathbf{y}),s_{2}(\mathbf{y})) \tag{21}\]
\[d=||s(\mathbf{y})-s(\mathbf{y}_{0})||_{2}^{2} \tag{22}\]

### Inference

To demonstrate the full capabilities of our ROMC implementation, we perform inference using three different methods: (i) a gradient-based optimizer, (ii) Bayesian optimization, and (iii) fitting a neural network (NN) as a surrogate model. The use of a NN as a surrogate model serves as an example of the extensibility of our implementation, as described in Section 4.4. For the NN, we employ the **MLPRegressor** class from the **scikit-learn** package. The NN (\(\tilde{d}_{i}\)) substitutes the actual distance function (\(d_{i}\)) inside the proposal regions. Therefore, all inference actions, namely sampling, expectation computation, and posterior evaluation, are based on \(\tilde{d}_{i}\). We use a NN with two hidden layers of 10 neurons each and train it using 500 examples from each proposal region. To compare the results of ROMC inference with a traditional ABC algorithm, we also include Rejection Sampling in our analysis.

In Figure 4, we illustrate the acceptance region of the same deterministic simulator in the gradient-based and the Bayesian optimization case. The acceptance regions are quite similar even though the different optimization schemes lead to different optimal points. In Figure 5, we demonstrate the histograms of the marginal posteriors for each approach; (a) Rejection ABC, (b) ROMC with gradient-based optimization, (c) ROMC with Bayesian optimization and (d) ROMC with the NN extension. We observe a significant agreement between the different approaches. In Table 1 we present the empirical mean \(\mu\) and standard deviation \(\sigma\) for each inference approach and finally, in Figure 6, we illustrate the unnormalized posterior for the three different variations of the ROMC method. The results show that all ROMC variations provide results that are consistent with one another and in agreement with the Rejection ABC algorithm.

#### Parallelize the implementation

As stated above, ROMC is an approach that can be executed in a fully-parallelized manner, exploiting all CPU cores. In our implementation, we support a parallel version of the training part, namely, for solving the optimization problems and for estimating the proposal regions. The parallel version of the algorithm is built on top of the built-in Python package **multiprocess** for using all the available CPU cores.
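To make the parallel execution concrete, the following minimal sketch shows how the MA2 experiment could be set up with the parallel flag switched on, reusing the ROMC interface of Section 4. The sketch assumes that **ELFI** ships the MA2 example model under elfi.examples.ma2 with a distance node named 'd'; the bounds and numeric settings are illustrative assumptions, not the values used for the experiments reported here.

```python
# Minimal sketch (assumptions: elfi.examples.ma2 provides the MA2 model and
# its distance node is named 'd'; the bounds and settings below are illustrative).
import elfi
from elfi.examples import ma2

model = ma2.get_model(seed_obs = 21)   # MA2 model with observed data
bounds = [(-2, 2), (-3, 3)]            # search box for (theta1, theta2)
romc = elfi.ROMC(model['d'], bounds = bounds, parallelize = True)

# solve the optimization problems and estimate regions on all CPU cores
romc.fit_posterior(n1 = 100, eps_filter = .03, seed = 21)
romc.sample(n2 = 50, seed = 21)
romc.result.summary()
```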
In Figure 7 we observe the execution times for performing the inference on the MA2 model; the parallel version performs both tasks almost five times faster than the sequential one. The result is reasonable given that the experiments have been run on a single machine with the Intel(r) Core(tm) i7-8750H processor, which has six separate cores.

## 6 Summary and discussion

In this paper, we presented the implementation of the LFI method ROMC in the Python package **ELFI**. We highlighted two different use-cases. First, we illustrated how a user may exploit the provided API to solve an LFI problem. Second, we focused on the scenario where a researcher wants to intervene and alter parts of the method to experiment with new approaches. Since (Robust) Optimization Monte Carlo is a novel approach for statistical inference and, to the best of our knowledge, this is the first open-source implementation in a generic package, we believe that the latter is the biggest contribution.

There are still open challenges for enabling ROMC to solve high-dimensional LFI problems efficiently. The first is enabling the execution of ROMC in a distributed environment, i.e., a cluster of computers. ROMC can be characterized as an _embarrassingly parallel_ workload; each optimization problem is an entirely independent task. Therefore, supporting inference on a cluster of computers can radically speed up the inference. The second is the implementation of the method on a framework that supports automatic differentiation. Automatic differentiation is necessary for efficiently solving optimization problems, especially in high-dimensional parametric models.

Figure 5: Histogram of the marginal posterior distributions using three different inference approaches; (a) in the first row, the samples are obtained using Rejection ABC sampling, (b) in the second row, using ROMC with a gradient-based optimizer and (c) in the third row, using ROMC with the Bayesian optimization approach. The vertical (red) line represents the samples mean \(\mu\) and the horizontal (black) the standard deviation \(\sigma\).

## Computational details

The results in this paper were obtained using Python 3.9 and **ELFI 0.8.6** on the Ubuntu 20.04 LTS operating system, on a single machine with an Intel(r) Core(tm) i7-8750H processor.

## Acknowledgments

HP was funded by European Research Council grant 742158 (SCARABEE, Scalable inference algorithms for Bayesian evolutionary epidemiology).

Figure 6: The unnormalized posterior distribution using the ROMC method with (a) a gradient-based optimization (b) Bayesian Optimization (c) gradient-based with a Neural Network as a surrogate model.

Figure 7: Comparison between parallel and sequential execution of ROMC. We observe that the parallel version runs almost 5 times faster.
2309.04667
Nontrivial upper bound for chemical distance on the planar random cluster model
We extend the upper bounds derived for the horizontal and radial chemical distance for 2d Bernoulli percolation in [DHS21, SR20] to the planar random cluster model with cluster weight $1 \le q \le 4$. Along the way, we provide a complete proof of the strong arm separation lemma for the random cluster model.
Lily Reeves
2023-09-09T02:56:39Z
http://arxiv.org/abs/2309.04667v1
# Nontrivial upper bound for chemical distance on the planar random cluster model ###### Abstract. We extend the upper bounds derived for the horizontal and radial chemical distance for \(2d\) Bernoulli percolation in [6, 29] to the planar random cluster model with cluster weight \(1\leq q\leq 4\). Along the way, we provide a complete proof of the strong arm separation lemma for the random cluster model. The research of L. R. is supported by NSF grant DMS-2303316. ## 1. Introduction ### Random cluster model and chemical distance The random cluster model is a well-known dependent percolation model that generalizes classical models such as the Bernoulli percolation and Ising and Potts models. It was first introduced by Fortuin and Kasteleyn in 1972 [15]. Let \(G=(V,E)\) be a graph and \(\omega\) be a percolation configuration on \(G\) where each edge is colored open or closed. The random cluster measure is tuned by two parameters \(p\in(0,1)\), the edge weight, and \(q>0\), the cluster weight. Then, the random cluster measure is proportional to \[p^{\#\text{ open edges}}(1-p)^{\#\text{ closed edges}}q^{\#\text{ open clusters}}.\] One can immediately see that when \(q=1\), the random cluster measure coincides with the Bernoulli percolation measure. For integer \(q\geq 2\), the random cluster model corresponds to Ising and Potts models through the Edwards-Sokal coupling [14]. The random cluster model undergoes a phase transition at \(p_{c}(q)=\sqrt{q}/(1+\sqrt{q})\) in the sense that for \(p>p_{c}\), the probability the origin is contained in an infinite open cluster is positive while the probability is \(0\) for \(p<p_{c}\)[3]. Moreover, the phase transition is continuous (i.e. the probability the origin is contained in an infinite open cluster is \(0\) at criticality) for \(q\in[1,4]\)[12] and discontinuous for \(q>4\)[10, 27]. The object of interest in this article is the _chemical distance_. For two subsets \(A\) and \(B\) of \(V\) and a percolation configuration viewed as a subgraph of \(G\), the chemical distance is the graph distance between \(A\) and \(B\). Specifically, consider the box \(B(n)=[-n,n]^{2}\) on \(\mathbb{Z}^{2}\) and let \(\mathcal{H}_{n}\) be the event that there exists a horizontal open crossing between the left and right sides of the box, we denote by \(S_{n}\) the length of the shortest such crossing which we call _the_ chemical distance from now on. Chemical distances on planar Bernoulli percolation have received attention from physicists and mathematicians alike. In both the subcritical and supercritical regimes, the chemical distance is known to behave linearly [18, 2]. In the critical phase, while physics literature [13, 16, 19, 20, 21, 22] generally assumes the existence of an exponent \(s>0\) such that \[E\left[S_{n}\mid\mathcal{H}_{n}\right]\sim n^{s},\] there is no widely accepted conjecture on the value of \(s\), nor is there a precise interpretation of "\(\sim\)". The present known lower bound can be derived from the work of Aizenman and Burchard [1]: there is \(\eta>0\) such that, with high probability \[E\left[S_{n}\mid\mathcal{H}_{n}\right]\geq n^{1+\eta}.\] This bound applies to a general family of random curves satisfying certain conditions. This includes shortest open connections in the random cluster model. We remark further on this lower bound in Section 1.5. 
In [24], Kesten and Zhang note that the shortest horizontal open crossing can be compared to the _lowest crossing_ \(\ell_{n}\), which, by combinatorial arguments, consists of "_three-arm points_" and has expected size

\[E\left[\#\ell_{n}\mid\mathcal{H}_{n}\right]\leq Cn^{2}P(A_{3}(n)), \tag{1}\]

see [25]. Here \(A_{3}(n)\) is the event that there are two disjoint open and one dual-closed paths from the origin to distance \(n\). In [5, 6], Damron, Hanson, and Sosoe improve the upper bound by a factor of \(n^{-\delta}\) for some \(\delta>0\), thus obtaining the present known upper bound:

\[E\left[S_{n}\mid\mathcal{H}_{n}\right]\leq Cn^{2-\delta}P(A_{3}(n)). \tag{2}\]

In [29], Sosoe and the author obtain the same upper bound for the _radial_ chemical distance, which measures the expected length of the shortest open crossing from the origin to the boundary of the box \(B(n)\) conditional on the existence of such a crossing. Although there is no lowest crossing to compare to in the radial case, the construction of a path consisting of three-arm points serves as the foundation for the improvement.

When \(q\in[1,4]\), the random cluster model exhibits a continuous phase transition [12] and enjoys positive association (the FKG inequality). These facts, combined with recent developments in RSW-type quad-crossing probabilities [11], allow us to pursue an upper bound of the form (2):

**Theorem 1.1**.: _Fix \(1\leq q\leq 4\), \(p=p_{c}(q)\) and let \(\mathbb{E}\) denote the expectation with respect to the random cluster measure \(\phi^{\xi}_{p_{c},q,B(n)}\). For any boundary condition \(\xi\), there is a \(\delta>0\) and a constant \(C>0\) independent of \(n\) such that_

\[\mathbb{E}[S_{n}\mid\mathcal{H}_{n}]\leq Cn^{2-\delta}\phi^{\xi}_{p_{c},q,B(n)}(A_{3}(n)). \tag{3}\]

### Organization

In Section 1.3, we summarize the notations we use in this paper. In Section 1.4, we list a few results and tools for the random cluster model that we utilize. One of the persistent tools used in all of the above constructions is the so-called "gluing construction". In 2d critical Bernoulli percolation, this classical construction is realized by RSW estimates and the generalized FKG inequality. The latter is not known for the random cluster model. Therefore, we provide a detailed alternate argument for gluing constructions in Section 2.

The general strategy to prove Theorem 1.1 aligns with [6], which we outline in Section 5 to provide context. We aim to point to the similarities and highlight the differences between the proofs for the two models to ensure readability while minimizing the amount of repetition. In Sections 3 and 4, we provide the proofs of a large deviation bound conditional on a three-arm event and the random-cluster analogue of the strong arm separation lemma. Both proofs involve strategic applications of the domain Markov property to circumvent the lack of independence.

We can extend the result of the main theorem to the radial chemical distance, following the approach in [29] for Bernoulli percolation. Since most of the arguments in [29] rely solely on independence and gluing constructions, they extend to the random cluster model when substituted with the domain Markov property and gluing constructions detailed in Section 2. The remaining challenge is to find a way to bound the probability of a specific event without the use of Reimer's inequality. Such a method will be detailed in Section 6.
### Notations In this paper, we consider the random cluster model on the square lattice \((\mathbb{Z}^{2},\mathcal{E})\), that is a graph with vertex set \(\mathbb{Z}^{2}\) and edge set \(\mathcal{E}\) consisting of edges between all pairs of nearest-neighbor vertices. We often work with the random cluster model on a discrete subdomain of \(\mathbb{Z}^{2}\). A finite subdomain \(\mathcal{D}=(V,E)\) is defined by the (finite) edge set \(E\) and the vertex set \(V\) of all endpoints of the edges in \(E\). Its boundary \(\partial\mathcal{D}\) consists of the vertices in the topological boundary of \(\mathcal{D}\). A percolation configuration \(\omega=(\omega_{e})_{e\in E}\) on a domain \(\mathcal{D}=(V,E)\) is an element in the state space \(\Omega=\{0,1\}^{E}\) which assigns a status to each edge \(e\in E\). An edge \(e\) is said to be open in \(\omega\) if \(\omega_{e}=1\) and closed otherwise. A boundary condition \(\xi\) on \(\mathcal{D}\) is a partition of \(\partial\mathcal{D}\). All vertices in the same class of the partition are _wired together_ and count towards the same connected component when defining the probability measure. In the _free_ boundary condition, denoted by \(0\) in the superscript, no two vertices on the boundary are identified with each other. **Definition 1**.: Let \(\mathcal{D}=(V,E)\) be a subdomain of \(\mathbb{Z}^{2}\). For an edge weight parameter \(p\in[0,1]\) and a cluster weight parameter \(q>0\), the random cluster measure on \(\mathcal{D}\) with boundary condition \(\xi\) is defined by \[\phi_{p,q,\mathcal{D}}^{\xi}(\omega)=\frac{1}{Z_{p,q,\mathcal{D}}^{\xi}}p^{o( \omega)}(1-p)^{c(\omega)}q^{k(\omega^{\xi})}\] where \(o(\omega)\) is the number of open edges in \(\omega\), \(c(\omega)=|E|-o(\omega)\) is the number of closed edges in \(\omega\), \(k(\omega^{\xi})\) is the number of connected components of \(\omega\) with consideration of the boundary condition \(\xi\), and the partition function is defined by \[Z_{p,q,\mathcal{D}}^{\xi}=\sum_{\omega\in\{0,1\}^{E}}p^{o(\omega)}(1-p)^{c( \omega)}q^{k(\omega^{\xi})}.\] For the rest of this paper, we fix the cluster weight \(q\in[1,4]\) and edge weight \(p=p_{c}(q)=\sqrt{q}/(1+\sqrt{q})\) and we drop them from the notation. #### 1.3.1. Arm events and arm exponents To set up for the so-called arm events, we first introduce a duality. The dual lattice of the square lattice is written as \(((\mathbb{Z}^{2})^{*},\mathcal{E}^{*})\) where \((\mathbb{Z}^{2})^{*}=\mathbb{Z}^{2}+(1/2,1/2)\) and \(\mathcal{E}^{*}\) its nearest-neighbor edges. For each edge \(e\in\mathcal{E}\), \(e^{*}\in\mathcal{E}^{*}\) is the dual edge that shares the same midpoint as \(e\). Given \(\omega\in\Omega\), we obtain \(\omega^{*}\in\Omega^{*}=\{0,1\}^{\mathcal{E}^{*}}\) by the relation \(\omega_{e}=\omega_{e^{*}}^{*}\). The dual measure is of the form \[\phi_{p^{*},q,\mathcal{D}^{*}}^{\xi}(\omega^{*})\propto(p^{*})^{o(\omega^{*}) }(1-p^{*})^{c(\omega^{*})}q^{k((\omega^{*})^{\xi})} \tag{4}\] where the dual parameter \(p^{*}\) satisfies \[\frac{p^{*}p}{(1-p^{*})(1-p)}=q.\] A path (on either the primal or dual lattice) is a sequence \((v_{0},e_{1},v_{1},\ldots,v_{N-1},e_{N},v_{N})\) such that for all \(k=1,...,N,\|v_{k-1}-v_{k}\|_{1}=1\) and \(e_{k}=\{v_{k-1},v_{k}\}\). A circuit is a path with \(v_{0}=v_{N}\). 
If \(e_{k}\in\mathcal{E}\) for all \(k=1,\ldots,N\) and \(\omega(e_{k})=1\), we say \(\gamma=(e_{k})_{k=1,\ldots,N}\) is open; if \(e_{k}\in\mathcal{E}^{*}\) for all \(k=1,\ldots,N\) and \(\omega(e_{k})=0\), we say \(\gamma\) is a dual-closed path. We write \(B(n)\) for the domain induced by the edges in \([-n,n]^{2}\) and \(B(x,n)\) its translation by \(x\in\mathbb{Z}^{2}\). For \(n_{1}\leq n_{2}\), we denote annuli centered at some vertex \(x\) by

\[\operatorname{Ann}(x;n_{1},n_{2})=B(x,n_{2})\setminus B(x,n_{1}).\]

If the annulus is centered at the origin, we drop the \(x\) from the notation and instead write \(\operatorname{Ann}(n_{1},n_{2})\).

A path of consecutive open or dual-closed edges is called an arm. A color sequence \(\sigma\) of length \(k\) is a sequence \((\sigma_{1},\dots,\sigma_{k})\in\{O,C\}^{k}\). Each \(\sigma_{i}\) indicates a "color", with \(O\) representing open and \(C\) representing dual-closed. For \(n_{1}\leq n_{2}\) and a vertex \(x\), we define a \(k\)_-arm event_ with color sequence \(\sigma\) to be the event that there are \(k\) disjoint paths whose colors are specified by \(\sigma\) in the annulus \(\operatorname{Ann}(x;n_{1},n_{2})\) connecting \(\partial B(x,n_{1})\) to \(\partial B(x,n_{2})\). Formally,

\[A_{k,\sigma}(x;n_{1},n_{2}):=\left\{\partial B(x,n_{1})\xleftrightarrow[\sigma]{\operatorname{Ann}(x;n_{1},n_{2})}\partial B(x,n_{2})\right\}.\]

We write \(A\xleftrightarrow[\sigma_{1}]{\mathcal{D}}B\) to denote that the vertex sets \(A\) and \(B\) are connected through a path of color \(\sigma_{1}\) in the domain \(\mathcal{D}\). For \(A_{k,\sigma}(x;n_{1},n_{2})\) to occur, we let \(n_{0}(k)\) be the smallest integer such that \(|\partial B(n_{0}(k))|\geq k\) and let \(n_{1}\geq n_{0}(k)\). Color sequences that are equivalent up to cyclic order denote the same arm event.

For this paper, there are a few special arm events. Let us fix \(n\) and the boundary condition \(\xi\) throughout the paper unless otherwise stated. Let \(0\leq n_{1}<n_{2}\leq n\).

**The three-arm event:** We denote by \(\pi_{3}(n_{1},n_{2})\) the probability of the three-arm event \(A_{3}(n_{1},n_{2})=A_{3,OOC}(n_{1},n_{2})\) that there are two open arms and one dual-closed arm in the annulus \(\operatorname{Ann}(n_{1},n_{2})\):

\[\pi_{3}(n_{1},n_{2}):=\phi_{B(n)}^{\xi}(A_{3}(n_{1},n_{2})).\]

**The alternating five-arm event:** There exist \(c,C>0\) such that

\[c(n_{1}/n_{2})^{2}\leq\phi_{B(n)}^{\xi}(A_{5,OCOCO}(n_{1},n_{2}))\leq C(n_{1}/n_{2})^{2}.\]

Thus, the alternating five-arm event is said to have the universal arm exponent \(2\). For proof, see [11, Proposition 6.6].

We remark that the dependencies on the cluster weight \(q\) are implicit in the notations above. Constants denoted by \(C,c\), quantities denoted by \(\alpha,\delta,\epsilon\), and boundary conditions denoted by \(\eta,\iota\) are not necessarily consistent throughout the paper and their values may differ from line to line.

### Properties of the random cluster model

Elaboration on the following properties can be found in [17] and [9] or as cited.

_Domain Markov property_. For any configuration \(\omega^{\prime}\in\{0,1\}^{E}\) and any subdomain \(\mathcal{F}=(W,F)\) with \(F\subset E\),

\[\phi_{\mathcal{D}}^{\xi}(\cdot_{|F}\mid\omega_{e}=\omega_{e}^{\prime},\forall e\notin F)=\phi_{\mathcal{F}}^{\xi^{\prime}}(\cdot)\]

where the boundary conditions \(\xi^{\prime}\) on \(\mathcal{F}\) are defined as follows: \(x\) and \(y\) on \(\partial\mathcal{F}\) are wired if they are connected in \(\omega_{|E\setminus F}^{\xi}\).
Quad-crossing RSW.[11, Theorem 1.2] Fix \(1\leq q<4\) and \(p=p_{c}(q)\). For every \(M>0\), there exists \(\eta=\eta(M)\in(0,1)\) such that for any discrete quad \((\mathcal{D},a,b,c,d)\) and any boundary conditions \(\xi\), if the extremal distance \(\ell_{\mathcal{D}}[(ab),(cd)]\in[M^{-1},M]\), then \[\eta\leq\phi_{\mathcal{D}}^{\xi}[(ab)\xrightarrow[]{\mathcal{D}}(cd)]\leq 1-\eta. \tag{5}\] _FKG inequality_. Fix \(q\geq 1\) and a domain \(\mathcal{D}=(V,E)\) of \(\mathbb{Z}^{2}\). An event \(A\) is called increasing if for any \(\omega\leq\omega^{\prime}\) (for the partial order on \(\{0,1\}^{E}\)), \(\omega\in A\) implies that \(\omega^{\prime}\in A\). For every increasing events \(A\) and \(B\), \[\phi^{\xi}_{\mathcal{D}}(A\cap B)\geq\phi^{\xi}_{\mathcal{D}}(A)\phi^{\xi}_{ \mathcal{D}}(B).\] We remark that there is no known proof for the equivalent of the generalized FKG inequality for the random cluster model. _Quasi-multiplicativity_. [11, Proposition 6.3] Fix \(1\leq q<4\) and \(\sigma\). There exist \(c=c(\sigma,q)>0\) and \(C=C(\sigma,q)>0\) such that for any boundary condition \(\xi\) and every \(n_{0}(k)\leq n_{1}\leq n_{3}\leq n_{2}\), \[c\phi^{\xi}_{\mathcal{D}}(A_{\sigma}(n_{1},n_{2}))\leq\phi^{\xi}_{\mathcal{D} }(A_{\sigma}(n_{1},n_{3}))\phi^{\xi}_{\mathcal{D}}(A_{\sigma}(n_{3},n_{2})) \leq C\phi^{\xi}_{\mathcal{D}}(A_{\sigma}(n_{1},n_{2})).\] _Lack of Reimer's inequality_. Despite being a classical tool for Bernoulli percolation, the van den Berg-Kesten/Reimer's inequality is not known in the general form for the random cluster model, nor do we expect it to be true. A weak form of Reimer's inequality for the random cluster model is shown in [30]. This is an issue which will be discussed in Section 6. ### Lower bound for the random cluster model The Aizenman-Burchard lower bound (1) applies when the following criterion on the probability of simultaneous traversals of separated rectangles is satisfied: A collection of rectangles \(R_{j}\) is called _well-separated_ when the distance between any two rectangles is at least as large as the diameter of the larger. The following criterion is formulated for the random cluster measure. **Hypothesis** ([1]).: _Fix \(\delta>0\). There exist \(\sigma>0\) and some \(\rho<1\) with which for every collection of \(k\) well-separated rectangles, \(A_{1},\ldots,A_{k}\), of aspect ratio \(\sigma\) and lengths \(\ell_{1},\ldots,\ell_{k}\geq\delta n\),_ \[\phi^{\xi}_{B(n)}\left(\begin{array}{c}A_{1},\ldots,A_{k}\text{ are traversed (in}\\ \text{the long direction) by segments}\\ \text{of an open crossing}\end{array}\right)\leq C\rho^{k}.\] This hypothesis is satisfied as a consequence of the weak (polynomial) mixing property [8]: There exists \(\alpha>0\) such that for any \(2\ell<m<n\) and any event \(A\) depending only on edges in \(B(\ell)\) and event \(B\) depending only on the edges in \(\operatorname{Ann}(m,n)\), \[|\phi^{\xi}_{B(n)}(A\cap B)-\phi^{\xi}_{B(n)}(A)\phi^{\xi}_{B(n)}(B)|\leq \left(\frac{\ell}{m}\right)^{\alpha}\phi^{\xi}_{B(n)}(A)\phi^{\xi}_{B(n)}(B),\] uniform in the boundary condition \(\xi\). ### Acknowledgment We thank Reza Gheissari for a conversation that inspired this project. And we thank Philippe Sosoe for the many helpful discussions and detailed comments on a draft. ## 2. Gluing Construction Without Generalized FKG Inequality This section is dedicated to carefully examining the gluing construction for the random cluster model. The notations used in this section are independent of the rest of the paper. 
Fix \(\delta>0\), a positive integer \(k\), and \(n_{1}<n_{2}<n_{3}\) sufficiently large. Let \(E(n_{1},n_{2})\) be the event such that:

1. there exist vertices \(x_{i}\in\partial B(n_{2})\) for \(i=1,\ldots,k\) with \(\min_{i\neq j}|x_{i}-x_{j}|\geq 10\delta n_{2}\) such that \(x_{i}\) is connected to \(\partial B(x_{i},2\delta n_{2})\cap B(n_{2})^{c}\) by a path of color \(\sigma_{i}\);
2. \(E(n_{1},n_{2})\) depends only on the status of the edges in \(\operatorname{Ann}(n_{1},n_{2})\cup(\cup_{i=1}^{k}B(x_{i},2\delta n_{2}))\).

Similarly, let \(F(2n_{2},n_{3})\) be the event such that:

1. there exist vertices \(y_{i}\in\partial B(2n_{2})\) for \(i=1,\ldots,k\) with \(\min_{i\neq j}|y_{i}-y_{j}|\geq 20\delta n_{2}\) such that \(y_{i}\) is connected to \(\partial B(y_{i},2\delta n_{2})\cap B(2n_{2})\) by a path of color \(\sigma_{i}\);
2. \(F(2n_{2},n_{3})\) depends only on the status of the edges in \(\operatorname{Ann}(2n_{2},n_{3})\cup(\cup_{i=1}^{k}B(y_{i},2\delta n_{2}))\).

**Proposition 2.1**.: _Let \(E(n_{1},n_{2})\), \(F(2n_{2},n_{3})\) be as above. Then there exists \(c>0\) depending only on \(k\), such that_

\[\phi_{\operatorname{Ann}(n_{1},n_{3})}^{\xi}\left(E(n_{1},n_{2})\cap F(2n_{2},n_{3})\cap\bigcap_{i=1}^{k}\left\{x_{i}\xleftrightarrow[\sigma_{i}]{\operatorname{Ann}(n_{2},2n_{2})}y_{i}\right\}\right)\geq c\,\phi_{\operatorname{Ann}(n_{1},n_{3})}^{\xi}(E(n_{1},n_{2})\cap F(2n_{2},n_{3})). \tag{6}\]

Proof.: Conditional on \(E(n_{1},n_{2})\cap F(2n_{2},n_{3})\), we construct a set of \(k\) corridors, \(T_{1},\ldots,T_{k}\), each connecting \(B(x_{i},2\delta n_{2})\cap B(n_{2})^{c}\) to \(B(y_{i},2\delta n_{2})\cap B(2n_{2})\). Let \(\gamma_{1},\ldots,\gamma_{k}\) be a collection of (topological) paths that satisfy the following constraints:

* \(\gamma_{i}\) is a path in \(\operatorname{Ann}(n_{2},2n_{2})\) from \(x_{i}\) to \(y_{i}\).
* The distance between any two \(\gamma_{i},\gamma_{j}\) is at least \(10\delta n_{2}\).
* The length of each \(\gamma_{i}\) is at most \(Cn_{2}\) for some constant \(C\).

Then, we let \(T_{i}\) be the \(\delta n_{2}\) neighborhood of \(\gamma_{i}\) intersected with \(\operatorname{Ann}(n_{2},2n_{2})\). The \(T_{i}\)'s are disjoint by construction.

We show (6) by dividing both sides by the right-hand side, converting the left-hand side into a conditional probability, and noting that

\[\phi_{\operatorname{Ann}(n_{1},n_{3})}^{\xi}\left(\bigcap_{i=1}^{k}\left\{x_{i}\xleftrightarrow[\sigma_{i}]{\operatorname{Ann}(n_{2},2n_{2})}y_{i}\right\}\ \middle|\ E(n_{1},n_{2})\cap F(2n_{2},n_{3})\right)\geq\phi_{\operatorname{Ann}(n_{1},n_{3})}^{\xi}\left(\bigcap_{i=1}^{k}\left\{x_{i}\xleftrightarrow[\sigma_{i}]{T_{i}}y_{i}\right\}\ \middle|\ E(n_{1},n_{2})\cap F(2n_{2},n_{3})\right).\]

It suffices to provide a constant lower bound for the right-hand side. We first use the tower rule for conditional expectations to isolate the occurrence of \(\{x_{1}\xleftrightarrow[\sigma_{1}]{T_{1}}y_{1}\}\):

\[\phi_{\operatorname{Ann}(n_{1},n_{3})}^{\xi}\left(\bigcap_{i=1}^{k}\left\{x_{i}\xleftrightarrow[\sigma_{i}]{T_{i}}y_{i}\right\}\ \middle|\ E(n_{1},n_{2})\cap F(2n_{2},n_{3})\right) \tag{7}\]
\[=\mathbf{E}\left[\mathbf{E}\left[\mathbf{1}\left\{\bigcap_{i=1}^{k}\left\{x_{i}\xleftrightarrow[\sigma_{i}]{T_{i}}y_{i}\right\}\right\}\ \middle|\ \omega|_{T_{1}^{c}},E(n_{1},n_{2})\cap F(2n_{2},n_{3})\right]\ \middle|\ E(n_{1},n_{2})\cap F(2n_{2},n_{3})\right]. \tag{8}\]

Here \(\mathbf{E}\) denotes the expectation with respect to the measure \(\phi_{\operatorname{Ann}(n_{1},n_{3})}^{\xi}\).
Since \(\bigcap_{i=2}^{k}\{x_{i}\xleftrightarrow[\sigma_{i}]{T_{i}}y_{i}\}\) is \(\omega|_{T_{1}^{c}}\)-measurable, the right-hand side can be rewritten as

\[\mathbf{E}\left[\mathbf{E}\left[\mathbf{1}\left\{x_{1}\xleftrightarrow[\sigma_{1}]{T_{1}}y_{1}\right\}\ \middle|\ \omega|_{T_{1}^{c}},E(n_{1},n_{2})\cap F(2n_{2},n_{3})\right]\cdot\mathbf{1}\left\{\bigcap_{i=2}^{k}\left\{x_{i}\xleftrightarrow[\sigma_{i}]{T_{i}}y_{i}\right\}\right\}\ \middle|\ E(n_{1},n_{2})\cap F(2n_{2},n_{3})\right]. \tag{9}\]

We write the inner conditional expectation in (9) back in conditional probability form as \(\phi_{\operatorname{Ann}(n_{1},n_{3})}^{\xi}\left(x_{1}\xleftrightarrow[\sigma_{1}]{T_{1}}y_{1}\ \middle|\ \omega|_{T_{1}^{c}},E(n_{1},n_{2})\cap F(2n_{2},n_{3})\right)\). Note that \(\{x_{1}\xleftrightarrow[\sigma_{1}]{T_{1}}y_{1}\}\) occurs if the following events occur simultaneously:

* \(x_{1}\) is connected to \(\partial B(x_{1},2\delta n_{2})\cap B(n_{2})^{c}\) by a \(\sigma_{1}\)-path in \(B(x_{1},2\delta n_{2})\cap B(n_{2})^{c}\);
* \(y_{1}\) is connected to \(\partial B(y_{1},2\delta n_{2})\cap B(2n_{2})\) by a \(\sigma_{1}\)-path in \(B(y_{1},2\delta n_{2})\cap B(2n_{2})\);
* there is a \(\sigma_{1}\)-path in \(T_{1}\) connecting the two short sides of \(T_{1}\);
* there is a half \(\sigma_{1}\)-circuit enclosing \(x_{1}\) in the half annulus \(\operatorname{Ann}(x_{1};\delta n_{2},2\delta n_{2})\cap B(n_{2})^{c}\), the event of which we denote by \(\mathcal{C}_{1}\); and
* there is a half \(\sigma_{1}\)-circuit enclosing \(y_{1}\) in the half annulus \(\operatorname{Ann}(y_{1};\delta n_{2},2\delta n_{2})\cap B(2n_{2})\), the event of which we denote by \(\mathcal{C}_{2}\),

see Figure 1. Since all these events are \(\sigma_{1}\)-connection events, the FKG inequality applies. To simplify notation, we denote by \(\varphi(\cdot)\) the conditional measure \(\phi_{\operatorname{Ann}(n_{1},n_{3})}^{\xi}(\cdot\mid\omega|_{T_{1}^{c}},E(n_{1},n_{2})\cap F(2n_{2},n_{3}))\). Then,

\[\varphi\left(x_{1}\xleftrightarrow[\sigma_{1}]{T_{1}}y_{1}\right)\geq\varphi\left(\partial B(n_{2})\cap T_{1}\xleftrightarrow[\sigma_{1}]{T_{1}}\partial B(2n_{2})\cap T_{1}\right)\varphi(\mathcal{C}_{1})\varphi(\mathcal{C}_{2})\]
\[\cdot\varphi\left(x_{1}\xleftrightarrow[\sigma_{1}]{B(x_{1},\delta n_{2})\cap B(n_{2})^{c}}\partial B(x_{1},\delta n_{2})\cap B(n_{2})^{c}\right)\]
\[\cdot\varphi\left(y_{1}\xleftrightarrow[\sigma_{1}]{B(y_{1},2\delta n_{2})\cap B(2n_{2})}\partial B(y_{1},2\delta n_{2})\cap B(2n_{2})\right).\]

The two probabilities on the last two lines are both \(1\) because the occurrence of the events is guaranteed by the conditioning on \(E(n_{1},n_{2})\) and \(F(2n_{2},n_{3})\). The cost of the half circuits is constant by the RSW inequality. Since the width and length of the corridor \(T_{1}\) are in constant proportion, again by the RSW inequality, the probability of having a \(\sigma_{1}\)-path connecting the two ends of the corridor is also constant.
Therefore,

\[\phi_{\operatorname{Ann}(n_{1},n_{3})}^{\xi}\left(x_{1}\xleftrightarrow[\sigma_{1}]{T_{1}}y_{1}\ \middle|\ \omega|_{T_{1}^{c}},E(n_{1},n_{2})\cap F(2n_{2},n_{3})\right)\geq c.\]

Plugging this back into (9), we have

\[(7)\geq\mathbf{E}\left[c\,\mathbf{1}\left\{\bigcap_{i=2}^{k}\left\{x_{i}\xleftrightarrow[\sigma_{i}]{T_{i}}y_{i}\right\}\right\}\ \middle|\ E(n_{1},n_{2})\cap F(2n_{2},n_{3})\right]=c\,\mathbf{E}\left[\mathbf{1}\left\{\bigcap_{i=2}^{k}\left\{x_{i}\xleftrightarrow[\sigma_{i}]{T_{i}}y_{i}\right\}\right\}\ \middle|\ E(n_{1},n_{2})\cap F(2n_{2},n_{3})\right].\]

Applying the same procedure sequentially to each \(i=2,\ldots,k\), we obtain a uniform lower bound.

Figure 1. Constructions near the endpoints \(x_{1},x_{2}\).

## 3. Large Deviation Bound Conditional on Three Arms

In this section, we prove Theorem 3.1, a large deviation bound conditional on a three-arm event. This is one of the main ingredients in the proof of Theorem 1.1, which we outline in Section 5, and one that requires significant modification to extend to the random cluster model.

Before stating the theorem, we first introduce some notations and definitions. Fix some integer \(N>0\) and for any integer \(k\geq 1\), we define \(\mathfrak{C}_{k}\) to be the event that there is a dual-closed circuit in \(\operatorname{Ann}(2^{kN},2^{(k+1)N})\) with two defect dual-open edges. Similarly, let \(\mathfrak{D}_{k}\) be the event that there is an open circuit in \(\operatorname{Ann}(2^{kN},2^{(k+1)N})\) with one defect closed edge.

**Definition 2**.: For any \(k\geq 1\), we define \(\hat{\mathfrak{C}}_{k}\), the compound circuit event in \(\operatorname{Ann}(2^{10kN},2^{10(k+1)N})\), as the simultaneous occurrence of the following events:

1. for \(i=1,4,6,9\), \(\mathfrak{C}_{10k+i}\) occurs in \(\operatorname{Ann}(2^{(10k+i)N},2^{(10k+i+1)N})\) and
2. \(\mathfrak{D}_{10k}\) occurs in \(\operatorname{Ann}(2^{10kN},2^{(10k+1)N})\).

**Theorem 3.1** ([6], Theorem 4.1).: _There exist universal \(c_{1}>0\) and \(N_{0}\geq 1\) such that for any \(N\geq N_{0}\), any \(L^{\prime},L\geq 0\) satisfying \(L-L^{\prime}\geq 40\), and any event \(E_{k}\) satisfying_

1. \(E_{k}\) _depends on the status of the edges in_ \(\operatorname{Ann}(2^{kN},2^{(k+1)N})\) _and_
2. _there exists a uniform constant_ \(c_{2}>0\) _such that_ \(\phi_{B(n)}^{\xi}(E_{10k+5}\cap\hat{\mathfrak{C}}_{k}\mid A_{3}(2^{L}))\geq c_{2}\) _for all_ \(n\geq 0\) _and_ \(0\leq k\leq\frac{L}{10N}-1\)_._

_Then,_

\[\phi_{B(n)}^{\xi}\left(\sum_{k=\lceil\frac{L^{\prime}}{10N}\rceil}^{\lfloor\frac{L}{10N}\rfloor-1}\mathbf{1}\{E_{10k+5},\hat{\mathfrak{C}}_{k}\}\leq c_{1}c_{2}\frac{L-L^{\prime}}{N}\ \Bigg{|}\ A_{3}(2^{L})\right)\leq\exp\left(-c_{1}c_{2}\frac{L-L^{\prime}}{N}\right).\]
Since these two steps themselves apply to general random variables, we provide a high-level summary of the proof and only recreate the parts that are sensitive to the model. To start, we want to condition on there existing sufficiently many decoupling circuits \(\hat{\mathfrak{C}}_{k}\). We quantify this probability using the next Proposition which is proved later in this section. **Proposition 3.2** ([6], Proposition 4.2).: _There exist \(c_{3}>0\) and \(N_{0}\geq 1\) such that for all \(N\geq N_{0}\) and \(L,L^{\prime}\geq 0\) with \(L-L^{\prime}\geq 40\),_ \[\phi_{B(n)}^{\xi}\left(\#I_{L^{\prime},L}\leq c_{3}\frac{L-L^{\prime}}{N}\ \Bigg{|}\ A_{3}(2^{L})\right)\leq\exp(-c_{3}(L-L^{\prime})).\] Combining Proposition 3.2 and the Chernoff bound, we have \[\phi_{B(n)}^{\xi}\left(\#I_{L^{\prime},L}\leq c_{4}\frac{L-L^{ \prime}}{N}\ \Bigg{|}\ A_{3}(2^{L})\right)\\ \leq\exp(-c_{3}(L-L^{\prime}))+\exp\left(c_{4}\frac{L-L^{\prime}} {N}\right)\mathbb{E}\left[e^{-\#I_{L^{\prime},L}}\mathbf{1}\{\#J_{L^{\prime},L }>c_{3}\frac{L-L^{\prime}}{N}\}\ \Bigg{|}\ A_{3}(2^{L})\right].\] We decompose the expectation over all possible sets of \(J_{L^{\prime},L}\). \[\sum_{\mathcal{J}:\#\mathcal{J}\geq c_{3}\frac{L-L^{\prime}}{N}}\mathbb{E} \left[e^{-\#I_{L^{\prime},L}}\ \big{|}\ J_{L^{\prime},L}=\mathcal{J},A_{3}(2^{L})\right]\phi_{B(n)}^{\xi}(J_{L^{ \prime},L}=\mathcal{J}\ \big{|}\ A_{3}(2^{L})). \tag{10}\] We enumerate \(\mathcal{J}=\{k_{1},\ldots,k_{R}\}\). Then, conditional on \(J_{L^{\prime},L}=\mathcal{J}\), we have \(\#I_{L^{\prime},L}=\sum_{r=1}^{R}\mathbf{1}\{E_{10k_{r}+5}\}\). Define the filtration \((\mathcal{F}_{r})\) by \[\mathcal{F}_{r}=\sigma\{E_{10k_{1}+5},\ldots,E_{10k_{r-1}+5}\}\cap\{J_{L^{ \prime},L}=\mathcal{J}\}\cap A_{3}(2^{L})\quad\text{for }r=1,\ldots,R.\] Thus, the expectation in (10) can be expanded as \[\mathbb{E}[e^{-\mathbf{1}\{E_{10k_{1}+5}\}}\cdots\mathbb{E}[e^{-\mathbf{1}\{E _{10k_{R}-1+5}\}}]\mathbb{E}[e^{-\mathbf{1}\{E_{10k_{R}+5}\}}\mid\mathcal{F}_{ R}]\mid\mathcal{F}_{R-1}]\cdots\mid\mathcal{F}_{1}]\] where for each \(r=1,\ldots,R\), we have \[\mathbb{E}[e^{-\mathbf{1}\{E_{10k_{r}+5}\}}\mid\mathcal{F}_{r}]=1-(1-e^{-1}) \phi_{B(n)}^{\xi}(E_{10k_{r}+5}\mid\mathcal{F}_{r}). \tag{11}\] Thus, we introduce the following lemma to give a uniform bound on the conditional probability above and decouple \(E_{10k_{r}+5}\) from \(\sigma\{E_{10k_{1}+5},\ldots,E_{10k_{r-1}+5}\}\) while conditional on \(A_{3}(2^{L})\). **Lemma 3.3**.: _There exists a universal constant \(c_{5}>0\) such that the following holds. For any \(k,L\geq 0\) and \(N\geq 1\) satisfying \(k\leq\left\lfloor\frac{L}{10N}\right\rfloor-1\) and any events \(F\) and \(G\) depending on the status of edges in \(B(2^{10kN})\) and \(B(2^{10(k+1)N})^{c}\) respectively, one has_ \[\phi_{B(n)}^{\xi}(E_{10k+5}\mid\hat{\mathbb{C}}_{k},A_{3}(2^{L}),F,G)\geq c_ {5}\phi_{B(n)}^{\xi}(E_{10k+5}\mid\hat{\mathbb{C}}_{k},A_{3}(2^{L})). \tag{12}\] Let \(k=\lceil\frac{L^{\prime}}{10N}\rceil,\ldots,\lfloor\frac{L}{10N}\rfloor-1\). For any \(F\) depending on edges in \(B(2^{10kN})\) and any \(\mathcal{J}\) containing \(k\), we have as a result of Lemma 3.3 \[\phi_{B(n)}^{\xi}(E_{10k+5}\mid\hat{\mathbb{C}}_{k},F,J_{L^{\prime},L}= \mathcal{J},A_{3}(2^{L}))\geq c_{5}\phi_{B(n)}^{\xi}(E_{10k+5}\mid\hat{ \mathbb{C}}_{k},A_{3}(2^{L}))\geq c_{5}\tilde{c}_{0},\] with the second inequality owing to (32) and gluing constructions (Proposition 2.1) for mitigating \(\hat{\mathbb{C}}_{k}\). 
Inserting back into (11), we have

\[\mathbb{E}[e^{-\mathbf{1}\{E_{10k_{r}+5}\}}\mid\mathcal{F}_{r}]\leq 1-c_{5}\tilde{c}_{0}(1-e^{-1}).\]

Putting everything together, we have

\[\phi_{B(n)}^{\xi}\left(\#I_{L^{\prime},L}\leq c_{4}\frac{L-L^{\prime}}{N}\mid A_{3}(2^{L})\right)\leq\exp(-c_{3}(L-L^{\prime}))+\exp\left(c_{4}\frac{L-L^{\prime}}{N}\right)(1-c_{5}\tilde{c}_{0}(1-e^{-1}))^{c_{3}\frac{L-L^{\prime}}{N}}.\]

This implies the existence of some universal \(c_{1}>0\) such that

\[\phi_{B(n)}^{\xi}\left(\#I_{L^{\prime},L}\leq c_{1}c_{2}\frac{L-L^{\prime}}{N}\mid A_{3}(2^{L})\right)\leq\exp\left(-c_{1}c_{2}\frac{L-L^{\prime}}{N}\right).\]

Proof of Proposition 3.2.: We first note the set relation

\[\left\{\#J_{L^{\prime},L}\leq c_{3}\frac{L-L^{\prime}}{N},A_{3}(2^{L})\right\}\subset A_{3}\left(2^{10N\lceil\frac{L^{\prime}}{10N}\rceil}\right)\cap\left\{\bigcap_{m=10\lceil\frac{L^{\prime}}{10N}\rceil}^{10\lfloor\frac{L}{10N}\rfloor-1}A_{3}(2^{mN},2^{(m+1)N}),\#J_{L^{\prime},L}\leq c_{3}\frac{L-L^{\prime}}{N}\right\}\cap A_{3}\left(2^{10N\lfloor\frac{L}{10N}\rfloor},2^{L}\right).\]

Since the three events on the right-hand side depend on disjoint sets of edges, we apply the domain Markov property twice, first on \(B(2^{10N\lceil\frac{L^{\prime}}{10N}\rceil})\) and then on \(B(2^{10N\lfloor\frac{L}{10N}\rfloor})\), and obtain

\[\phi_{B(n)}^{\xi}\left(\#J_{L^{\prime},L}\leq c_{3}\frac{L-L^{\prime}}{N},A_{3}(2^{L})\right)\]
\[\qquad\leq\mathbb{E}_{B(n)}^{\xi}\left[\phi_{B(2^{10N\lceil\frac{L^{\prime}}{10N}\rceil})}^{\xi^{\prime}}\left(A_{3}\left(2^{10N\lceil\frac{L^{\prime}}{10N}\rceil}\right)\right)\right]\times\phi_{B(n)}^{\xi}\left(\bigcap_{m=10\lceil\frac{L^{\prime}}{10N}\rceil}^{10\lfloor\frac{L}{10N}\rfloor-1}A_{3}(2^{mN},2^{(m+1)N}),\#J_{L^{\prime},L}\leq c_{3}\frac{L-L^{\prime}}{N},A_{3}(2^{10N\lfloor\frac{L}{10N}\rfloor},2^{L})\right)\]
\[\qquad\leq\mathbb{E}_{B(n)}^{\xi}\left[\phi_{B(2^{10N\lceil\frac{L^{\prime}}{10N}\rceil})}^{\xi^{\prime}}\left(A_{3}\left(2^{10N\lceil\frac{L^{\prime}}{10N}\rceil}\right)\right)\right]\times\mathbb{E}_{B(n)}^{\xi}\left[\phi_{B(2^{10N\lfloor\frac{L}{10N}\rfloor})}^{\xi^{\prime\prime}}\left(\bigcap_{m=10\lceil\frac{L^{\prime}}{10N}\rceil}^{10\lfloor\frac{L}{10N}\rfloor-1}A_{3}(2^{mN},2^{(m+1)N}),\#J_{L^{\prime},L}\leq c_{3}\frac{L-L^{\prime}}{N}\right)\right]\times\phi_{B(n)}^{\xi}\left(A_{3}(2^{10N\lfloor\frac{L}{10N}\rfloor},2^{L})\right). \tag{13}\]

Here, \(\xi^{\prime}\) and \(\xi^{\prime\prime}\) are implicit random variables of boundary conditions on \(B(2^{10N\lceil\frac{L^{\prime}}{10N}\rceil})\) and \(B(2^{10N\lfloor\frac{L}{10N}\rfloor})\) respectively. Since three-arm probabilities are of the same order uniform over boundary conditions, it suffices to show that (13) can be bounded by

\[O\left(\exp(-c(L-L^{\prime}))\,\phi_{B(2^{10N\lfloor\frac{L}{10N}\rfloor})}^{\xi}\left(A_{3}(2^{10N\lceil\frac{L^{\prime}}{10N}\rceil},2^{10N\lfloor\frac{L}{10N}\rfloor})\right)\right). \tag{14}\]

Here the choice of \(\xi\) is arbitrary. For each scale \(m\), let \(X_{m}\) be the indicator function of the event that \(A_{3}(2^{10mN},2^{10(m+1)N})\) occurs but \(\hat{\mathfrak{C}}_{m}\) does not.
Then,

\[\left\{\bigcap_{m=10\lceil\frac{L^{\prime}}{10N}\rceil}^{10\lfloor\frac{L}{10N}\rfloor-1}A_{3}(2^{mN},2^{(m+1)N}),\#J_{L^{\prime},L}\leq c_{3}\frac{L-L^{\prime}}{N}\right\}\subset\left\{\sum_{m=\lceil\frac{L^{\prime}}{10N}\rceil}^{\lfloor\frac{L}{10N}\rfloor-1}X_{m}\geq\lfloor\frac{L}{10N}\rfloor-\lceil\frac{L^{\prime}}{10N}\rceil-c_{3}\frac{L-L^{\prime}}{N}\right\}.\]

Although the \(X_{m}\)'s are not independent, when applying the domain Markov property, the dependence between events is only reflected in the boundary condition. We will establish an upper bound uniform over boundary conditions for each \(X_{m}\), thus allowing us access to a set of independent \(Y_{m}\)'s that stochastically dominate the \(X_{m}\)'s.

**Lemma 3.4**.: _For each \(m\), let \(Y_{m}\) be an independent Bernoulli random variable with parameter \(p_{m}=5c2^{-\alpha N}\phi_{B(2^{10(m+1)N})}^{\xi}\big{(}A_{3}(2^{10mN},2^{10(m+1)N})\big{)}\). Then, \(X_{m}\) is stochastically dominated by \(Y_{m}\)._

Proof.: We give a uniform upper bound to \(\mathbf{P}(X_{m}=1)=\phi_{B(2^{10(m+1)N})}^{\xi}\left(A_{3}(2^{10mN},2^{10(m+1)N}),\hat{\mathfrak{C}}_{m}^{c}\right)\). By a union bound,

\[\begin{split}\phi_{B(2^{10(m+1)N})}^{\xi}&\left(A_{3}(2^{10mN},2^{10(m+1)N}),\hat{\mathfrak{C}}_{m}^{c}\right)\\ &=\phi_{B(2^{10(m+1)N})}^{\xi}\left(A_{3}(2^{10mN},2^{10(m+1)N}),\mathfrak{D}_{10m}^{c}\cup\bigcup_{i=1,4,6,9}\mathfrak{C}_{10m+i}^{c}\right)\\ &\leq\phi_{B(2^{10(m+1)N})}^{\xi}\left(A_{3}(2^{10mN},2^{10(m+1)N}),\mathfrak{D}_{10m}^{c}\right)\\ &\quad+\sum_{i=1,4,6,9}\phi_{B(2^{10(m+1)N})}^{\xi}\left(A_{3}(2^{10mN},2^{10(m+1)N}),\mathfrak{C}_{10m+i}^{c}\right).\end{split} \tag{15}\]

For each \(i\) and similarly for the probability with \(\mathfrak{D}_{10m}^{c}\), we use the domain Markov property twice,

\[\begin{split}\phi_{B(2^{10(m+1)N})}^{\xi}&\left(A_{3}(2^{10mN},2^{10(m+1)N}),\mathfrak{C}_{10m+i}^{c}\right)\\ &\leq\mathbb{E}_{B(2^{10(m+1)N})}^{\xi}\left[\phi_{B(2^{10(m+1)N})}^{\eta_{1}}\left(A_{3}(2^{10mN},2^{(10m+i)N})\right)\right]\\ &\quad\times\mathbb{E}_{B(2^{10(m+1)N})}^{\xi}\left[\phi_{B(2^{10(m+i+1)N})}^{\eta_{2}}\left(A_{3}(2^{(10m+i)N},2^{(10m+i+1)N}),\mathfrak{C}_{10m+i}^{c}\right)\right]\\ &\quad\times\phi_{B(2^{10(m+1)N})}^{\xi}\left(A_{3}(2^{(10m+i+1)N},2^{10(m+1)N})\right),\end{split} \tag{16}\]

where \(\eta_{1},\eta_{2}\) are random variables of boundary conditions.

**Claim 1**.: _There exist \(\alpha\in(0,1)\) and \(c_{6}>0\) such that for any boundary condition \(\eta\), we have_

\[\begin{split}\phi_{B(2^{(10m+i+1)N})}^{\eta}&\left(A_{3}(2^{(10m+i)N},2^{(10m+i+1)N}),\mathfrak{C}_{10m+i}^{c}\right)\\ &\leq c_{6}2^{-\alpha N}\phi_{B(2^{(10m+i+1)N})}^{\eta}\left(A_{3}(2^{(10m+i)N},2^{(10m+i+1)N})\right),\\ \phi_{B(2^{(10m+1)N})}^{\eta}&\left(A_{3}(2^{10mN},2^{(10m+1)N}),\mathfrak{D}_{10m}^{c}\right)\\ &\leq c_{6}2^{-\alpha N}\phi_{B(2^{(10m+1)N})}^{\eta}\left(A_{3}(2^{10mN},2^{(10m+1)N})\right).\end{split}\]

Plugging this back into (16) and using gluing constructions (see Section 2), we have

\[\phi_{B(2^{10(m+1)N})}^{\xi}\left(A_{3}(2^{10mN},2^{10(m+1)N}),\mathfrak{C}_{10m+i}^{c}\right)\leq c_{6}2^{-\alpha N}\phi_{B(2^{10(m+1)N})}^{\xi}\left(A_{3}(2^{10mN},2^{10(m+1)N})\right). \tag{17}\]

Similarly,

\[\phi_{B(2^{10(m+1)N})}^{\xi}\left(A_{3}(2^{10mN},2^{10(m+1)N}),\mathfrak{D}_{10m}^{c}\right)\leq c_{6}2^{-\alpha N}\phi_{B(2^{10(m+1)N})}^{\xi}\left(A_{3}(2^{10mN},2^{10(m+1)N})\right).
\tag{18}\]

Plugging (17) and (18) into (15) and using quasi-multiplicativity, we have

\[\phi_{B(2^{10(m+1)N})}^{\xi}\left(A_{3}(2^{10mN},2^{10(m+1)N}),\hat{\mathfrak{C}}_{m}^{c}\right)\leq c_{7}2^{-\alpha N}\phi_{B(2^{10(m+1)N})}^{\xi}\left(A_{3}(2^{10mN},2^{10(m+1)N})\right).\]

Choosing \(p_{m}\) to be the right-hand side gives us that \(Y_{m}\) stochastically dominates \(X_{m}\).

Note that there exists \(\beta\) such that \(p_{m}\in(2^{-\beta N},1)\) for all \(m\). We use an elementary lemma from [6] on the concentration of independent Bernoulli random variables.

**Lemma 3.5** ([6], Lemma 4.3).: _Given \(\epsilon_{0}\in(0,1)\) and \(M\geq 1\), if \(Y_{1},\ldots,Y_{M}\) are any independent Bernoulli random variables with parameters \(p_{1},\ldots,p_{M}\), respectively, satisfying \(p_{i}\in[\epsilon_{0},1]\) for all \(i\), then for all \(r\in(0,1)\),_

\[\mathbf{P}\left(\sum_{m=1}^{M}Y_{m}\geq rM\right)\leq(1/\epsilon_{0})^{M(1-r)}2^{M}\prod_{m=1}^{M}p_{m}.\]

Applying Lemma 3.5 by taking \(M=\lfloor\frac{L}{10N}\rfloor-\lceil\frac{L^{\prime}}{10N}\rceil\) and \(r=1-20c_{3}\), we have

\[\mathbf{P}\Biggl{(}\sum_{m=\lceil\frac{L^{\prime}}{10N}\rceil}^{\lfloor\frac{L}{10N}\rfloor-1}X_{m}\geq\lfloor\frac{L}{10N}\rfloor-\lceil\frac{L^{\prime}}{10N}\rceil-c_{3}\frac{L-L^{\prime}}{N}\Biggr{)}\leq\mathbf{P}\Biggl{(}\sum_{m=\lceil\frac{L^{\prime}}{10N}\rceil}^{\lfloor\frac{L}{10N}\rfloor-1}Y_{m}\geq\lfloor\frac{L}{10N}\rfloor-\lceil\frac{L^{\prime}}{10N}\rceil-c_{3}\frac{L-L^{\prime}}{N}\Biggr{)}\leq\left(2^{20c_{3}\beta N+1}\right)^{\lfloor\frac{L}{10N}\rfloor-\lceil\frac{L^{\prime}}{10N}\rceil}\prod_{m=\lceil\frac{L^{\prime}}{10N}\rceil}^{\lfloor\frac{L}{10N}\rfloor-1}p_{m}. \tag{19}\]

Plugging in \(p_{m}=c_{7}2^{-\alpha N}\phi_{B(2^{10(m+1)N})}^{\xi}\left(A_{3}(2^{10mN},2^{10(m+1)N})\right)\), we have

\[(19)\leq\left(2^{20c_{3}\beta N+1}c_{7}2^{-\alpha N}\right)^{\lfloor\frac{L}{10N}\rfloor-\lceil\frac{L^{\prime}}{10N}\rceil}\prod_{m=\lceil\frac{L^{\prime}}{10N}\rceil}^{\lfloor\frac{L}{10N}\rfloor-1}\phi_{B(2^{10(m+1)N})}^{\xi}\left(A_{3}(2^{10mN},2^{10(m+1)N})\right).\]
:eq:eq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eq:eq:eq:eqeq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eq:eqeq:eq:eq:eqeq:eqeq:eq:eqeq:eqeq:eq:eqeq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eqeq:eq:eqeq:eq:eqeq:eqeq:eqeq:eq:eq:eq:eqeq:eqeqeq:eq:eqeq:eqeq:eqeq:eq:eq:eqeq:eqeq:eqeq:eqeq:eq:eq:eqeq:eqeq:eq:eq:eqeq:eqeq:eq:eqeq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eqeq:eqeq:eqeq:eq:eq:eqeq:eq:eq:eqeq:eqeq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eq:eqeq:eq:eq:eq:eq:eqeq \[\phi^{\eta}_{B(2^{(k+1)N})}\left(A_{3}(2^{kN},2^{(k+1)N})\circ A_{1,O} (2^{kN},2^{(k+1)N})\right)\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad\qquad\times\phi^{\eta}_{B(2^{(k+1)N})}\left(A_{3}^{I,J}(2^{kN},2^{(k+1)N })\circ A_{1,O}(2^{kN},2^{(k+1)N})\right).\] Let \(\gamma_{1}\) be the counterclockwise-most open arm from \(I_{1}\) to \(J_{1}\) that is disjoint from at least one open arm from \(I_{2}\) to \(J_{2}\) and let \(\gamma_{3}\) be the clockwise-most dual-closed arm from \(I_{3}\) to \(J_{3}\). Let \(\mathcal{U}\) denote the random region bounded by \(\gamma_{1}\), \(\gamma_{3}\), \(\partial B(2^{kN})\), and \(\partial B(2^{(k+1)N})\) that also contains an open arm from \(I_{2}\) to \(J_{2}\). We say that \(U\) is "admissible" if \(U\) is a possible value of the random region \(\mathcal{U}\). Conditional on the location of \(\mathcal{U}\), \[\phi^{\eta}_{B(2^{(k+1)N})}\left(A_{3}^{I,J}(2^{kN},2^{(k+1)N}) \circ A_{1,O}(2^{kN},2^{(k+1)N})\right)\] \[\qquad\qquad\qquad=\sum_{\text{admissible }U}\mathbb{E}^{\eta}_{B(2^{(k+1)N})}\left[\phi^{t}_{B(2^{(k+1)N})\setminus U} \left(A_{1,O}(2^{kN},2^{(k+1)N})\right)\right]\phi^{\eta}_{B(2^{(k+1)N})}( \mathcal{U}=U)\] where the boundary condition \(\iota\) is a random variable depending on \(U\) and \(\eta\). The one-arm event in \(B(2^{(k+1)N})\setminus U\) can be estimated using quad-crossing RSW estimates (5) and an analogue of quasi-multiplicativity \(N\) times as follows: \[\phi^{t}_{B(2^{(k+1)N})\setminus U}(A_{1,O}(2^{kN},2^{(k+1)N}))\leq c\prod_{ \ell=kN}^{(k+1)N}\phi^{t}_{B(2^{(k+1)N})\setminus U}(A_{1}(2^{\ell},2^{\ell+1 }))\leq c(2^{-\alpha})^{N}.\] Thus, \[\phi^{\eta}_{B(2^{(k+1)N})}\left(A_{3}(2^{kN},2^{(k+1)N}),\mathfrak{C}^{c}_{k} \right)\leq c2^{-\alpha N}\phi^{\eta}_{B(2^{(k+1)N})}\left(A_{3}(2^{kN},2^{(k +1)N})\right).\] By the same reasoning, \[\phi^{\eta}_{B(2^{(k+1)N})}\left(A_{3}(2^{kN},2^{(k+1)N}),\mathfrak{D}^{c}_{k} \right)\leq c^{\prime}2^{-\alpha N}\phi^{\eta}_{B(2^{(k+1)N})}\left(A_{3}(2^{ kN},2^{(k+1)N})\right).\] Now we prove Lemma 3.3. Proof of Lemma 3.3.: Recall that \(F\) and \(G\) depend on the status of edges in \(B(2^{10kN})\) and \(B(2^{10(k+1)N})^{c}\) respectively. We focus on demonstrating how we remove the conditioning on \(F\) as the other side is similar. \[\phi^{\xi}_{B(n)}(E_{10k+5}\mid\hat{\mathfrak{C}}_{k},A_{3}(2^{L}),F,G)\geq c_ {8}\phi^{\xi}_{B(n)}(E_{10k+5}\mid\hat{\mathfrak{C}}_{k},A_{3}(2^{L}),G). \tag{20}\] Recall from Definition 2 that \(\hat{\mathfrak{C}}_{k}\) is the simultaneous occurrence of a stack of circuits. We use these circuits to separate \(F\) and \(E_{10k+5}\) while conditioning on \(A_{3}(2^{L})\). A dual-closed circuit \(\mathcal{C}\) with two defect edges is naturally divided into two arcs between these defects consisting of open connections. 
Fix some deterministic ordering of arcs and label the two arcs \(\operatorname{Arc}_{1}(\mathcal{C})\) and \(\operatorname{Arc}_{2}(\mathcal{C})\) in this ordering. Let \(X_{-}(\mathcal{C},i)\) be the event such that: 1. \(\mathfrak{D}_{10k}\) occurs; 2. \(\mathcal{C}\) is the _innermost_ dual-closed circuit with two defect edges in \(\operatorname{Ann}(2^{(10k+1)N},2^{(10k+2)N})\); 3. the origin is connected to the two defects in \(\mathcal{C}\) through two disjoint open paths; 4. \((\frac{1}{2},-\frac{1}{2})\) is connected to \(\operatorname{Arc}_{i}(\mathcal{C})\) through a dual-closed path. Note that the occurrence of \(\mathfrak{D}_{10k}\), together with the two open paths through the origin, guarantees that item (4) only occurs for one of the two arcs. Hence \(X_{-}(\mathcal{C},i)\) occurs for exactly one choice of \(\mathcal{C}\) and \(i\). Similarly, let \(X_{+}(\mathcal{D},j)\) be the event such that: 1. \(\mathbf{\xi}_{10k+i}\) occurs for \(i=6,9\); 2. \(\mathcal{D}\) is the _innermost_ dual-closed circuit with two defect edges in \(\operatorname{Ann}(2^{(10k+4)N},2^{(10k+5)N})\); 3. the two defects are connected to \(\partial B(2^{L})\) through disjoint open paths; 4. \(\operatorname{Arc}_{j}(\mathcal{D})\) is connected to \(\partial B(2^{L})\) through a dual-closed path. We also need a three-arm event between \(\mathcal{C}\) and \(\mathcal{D}\). The dual-closed arm connects an arc of \(\mathcal{C}\) and an arc of \(\mathcal{D}\) and the indices of the arcs are important. Thus, let \(X(\mathcal{C},\mathcal{D},i,j)\) be the event such that: 1. there is a dual-closed path connecting \(\operatorname{Arc}_{i}(\mathcal{C})\) and \(\operatorname{Arc}_{j}(\mathcal{D})\) in the region between \(\mathcal{C}\) and \(\mathcal{D}\); 2. there is a pair of disjoint open paths in the region between \(\mathcal{C}\) and \(\mathcal{D}\) connecting a defect of \(\mathcal{C}\) and a defect of \(\mathcal{D}\). Note that for each pair of \(i\) and \(j\), topologically there is only one possible way to connect the defects of \(\mathcal{C}\) and \(\mathcal{D}\) with open paths. Then, the occurrence of \(\mathbf{\hat{\mathbb{C}}}_{k}\) allows for a decomposition in admissible \(\mathcal{C}\), \(\mathcal{D}\), and \(i,j=1,2\) for the following event. \[\phi^{\xi}_{B(n)}(E_{10k+5},\mathbf{\hat{\mathbb{C}}}_{k},A_{3}( 2^{L}),F,G)\] \[\qquad=\sum_{\mathcal{C},\mathcal{D},i,j}\phi^{\xi}_{B(n)}(F,X_{- }(\mathcal{C},i),X(\mathcal{C},\mathcal{D},i,j),X_{+}(\mathcal{D},j),E_{10k+5 },G).\] Since \(\mathcal{C}\) is the innermost circuit, its position can be determined via exploration in the interior of \(\mathcal{C}\). Using the domain Markov property first on \(\operatorname{Ext}(\mathcal{C}):=B(n)\setminus\operatorname{Int}(\mathcal{C})\), the boundary condition induced by the configuration in \(\operatorname{Int}(\mathcal{C})\) identifies only the two defect edges in \(\mathcal{C}\), which we denote by \(0^{*}(\mathcal{C})\). Thus, we have \[\phi^{\xi}_{B(n)}(F,X_{-}(\mathcal{C},i),X(\mathcal{C},\mathcal{D },i,j),X_{+}(\mathcal{D},j),E_{10k+5},G)\] \[\qquad\qquad=\phi^{\xi}_{B(n)}(F,X_{-}(\mathcal{C},i))\phi^{0^{*} (\mathcal{C})}_{\operatorname{Ext}(\mathcal{C})}(X(\mathcal{C},\mathcal{D},i, j),X_{+}(\mathcal{D},j),E_{10k+5},G).\] Using the domain Markov property again on \(\operatorname{Ext}(\mathcal{D})\), the configuration in \(\operatorname{Int}(\mathcal{D})\setminus\operatorname{Ext}(\mathcal{C})\) induces a free boundary condition on \(\mathcal{D}\). 
Thus, we have \[\phi^{\xi}_{B(n)}(E_{10k+5},\mathbf{\hat{\mathbb{C}}}_{k},A_{3}( 2^{L}),F,G) \tag{21}\] \[\qquad=\sum_{\mathcal{C},\mathcal{D},i,j}\phi^{\xi}_{B(n)}(F,X_{- }(\mathcal{C},i))\phi^{0^{*}(\mathcal{C})}_{\operatorname{Ext}(\mathcal{C})}( X(\mathcal{C},\mathcal{D},i,j))\phi^{0}_{\operatorname{Ext}(\mathcal{D})}(X_{+}( \mathcal{D},j),E_{10k+5},G).\] where \(0\) is the free boundary condition. A similar decomposition gives, \[\phi^{\xi}_{B(n)}(\mathbf{\hat{\mathbb{C}}}_{k},A_{3}(2^{L}),G) \tag{22}\] \[\qquad=\sum_{\mathcal{C}^{\prime},\mathcal{D}^{\prime},i^{\prime },j^{\prime}}\phi^{\xi}_{B(n)}(X_{-}(\mathcal{C}^{\prime},i^{\prime}))\phi^{0^{ *}(\mathcal{C}^{\prime})}_{\operatorname{Ext}(\mathcal{C}^{\prime})}(X( \mathcal{C}^{\prime},\mathcal{D}^{\prime},i^{\prime},j^{\prime}))\phi^{0}_{ \operatorname{Ext}(\mathcal{D}^{\prime})}(X_{+}(\mathcal{D}^{\prime},j^{\prime }),G).\] Multiplying (21) and (22) gives \[\phi^{\xi}_{B(n)}(E_{10k+5},\mathbf{\hat{\mathbb{C}}}_{k},A_{3}( 2^{L}),F,G)\phi^{\xi}_{B(n)}(\mathbf{\hat{\mathbb{C}}}_{k},A_{3}(2^{L}),G)\] \[\qquad=\sum_{\begin{subarray}{c}\mathcal{C},\mathcal{D},i,j\\ C^{\prime},\mathcal{D}^{\prime},i^{\prime},j^{\prime}\end{subarray}}\Big{[}\phi^{ \xi}_{B(n)}(F,X_{-}(\mathcal{C},i))\phi^{0^{*}(\mathcal{C})}_{\operatorname{ Ext}(\mathcal{C})}(X(\mathcal{C},\mathcal{D},i,j))\phi^{0}_{ \operatorname{Ext}(\mathcal{D})}(X_{+}(\mathcal{D},j),E_{10k+5},G)\] \[\qquad\qquad\times\phi^{\xi}_{B(n)}(X_{-}(\mathcal{C}^{\prime},i^{\prime}))\phi^{0^{*}(\mathcal{C}^{\prime})}_{\operatorname{Ext}(\mathcal{C} ^{\prime})}(X(\mathcal{C}^{\prime},\mathcal{D}^{\prime},i^{\prime},j^{\prime})) \phi^{0}_{\operatorname{Ext}(\mathcal{D}^{\prime})}(X_{+}(\mathcal{D}^{\prime},j ^{\prime}),G)\Big{]}. \tag{23}\] We use the following estimate which is the random cluster analogue of [7, Lemma 6.1]. It is essentially a corollary of the so-called strong separation lemmas, the random cluster version of which we prove in Section 4. To get from the strong separation lemmas to the following estimate, a proof sketch of no essential difference can be found in [7]. There exists a uniform constant \(c_{9}\) such that the following holds for all choices of circuits \(\mathcal{C},\mathcal{C}^{\prime},\mathcal{D},\mathcal{D}^{\prime}\) and arc indices \(i,i^{\prime},j,j^{\prime}\): \[\frac{\phi_{\operatorname{Ext}(C)}^{0^{\prime}(C)}(X(\mathcal{C},\mathcal{D} ^{\prime},i,j^{\prime}))\phi_{\operatorname{Ext}(C^{\prime})}^{0^{\prime}(C^{ \prime})}(X(C^{\prime},\mathcal{D},i^{\prime},j))}{\phi_{\operatorname{Ext}(C)} ^{0^{\prime}(C)}(X(\mathcal{C},\mathcal{D},i,j))\phi_{\operatorname{Ext}(C^{ \prime})}^{0^{\prime}(C^{\prime})}(X(C^{\prime},\mathcal{D}^{\prime},i^{ \prime},j^{\prime}))}<c_{9}. 
\tag{24}\] Applying (24) to the summand of (23), we have \[\phi_{B(n)}^{\xi}(F,X_{-}(\mathcal{C},i))\phi_{\operatorname{Ext }(C)}^{0^{\prime}(C)}(X(\mathcal{C},\mathcal{D},i,j))\phi_{\operatorname{Ext} (\mathcal{D})}^{0}(X_{+}(\mathcal{D},j),E_{10k+5},G)\] \[\qquad\qquad\times\phi_{B(n)}^{\xi}(X_{-}(C^{\prime},i^{\prime}) )\phi_{\operatorname{Ext}(C^{\prime})}^{0^{\prime}(C^{\prime})}(X(C^{\prime},\mathcal{D}^{\prime},i^{\prime},j^{\prime}))\phi_{\operatorname{Ext}( \mathcal{D}^{\prime})}^{0}(X_{+}(\mathcal{D}^{\prime},j^{\prime}),G)\] \[> c_{9}^{-1}\phi_{B(n)}^{\xi}(F,X_{-}(C,i))\phi_{\operatorname{Ext }(C)}^{0^{\prime}(C)}(X(\mathcal{C},\mathcal{D}^{\prime},i,j^{\prime}))\phi_{ \operatorname{Ext}(\mathcal{D}^{\prime})}^{0}(X_{+}(\mathcal{D}^{\prime},j^{ \prime}),G)\] \[\qquad\qquad\times\phi_{B(n)}^{\xi}(X_{-}(C^{\prime},i^{\prime}) )\phi_{\operatorname{Ext}(C^{\prime})}^{0^{\prime}(C^{\prime})}(X(C^{\prime},\mathcal{D},i^{\prime},j))\phi_{\operatorname{Ext}(\mathcal{D})}^{0}(X_{+}( \mathcal{D},j),E_{10k+5},G).\] Summing over \(\mathcal{C},\mathcal{D},i,j,C^{\prime},\mathcal{D}^{\prime},i^{\prime},j^{ \prime}\), by the domain Markov property, \[> c_{9}^{-1}\sum_{\begin{subarray}{c}\mathcal{C},\mathcal{D},i,j\\ \mathcal{C}^{\prime},\mathcal{D}^{\prime},i^{\prime},j^{\prime}\end{subarray}} \left[\phi_{B(n)}^{\xi}(F,X_{-}(C,i))\phi_{\operatorname{Ext}(\mathcal{D}^{ \prime})}^{0}(X_{+}(\mathcal{D}^{\prime},j^{\prime}),G)\phi_{\operatorname{ Ext}(C)}^{0^{\prime}(C)}(X(\mathcal{C},\mathcal{D}^{\prime},i,j^{\prime}))\right.\] \[\qquad\times\phi_{B(n)}^{\xi}(X_{-}(C^{\prime},i^{\prime}))\phi_{ \operatorname{Ext}(\mathcal{D})}^{0}(X_{+}(\mathcal{D},j),E_{10k+5},G)\phi_{ \operatorname{Ext}(C^{\prime})}^{0^{\prime}(C^{\prime})}(X(C^{\prime}, \mathcal{D},i^{\prime},j))\right]\] \[= c_{9}^{-1}\phi_{B(n)}^{\xi}(\mathbf{\hat{C}}_{k},A_{3}(2^{L}),F,G)\phi_{B(n)}^{\xi}(E_{10k+5},\mathbf{\hat{C}}_{k},A_{3}(2^{L}),G). \tag{23}\] Dividing both sides by \(\phi_{B(n)}^{\xi}(\mathbf{\hat{C}}_{k},A_{3}(2^{L}),G)\phi_{B(n)}^{\xi}( \mathbf{\hat{C}}_{k},A_{3}(2^{L}),F,G)\), we have (20) with \(c_{8}=c_{9}^{-1}\). From (20), we can remove the conditioning on \(G\) using a nearly identical argument. Although the above proof is formally similar to the proof of Lemma 4.4 in [6], it heavily relies on the domain Markov property, so the choice of the domain and the order of application are crucial. ## 4. Arm Separation for the Random Cluster Model As indicated in the proof of Lemma 3.3, (24) depends on the following two strong arm separation lemmas in combination with the gluing constructions explained in Section 2. **Lemma 4.1** (External Arm Separation).: _Fix an integer \(m\geq 2\) and let \(n_{1}\leq n_{2}-3\). Consider an open circuit \(\mathcal{C}\) in \(B(2^{n_{1}})\) with \(m\) defects \(e_{1},\dots,e_{m}\). Let \(\mathcal{A}(\mathcal{C},2^{n_{2}})\) be the event that_ 1. _there are_ \(2m\) _alternating disjoint open arms and dual-closed arms from_ \(\mathcal{C}\) _to_ \(\partial B(2^{n_{2}})\) _in_ \(B(2^{n_{2}})\setminus\operatorname{Int}(\mathcal{C})\)_;_ 2. _the_ \(m\) _dual-closed paths emenate from_ \(e_{j}^{*}\) _to_ \(\partial B(2^{n_{2}})^{*}\)_, respectively._ _We note that the locations of the defects \(e_{1},\dots,e_{m}\) are implicit in the notation of \(\mathcal{C}\). 
Let \(\tilde{\mathcal{A}}(\mathcal{C},2^{n_{2}})\) be the event that \(\mathcal{A}(\mathcal{C},2^{n_{2}})\) occurs with \(2m\) arms \(\gamma_{1},\dots,\gamma_{2m}\) (open, dual-closed alternatingly) whose endpoints in \(\partial B(2^{n_{2}})\) or \(\partial B(2^{n_{2}})^{*}\), \(f_{1},\dots,f_{2m}\), satisfy_ \[2^{-n_{2}}\min_{k\neq l}|f_{k}-f_{l}|\geq\frac{1}{2m}.\] _Then, there is a constant \(c_{10}(m)>0\) independent of \(n_{1},n_{2},\mathcal{C}\), and the boundary condition \(\xi\) such that_ \[\phi^{\xi}_{B(2^{n_{2}})}(\mathcal{A}(\mathcal{C},2^{n_{2}}))\leq c_{10}(m) \phi^{\xi^{\prime}}_{B(2^{n_{2}})}(\tilde{\mathcal{A}}(\mathcal{C},2^{n_{2}})) \tag{25}\] _for some boundary condition \(\xi^{\prime}\) on \(B(2^{n_{2}})\)._ **Lemma 4.2** (Internal Arm Separation).: _Fix an integer \(m\geq 2\) and let \(n_{3}+3\leq n_{4}\). Consider an open circuit \(\mathcal{D}\) in \(B(2^{n_{4}})^{c}\) with \(m\) defects \(g_{1},\ldots,g_{m}\). Let \(\mathcal{B}(2^{n_{3}},\mathcal{D})\) be the event that_ 1. _there are_ \(2m\) _alternating disjoint open arms and dual-closed arms from_ \(\partial B(2^{n_{3}})\) _to_ \(\mathcal{D}\) _in_ \(\mathrm{Int}(\mathcal{D})\setminus B(2^{n_{3}})\)_;_ 2. _the_ \(m\) _dual-closed paths emenate from_ \(g^{*}_{j}\) _to_ \(\partial B(2^{n_{3}})^{*}\)_, respectively._ _Let \(\tilde{\mathcal{B}}(2^{n_{3}},\mathcal{D})\) be the event that \(\mathcal{B}(2^{n_{3}},\mathcal{D})\) occurs with \(2m\) arms \(\gamma_{1},\ldots,\gamma_{2m}\) (open, dual-closed alternatingly) whose endpoints in \(\partial B(2^{n_{3}})\) or \(\partial B(2^{n_{3}})^{*}\), \(h_{1},\ldots,h_{2m}\), satisfy_ \[2^{-n_{3}}\min_{k\neq l}|h_{k}-h_{l}|\geq\frac{1}{2m}.\] _Then, there is a constant \(c_{11}(m)>0\) independent of \(n_{3},n_{4},\mathcal{D}\), and the boundary condition \(\xi\) such that_ \[\phi^{\xi}_{\mathrm{Int}(\mathcal{D})}(\mathcal{B}(2^{n_{3}},\mathcal{D})) \leq c_{11}(m)\phi^{\xi^{\prime}}_{\mathrm{Int}(\mathcal{D})}(\tilde{ \mathcal{B}}(2^{n_{3}},\mathcal{D})) \tag{26}\] _for some boundary condition \(\xi^{\prime}\) on \(\mathcal{D}\)._ **Remark 1**.: \(\xi^{\prime}\) _arises due to a technical challenge in the proof. However, for the purpose of (24), any boundary condition suffices as the RSW estimates we have are uniform in boundary conditions._ The proofs of Lemma 4.1 and 4.2 are similar, and we only provide the proof for the former. Arm separation techniques are classical techniques that date back to Kesten [23, 26]. They were first developed to show well-separatedness for arms crossing square annuli. In our case, the annulus consists of one square boundary and one circuitous boundary. The main obstacle for directly applying the classical arm separation arguments is that the geometry of the circuit may generate bottlenecks that prevent arms from being separated on certain scales. In the first part of the proof, we address this through a construction that "leads" the interfaces to the boundary of \(B(2^{n_{1}})\). We note that this part of the proof for the random cluster model is identical to that of [7, Lemma 6.2] as the constructions are purely topological. However, we include here for the reader's convenience. The second step is to define a family of disjoint annuli in levels, which groups the arms based on their relative distances. In the following proof, the details for this step is provided last. The final part of the proof depends on an arm separation statement in each annuli defined in the previous step, for which we provide the details in Lemma 4.3. 
We note that our proof is stated in full generality compared to the proof in [7] which is stated for \(m=2\), and therefore slightly deviates from it in notation. Proof of Lemma 4.1.: Given the circuit \(\mathcal{C}\) with defects \(e_{1},\ldots,e_{m}\), we assume the occurrence of \(\mathcal{A}(\mathcal{C},2^{n_{2}})\). The first step is to "extend" the circuit \(\mathcal{C}\) to \(\partial B(2^{n_{1}})\) so that the arms will not be tangled due to the geometry of \(\mathcal{C}\). For \(i=1,\ldots,m\), \(\alpha_{i}^{l}\) be the counterclockwise-most dual-closed path emenating from \(e_{i}^{*}\) to \(\partial B(2^{n_{1}}+1/2)\) in \(B(2^{n_{1}}+1/2)\setminus\mathcal{C}\) and \(\alpha_{i}^{r}\) the clockwise-most dual-closed path emenating from \(e_{i}^{*}\) to \(\partial B(2^{n_{1}}+1/2)\) in \(B(2^{n_{1}}+1/2)\setminus\mathcal{C}\). We denote by \(a_{i}^{l}\) the first vertex on \(\partial B(2^{n_{1}})\) to the counterclockwise side of \(\alpha_{i}^{l}\) and \(a_{i}^{r}\) the first vertex on \(\partial B(2^{n_{1}})\) to the clockwise side of \(\alpha_{i}^{r}\). Let \(\beta_{i}^{l}\) be the counterclockwise-most open path from the lower right end-vertex of \(e_{i}\) to \(a_{i}^{r}\) in \(B(2^{n_{1}})\setminus\mathcal{C}\) and \(\beta_{i}^{r}\) be the clockwise-most open path from the top left end-vertex of \(e_{i+1}\) to \(a_{i+1}^{l}\) in \(B(2^{n_{1}})\setminus\mathcal{C}\). Here, the indices are cyclic, meaning that \(i=i\bmod m\). Note that it is necessary that \(a_{i}^{l}\neq a_{i}^{r}\); it is possible that \(a_{i}^{r}=a_{i+1}^{l}\), but by the assumption that \(\mathcal{A}(\mathcal{C},n_{2})\) occurs, \(a_{i+1}^{l}\) must be on the clockwise side of \(a_{i}^{r}\) on \(\partial B(2^{n_{1}})\). We identify the last intersection of \(a_{i}^{l}\) and \(a_{i}^{r}\) from \(e_{i}\) to \(\partial B(2^{n_{1}})\), which can possibly be \(e_{i}\). Let \(\alpha_{i}\) be the union of the piece of \(a_{i}^{l}\) from the last intersection to \(\partial B(2^{n_{1}}+1/2)\) and the piece of \(a_{i}^{r}\) from the last intersection to \(\partial B(2^{n_{1}}+1/2)\). Let \(R_{i}\) be the domain bounded by \(\alpha_{i}\) and the piece of \(\partial B(2^{n_{1}})\) between \(a_{i}^{l}\) and \(a_{i}^{r}\) on \(a_{i}^{l}\)'s clockwise side. We now define a path \(\beta_{i}\). If \(\beta_{i}^{l}\) and \(\beta_{i}^{r}\) intersect, we define \(\beta_{i}\) analogously to \(\alpha_{i}\). Otherwise, we define \(\beta_{i}\) to be the union of the piece of \(\beta_{i}^{l}\) from its last intersection with \(\mathcal{C}\) to \(\partial B(2^{n_{1}})\), the piece of \(\beta_{i}^{r}\) from its last intersection with \(\mathcal{C}\) to \(\partial B(2^{n_{1}})\), and the piece of \(\mathcal{C}\) that connects the aforementioned two pieces. Let \(S_{i}\) be the domain bounded by \(\beta_{i}\) and the piece of \(\partial B(2^{n_{1}})\) between \(a_{i}^{r}\) and \(a_{i+1}^{l}\) on \(a_{i}^{r}\)'s clockwise side. Note that in the case \(a_{i}^{r}=a_{i+1}^{l}\), \(\beta_{i}\) and \(S_{i}\) consist of only the vertex \(a_{i}^{r}\). Let \(R:=(B(2^{n_{2}})\setminus B(2^{n_{1}}))\cup(\cup_{i=1}^{m}R_{i})\cup(\cup_{i= 1}^{m}S_{i})\). Note that once \(\{\alpha_{i},\beta_{i}\}_{i}\) is fixed, the conditional distribution of the cluster configuration inside \(R\) is (uniquely) determined by the status of \(\alpha_{i}\) and \(\beta_{i}\). Let \(\mathcal{A}(R)\) denote the event that 1. there is a dual-closed arm connecting \(\alpha_{i}\) to \(\partial B(2^{n_{2}})^{*}\) in \(R\) for \(i=1,\ldots,m\); 2. 
there is an open arm connecting \(\beta_{i}\) to \(\partial B(2^{n_{2}})\) in \(R\) for \(i=1,\ldots,m\). Let \(\tilde{\mathcal{A}}(R)\) be the event that \(\mathcal{A}(R)\) occurs with \(2m\) arms \(\gamma_{1},\ldots,\gamma_{2m}\) (dual-closed, open alternatingly) whose endpoints in \(\partial B(2^{n_{2}})\) or \(\partial B(2^{n_{2}})^{*}\), \(f_{1},\ldots,f_{2m}\), satisfy \(2^{-n_{2}}\min_{k\neq l}|f_{k}-f_{l}|\geq 1/(2m)\). Lemma 4.1 is then equivalent to \[\phi_{B(2^{n_{2}})}^{\xi}(\mathcal{A}(R))\,\leq c_{10}^{\prime}(m)\phi_{B(2^{ n_{2}})}^{\xi^{\prime}}(\tilde{\mathcal{A}}(R))\] for some boundary condition \(\xi^{\prime}\) and some constant \(c_{10}^{\prime}(m)>0\) that only depends on \(m\). Figure 2. A representation of the construction in the first step with \(m=2\). This figure is topologically equivalent to [7, Fig. 4], with a relabeling. Let us first relabel the vertices \(a_{1}^{I},a_{1}^{r},\ldots,a_{m}^{I},a_{m}^{r}\) by \(x_{1},\ldots,x_{2m}\) where \(x_{2i-1}=a_{i}^{I}\) and \(x_{2i}=a_{i}^{r}\) for \(i=1,\ldots,m\). The next step is to identify _critical scales_, scales of neighborhoods of these vertices comparable to the distance between them. We now informally introduce the notion of _level-\(j\) annuli_ so we can finish the proof before returning to formally defining them at the end of the proof. For \(j=1,\ldots,2m-1\), \(\mathcal{I}_{j}\) is a collection of indices. \(\mathcal{I}_{j}\) keeps track of groups of \(j+1\) vertices among \(x_{1},\ldots,x_{2m}\) on level \(j\) and the index indicates the first vertex in a group in clockwise order. For each level \(j\), the difference of two indices in \(\mathcal{I}_{j}\) is at least \(j+1\). Let \(\mathcal{L}_{j}\) be the collection of level-\(j\) annuli: \(\mathcal{L}_{j}:=\{\operatorname{Ann}_{j}(i):i\in\mathcal{I}_{j}\}\). The level-\(j\) annuli \(\operatorname{Ann}_{j}(i)\) satisfy: * If \(i\in\mathcal{I}_{j}\), that is, \(\operatorname{Ann}_{j}(i)\) is nonempty, then \(\operatorname{Ann}_{j}(i)\) is centered on \(\partial B(2^{n_{1}})\) and both its inner box and outer box enclose exactly \(j+1\) vertices \(x_{i},x_{i+1},\ldots,x_{i+j}\). That is, \(\operatorname{Ann}_{j}(i)\) is crossed by \(j\) arms. * Level-\(j\) annuli are mutually disjoint and disjoint from annuli of other levels. * There is exactly one level-\(2m\) annulus and at most \(\lfloor\frac{2m}{j+1}\rfloor\) (and possibly zero) level-\(j\) annuli. * All level-\(j\) annuli are contained in \(B(2^{n_{1}+1})\), for \(j=1,\ldots,2m-1\). The level-\(2m\) annulus is \(\operatorname{Ann}_{2m}(1)=\operatorname{Ann}(2^{n_{1}+1},2^{n_{2}})\). \(\mathcal{A}(R)\) implies the simultaneous occurrence of crossings in each of the annuli defined above intersected with the domain \(R\), which can cause the annuli to have irregular boundaries. However, since all annuli (excluding the level-\(2m\) annulus) are centered on \(\partial B(2^{n_{1}})\) and the boundary of \(R\) (the \(\alpha_{i}\) and \(\beta_{i}\)) are in the interior of \(B(2^{n_{1}})\), each annulus \(\operatorname{Ann}_{j}(i)\) intersected with \(R\) necessarily contains one of the top-, bottom-, left-, or right-half of \(\operatorname{Ann}_{j}(i)\). We call the half annulus \(\operatorname{Ann}_{j}^{\mathrm{h}}(i)\). If there are two choices, choose the top or bottom over the left or right. 
For \(j=1,\ldots,2m-1\), let \(E_{j}(i)\) be the event that there exist \(j\) disjoint crossings in \(\operatorname{Ann}_{j}^{\mathrm{h}}(i)\) such that the color of each crossing is determined by the vertices the annulus encloses. In particular, let \(E_{2m}\) be the event that \(\operatorname{Ann}_{2m}\) is crossed by \(2m\) disjoint alternating open and dual-closed crossings. Then \(\mathcal{A}(R)\) implies the occurrence of \(\cap_{j=1}^{2m}\cap_{i\in\mathcal{I}_{j}}E_{j}(i)\). By repeatedly applying the domain Markov property, we have \[\phi_{B(2^{n_{2}})}^{\xi}(\mathcal{A}(R))\leq\phi_{B(2^{n_{2}})}^{\xi}(\cap_{j=1}^{2m}\cap_{i\in\mathcal{I}_{j}}E_{j}(i))=\prod_{j=1}^{2m}\prod_{i\in\mathcal{I}_{j}}\mathbb{E}_{B(2^{n_{2}})}^{\xi}\left[\phi_{D_{j,i}}^{\xi_{j,i}}(E_{j}(i))\right], \tag{27}\] where \(D_{j,i}=\cup_{k=1}^{j-1}(\cup_{D\in\mathcal{L}_{k}}D)\cup(\cup_{k=1}^{i}\operatorname{Ann}_{j}(k))\); that is, \(D_{j,i}\) is the union of all annuli up to level \(j-1\), together with all annuli on level \(j\) up to index \(i\). The exception is \(D_{2m}=B(2^{n_{2}})\). Here \(\xi_{j,i}\) is the random variable of boundary conditions on \(\partial D_{j,i}\) induced by conditioning on the outside, and \(\xi_{2m}=\xi\). Note that \(D_{j_{1},i_{1}}\subset D_{j_{2},i_{2}}\) if \(j_{1}<j_{2}\) or if \(j_{1}=j_{2}\) and \(i_{1}<i_{2}\). Let \(\tilde{E}_{j}(i)\) be the event that \(E_{j}(i)\) occurs and the exit points of the crossings are separated, that is, the distance between any two exit points is at least \(\delta/2m\) times the length of the boundary of the box they are on, for some \(\delta>1/8\). The following lemma is an arm separation statement that compares the separated event to the regular arm event. **Lemma 4.3**.: _For any \(i,j,\xi\), there is a \(c_{12}=c_{12}(m)>0\) such that for any \(j=1,\ldots,2m\),_ \[\phi_{D_{j,i}}^{\xi}(E_{j}(i))\leq c_{12}\phi_{D_{j,i}}^{\xi}(\tilde{E}_{j}(i)). \tag{28}\] Proof.: This is a classical result using RSW and FKG estimates, except that here they are applied on half-annuli. Nonetheless, all parts of the classical argument apply. We refer the reader to the proof of [4, Proposition 5.6]. 
Applying Lemma 4.3 to each probability in the RHS of (27) and then the domain Markov property and we have \[\prod_{j=1}^{2m}\prod_{i\in\mathcal{I}_{j}} \mathbb{E}_{B(2^{n_{2}})}^{\xi}\left[\phi_{D_{j,i}}^{\xi_{j,i}}(E_ {j}(i))\right]\] \[\leq\tilde{c}_{12}\prod_{j=1}^{2m}\prod_{i\in\mathcal{I}_{j}} \mathbb{E}_{B(2^{n_{2}})}^{\xi}\left[\phi_{D_{j,i}}^{\xi_{j,i}}(\tilde{E}_{j}( i))\right]\] \[=\tilde{c}_{12}\mathbb{E}_{B(2^{n_{2}})}^{\xi}\left[\phi_{D_{1,i_ {1}}}^{\xi_{i,i_{1}}}(\tilde{E}_{1}(i_{1}))\right]\mathbb{E}_{B(2^{n_{2}})}^{ \xi}\left[\phi_{D_{1,i_{2}}}^{\xi_{i,i_{2}}}(\tilde{E}_{1}(i_{2}))\right]\] \[\qquad\times\prod_{k=3}^{|\mathcal{I}_{1}|}\mathbb{E}_{B(2^{n_{2} })}^{\xi}\left[\phi_{D_{1,i_{k}}}^{\xi_{1,i_{k}}}(\tilde{E}_{1}(k))\right] \prod_{j=2}^{2m}\prod_{i\in\mathcal{I}_{j}}\mathbb{E}_{B(2^{n_{2}})}^{\xi} \left[\phi_{D_{j,i}}^{\xi_{j,i}}(\tilde{E}_{j}(i))\right]\] \[=\tilde{c}_{12}\mathbb{E}_{B(2^{n_{2}})}^{\xi}\left[\phi_{D_{1,i_ {2}}}^{\xi^{\prime}}(\tilde{E}_{1}(i_{1})\cap\tilde{E}_{1}(i_{2}))\right]\] \[\qquad\times\prod_{k=3}^{|\mathcal{I}_{1}|}\mathbb{E}_{B(2^{n_{2} })}^{\xi}\left[\phi_{D_{1,i_{k}}}^{\xi_{1,i_{k}}}(\tilde{E}_{1}(i_{k}))\right] \prod_{j=2}^{2m}\prod_{i\in\mathcal{I}_{j}}\mathbb{E}_{B(2^{n_{2}})}^{\xi} \left[\phi_{D_{j,i}}^{\xi_{j,i}}(\tilde{E}_{j}(i))\right]\] \[\ldots\] \[=\tilde{c}_{12}\mathbb{E}_{B(2^{n_{2}})}^{\xi}\left[\phi_{B(2^{n_{ 2}})}^{\xi^{\prime}}(\cap_{j=1}^{2m}\cap_{i\in\mathcal{I}_{j}}\tilde{E}_{j}(i)) \right],\] where \(\xi^{\prime}\) is a random variable of boundary conditions on \(B(2^{n_{2}})\) and \(\tilde{c}_{12}\) is some power of \(c_{12}\). It remains to "glue" the crossings that occur in the \(\tilde{E}_{j}\) events together so that \(\tilde{\mathcal{A}}(R)\) occurs, which we refer to Section 2 for details on the gluing constructions. We make a special note that connecting a crossing in the inner-most annulus inward to the boundary of \(R(\alpha_{i}\) or \(\beta_{i})\) has a constant cost due to RSW. Then, there exists \(c\) that depends only on \(m\) such that for any arbitrary boundary condition \(\xi^{\prime}\), \[\phi_{B(2^{n_{2}})}^{\xi^{\prime}}(\cap_{j=1}^{2m}\tilde{E}_{j})\leq c\phi_{B (2^{n_{2}})}^{\xi^{\prime}}(\tilde{\mathcal{A}}(R))\] as desired. We now formally define \(\mathcal{I}_{j}\) and \(\mathcal{L}_{j}\). Recall that \(x_{1},\ldots,x_{2m}\) are \(2m\) vertices on \(\partial B(2^{n_{1}})\). Let us again use cyclic indexing, i.e. \(i=i\) mod \((2m)\). Recall further that each level-\(j\) annulus is crossed by \(j\) arms and encloses \(j+1\) vertices. The purpose of defining these annuli (and groupings of vertices) is to identify which arms are close relative to the scale and which ones are far away. We start with level 1. Let \(\mathcal{I}_{1}\) be the collection of indices such that the index \(i\) is \(I_{1}\) if the distance between \(x_{i}\) and \(x_{i+1}\) is logarithmically smaller than the distance between them and any other adjacent vertices, that is, \(i\in\mathcal{I}_{1}\) if \[|x_{i}-x_{i+1}|<2^{-5}\cdot\min\{|x_{i-1}-x_{i}|,|x_{i+1}-x_{i+2}|\}. \tag{29}\] For any \(i\in\mathcal{I}_{1}\), we define \[\ell_{1}(i):=\min\{\ell:\exists x\in 2^{\ell}\mathbb{Z}^{2}\cap\partial B(2^{n_{ 1}})\text{ such that }B(x,2^{\ell})\supset\{x_{i},x_{i+1}\}\}.\] Let \(x_{1}(i)\) be the center for such a box \(B(x_{1}(i),2^{\ell_{1}(i)})\). If there are several choices for \(x_{1}(i)\), we choose the first in lexicographical order. 
Next, we define \[\ell_{1}^{\prime}(i):=\min\{\ell\geq\ell_{1}(i):B(x_{1}(i),2^{\ell})\ni x_{i-1}\text{ or }B(x_{1}(i),2^{\ell})\ni x_{i+2}\}-3.\] Condition (29) guarantees the existence of \(x_{1}(i)\) and ensures that \(\ell_{1}(i)<\ell_{1}^{\prime}(i)\leq n_{1}\). Let \(\operatorname{Ann}_{1}(i):=\operatorname{Ann}(x_{1}(i);2^{\ell_{1}(i)},2^{\ell_{1}^{\prime}(i)})\). Finally, we let \(\mathcal{L}_{1}:=\{\operatorname{Ann}_{1}(i):i\in\mathcal{I}_{1}\}\). For \(j=2,\ldots,2m-2\), we define \(\mathcal{I}_{j}\) and \(\mathcal{L}_{j}\) inductively. Let \(\mathcal{I}_{j}\) again be a collection of indices. An index \(i\) is in \(\mathcal{I}_{j}\) if \[\max_{k,\ell\in\{i,\ldots,i+j\}}|x_{k}-x_{\ell}|<2^{-5}\cdot\min\{|x_{i-1}-x_{i}|,|x_{i+j}-x_{i+j+1}|\}. \tag{30}\] For any \(i\in\mathcal{I}_{j}\), we define \[\ell_{j}(i):=\min\{\ell:\exists x\in 2^{\ell}\mathbb{Z}^{2}\cap\partial B(2^{n_{1}})\text{ such that }B(x,2^{\ell})\supset\{x_{i},\ldots,x_{i+j}\}\cup(\cup_{h=1}^{j-1}\cup_{k=i}^{i+j}\operatorname{Ann}_{h}(k))\},\] where \(\operatorname{Ann}_{h}(k)=\emptyset\) if \((k,k+1,\ldots,k+h)\not\in\mathcal{I}_{h}\). Let \(x_{j}(i)\) be the center of such a box \(B(x_{j}(i),2^{\ell_{j}(i)})\). If there are several choices for \(x_{j}(i)\), we choose the first in lexicographical order. Next, we define \[\ell_{j}^{\prime}(i):=\min\{\ell\geq\ell_{j}(i):B(x_{j}(i),2^{\ell})\ni x_{i-1}\text{ or }B(x_{j}(i),2^{\ell})\ni x_{i+j+1}\}-3.\] Again, condition (30) guarantees the existence of \(x_{j}(i)\) and ensures that \(\ell_{j}(i)<\ell_{j}^{\prime}(i)\leq n_{1}\). We let \(\operatorname{Ann}_{j}(i):=\operatorname{Ann}(x_{j}(i);2^{\ell_{j}(i)},2^{\ell_{j}^{\prime}(i)})\) and \(\mathcal{L}_{j}:=\{\operatorname{Ann}_{j}(i):i\in\mathcal{I}_{j}\}\). Note that the definition of \(\ell_{j}(i)\) ensures that level-\(j\) annuli are disjoint from annuli of lower levels. The case \(j=2m-1\) is different from the previous cases: for there to be a level-\((2m-1)\) annulus, all \(2m\) vertices must be concentrated relative to the scale of \(\partial B(2^{n_{1}})\). We say that \(\mathcal{I}_{2m-1}\) is nonempty if \[\max_{k,\ell\in\{1,\ldots,2m\}}|x_{k}-x_{\ell}|<2^{n_{1}-5}.\] If \(\mathcal{I}_{2m-1}\) is nonempty, we define \(\ell_{2m-1}\) similarly to before: \[\ell_{2m-1}:=\min\{\ell:\exists x\in 2^{\ell}\mathbb{Z}^{2}\cap\partial B(2^{n_{1}})\text{ such that }B(x,2^{\ell})\supset\{x_{1},\ldots,x_{2m}\}\cup(\cup_{h=1}^{2m-2}\cup_{k=1}^{2m}\operatorname{Ann}_{h}(k))\}.\] Let \(x_{2m-1}\) be the center of such a box \(B(x_{2m-1},2^{\ell_{2m-1}})\). If there are several choices for \(x_{2m-1}\), we choose the first in lexicographical order. For the second-to-last level, we define \[\ell_{2m-1}^{\prime}:=n_{1}-2.\] Similarly to before, we define \(\operatorname{Ann}_{2m-1}(1):=\operatorname{Ann}(x_{2m-1};2^{\ell_{2m-1}},2^{\ell_{2m-1}^{\prime}})\). Let \(\mathcal{L}_{2m-1}=\{\operatorname{Ann}_{2m-1}(1)\}\) if \(\mathcal{I}_{2m-1}\) is nonempty and empty otherwise. Finally, we define \(\mathcal{L}_{2m}:=\{\operatorname{Ann}_{2m}(1)\}=\{\operatorname{Ann}(2^{n_{1}+1},2^{n_{2}})\}\).

Figure 3. \(x_{i},x_{i+1},x_{i+2},x_{i+3}\), and \(x_{i+4}\) are five vertices on \(\partial B(2^{n_{1}})\). There are two disjoint crossings, one open and one dual-closed, in the annulus \(\operatorname{Ann}_{2}(i+1)\). There are four disjoint crossings, alternatingly dual-closed and open, in the annulus \(\operatorname{Ann}_{4}(i)\). 
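To illustrate how conditions (29) and (30) sort the vertices into critical scales, consider a purely hypothetical configuration (not taken from the paper) with \(m=2\), \(n_{1}=20\), and four relabeled vertices \(x_{1},\ldots,x_{4}\) on \(\partial B(2^{n_{1}})\) forming two tight pairs:
\[|x_{1}-x_{2}|=2^{3},\qquad|x_{3}-x_{4}|=2^{4},\qquad|x_{2}-x_{3}|\approx|x_{4}-x_{1}|\approx 2^{10}.\]
Condition (29) holds for \(i=1\) and \(i=3\), since
\[2^{3}<2^{-5}\cdot\min\{2^{10},2^{10}\}=2^{5}\qquad\text{and}\qquad 2^{4}<2^{-5}\cdot 2^{10}=2^{5},\]
while it fails for \(i=2,4\) because there \(2^{-5}\cdot\min\{2^{3},2^{4}\}=2^{-2}\). Hence \(\mathcal{I}_{1}=\{1,3\}\) and each close pair receives its own level-\(1\) annulus. No index satisfies (30) for \(j=2\), since any group of three of these vertices has diameter of order \(2^{10}\) while the corresponding bound is at most \(2^{-5}\cdot 2^{4}=2^{-1}\). Finally, \(\max_{k,\ell}|x_{k}-x_{\ell}|\approx 2^{10}<2^{n_{1}-5}=2^{15}\), so \(\mathcal{I}_{2m-1}=\mathcal{I}_{3}\) is nonempty and a level-\(3\) annulus around all four vertices is added, followed by the level-\(4\) annulus \(\operatorname{Ann}(2^{n_{1}+1},2^{n_{2}})\).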
**Remark 2**.: _Although Lemmas 4.1 and 4.2 are stated for \(2m\) alternating arms, the proof can be adapted to accommodate any color sequence such that the dual-closed arms and defect edges are matched, thus including the three-arm case. For consecutive open arms, the constructions in steps one and two remain the same except that there are multiple open arms emanating from \(\beta_{i}\) in the definition of \(\mathcal{A}(R)\). This subsequently changes the definitions of \(E_{j}(i)\), but the argument carries through as the arm separation statement still holds. Consecutive closed arms can be treated as having zero open arms between them, and the argument essentially follows the consecutive open arms case._ ## 5. Outline of the Proof of Theorem 1.1 As an extension of the results derived in [6], the proof of the main result follows the same strategy with modifications in certain arguments. For completeness, we outline the proof with an emphasis on the present application and point to the main differences. An alternate, more detailed outline is offered in [6, Section 2]. The proof is essentially divided into three steps: in the first step, we construct shortcuts around edges on the lowest crossing and show that the existence of such shortcuts has a "good" probability. The second step uses an iterative scheme to improve upon shortcuts. Finally, we find the maximal collection of disjoint shortcuts and sum up the total savings. **Step 0: The lowest crossing \(\ell_{n}\).** The estimate on the length of \(\ell_{n}\) relies on the observation that \(\ell_{n}\) consists only of three-arm points: since \(\ell_{n}\) is the lowest crossing, by duality, from every edge \(e\) in \(\ell_{n}\) there are two disjoint open arms and a dual-closed arm to distance \(\operatorname{dist}(e,B(n))\). In conjunction with some smoothness control, we have \[\mathbb{E}[\#\ell_{n}\mid\mathcal{H}_{n}]\leq Cn^{2}\pi_{3}(n). \tag{31}\] **Step 1: Construction of shortcuts.** For any \(\epsilon>0\) and any edge \(e\) on the lowest crossing \(\ell_{n}\), we look for an arc \(r\) over \(e\) that saves at least \((1/\epsilon-1)\#r\) edges. The event \(\hat{E}_{k}(e)=\hat{E}_{k}(e,\epsilon,\delta)\) describes such an arc circumventing \(e\) on scale \(k\). The exact definition of \(\hat{E}_{k}\) is quite involved, see [6, Section 5]. We only state the properties and results relevant to the argument: 1. \(\hat{E}_{k}(e)\) depends only on \(\operatorname{Ann}(e;2^{k},2^{K})\) where \(K=k+\lfloor\log(1/\epsilon)\rfloor\); 2. For each \(e\in\ell_{n}\), \(\hat{E}_{k}(e,\epsilon,\delta)\) implies the existence of a \(\delta\)-shortcut around \(e\). That is, there is an open arc \(r\subset B(e,3\cdot 2^{k})\) such that \(r\) only intersects \(\ell_{n}\) at its two endpoints \(u(e)\) and \(v(e)\) and \[\frac{\#r}{\#\tau}\leq\delta,\] where \(\tau\) denotes the portion of \(\ell_{n}\) between \(u(e)\) and \(v(e)\). See [6, Proposition 5.4]. 3. \(\hat{E}_{k}^{\prime}\) is an event similar to \(\hat{E}_{k}\) on scale \(k\) that relates to a "U-shaped region", and \(\mathsf{s}_{k}\) is the shortest path in the U-shaped region. If for some \(\epsilon\in(0,\frac{1}{2})\), \(\delta>0\), and \(k\geq 1\), \[\mathbb{E}[\#\mathsf{s}_{k}\mid\hat{E}_{k}^{\prime}]\leq\delta 2^{2k}\pi_{3}(2^{k})\] holds, then \[\phi_{B(n)}^{\xi}(\hat{E}_{k}(e,\epsilon,\delta)\mid A_{3}(e,2^{L}))\geq c_{0}\epsilon^{4}\quad\text{for all }L\geq 1. \tag{32}\] See [6, Equation (5.29)]. 
Property (2) relies mostly on topological considerations and therefore applies to the random cluster model. For property (3), we refrain from elaborating further despite the original proof using, at times, independence, generalized FKG, and gluing constructions. This is because we feel the techniques to convert these arguments for the random cluster model are sufficiently represented in the proofs that we do include, especially in that of the following proposition; and to completely reproduce all necessary parts of the proof of property (3) would require lots of notation and mostly verbatim steps that translate directly for the random cluster model. The key result in this step is that the probability that no shortcut exists for any scale \(k\) is small: **Proposition 5.1**.: _There is a constant \(c_{13}\) such that if \(\delta_{j}>0\), \(j=1,\dots,L\), is a sequence of parameters such that for some \(\epsilon\in(0,\frac{1}{4})\),_ \[\mathbb{E}[\#\mathsf{s}_{j}\mid\hat{E}_{j}^{\prime}]\leq\delta_{j}2^{2j}\pi_{ 3}(2^{j}), \tag{33}\] _then for any \(L^{\prime}<L\),_ \[\phi_{B(n)}^{\xi}\left(\cap_{j=L^{\prime}}^{L}\hat{E}_{j}(e,\epsilon,\delta_ {j})^{c}\mid A_{3}(e,2^{L})\right)\leq 2^{-c_{13}\frac{\epsilon^{4}}{\log(1/ \epsilon)}(L-L^{\prime})}.\] Proof of Proposition 5.1 subject to Theorem 3.1.: Let \(E_{k}=\hat{E}_{kN}\). By property (3), the combination of (33) with the observation that the occurrence of a circuit in \(\operatorname{Ann}(2^{(10k+i)N},2^{(10k+i+1)N})\) conditional on a three-arm event has constant probability due to RSW and gluing constructions (see Proposition 2.1) implies that \[\phi_{B(n)}^{\xi}(E_{10k+5}\cap\hat{\mathbb{C}}_{k}\mid A_{3}(2^{L}))\geq c_{0 }^{\prime}\epsilon^{4}\] for \(0\leq k\leq\frac{L}{10N}-1\). Note that \(c_{0}^{\prime}\) is uniform in \(k\). We observe the following chain of set inclusions with changes of indices including \(j=\ell N\) in the equality and \(\ell=10k\) in the first inclusion: \[\bigcap_{j=L^{\prime}}^{L}\hat{E}_{j}^{c}=\bigcap_{\ell=\lceil\frac{L^{\prime }}{N}\rceil}^{\lfloor\frac{L}{N}\rfloor}E_{\ell}^{c}\subset\bigcap_{k=\lceil \frac{L^{\prime}}{\ell N\ell N}\rceil}^{\lfloor\frac{L}{\ell N\ell N}\rfloor- 1}(E_{10k+5})^{c}\subset\left\{\sum_{k=\lceil\frac{L^{\prime}}{\ell N\ell N} \rceil}^{\lfloor\frac{L}{\ell N\ell N}\rfloor-1}\mathbf{1}\{E_{10k+5},\hat{ \mathbb{C}}_{k}\}=0\right\}.\] Thus, applying Theorem 3.1 by choosing \(N=\lfloor\log(1/\epsilon)\rfloor\) and \(L-L^{\prime}\geq 40\), we obtain \[\phi_{B(n)}^{\xi}\left(\cap_{k=L^{\prime}}^{L}\hat{E}_{k}(e,\epsilon, \delta_{j})^{c}\ \Big{|}\ A_{3}(e,2^{L})\right) \leq\phi_{B(n)}^{\xi}\left(\sum_{k=\{\frac{l^{\prime}}{l^{\prime \prime}}\}}^{\lfloor\frac{L}{l^{\prime\prime}}\rfloor-1}\mathbf{1}\{E_{10k+5},\hat{\mathbf{C}}_{k}\}=0\ \Bigg{|}\ A_{3}(2^{L})\right)\] \[\leq\exp(-\frac{cc_{0}^{\prime}\epsilon^{4}}{\log(1/\epsilon)}(L- L^{\prime})).\] ### Step 2: Iteration in the "U-shaped region" In this step, we inductively improve the length of the "best possible" shortcuts for a fixed scale. The function of this step is to ensure that (33) is satisfied. **Proposition 5.2** ([6], Proposition 7.1).: _There exist constants \(C_{1},C_{2}\) such that for any \(\epsilon>0\) sufficiently small, \(L\geq 1\), and \(2^{k}\geq(C_{1}\epsilon^{-4}(\log(1/\epsilon)^{2}))^{L}\), we have_ \[\mathbb{E}[\#_{\xi}\ |\ \hat{E}_{k}^{\prime}]\leq(C_{2}\epsilon^{1/2})^{L}2^{2k }\pi_{3}(2^{k}).\] The constructions are detailed in [6, Section 6 & 7] and we only give the high level heuristics. 
In step 1, shortcuts are constructed in a "U-shaped" region. Conditional on an event \(\hat{E}_{k}^{\prime}\) which is a superset of \(\hat{E}_{k}\) for the "U-shaped" region at scale \(2^{k}\), the starting estimate of a piece of shortcut is \[\mathbb{E}[\#_{\xi}\ |\ E_{k}^{\prime}]\leq C_{0}2^{2k}\pi_{3}(2^{k}). \tag{34}\] The factor \(2^{2k}\) comes from the five-arm points in the construction. Suppose, at stage \(i\), one can construct a shortcut of the order at most \[\mathbb{E}[\#_{\xi}\ |\ E_{k}^{\prime}]\leq\delta_{k}(i)2^{2k}\pi_{3}(2^{k}).\] Through constructions detailed in [6, Section 7], we get an additional gain of \(\sim\epsilon^{1/2}\) as long as there is enough space, i.e., when \(2^{k}\geq C(\epsilon)^{i}\) for some \(C(\epsilon)\sim\epsilon^{-4}(\log\frac{1}{\epsilon})^{2}\). We then iterate this procedure. Since the proof of this proposition relies mostly on intricate algebraic manipulations, we simply cite the conclusion and refer the reader to the original paper for more explanation. ### Step 3: Compilation The final estimate accounts for edges too close to the origin or the boundary, edges on \(\ell_{n}\) that don't have shortcuts in step 1, and a maximal collection of disjoint shortcuts that are optimized in step 2. For the reader's benefit, we recreate the compilation here. We first define a truncated box \(\hat{B}(n)=B(n-n^{\delta})\setminus B(n^{\delta})\) for \(\delta>0\) small enough such that \(n^{1+2\delta}\leq n^{2}\pi_{3}(n)\). For each \(e\in\hat{B}(n)\), we let \(L^{\prime}=\lceil\frac{\delta}{8}\log n\rceil\) and \(L=\lfloor\frac{\delta}{4}\log n\rfloor\). We apply Proposition 5.1 for \(L^{\prime}=\lceil\frac{\delta}{8}\log n\rceil\) and \(L=\lfloor\frac{\delta}{4}\log n\rfloor\) and obtain \[\phi_{B(n)}^{\xi}(\text{there is no $n^{-c}$-shortcut around $e$ }|\ e\in\ell_{n}) \leq\phi_{B(n)}^{\xi}\Big{(}\bigcap_{j=\lceil\frac{\delta}{8} \log n\rceil}^{\lfloor\frac{\delta}{4}\log n\rfloor}\hat{E}_{j}(e,\epsilon,n^ {-c})^{c}\ |\ A_{3}(e,n^{\delta/2})\Big{)}\] \[\leq 2^{-c_{13}\epsilon^{4}\frac{\delta}{8}\log n}\] \[\leq n^{-\theta}\] for some \(\theta>0\). We choose a collection of \(n^{-c}\)-shortcuts around edges of \(\ell_{n}\) such that the shortcuts are disjoint and the number of edges circumvented is maximal. Conditional on the existence of a horizontal crossing, any edge \(e\) in \(B(n)\) falls into one of three categories: in the margin of the box, with no \(n^{-c}\)-shortcut, or with a \(n^{-c}\)-shortcut. Thus, \(S_{n}\) has the following estimate: \[\mathbb{E}_{\mathbb{Z}^{2}}[S_{n}\mid\mathcal{H}_{n}] \leq Cn^{1+\delta}+n^{-\theta}\mathbb{E}[\#\ell_{n}\mid\mathcal{H}_ {n}]+n^{-c}\mathbb{E}[\#\ell_{n}\mid\mathcal{H}_{n}]\] \[\leq Cn^{-\min\{\delta,\theta,c\}}n^{2}\pi_{3}(n).\] ## 6. Estimating Without Reimer's Inequality In the radial case, there is no natural crossing like the lowest crossing to compare to. Instead, we consider "lowest-like" paths between successive circuits around the origin. One nuisance in this construction occurs when two circuits are close and there is not enough space for there to be three arms to a large distance. However, if this happens, closeby circuits form a bottleneck which implies an arm event with more than three arms. This ensures that the three-arm probability is an upper bound. The details of the construction are encapsulated in [29, Lemma 2.3], and similarly [29, Lemma 4.5, 4.7]. 
Recall that \(A_{3}(n_{1},n_{2})\) denotes the three-arm event in the annulus \(\operatorname{Ann}(n_{1},n_{2})\) and \(\pi_{3}(n_{1},n_{2})\) its probability with domain \(B(n)\) and boundary condition \(\xi\). For any \(j\geq 3\), let \(A_{j}(n_{1},n_{2})\) (\(\pi_{j}(n_{1},n_{2})\), resp.) denote the polychromatic \(j\)-arm event (probability, resp.) with exactly \(j-1\) disjoint open arms and one dual-closed arm. Let \(\pi^{\prime}_{j}(n_{1},n_{2})\) denote the monochromatic \(j\)-arm probability. In an abuse of notation, for a box "centered at an edge", we write \(B(e,n)\) in place of \(B(e_{x},n)\) where \(e_{x}\) denotes the first endpoint of the edge \(e\) in lexicographical order. **Lemma 6.1**.: _Fix \(\epsilon>0\) and an integer \(R\) such that for any \(0\leq n_{1}<n_{2}\), \(\pi^{\prime}_{2R+2}(n_{1},n_{2})\leq\pi_{3}(n_{1},n_{2})(n_{1}/n_{2})^{\epsilon}\). Let \(\mathcal{H}_{R}(e,M)\) be the event that there exist \(0=\ell_{0}\leq\ell_{1}\leq\cdots\leq\ell_{R}=\lfloor\log(M)\rfloor\):_ 1. \(A_{3,OOC}(e,2^{\ell_{1}-1})\) _occurs;_ 2. _for_ \(i\geq 2\)_, if_ \(\ell_{i-1}<\lfloor\log(M)\rfloor\)_, there are_ \(2i\) _disjoint open arms and one closed dual arm from_ \(\partial B(e,2^{\ell_{i-1}})\) _to_ \(\partial B(e,2^{\ell_{i}-1})\)_; and_ 3. _if_ \(\ell_{R}<\lfloor\log(M)\rfloor\)_, there are_ \(2R+2\) _disjoint open arms from_ \(\partial B(e,2^{\ell_{R}})\) _to_ \(\partial B(e,M)\)_._ _Then,_ \[\phi^{\xi}_{B(n)}(\mathcal{H}_{R}(e,M))\leq CM^{-r}\pi_{3}(M).\] The above estimate relies essentially on the following proposition. **Proposition 6.2**.: _Let \(0\leq n_{1}<n_{2}\), \(i\geq 1\),_ \[\pi_{2i+1}(n_{1},n_{2})\leq\left(\frac{n_{1}}{n_{2}}\right)^{(2i-2)\alpha} \pi_{3}(n_{1},n_{2}).\] _for some \(\alpha\in(0,1)\)._ In Bernoulli percolation, this is done by applying Reimer's inequality. A weak form of Reimer's inequality for the random cluster model can be found in [30]. However, it requires the events to not only have disjoint occurrences but also occur on disjoint clusters. The arms in the arm event \(\pi_{2i+1}\) that [29] concerns belong to the same cluster, since they are portions of consecutive circuits chained together by a radial arm. Therefore, the weak estimate is not applicable to our problem. We provide a proof using conditional probability and quad-crossing RSW. Proof.: It suffices to show that \[\pi_{2i+1}(n_{1},n_{2})\leq\left(\frac{n_{1}}{n_{2}}\right)^{\alpha}\pi_{2i}(n_{1 },n_{2})\] for \(2i\geq 3\). Since there is at least one dual-closed arm in any configuration of \(A_{2i+1}(n_{1},n_{2})\), we condition on a dual-closed arm and the first open arm on its clockwise side and the (consecutive) first \(2i-2\) disjoint open arms on its counterclockwise side and apply the domain Markov property. As in the proof of Claim 1 (but omitting many details here), let \(\mathcal{U}\) denote the random region that contains the \(2i\) arms and whose boundaries consist of a dual-closed arm, an open arm, and portions of \(\partial B(n_{1})\) and \(\partial B(n_{2})\). 
Then, conditional on \(\mathcal{U}\), \[\phi^{\xi}_{B(n)}(A_{2i+1,CO\cdots O}(n_{1},n_{2})) =\sum_{\text{admissible }U}\phi^{\xi}_{B(n)}(A_{1,O}(n_{1},n_{2},U^{c})\mid\mathcal{U}=U)\phi^{\xi}_ {B(n)}(\mathcal{U}=U) \tag{35}\] \[=\sum_{\text{admissible }U}\mathbb{E}^{\xi}_{B(n)}\left[\phi^{ \eta}_{B(n)\setminus U}(A_{1}(n_{1},n_{2},U^{c}))\right]\phi^{\xi}_{B(n)}( \mathcal{U}=U)\] where \(A_{1}(n_{1},n_{2},U^{c})\) is the one arm event in \(\text{Ann}(n_{1},n_{2})\) restricted to \(U^{c}\) and \(\eta\) is uniquely determined by \(\xi\) and \(U\). By quad-crossing RSW estimates (5), the one-arm probability decays at \((n_{2}/n_{1})^{-\alpha}\) for some \(\alpha\in(0,1)\). Applying the above estimate into (35) and we have \[\eqref{eq:A_1}\leq\left(\frac{n_{1}}{n_{2}}\right)^{\alpha}\sum_{\text{admissible }U}\phi^{0}_{B(n)}(\mathcal{U}=U)=\left(\frac{n_{1}}{n_{2}}\right)^{\alpha} \phi^{0}_{B(n)}(A_{2i,CO\cdots O}(n_{1},n_{2})).\] We note that to apply quad-crossing RSW, the extremal distance for each quad \(\text{Ann}(2^{\ell},2^{\ell+1})\setminus U\), which for convenience we call \(\mathcal{D}\) here, needs to be uniformly lower bounded over all admissible \(U\). To our advantage, bottlenecks in \(\mathcal{D}\) make the extremal distance larger. The boundary of \(\mathcal{D}\) defines four arcs: \((ab)\) on \(\partial B(2^{\ell})\), \((cd)\) on \(\partial B(2^{\ell+1})\), and \((bc)\) and \((da)\) in the interior of \(\text{Ann}(2^{\ell},2^{\ell+1})\). Indeed, if \(\mathcal{D}\) is contained in another quad \(\mathcal{D}^{\prime}\) with the same landing arcs as \(\mathcal{D}\), then \[\ell_{\mathcal{D}}[(ab),(cd)]\geq\ell_{\mathcal{D}^{\prime}}[(ab),(cd)]. \tag{36}\] We verify (36) in Appendix A. Let \(\alpha\) be a topological path in \(U\), disjoint from \((bc)\) and \((da)\), and \(\mathcal{D}^{\prime}\) be all of \(\text{Ann}(2^{\ell},2^{\ell+1})\) with arcs \((ab)\), \((bc)^{\prime}\), \((cd)\), and \((da)^{\prime}\), where \((bc)^{\prime}\) consists of a portion of \(\partial B(2^{\ell})\), \(\alpha\), and a portion of \(\partial B(2^{\ell+1})\), see the blue arc in Figure 4, and similarly, \((da)^{\prime}\) consists of another portion of \(\partial B(2^{\ell})\), \(\alpha\), and another portion of \(\partial B(2^{\ell+1})\), see the red arc in Figure 4. Clearly, \(\mathcal{D}\) is contained in \(\mathcal{D}^{\prime}\). Then, \[\ell_{\mathcal{D}^{\prime}}[(ab),(cd)]\geq\frac{2^{\ell}}{\min\{\#(ab),\#(cd) \}}\geq\frac{1}{16}.\] ## Appendix A Extremal Distance and Resistance In this section, we verify (36) through the definition of extremal distance by the resistance of an electrical network. **Definition 3** ([4]).: (37) \[\ell_{\Omega}[(ab),(cd)]:=\sup_{g:\mathcal{E}(\Omega)\to\mathbb{R}_{+}}\frac{ \left[\inf_{\gamma:(ab)\leftrightarrows(cd)}\sum_{e\in\gamma}g_{e}\right]^{2}} {\sum_{e\in\mathcal{E}(\Omega)}g_{e}^{2}}.\] Let \(\Omega_{2}\) be a rectangle with vertices \(a,b,c,d\), labeled in counterclockwise order. Then, the arcs \((ab)\), \((bc)\), \((cd)\), and \((da)\) are the four sides of the rectangle. Let \((bc)^{\prime}\) be an arc from \(b\) to \(c\) contained in \(\Omega_{2}\), and \((da)^{\prime}\) be an arc from \(d\) to \(a\) contained in \(\Omega_{2}\). Then, \(\Omega_{1}\) bounded by \((ab)\), \((bc)^{\prime}\), \((cd)\), and \((da)^{\prime}\) is a subdomain of \(\Omega_{2}\). We want to show \[\ell_{\Omega_{1}}[(ab),(cd)]\geq\ell_{\Omega_{2}}[(ab),(cd)]. 
\tag{38}\] For any fixed \(g:\mathcal{E}(\Omega_{2})\to\mathbb{R}_{+}\), since \(\mathcal{E}(\Omega_{1})\subset\mathcal{E}(\Omega_{2})\), we have \(\{\gamma:(ab)\stackrel{{\Omega_{1}}}{{\leftrightarrow}}(cd)\} \subset\{\gamma:(ab)\stackrel{{\Omega_{2}}}{{\leftrightarrow}}(cd)\}\). Then, \[\inf_{\Omega_{1}\atop\gamma:(ab)\stackrel{{\Omega_{1}}}{{ \leftrightarrow}}(cd)}\sum_{e\in\gamma}g_{e}\geq\inf_{\Omega_{2}\atop\gamma: (ab)\stackrel{{\Omega_{2}}}{{\leftrightarrow}}(cd)}\sum_{e\in \gamma}g_{e}.\] For the denominator, we use \(\mathcal{E}(\Omega_{1})\subset\mathcal{E}(\Omega_{2})\) again: \[\sum_{e\in\mathcal{E}(\Omega_{1})}g_{e}^{2}\leq\sum_{e\in\mathcal{E}(\Omega_{ 2})}g_{e}^{2}.\] Therefore, \[\frac{\left[\inf_{\gamma:(ab)\stackrel{{\Omega_{1}}}{{ \leftrightarrow}}(cd)}\sum_{e\in\gamma}g_{e}\right]^{2}}{\sum_{e\in\mathcal{E} (\Omega_{1})}g_{e}^{2}}\geq\frac{\left[\inf_{\gamma:(ab)\stackrel{{ \Omega_{2}}}{{\leftrightarrow}}(cd)}\sum_{e\in\gamma}g_{e}\right]^{2}}{\sum_{e \in\mathcal{E}(\Omega_{2})}g_{e}^{2}}.\] (38) follows from taking supremum over all \(g:\mathcal{E}(\Omega_{2})\to\mathbb{R}_{+}\).
2309.14474
Gastro-Intestinal Tract Segmentation Using an Explainable 3D Unet
In treating gastrointestinal cancer using radiotherapy, the role of the radiation oncologist is to administer high doses of radiation, through x-ray beams, toward the tumor while avoiding the stomach and intestines. With the advent of precise radiation treatment technology such as the MR-Linac, oncologists can visualize the daily positions of the tumors and intestines, which may vary day to day. Before delivering radiation, radiation oncologists must manually outline the position of the gastrointestinal organs in order to determine the position and direction of the x-ray beam. This is a time-consuming and labor-intensive process that may substantially prolong a patient's treatment. A deep learning (DL) method can automate and expedite the process. However, many deep neural network approaches currently in use are black boxes that lack interpretability, which renders them untrustworthy and impractical in a healthcare setting. To address this, an emergent field of AI known as Explainable AI (XAI) may be incorporated to improve the transparency and viability of a model. This paper proposes a deep learning pipeline that incorporates XAI to address the challenges of organ segmentation.
Kai Li, Jonathan Chan
2023-09-25T19:16:19Z
http://arxiv.org/abs/2309.14474v1
# Gastro-Intestinal Tract Segmentation Using an Explainable 3D Unet

###### Abstract

In treating gastrointestinal cancer using radiotherapy, the role of the radiation oncologist is to administer high doses of radiation, through x-ray beams, toward the tumor while avoiding the stomach and intestines. With the advent of precise radiation treatment technology such as the MR-Linac, oncologists can visualize the daily positions of the tumors and intestines, which may vary day to day. Before delivering radiation, radiation oncologists must manually outline the position of the gastrointestinal organs in order to determine the position and direction of the x-ray beam. This is a time-consuming and labor-intensive process that may substantially prolong a patient's treatment. A deep learning (DL) method can automate and expedite the process. However, many deep neural network approaches currently in use are black boxes that lack interpretability, which renders them untrustworthy and impractical in a healthcare setting. To address this, an emergent field of AI known as Explainable AI (XAI) may be incorporated to improve the transparency and viability of a model. This paper proposes a deep learning pipeline that incorporates XAI to address the challenges of organ segmentation.

GI Tract Segmentation, XAI, UNet, Instance Segmentation, GradCAM, 3D Medical Image Segmentation
### Dataset

The dataset consists of 3D volumes rendered from 2D MRI scans. These MRI scans are from actual cancer patients who had 1-5 scans on separate days during their treatment. The scans vary significantly in size; there are 4 unique sizes: 234 x 234, 266 x 266, 276 x 276, and 310 x 360 pixels. Each scan has a corresponding mask segmenting the small bowel, large bowel, and stomach. There are several instances where masks overlap one another, which makes this a multilabel segmentation task. To improve prediction accuracy, a 5-fold ensemble training approach [7] was used. Each fold uses approximately 240 volumes (80% of the dataset) for training and the remaining 60 volumes (20% of the dataset) for validation. The stratified group K fold method is used to ensure folds are balanced; namely, each fold contains the same proportion of annotations for each volume. For testing, the model is run against a hidden test set that consists of about 50 cases, where each case contains anywhere from 1 to 5 volumes. The training and testing images are available on Kaggle's "UW-Madison GI Tract Image Segmentation" competition [1].

### Image Preprocessing

Each input is first cropped in order to remove unnecessary background space, then normalized. The model is trained on smaller patches of 160 x 160 x 80 pixels rather than on an entire volume, primarily because of the memory limitations of the computing resources. These patches are created using a 3D random spatial crop. Other transformations include random flip, random affine, random grid distortion, random dropouts, and random shifting and scaling in intensity, each with a probability value of 0.5. These help reduce the chances of overfitting. The AdamW optimizer [9] was used with an initial learning rate of \(5\times 10^{-4}\), which decays to \(1\times 10^{-4}\) through a one-cycle learning rate scheduler [10]. Each fold is trained for 120 epochs initially, then fine-tuned for 40 epochs at a lower learning rate of \(3\times 10^{-4}\). Due to memory limitations on the machine, a batch size of 4 was chosen, although it is not necessarily optimal. For the same reason, the number of epochs is also limited. The predictions of this model were submitted to Kaggle's "UW-Madison GI Tract Image Segmentation" competition for testing. A minimal sketch of this training setup is given below.

Figure 1: Illustration of 5 fold ensembling. Ensembled model has better generalization performance compared to each individual fold.

Figure 2: Sample raw image compared to preprocessed image, which features a random crop, change in intensity, and random flip.

Figure 3: UNet Model as described by original paper. Note the activation function for the model in this paper is PReLU rather than ReLU.
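To make the split, augmentation, and optimization settings above concrete, the following is a minimal sketch of how such a pipeline could be assembled. The paper does not publish its code, so the use of MONAI dictionary transforms, MONAI's UNet with these channel widths, the affine/dropout parameters, and the exact one-cycle settings are assumptions made for illustration; only the patch size, the listed augmentations with probability 0.5, AdamW with the stated learning rates, the epoch counts, the batch size of 4, and the stratified group 5-fold split come from the text.

```python
import numpy as np
import torch
from sklearn.model_selection import StratifiedGroupKFold
from monai.networks.nets import UNet
from monai.losses import DiceLoss
from monai.transforms import (
    Compose, NormalizeIntensityd, RandSpatialCropd, RandFlipd, RandAffined,
    RandGridDistortiond, RandCoarseDropoutd, RandScaleIntensityd, RandShiftIntensityd,
)

# Placeholder bookkeeping arrays, one entry per 3D volume (values are illustrative only).
volume_ids = np.arange(300)                              # ~300 volumes overall (240 train / 60 val per fold)
case_ids = np.repeat(np.arange(60), 5)                   # volumes of the same patient stay in the same fold
annotation_profile = np.random.randint(0, 4, size=300)   # stratification target (assumed encoding)

# Balanced 5-fold split described in the Dataset section.
splitter = StratifiedGroupKFold(n_splits=5)
folds = list(splitter.split(volume_ids, annotation_profile, groups=case_ids))

# Augmentations from the Image Preprocessing section: 160 x 160 x 80 random patches plus
# flip, affine, grid distortion, coarse dropout, and intensity shift/scale, each with p = 0.5.
train_transforms = Compose([
    NormalizeIntensityd(keys="image"),
    RandSpatialCropd(keys=["image", "mask"], roi_size=(160, 160, 80), random_size=False),
    RandFlipd(keys=["image", "mask"], prob=0.5, spatial_axis=0),
    RandAffined(keys=["image", "mask"], prob=0.5, rotate_range=0.1, scale_range=0.1),
    RandGridDistortiond(keys=["image", "mask"], prob=0.5),
    RandCoarseDropoutd(keys=["image"], holes=8, spatial_size=(16, 16, 8), prob=0.5),
    RandScaleIntensityd(keys="image", factors=0.1, prob=0.5),
    RandShiftIntensityd(keys="image", offsets=0.1, prob=0.5),
])

# Vanilla 3D UNet with PReLU activations; the channel widths are an assumption.
model = UNet(spatial_dims=3, in_channels=1, out_channels=3,
             channels=(32, 64, 128, 256, 512), strides=(2, 2, 2, 2), act="PRELU")

loss_fn = DiceLoss(sigmoid=True)              # multilabel: one sigmoid channel per organ
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-4)

# One-cycle schedule from 5e-4 toward 1e-4 over 120 epochs; the div factors are assumptions.
steps_per_epoch = 240 // 4                    # 240 training volumes, batch size 4
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=5e-4, total_steps=120 * steps_per_epoch,
    pct_start=0.05, div_factor=5, final_div_factor=1.0)
```

The same recipe is then repeated per fold, and the fine-tuning run would simply reuse the loop with 40 epochs and a maximum learning rate of 3e-4, as listed in Table 1.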
The final accuracy of the model was tested using a specific metric imposed by the competition: \(0.4\times\mathrm{DSC}+0.6\times\mathrm{Hausdorff\ distance}\). DSC is the dice similarity coefficient [11], given as: \(DSC=\frac{2\times|X\cap Y|}{|X|+|Y|}\) The Hausdorff distance [11] is calculated as follows: \(d_{\text{H}}(X,Y)=\max\left\{\underset{x\in X}{\sup}\,d(x,Y),\,\underset{y\in Y}{\sup}\,d(X,y)\,\right\}\) In the above equations, X and Y denote the tensors of the predicted mask and the ground truth mask. Although the testing metric is a linear combination of both DSC and Hausdorff distance, in the computer vision literature, segmentation models generally do not attempt to directly minimize Hausdorff distance due to its sensitivity to noise and outliers. Moreover, Hausdorff distance is determined solely by the largest error, and using it as a loss function may lead to poor segmentation performance and algorithm instability [12]. Therefore, the loss function used for this model was purely dice loss, which is equal to 1 - dice similarity coefficient (DSC), or \(1-\frac{2\times|X\cap Y|}{|X|+|Y|}\). It is worth noting that cross entropy [b] is another common loss function in medical image segmentation. Although DSC generally leads to better results [12], it is also possible to utilize a weighted loss function combining the two metrics. This may in fact help the model learn the right features better, which could result in a higher test score. ### Visualizations For model interpretability, two PyTorch visualization libraries were used. Namely, _Pytorch GradCAM_ [13] was used for the gradient-based methods, which includes the GradCAM and guided GradCAM implementations, while _Captum_ [14] was used for the activation-based method, DeepLIFT. ## 4 Results The visualizations generated from GradCAM, DeepLIFT and Guided GradCAM are shown in Figures 4, 5, and 6 respectively. Each visualization is shown next to its ground truth mask. Training results for each fold are displayed in Table 2. Note that total accuracy is calculated using the weighted average of the validation DSC and validation Hausdorff distance, as specified in section 3.3. \begin{table} \begin{tabular}{|p{113.8pt}|p{113.8pt}|p{113.8pt}|} \hline Parameter & Run 1 & Run 2 \\ \hline Batch Size & 4 & 4 \\ \hline Initial Learning Rate & \(5\times 10^{-4}\) & \(3\times 10^{-4}\) \\ \hline Epochs & 120 & 40 \\ \hline Minimum Learning Rate & \(1\times 10^{-4}\) & \(1\times 10^{-4}\) \\ \hline \end{tabular} \end{table} Table 1: Table summarizing various hyperparameters for the two training runs used to train each fold. The initial learning rate was set to ensure stable training yet quick convergence due to the limited number of epochs. \begin{table} \begin{tabular}{|l|c|c|c|c|} \hline Fold & Training DSC & Validation DSC & Validation Hausdorff & Total Accuracy \\ \hline Fold 0 & 0.865 & 0.825 & 0.962 & 0.906 \\ \hline Fold 1 & 0.868 & 0.805 & 0.960 & 0.898 \\ \hline Fold 2 & 0.860 & 0.812 & 0.963 & 0.903 \\ \hline Fold 3 & 0.903 & 0.820 & 0.964 & 0.906 \\ \hline Fold 4 & 0.833 & 0.825 & 0.943 & 0.899 \\ \hline \end{tabular} \end{table} Table 2: Final training DSC, validation DSC, validation Hausdorff distance, and total score for each training fold. ## 5 Discussion The ensembled 3D UNet is a strong-performing model, as it is able to achieve a score of 0.855, which is among the top 40% of scores in the competition. Based on the visualizations, there remain several areas that could be improved.
For instance, in the DeepLIFT heatmaps, the model fails to detect some portions of the organ entirely. Furthermore, there is activation in the upper part of the scan, which is an area of the arm. These errors can be minimized through better preprocessing, such as smarter cropping of the input images. Another observation is that for certain folds there is a substantial discrepancy between the validation score and the training score: in fold 4, the training DSC is 0.903 while the validation DSC is only 0.820. It was also noted that around the last 20 epochs of training, the validation DSC began decreasing with each subsequent epoch despite the training DSC increasing. These are indications of overfitting. One solution for this is to increase the data augmentation or transformations, for instance, adding random scaling of contrast and random noise. Visualizations also show there are several improvements that can be made to the training data. One error is that the labels are often truncated, so the bottom parts of the volume are unlabelled. This can be seen in Figure 4, where the visualizations clearly indicate the presence of the small bowel despite the mask being empty. ## 6 Conclusion A vanilla 3D UNet is able to achieve relatively accurate predictions, based on its performance in the competition. This model is intended to be a baseline, and its performance can be further enhanced by using transfer learning, that is, applying an advanced classification network such as EfficientNet, ResNet, or VGG16 as the encoder for the UNet. Improving faulty training data could also help strengthen the model. As next steps, one could label the bottom slices of the training volumes which are missing masks, or leave them out of training altogether. One could also experiment with different loss functions, such as Focal Loss, Binary Cross Entropy Loss (BCE Loss), and Intersection over Union (IoU).
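As a concrete reference for the loss functions discussed in this paper, the dice loss used for training and the weighted dice + BCE combination suggested as an alternative can be written in a few lines of PyTorch. This is an editorial sketch rather than the authors' code; the smoothing constant and the 0.5 weighting are illustrative.

```python
import torch
import torch.nn.functional as F

def dice_loss(logits: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """1 - DSC, computed per channel and averaged. Shapes: (batch, channels, D, H, W).
    A sigmoid (not softmax) is used because the masks are multilabel and may overlap."""
    probs = torch.sigmoid(logits)
    dims = (2, 3, 4)                                      # spatial axes
    intersection = (probs * target).sum(dim=dims)         # |X intersect Y|
    denom = probs.sum(dim=dims) + target.sum(dim=dims)    # |X| + |Y|
    dsc = (2.0 * intersection + eps) / (denom + eps)
    return 1.0 - dsc.mean()

def dice_bce_loss(logits: torch.Tensor, target: torch.Tensor, bce_weight: float = 0.5) -> torch.Tensor:
    """Weighted combination of dice loss and binary cross entropy (illustrative weights)."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    return (1.0 - bce_weight) * dice_loss(logits, target) + bce_weight * bce
```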
2309.12005
GPS-VIO Fusion with Online Rotational Calibration
Accurate global localization is crucial for autonomous navigation and planning. To this end, various GPS-aided Visual-Inertial Odometry (GPS-VIO) fusion algorithms have been proposed in the literature. This paper presents a novel GPS-VIO system that is able to significantly benefit from the online calibration of the rotational extrinsic parameter between the GPS reference frame and the VIO reference frame. The underlying reason is that this parameter is observable. This paper provides a novel proof through nonlinear observability analysis. We also evaluate the proposed algorithm extensively on diverse platforms, including a flying UAV and a driving vehicle. The experimental results support the observability analysis and show increased localization accuracy in comparison to state-of-the-art (SOTA) tightly-coupled algorithms.
Junlin Song, Pedro J. Sanchez-Cuevas, Antoine Richard, Raj Thilak Rajan, Miguel Olivares-Mendez
2023-09-21T12:22:54Z
http://arxiv.org/abs/2309.12005v2
# Improving GPS-VIO Fusion with Adaptive Rotational Calibration ###### Abstract Accurate global localization is crucial for autonomous navigation and planning. To this end, GPS-aided Visual-Inertial Odometry (GPS-VIO) fusion algorithms are proposed in the literature. This paper presents a novel GPS-VIO system that is able to significantly benefit from the online adaptive calibration of the rotational extrinsic parameter between the GPS reference frame and the VIO reference frame. The behind reason is this parameter is observable. This paper provides novel proof through nonlinear observability analysis. We also evaluate the proposed algorithm extensively on diverse platforms, including flying UAV and driving vehicle. The experimental results support the observability analysis and show increased localization accuracy in comparison to state-of-the-art (SOTA) tightly-coupled algorithms. Sensor Fusion, State Estimation, Kalman Filter ## I Introduction The accuracy, robustness and reliability of the pose estimation are essential for having safe autonomous navigation capabilities in mobile robots. In the past few decades, the Global Positioning System (GPS) has been widely used for localization in outdoor scenes in terrestrial environments, since it offers a robust global localization solution without accumulating drift over time. However, due to the high-level noise of consumer grade GPS sensors, accurate positioning cannot be typically achieved by using GPS sensors alone. Moreover, in urban scenes, GPS signals are vulnerable to the interference of the local environment, such as signal occlusions, or bounces due to high-rise buildings, which further degrades the GPS positioning performance. In those GPS-degraded or -denied scenarios, Visual-Inertial Odometry (VIO) techniques and Simultaneous Localization and Mapping (SLAM) algorithms are conventionally implemented. VIO algorithms do not suffer from these interruptions, and provide high-precision and high-frequency local state estimation, however, these algorithms also have their inherent drawbacks. For instance, VIO systems cannot provide long-term drift-free localization and heading. In [1], they actually prove that VIO systems have four unobservable Degrees of Freedom (DoF), namely the 3D positions and yaw. SLAM techniques mitigate this drawback thanks to the simultaneous calculation of the localization, the map, the execution of the loop-closure, and map alignment algorithms. Those mechanisms allow SLAM techniques to decrease the localization uncertainty and the long-term drift. As a counterpart, SLAM techniques demand high computational and memory resources. Although SLAM seems to be the ideal approach for GPS-denied environments, in other scenarios in which the GPS could be degraded but it is still accessible, GPS positioning and visual inertial navigation system can be combined to provide an accurate, high frequency, robust localization and long-term drift-free localization. The fusion of the three sensors involved in this process, GPS, camera and IMU has produced promising results and can achieve locally accurate and globally drift-free localization. GPS-aided VIO algorithms have been previously proposed in the literature [2, 3]. The spatial transformation to couple both the GPS and the VIO reference frame is shown to be unobservable with linear observability analysis [2, 4]. However, Linearization implies that the derived observability is locally and may be unreliable for the original nonlinear system [5, 6]. 
Thus, it is necessary to revisit the observable property with the tool of nonlinear observability analysis. More specifically, we aim to show in this paper that the rotational extrinsic parameter between GPS reference frame and VIO reference frame is observable. In this paper, we propose a novel filter-based GPS-VIO system which is specially focused on including a reliable and accurate estimation of the rotation extrinsic between GPS reference frame and VIO reference frame. Having a reliable calibration is essential to improve the accuracy of the system. The key contributions of this work are summarized as: * We propose a novel filter-based estimator to fuse GPS measurements and visual-inertial data and simultaneously estimate the rotational extrinsic parameter between the GPS and VIO frames online. * We prove that the rotational extrinsic parameter is observable via nonlinear observability analysis, and support the conclusion with simulations. * We evaluate the localization accuracy of the proposed algorithm on multiple public datasets, including small scale flying datasets and large scale driving datasets, and show the superior performance of our proposed algorithms. ## II Related work Sensor fusion of camera and IMU is a well studied topic [7, 8]. Visual-inertial fusion algorithms can be broadly classified into two categories: optimization-based methods and filter-based methods. As compared to filter-based methods, optimization-based methods achieve higher theoretical accuracy. Representative works include VINS-Mono [9], Basalt [10] and ORB-SLAM3 [11]. Their high computational cost is the major disadvantage of optimization-based methods. In contrast, sliding-window filter-based methods, such as the Multi-State Constraint Kalman Filter (MSCKF) [12, 13, 14], are more efficient and achieve comparable accuracy. The combination of a camera and an IMU can only generate relative pose estimation, resulting in the unobservability of global position and absolute yaw [1]. Therefore, pure VIO systems tend to drift over time [15]. Recent works have employed GPS measurement to eliminate this drift. These methods can be divided into loosely-coupled methods and tightly-coupled methods. VINS-Fusion is a loosely-coupled approach, which fuses GPS position measurements and output pose of VIO subsystem [16]. However, the fusion algorithm is unable to improve the VIO subsystem. Therefore, the inner correlations of all measurements are discarded, causing suboptimal localization results. Gomsf is a similar loosely-coupled work [17]. Tightly-coupled methods fully exploit the complementary merits of multi-sensor data, and are promising to further improve the accuracy. A tightly-coupled estimator based on sliding window optimization is proposed in [18]. The rotation between the GPS reference frame and the VIO reference frame is included in the state vector, but the non-synchronization between GPS timestamp and VIO system timestamp is neglected. [3] describes another tightly-coupled optimization-based approach. The comparative experiments with VINS-Fusion have demonstrated that tightly-coupled methods are superior to loosely-coupled methods. However, the transformation between GPS reference frame and VIO reference frame is not estimated in [3]. The closest to our work is [2], which is a tightly-coupled estimator based on MSCKF. This approach enables online spatial-temporal GPS-IMU calibration. 
The extrinsic parameters between the GPS reference frame and the VIO reference frame are inserted into the state during initialization, but are marginalized after all states are transformed from the VIO reference frame to the GPS reference frame [2]. The main difference between the GPS-IMU and the GPS-VIO online spatial calibration is that the first one refers to the relative position of the GPS sensor frame and the IMU frame, while the second one refers to the relative transformation between the GPS reference frame and the VIO reference frame. Consequently, the approach of [2] does not estimate the GPS-VIO extrinsic parameters online. The reason is that they show the extrinsic parameters to be unobservable using linear observability analysis. However, linear observability analysis may be unreliable for a nonlinear system. A locally observable system is necessarily globally observable, but a locally unobservable system may still be globally observable [6, 19, 20]. Our main contribution in this work is to point out that the rotational extrinsic parameter is globally observable using nonlinear observability analysis. This novel observability conclusion is similar to our recently accepted work [21]. The unavoidable errors caused by imposing fixed extrinsic parameters after GPS-VIO initialization lead to miscalculations in the fusion algorithm over long distances. Without online calibration, the initial estimation error of the rotational extrinsic parameter will deteriorate the localization accuracy, especially when the GPS noise is relatively large. [18, 22, 23] explicitly adopt online calibration of the rotational extrinsic parameter to improve localization accuracy. To reduce the state estimation complexity, [23] disables the online estimation once the rotational extrinsic parameter has converged. However, none of them provides a theoretical observability analysis. In this paper, we prove that the rotational extrinsic parameter is observable; hence, including it in the state vector is a promising and theoretically guaranteed means to improve the accuracy of the state estimator. ## III Problem Formulation ### _Reference frames and Notation_ The coordinate systems used in this work are shown in Fig. 1. \(\{E\}\) represents the East-North-Up (ENU) coordinate system, which is the reference frame of the GPS position measurements. An arbitrary GPS measurement can be chosen as the origin of the \(\{E\}\) frame. \(\{V\}\) is the reference frame of the VIO system. After the initialization of the VIO system, the orientation of this coordinate system is gravity aligned. \(\{I\}\) and \(\{C\}\) represent the IMU coordinate frame and the camera coordinate system, respectively. \(G\) is the position of the antenna of the GPS receiver. We use the notation \({}^{V}\left(\bullet\right)\) to represent a quantity in the coordinate frame \(\{V\}\). The position of point \(I\) in the frame \(\{V\}\) is expressed as \({}^{V}p_{I}\). The velocity of frame \(\{I\}\) in the frame \(\{V\}\) is expressed as \({}^{V}v_{I}\). Furthermore, we use quaternions to represent the attitude of a rigid body [24]. \({}^{I}_{V}q\) represents the orientation of frame \(\{I\}\) with respect to frame \(\{V\}\), and its corresponding rotation matrix is given by \({}^{I}_{V}R\). Similar notations also apply to the other reference frames. \(\left[\bullet\right]_{\times}\) denotes the skew-symmetric matrix corresponding to a three-dimensional vector, and \(\left[\bullet\right]^{T}\) is used to represent the transpose of a matrix.
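For reference, the \(\left[\bullet\right]_{\times}\) operator defined above can be written as a small helper. This is an illustrative snippet of ours, not part of the paper, but the same operator is used conceptually in the measurement Jacobians that follow.

```python
import numpy as np

def skew(v: np.ndarray) -> np.ndarray:
    """[v]_x : the skew-symmetric matrix such that skew(v) @ w == np.cross(v, w)."""
    x, y, z = v
    return np.array([[0.0,  -z,   y],
                     [  z, 0.0,  -x],
                     [ -y,   x, 0.0]])
```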
### _Classical MSCKF-based VIO structure_ According to [2][14], the classic MSCKF-based VIO algorithm usually defines the following states \[\begin{array}{l}x=\left[\begin{array}{ccccc}x_{1}^{T}&x_{c_{1}}^{T}&\cdots &x_{c_{N}}^{T}\end{array}\right]^{T}\\ x_{I}=\left[\begin{array}{ccccc}I_{V}q^{T}&V_{I}^{T}&v_{I}^{T}&V_{I}^{T}&p_ {f}^{T}&b_{\omega}^{T}&b_{a}^{T}\end{array}\right]^{T}\\ x_{c_{i}}=\left[\begin{array}{ccccc}I_{V}q^{T}&V_{I}^{T}&p_{f}^{T}\end{array} \right]^{T}\\ \end{array} \tag{1}\] Fig. 1: Coordinate systems. where \({}^{V}p_{f}\) represents feature position, \(f\), in the VIO reference frame \(\{V\}\). To make the presentation concise, only one feature point is described here. \(b_{\omega}\) and \(b_{a}\) are the biases of the IMU angular velocity and the linear acceleration measurements, respectively. \(x_{I}\) indicates the current IMU state. \(x_{c_{i}}\) is obtained by extracting the first two quantities of \(x_{I}\) at different image times. Then the state of the whole MSCKF system \(x\) can be constructed by augmenting \(N\) historical \(x_{c_{i}}\) in \(x_{I}\). After successful initialization and setting appropriate initial value and covariance for \(x\), the VIO system follows the Kalman filter pipeline. IMU measurements are used for the propagation of \(x_{I}\). Whenever a new image is received, \(x\) is augmented with the pose clone of the current \(x_{I}\) and the visual constraints between multiple pose clones are utilized to update the state. For more details of this part, we refer interested readers to Open-VINS [14]. ### _GPS Measurement Update_ Assuming that the first GPS position measurement as the origin of the \(\{E\}\) frame, the subsequent GPS measurements are denoted as \({}^{E}p_{G}\). Each GPS observation can be formulated as1 Footnote 1: Measurement equation (2) is used here just for the convenience of the observability analysis. In the implementation, we adopt the interpolation measurement equation as [2]. \[\begin{split} z&={}^{E}p_{G}={}^{E}p_{V}+{}^{E}_{V}R {}^{V}p_{G}\\ &={}^{E}p_{V}+{}^{E}_{V}R\left({}^{V}p_{I}+{}^{I}_{V}R{}^{TI}p_{ G}\right)\end{split} \tag{2}\] where \({}^{I}p_{G}\) is the position of point \(G\) in the IMU frame \(\{I\}\). This paper assumes that this quantity is known, since \({}^{I}p_{G}\) can be obtained from CAD model or calibrated before the system runs. \({}^{V}_{V}R\) and \({}^{V}p_{I}\) are quantities expressed in the VIO reference frame \(\{V\}\). \({}^{E}p_{V}\) and \({}^{E}_{V}R\) are the transformations between frame \(\{V\}\) and frame \(\{E\}\). Since these two frames are gravity aligned, we can simply use the yaw angle to parameterize the rotation matrix between them. Therefore, \({}^{E}_{V}R\) can be expressed as \[\begin{split}{}^{E}_{V}R=\left[\begin{array}{ccc}\cos\psi&- \sin\psi&0\\ \sin\psi&\cos\psi&0\\ 0&0&1\end{array}\right]\end{split} \tag{3}\] where \(\psi\) is the relative yaw angle between GPS reference frame and VIO reference frame. To make the measurement equation usable, \(\psi\) and \({}^{E}p_{V}\) must be known. In [2], these are calculated in the initialization stage of the GPS-VIO system and are marginalized later. The main difference between our work and theirs is that we will provide a more suitable nonlinear observability analysis in Section IV-B, to decide the inclusion of observable quantities to the system state vector for potential online refinement. 
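To make the GPS measurement model concrete, the following numpy sketch evaluates the predicted antenna position of equations (2)-(3) for a given state. It is an illustration only: the variable names are ours, and, as noted in the footnote, the actual implementation uses the interpolated measurement equation of [2] rather than this simplified form.

```python
import numpy as np

def yaw_rotation(psi: float) -> np.ndarray:
    """E_V_R of eq. (3): rotation from the VIO frame {V} to the ENU frame {E}."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def predict_gps_measurement(E_p_V, psi, V_p_I, I_V_R, I_p_G):
    """Predicted GPS antenna position in {E}, eq. (2):
       z = E_p_V + E_V_R (V_p_I + I_V_R^T I_p_G)."""
    V_p_G = V_p_I + I_V_R.T @ I_p_G        # antenna position expressed in {V}
    return E_p_V + yaw_rotation(psi) @ V_p_G
```

The residual between an incoming GPS reading and this prediction is what drives the Kalman update, with the measurement Jacobian derived next giving its sensitivity to the IMU pose, \(\psi\) and \({}^{E}p_{V}\).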
To analyze the observability of all the extrinsic parameters between the frame \(\{V\}\) and the frame \(\{E\}\), \(\psi\) and \({}^{E}p_{V}\) are included in the state vector. Moreover, the time offset between the GPS and IMU should also be modeled for the sake of real world experiments. Thus, the new system state vector then becomes \[x=\left[\begin{array}{cccc}x_{I}^{T}&x_{c_{1}}^{T}&\cdots&x_{c_{N}}^{T}&\psi &{}^{E}p_{V}^{T}&{}^{I}t_{G}\end{array}\right]^{T} \tag{4}\] where \(\psi\) and \({}^{E}p_{V}\) represent the interested extrinsic parameters, and \({}^{I}t_{G}\), is the time offset between the GPS and IMU clock2. The subset of system state related to the GPS measurement equation is noted as \(x_{s}\) Footnote 2: As time offset \({}^{I}t_{G}\) calibration is not the focus of this work, it is ignored in following analysis, but considered in real-world experiments to compensate non-synchronization (Section V-C). \[x_{s}=\left[\begin{array}{cccc}\psi q^{T}&{}^{V}p_{I}^{T}&\psi&{}^{E}p_{V}^ {T}\end{array}\right]^{T} \tag{5}\] The measurement Jacobian \(H\) is expressed as \[H=\frac{\partial\tilde{z}}{\partial\tilde{x}_{s}}=\left[\begin{array}{cccc}- {}^{E}_{V}R_{V}^{I}R^{T}\left[{}^{I}p_{G}\right]_{\times}&{}^{E}_{V}R&{}^{H} _{\psi}{}^{V}p_{G}&I_{3}\end{array}\right] \tag{6}\] \[H_{\psi}=\left[\begin{array}{cccc}-\sin\psi&-\cos\psi&0\\ \cos\psi&-\sin\psi&0\\ 0&0&0\end{array}\right] \tag{7}\] To keep this paper focused and concise, we omit the description of the GPS-VIO system's initialization, as well as the proper handling of time-offsets among the different sensors, like \({}^{I}t_{G}\). Interested readers can refer to [2]. ## IV Observability Analysis ### _Comments on Linear Observability Analysis_ The linear observability analysis of the GPS-VIO system has been investigated previously in [2] and detailed in [4]. However, the unobservable property obtained about extrinsic parameters in these works maybe misleading, as they apply linear observability analysis for a typically nonlinear GPS-VIO system. As discussed in Section II, a locally unobservable system maybe globally observable [6, 19, 20]. Moreover, no experiments were performed in [2, 4] to validate the observability conclusions regarding the extrinsic parameters. In this work, we employ a more appropriate nonlinear observability analysis for the nonlinear GPS-VIO system to obtain observable property and provide experiments to solidify the analysis. ### _Nonlinear Observability Analysis_ We now conduct a nonlinear observability analysis, following the standard Lie derivatives method [20]. 
Removing the IMU bias and pose clones from the state vector (4), to simplify the formulation, the state becomes \[x=\left[\begin{array}{cccc}{}^{I}_{V}q^{T}&{}^{V}v_{I}^{T}&{}^{V}p_{I}^{T}&{ }^{V}p_{f}^{T}&\psi&{}^{E}p_{V}^{T}\end{array}\right] \tag{8}\] Following immediately, we write the kinematic equations as \[\left[\begin{array}{c}{}^{I}_{V}\dot{q}\\ {}^{V}\dot{v}_{I}\\ {}^{V}\dot{p}_{I}\\ {}^{V}\dot{p}_{f}\\ {}^{V}\dot{p}_{f}\\ {}^{V}\dot{p}_{f}\\ {}^{V}\dot{p}_{V}\end{array}\right]=\underbrace{\left[\begin{array}{c}0_{4\times 1 }\\ g\\ {}^{V}v_{I}\\ 0_{3\times 1}\\ 0\\ 0\\ 0_{3\times 1}\\ \end{array}\right]}_{f_{0}}+\underbrace{\left[\begin{array}{c}\frac{1}{2} \Xi\left({}^{I}_{V}q\right)\\ 0_{3}\\ 0_{3}\\ 0_{3}\\ 0_{1\times 3}\\ 0_{3}\\ \end{array}\right]}_{0_{3}}\omega+\underbrace{\left[\begin{array}{c}0_{4 \times 3}\\ {}^{V}_{V}R^{T}\\ 0_{3}\\ 0_{3}\\ 0_{3}\\ 0_{3}\\ \end{array}\right]}_{f_{2}} \tag{9}\] where \(g\) is denoted as the local gravity vector. \(\omega\) and \(a\) are de-biased IMU angular velocity and linear acceleration measurements, respectively. Here, we use the time derivative property of quaternions \[\dot{q}=\frac{1}{2}\Omega\left(\omega\right)q=\frac{1}{2}\Xi\left(q\right)\omega \tag{10}\] The definition of \(\Xi\left(q\right)\) can be found in [24]. Next, we list the usable measurement equations. The camera measurement equation is \[h_{1}\left(x\right)={}^{C}p_{f}={}^{C}_{I}R^{I}_{V}Rp+{}^{C}p_{I} \tag{11}\] where \(p={}^{V}p_{f}-{}^{V}p_{I}\). The norm constraint of the unit quaternion is also considered as a measurement equation \[h_{2}\left(x\right)={}^{I}_{V}q{}^{T}_{V}q-1=0 \tag{12}\] The measurement equation of the GPS is \[h_{3}\left(x\right)={}^{E}p_{G}={}^{E}p_{V}+{}^{E}_{V}R{}^{V}p_{I} \tag{13}\] where without loss of generality, we assume \({}^{I}p_{G}=0_{3\times 1}\) to simplify the expression. #### Iv-B1 Zeroth-Order Lie Derivatives The zeroth-order Lie derivative of a function is itself. \[\begin{array}{l}\pounds^{0}h_{1}={}^{C}p_{I}+{}^{C}_{I}R^{I}_{V}Rp\\ \pounds^{0}h_{2}={}^{V}_{I}q{}^{T}_{V}q-1\\ \pounds^{0}h_{3}={}^{E}p_{V}+\frac{F}{V}R{}^{V}p_{I}\end{array} \tag{14}\] The gradients of zeroth-order Lie derivatives with respect to \(x\) are \[\begin{array}{l}\nabla\pounds^{0}h_{1}=\left[\begin{array}{cccc}X_{1}&0_{ 3}&-{}^{C}_{I}R^{I}_{V}R&{}^{C}_{I}R^{I}_{V}R&0_{3\times 1}&0_{3}\\ \pounds^{0}h_{2}=\left[\begin{array}{cccc}2^{I}_{V}q{}^{T}&0_{1\times 3}&0_{1 \times 3}&0_{1\times 3}&0&0_{1\times 3}\\ 0_{3\times 4}&0_{3}&{}^{E}_{V}R&0_{3}&H_{\psi}{}^{V}p_{I}&I_{3}\\ \end{array}\right]\end{array} \tag{15}\] where \(X\) represents a quantity that does not need to be computed explicitly, as it does not affect the observability analysis. 
#### Iv-B2 First-Order Lie Derivatives The first-order Lie derivative of \(h_{1}\) with respect to \(f_{0}\) is computed as \[\pounds^{1}_{f_{0}}h_{1}=\nabla\pounds^{0}h_{1}\bullet f_{0}=-{}^{C}_{I}R^{ I}_{V}R{}^{V}v_{I} \tag{16}\] The gradient of \(\pounds^{1}_{f_{0}}h_{1}\) with respect to \(x\) is \[\nabla\pounds^{1}_{f_{0}}h_{1}=\left[\begin{array}{cccc}X_{2}&-{}^{C}_{I} R^{I}_{V}R&0_{3}&0_{3}&0_{3\times 1}&0_{3}\\ \end{array}\right] \tag{17}\] The first-order Lie derivative of \(h_{1}\) with respect to \(f_{1}\) is computed as \[\pounds^{1}_{f_{1}}h_{1}=\nabla\pounds^{0}h_{1}\bullet f_{1}=\frac{1}{2}X_{1 }\Xi\left({}^{I}_{V}q\right) \tag{18}\] where the gradient of \(\pounds^{1}_{f_{1}}h_{1}\) with respect to \(x\) is \[\nabla\pounds^{1}_{f_{1}}h_{1}=\left[\begin{array}{cccc}X_{3}&0_{9\times 3 }&X_{4}&-X_{4}&0_{9\times 1}&0_{9\times 3}\\ \end{array}\right] \tag{19}\] The first-order Lie derivative of \(h_{3}\) with respect to \(f_{0}\) is computed as \[\pounds^{1}_{f_{0}}h_{3}=\nabla\pounds^{0}h_{3}\bullet f_{0}={}^{E}_{V}R{}^ {V}v_{I} \tag{20}\] and the gradient of \(\pounds^{1}_{f_{0}}h_{3}\) with respect to \(x\) is \[\nabla\pounds^{1}_{f_{0}}h_{3}=\left[\begin{array}{cccc}0_{3\times 4}&{}^{E}_{V}R&0_{ 3}&0_{3}&H_{\psi}{}^{V}v_{I}&0_{3}\\ \end{array}\right] \tag{21}\] #### Iv-B3 Observability analysis By stacking the gradients of previously calculated Lie derivatives together, the following observability matrix is constructed \[\mathcal{O}=\left[\begin{array}{cccc}\nabla\pounds^{0}h_{1}\\ \nabla\pounds^{0}h_{2}\\ \nabla\pounds^{0}h_{3}\\ \nabla\pounds^{1}_{f_{0}}h_{1}\\ \nabla\pounds^{1}_{f_{0}}h_{3}\\ \end{array}\right]=\] \[\left[\begin{array}{cccc}X_{1}&0_{3}&-{}^{C}_{V}R&0_{3\times 1}&0_{3}\\ 2^{V}_{V}q{}^{T}&0_{1\times 3}&0_{1\times 3}&0_{1\times 3}&0&0_{1\times 3}\\ 0_{3\times 4}&0_{3}&{}^{E}_{V}R&0_{3}&H_{\psi}{}^{V}p_{I}&I_{3}\\ X_{2}&-{}^{C}_{V}R&0_{3}&0_{3}&0_{3\times 1}&0_{3}\\ X_{3}&0_{9\times 3}&X_{4}&-X_{4}&0_{9\times 1}&0_{9\times 3}\\ 0_{3\times 4}&{}^{E}_{V}R&0_{3}&0_{3}&H_{\psi}{}^{V}v_{I}&0_{3}\\ \end{array}\right] \tag{22}\] Adding the fourth column to the third column, \(\mathcal{O}\) becomes \[\mathcal{O}=\left[\begin{array}{cccc}X_{1}&0_{3}&0_{3}&{}^{C}_{V}R&0_{3 \times 1}&0_{3}\\ 2^{V}_{V}q{}^{T}&0_{1\times 3}&0_{1\times 3}&0_{1\times 3}&0&0_{1\times 3}\\ 0_{3\times 4}&0_{3}&{}^{E}_{R}&0_{3}&H_{\psi}{}^{V}p_{I}&I_{3}\\ X_{2}&-{}^{C}_{V}R&0_{3}&0_{3}&0_{3\times 1}&0_{3}\\ X_{3}&0_{9\times 3}&0_{9\times 3}&-X_{4}&0_{9\times 1}&0_{9\times 3}\\ 0_{3\times 4}&{}^{E}_{V}R&0_{3}&0_{3}&H_{\psi}{}^{V}v_{I}&0_{3}\\ \end{array}\right] \tag{23}\] \({}^{E}_{V}R\) in the third column can be used to eliminate \(H_{\psi}{}^{V}p_{I}\) in the fifth column and \(I_{3}\) in the sixth column. Thus, \(\mathcal{O}\) can be reduced to \[\mathcal{O}=\left[\begin{array}{cccc}X_{1}&0_{3}&0_{3}&{}^{C}_{V}R&0_{3 \times 1}&0_{3}\\ 2^{V}_{V}q{}^{T}&0_{1\times 3}&0_{1\times 3}&0_{1\times 3}&0&0_{1\times 3}\\ 0_{3\times 4}&0_{3}&{}^{E}_{V}R&0_{3}&0_{3\times 1}&0_{3}\\ X_{2}&-{}^{C}_{V}R&0_{3}&0_{3}&0_{3\times 1}&0_{3}\\ X_{3}&0_{9\times 3}&0_{9\times 3}&-X_{4}&0_{9\times 1}&0_{9\times 3}\\ 0_{3\times 4}&{}^{E}_{V}R&0_{3}&0_{3}&H_{\psi}{}^{V}v_{I}&0_{3}\\ \end{array}\right] \tag{24}\] The sixth column corresponds to the translation part of the extrinsic parameter between frame \(\{V\}\) and frame \(\{E\}\). This column is not full rank, so the translation part is unobservable. Finally, we analyze the rotation part of the extrinsic parameter. 
Let us focus on \(H_{\psi}{}^{V}v_{I}\) in the fifth column. The fifth column cannot be eliminated by the other columns and is full rank in the general case. Therefore, the rotational extrinsic parameter is observable. It is worth noting that the rank of the fifth column can drop to zero if zero-velocity motion occurs, more specifically, zero velocity in the \(x\) and \(y\) directions. Therefore, horizontal velocity excitation affects the observability of the rotational extrinsic parameter. ## V Results We develop the proposed algorithm based on Open-VINS [14], which is a state-of-the-art VIO framework. When GPS information is usable, the system state, including the rotational extrinsic parameter between the GPS reference frame and the VIO reference frame, is updated via Section III-C. Since [2] is not open-sourced, we have implemented our own version by following their paper. In the results presented here, our implementation of [2] is referred to as **"GPS-VIO-fixed"**. The label "fixed" comes from the fact that the spatial transformation is marginalized after the initialization of the system, while our algorithm continues to calibrate the rotational extrinsic parameter online after initialization. First, we design a simulation environment to verify the observability conclusion. Then, the proposed algorithm is evaluated on two public datasets. One is the small-scale EuRoC dataset [25], which has seen extensive use in the VIO research community. Noisy GPS measurements are simulated by adding Gaussian noise to the groundtruth position. It features UAV flight. The other is the large-scale KAIST dataset with real GPS measurements in challenging urban scenes [26]. It features vehicular driving. The path length and GPS noise of each selected KAIST sequence are longer than 7 km and larger than 6 m, respectively. ### _Validation of the Observability Analysis_ To verify our observability proof, we build a simulation environment based on Open-VINS [14]. The groundtruth trajectory of MH_01_easy in the EuRoC dataset is used to generate simulated multi-sensor data, including 400 Hz IMU, 10 Hz image and 10 Hz GPS. The noise of the GPS sensor is approximated by applying multivariate Gaussian noise with a standard deviation of 0.2 m to the positions. To verify the convergence capability of the discovered observable quantity, the calibration of the rotational extrinsic parameter is performed with different initial guesses. We start with an error of 20 degrees and add increments of 50 degrees until we reach 170 degrees; we then repeat with negative angles from -20 to -170 degrees. Fig. 2(a) shows the convergence of the yaw error. Between 21 s and 45 s, the convergence of the yaw error reaches a steady state because of the stationary motion status. As mentioned in Section IV-B, zero-velocity motion leads to the unobservability of the rotational extrinsic parameter. At other times, there is velocity excitation. Before 21 s, the motion space near the starting point is relatively small compared to the GPS noise. After 45 s, the moving distance exceeds 10 m, which is far greater than the GPS noise. Fig. 2(a) also shows one standard deviation (1 \(\sigma\)) of the yaw error. The initial one-standard-deviation uncertainty of the yaw error is set to 4 rad, considering that the largest initial yaw error is close to \(\pi\) rad. The estimation of the yaw error consistently converges to near zero with small uncertainty, and the convergence process is robust to the relatively large initial error.
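The key condition identified above -- that the \(\psi\) column of the observability matrix is driven by \(H_{\psi}{}^{V}v_{I}\) and therefore vanishes without horizontal velocity -- can be checked with a few lines of numpy. This is an illustrative check written by the editor, not part of the authors' implementation.

```python
import numpy as np

def H_psi(psi: float) -> np.ndarray:
    """Derivative of the yaw rotation E_V_R with respect to psi, eq. (7)."""
    c, s = np.cos(psi), np.sin(psi)
    return np.array([[-s, -c, 0.0],
                     [ c, -s, 0.0],
                     [0.0, 0.0, 0.0]])

psi = 0.7
v_moving = np.array([1.2, -0.4, 0.3])      # non-zero horizontal velocity
v_vertical = np.array([0.0, 0.0, 0.3])     # zero velocity in x and y

print(np.linalg.norm(H_psi(psi) @ v_moving))    # > 0: the psi column is excited, yaw observable
print(np.linalg.norm(H_psi(psi) @ v_vertical))  # = 0: psi unobservable without horizontal excitation
```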
Apart from the UAV trajectory, we also repeat the above steps with the planar vehicular trajectory of Urban39 in the KAIST dataset. Larger GPS noise and practical GPS noise characteristics are considered. The vertical noise is set to twice the horizontal noise. The GPS noise is defined as \[n_{gps}\sim\mathcal{N}\left(0_{3\times 1},\mathrm{diag}(1,1,4)\right) \tag{25}\] Fig. 2(b) shows the convergence results with the vehicular trajectory. Both the yaw error and its corresponding uncertainty consistently converge to near zero, even for the near \(\pi\) rad initial error. All these results from Fig. 2 support that the rotational extrinsic parameter is observable. ### _EuRoC dataset_ There are 11 sequences in the EuRoC dataset. Each sequence is classified into easy, medium or hard according to the level of difficulty for the VIO algorithms. Image and IMU data are available at 20 Hz and 200 Hz respectively, and the groundtruth position and orientation are provided at 200 Hz. We test all sequences to verify the convergence of the rotational extrinsic parameter between the GPS reference frame and the VIO reference frame. Similarly to the previous experiment, the simulated GPS measurements are obtained by adding Gaussian noise (\(\sigma=0.2\) m) to the groundtruth position. The GPS frequency is sampled to be 20 Hz. For the initialization of our GPS-VIO system, it is assumed that we do not have any accurate initial estimate of \(\psi\), and the initial value of \(\psi\) is naively set to \(\hat{\psi}=0\). \({}^{E}\hat{p}_{V}\) is set as the first GPS measurement received after successful VIO initialization. The groundtruth of \(\psi\) is acquired by querying the groundtruth orientation value at the initialization time. Fig. 3(a) shows the convergence of \(\psi\) over time. The range of the initial yaw error is \([-178.47^{\circ},177.67^{\circ}]\). The estimation error \(\left(\hat{\psi}-\psi\right)\) of each sequence quickly approaches near zero. The results verify the observability of \(\psi\). Table I shows the Absolute Trajectory Error (ATE) of the different algorithms on all the sequences. We include the results for GPS positioning, optimization-based VIO (SVO2.0 [27]), filter-based VIO (Open-VINS [14]), two variants of the tightly-coupled optimization-based GPS-VIO approach [3], GPS-VIO-fixed [2] and our proposed algorithm. As [3] relies on manually setting the initial rotational extrinsic parameter, we provide two variants: initializing \(\psi\) as zero, as in our method, or initializing \(\psi\) as the groundtruth. Our approach outperforms the other state-of-the-art competitors on most sequences because of the online adaptive rotational calibration. Regarding the first three sequences, we achieve slightly lower but comparable accuracy compared to the second variant of [3]. The possible reason is that the VIO subsystem of [3], SVO2.0 [27], performs better than our VIO subsystem, Open-VINS [14], in the first three sequences. However, SVO2.0 suffers from a relatively naive VIO initialization strategy3 for most sequences. Fig. 3: (a) \(\psi\) convergence over time. (b) Horizontal view of the aligned trajectory with different levels of GPS noise. Fig. 2: Top: \(\psi\) convergence over time with respect to different initial guesses. Bottom: One standard deviation (1 \(\sigma\)) of \(\psi\).
Footnote 3: [https://github.com/uzh-rpg/rpq_svo_pro_open/blob/master/doc/known_issues_and_improvements.md](https://github.com/uzh-rpg/rpq_svo_pro_open/blob/master/doc/known_issues_and_improvements.md) ### _KAIST dataset_ The KAIST datasets are collected in highly complex urban environments. It is very challenging to achieve high-precision localization in these environments using consumer-grade sensors, because many moving objects exist in the streets and dense high-rise buildings corrupt the GPS signals. The GPS position covariance of the Urban39 dataset is shown in Fig. 3(a). In practice, the GPS measurements are of low quality and non-Gaussian. The GPS has larger uncertainty in the \(z\) direction compared to the \(x\) and \(y\) directions. Image, IMU and GPS data of the KAIST datasets are received at 10 Hz, 100 Hz and 5 Hz, respectively. We use the initialization algorithm of [2] to obtain the initial \(\psi\) and \({}^{E}p_{V}\). The initialization distance is set to 20 m. \({}^{E}p_{V}\) is fixed after initialization and only \(\psi\) is estimated online. As the groundtruth orientation in the GPS reference frame is unavailable (see Section 4.3 in [26]), (\(\psi-\psi_{0}\)) is plotted in Fig. 3(b) to show the convergence trend over time. \(\psi_{0}\) is the initial value of \(\psi\). The deviation from the initial value is less than \(2.5^{\circ}\) for each sequence. Although the calibration of the time offset between GPS and IMU, \({}^{I}t_{G}\), is not the focus of this paper, we still need to deal with it carefully. It has a negative impact on the localization accuracy without proper handling, especially when the different sensor clocks are not hardware-synchronized [2]. Fig. 3(b) also shows the time offset calibration results, which are initialized at 0 s. The average final converged value is \(-0.13\pm 0.03\) s. We evaluate the ATE of GPS positioning, VIO (Open-VINS [14]), GPS-VIO-fixed and our proposed algorithm. The results are summarized in TABLE II. Our algorithm provides the highest localization accuracy. VIO suffers from drift over such long trajectories. Moreover, the scale information of the VIO system is unobservable when the vehicle undergoes constant acceleration motion [28, 29]. These issues can be solved by fusing GPS measurements once the GPS-VIO system is successfully initialized (see Urban33 in TABLE II). ## VI Conclusion This paper presents a novel tightly-coupled filter-based GPS-VIO algorithm which benefits from the online estimation of the rotational extrinsic parameter between the GPS and the VIO reference frames. The proposed algorithm is able to adaptively refine the rotational calibration and thus improve the localization performance. The study of the observability of the extrinsic parameters demonstrates that nonlinear observability analysis is more comprehensive and reliable than linear observability analysis for a nonlinear system. It is advisable to validate the unobservability properties derived from linear observability analysis in simulation. In future work, we will investigate whether we can obtain better localization results by formulating the estimation algorithm directly with raw GNSS observations [22, 30].
\begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **ID** & \begin{tabular}{c} **Path** \\ **len(km)** \\ \end{tabular} & **GPS** & **VIO [14]** & \begin{tabular}{c} **GPS-VIO-** \\ **fixed [2]** \\ \end{tabular} & **Ours** \\ \hline 28 & 11.5 & 8.66 & 10.78 / 1.44 & 7.71 / 1.75 & **4.67 / 1.42** \\ \hline 31 & 11.4 & 7.26 & 76.87 / 1.58 & 6.85 / 1.62 & **5.56 / 1.55** \\ \hline 33 & 7.6 & 8.95 & – & 7.77 / 2.90 & **4.94 / 1.27** \\ \hline 38 & 11.4 & 7.09 & 7.53 / 1.26 & 5.55 / 1.25 & **3.66 / 1.22** \\ \hline 39 & 11.1 & 6.43 & 8.73 / 1.93 & 5.50 / 1.48 & **2.63 / 1.24** \\ \hline \end{tabular} \end{table} TABLE II: ATE (meter / degree) Comparison with the SOTA on the KAIST Dataset. \(-\) means trajectory divergence. \begin{table} \begin{tabular}{|c|c|c|c|c|c|c|} \hline **ID** & **VIO [27]** & **VIO [14]** & **A** & **B** & **C** & **Ours** \\ \hline MH01 & 0.064 & 0.084 & 0.137 & **0.031** & 0.114 & 0.036 \\ \hline MH02 & 0.052 & 0.086 & 0.110 & **0.036** & 0.126 & 0.040 \\ \hline MH03 & 0.118 & 0.124 & 0.119 & **0.048** & 0.174 & 0.062 \\ \hline MH04 & 0.203 & 0.169 & 0.292 & 0.068 & 0.080 & **0.061** \\ \hline MH05 & 0.240 & 0.200 & 0.312 & 0.056 & 0.176 & **0.049** \\ \hline V101 & 0.064 & 0.054 & 0.0 & 0.041 & 0.039 & **0.037** \\ \hline V102 & 0.082 & 0.046 & 0.312 & 0.048 & 0.050 & **0.037** \\ \hline V103 & 0.066 & 0.048 & 0.365 & 0.068 & 0.091 & **0.041** \\ \hline V201 & 0.085 & 0.041 & 0.106 & 0.038 & 0.098 & **0.035** \\ \hline V202 & 0.111 & 0.040 & 0.123 & 0.046 & 0.042 & **0.033** \\ \hline V203 & 0.156 & 0.067 & 0.154 & 0.098 & 0.073 & **0.044** \\ \hline \end{tabular} \({}^{1}\) A: results of [3] by initializing \(\psi\) as zero. \({}^{2}\) B: results of [3] by initializing \(\psi\) as groundtruth. \({}^{3}\) C: results of GPS-VIO-fixed [2] by initializing \(\psi\) through Section IV in [2], which suffers from relatively large GPS noise (see Fig. 3(b)). The initialization distance is set as 2m. \({}^{4}\) Ours: results of proposed method by initializing \(\psi\) as zero. \end{table} TABLE I: ATE (meter) Comparison with the SOTA on the EuRoC Dataset. The ATE of GPS trajectory is 0.347m. Fig. 4: (a) Real world GPS position covariance in three directions. (b) Top: \((\psi-\psi_{0})\) convergence over time. Bottom: Calibration results of the time offset between GPS and IMU.
2309.05579
Towards inferring the geometry of kilonovae
Recent analysis of the kilonova, AT2017gfo, has indicated that this event was highly spherical. This may challenge hydrodynamics simulations of binary neutron star mergers, which usually predict a range of asymmetries, and radiative transfer simulations show a strong direction dependence. Here we investigate whether the synthetic spectra from a 3D kilonova simulation of asymmetric ejecta from a hydrodynamical merger simulation can be compatible with the observational constraints suggesting a high degree of sphericity in AT2017gfo. Specifically, we determine whether fitting a simple P-Cygni line profile model leads to a value for the photospheric velocity that is consistent with the value obtained from the expanding photosphere method. We would infer that our kilonova simulation is highly spherical at early times, when the spectra resemble a blackbody distribution. The two independently inferred photospheric velocities can be very similar, implying a high degree of sphericity, which can be as spherical as inferred for AT2017gfo, demonstrating that the photosphere can appear spherical even for asymmetrical ejecta. The last-interaction velocities of radiation escaping the simulation show a high degree of sphericity, supporting the inferred symmetry of the photosphere. We find that when the synthetic spectra resemble a blackbody the expanding photosphere method can be used to obtain an accurate luminosity distance (within 4-7 per cent).
Christine E. Collins, Luke J. Shingles, Andreas Bauswein, Stuart A. Sim, Theodoros Soultanis, Vimal Vijayan, Andreas Floers, Oliver Just, Gerrit Leck, Georgios Lioutas, Gabriel Martínez-Pinedo, Albert Sneppen, Darach Watson, Zewei Xiong
2023-09-11T16:08:43Z
http://arxiv.org/abs/2309.05579v2
# Towards inferring the geometry of kilonovae ###### Abstract Recent analysis of the kilonova, AT2017gfo, has indicated that this event was highly spherical. This may challenge hydrodynamics simulations of binary neutron star mergers, which usually predict a range of asymmetries, and radiative transfer simulations show a strong direction dependence. Here we investigate whether the synthetic spectra from a 3D kilonova simulation of asymmetric ejecta from a hydrodynamical merger simulation can be compatible with the observational constraints suggesting a high degree of sphericity in AT2017gfo. Specifically, we determine whether fitting a simple P-Cygni line profile model leads to a value for the photospheric velocity that is consistent with the value obtained from the expanding photosphere method. We would infer that our kilonova simulation is highly spherical at early times, when the spectra resemble a blackbody distribution. The two independently inferred photospheric velocities can be very similar, implying a high degree of sphericity, which can be as spherical as inferred for AT2017gfo, demonstrating that the photosphere can appear spherical even for asymmetrical ejecta. The last-interaction velocities of radiation escaping the simulation show a high degree of sphericity, supporting the inferred symmetry of the photosphere. We find that when the synthetic spectra resemble a blackbody the expanding photosphere method can be used to obtain an accurate luminosity distance (within 4 - 7 per cent). keywords: neutron star mergers - radiative transfer - methods: numerical ## 1 Introduction The kilonova AT2017gfo, which was coincident with the gravitational wave signal GW170817 (Abbott et al., 2017), has provided us with a wealth of observations (e.g., Smart et al., 2017; Pian et al., 2017; Villar et al., 2017; Coulter et al., 2017). Recently, Sneppen et al. (2023b) presented evidence that AT2017gfo was highly spherical, which is surprising given that binary neutron star merger simulations show strong asymmetries (Bauswein et al., 2013; Sekiguchi et al., 2015; Bovard et al., 2017; Radice et al., 2018; Foucart et al., 2023; Combi & Siegel, 2023). Sneppen et al. (2023b) inferred the expansion velocity of the ejecta by analysing the most prominent feature in the spectra of AT2017gfo, which has been suggested to be a Sr ii P-Cygni feature (Watson et al., 2019; Gillanders et al., 2022; Domoto et al., 2021, 2022, although see Tarumi et al., 2023 for discussion of He i as an alternative explanation for this feature). Such line profile analysis is predominantly sensitive to the line of sight velocity component. Sneppen et al. (2023b) also inferred a photospheric radius using the expanding photosphere method (EPM; Baade, 1926; Kirshner & Kwan, 1974; Eastman et al., 1996) and, assuming homologous expansion of the ejecta, converted this to an expansion velocity of the photosphere. This method is primarily sensitive to the expansion velocity perpendicular to the line of sight. They found remarkable consistency between the velocities obtained via these two methods across two observed epochs, suggesting consistency between line of sight and perpendicular expansion velocities, which is indicative of a near-spherical explosion. To quantify the inferred sphericity, Sneppen et al. 
(2023b) defined a zero-centred asymmetry index, \(\Upsilon=\frac{v_{\parallel}-v_{\perp}}{v_{\parallel}+v_{\perp}}\), where \(v_{\parallel}\) is the expansion velocity of the photosphere primarily along the line of sight (obtained from the P-Cygni profile analysis) and \(v_{\perp}\) is the expansion velocity of the photosphere primarily in the direction perpendicular to the line of sight (obtained from the EPM), and found \(\Upsilon=0.00\pm 0.02\)1. In addition to this analysis, they identify that the shape of the P-Cygni feature is best matched by assuming a spherical photosphere across multiple epochs (see Sneppen et al., 2023). Shingles et al. (2023) presented a three-dimensional radiative transfer calculation using ejecta from a binary neutron star merger simulation coupled to a nucleosynthesis network. In the polar directions, the spectra resembled the observations of AT2017gfo remarkably well, considering the model was not tuned in any way to match this event, although at earlier times than those observed (see Shingles et al., 2023 for discussion). The synthetic observables showed variations with both polar and azimuthal viewing angles, and therefore are not isotropic. Here, we analyse this simulation to determine whether we can infer information about the underlying symmetry of the ejecta from the synthetic observables. The aim of this paper is to determine whether the geometry and overall degree of sphericity of the ejecta is encoded in the observables as suggested by Sneppen et al. (2023). To this end, we consider the synthetic observables predicted by the 3D radiative transfer calculation by Shingles et al. (2023) from an asymmetric merger simulation to analyse the velocities that would be derived from these observables following the method laid out by Sneppen et al. (2023), and investigate whether the apparent consistency of \(v_{\parallel}\) and \(v_{\perp}\) is a fundamental challenge to the explosion model. Determining whether the geometry of simulations is compatible with observations will further our understanding of binary neutron star mergers, including the high density Equation of State, the dynamics of matter ejection, including the role of neutrinos, and the underlying rapid neutron capture (r-process) nucleosynthesis. The EPM is typically used to measure luminosity distances, \(\rm D_{L}\), to supernovae, and can be used to obtain distance measurements independently of the cosmological distance ladder, which can lead to independent measurements of the Hubble constant (de Jaeger et al., 2017; Gall et al., 2018; Sneppen et al., 2023, 20). Sneppen et al. (2023) applied the EPM to measure the distance to AT2017gfo and found \(\rm D_{L}\) to be consistent with previous distance measurements. In addition to inferring the geometry of our simulations, we aim to test how accurately the luminosity distance to our synthetic spectra can be measured using the EPM. ## 2 Simulations ### Merger ejecta symmetry We consider the 3D binary neutron star merger simulation of equal mass 1.35 \(\rm M_{\odot}\) neutron stars considered by Shingles et al. (2023) and Collins et al. (2023). The mass ejected into equal solid-angle bins, spaced by polar angle, for this model is shown in Figure 1. The simulation predicts more mass per solid angle near the equator compared to near the poles. There is a mild equatorial (i.e., north vs.
south) asymmetry, visible in Figure 1, which to some degree may be physical (as a result of stochastic fluctuations and hydrodynamic instabilities in the matter flow), but may also be amplified by numerical effects (e.g. particle noise). We discuss these aspects in more detail in Appendix A. We note that the merger simulation was only evolved until 20 ms after the merger (simulating only dynamical ejecta), at which time a snapshot of the merger data was taken and mapped into artis. The merger simulation shows some variation in asymmetry with time, and it terminates before the matter ejection ceases and the ejecta configuration reaches its final state. We thus do not expect an overall match with AT2017gfo. Also shown is the mass of material ejected in the velocity range 0.15c \(<\) v \(<\) 0.45c, \(\rm M_{\rm v}\), which is approximately in the line forming region during the first day of the kilonova (Section 3.3). Note that 44 per cent of the ejecta mass is below 0.15c. Figure 2 shows how the ejecta mass is distributed with polar angle and radial velocity. The ejecta exhibit asymmetry both in the total mass ejected into different directions, and in the velocity at which the bulk of the mass is ejected. To quantify the asymmetry of the ejecta in the simulation, we define a zero-centred asymmetry parameter2 for the ejected mass, \[\Upsilon_{\rm M}=\frac{M_{\rm eq}-M_{\rm pole}}{M_{\rm eq}+M_{\rm pole}}, \tag{1}\] where \(\rm M_{\rm eq}\) is the mass within a solid-angle at the equator and \(\rm M_{\rm pole}\) is the mass within an equal solid-angle at the pole. Footnote 2: A zero-centred asymmetry parameter was defined by Sneppen et al. (2023) to quantify the sphericity in the inferred velocities of the photosphere. Similarly, we use zero-centred asymmetry parameters to quantify sphericity, however the measurements on the sphericity of the mass and luminosity are not directly comparable to the quantity inferred by Sneppen et al. (2023). The masses within equal solid-angles (plotted in Figures 1 and 2) are listed in Table 1 for solid-angles near the poles and equator. The values of \(\Upsilon_{\rm M}\) are listed in Table 2. The values of \(\Upsilon_{\rm M}\) indicate a moderate level of asymmetry in the mass per solid-angle ejected near the poles compared to the equator, however, \(\Upsilon_{\rm M,v}\) (considering only the mass ejected within the velocity range 0.15c \(<\) v \(<\) 0.45c) indicates that the mass approximately within the line forming region within the first day of the kilonova is more symmetric, which may lead to a higher level of symmetry in the synthetic observables than indicated by \(\Upsilon_{\rm M}\). The abundances of synthesised elements in the ejecta also show asymmetries. We show the distribution of Sr synthesised in the ejecta in Figure 2, which is predominantly responsible for the strongest feature in our simulated spectra (Shingles et al., 2023). Note that the feature is also composed of contributions from Y and Zr, which show a similar distribution in the ejecta to Sr. We show the total mass of these and other representative elements ejected by polar angle in Figure 11. Figure 1: Mass ejected by polar angle, where each bin has an equal solid-angle. The height of each bar indicates the mass ejected within the solid-angle bin. The black lines indicate the mass ejected into the velocity range which is approximately in the line forming region of the kilonova within the first day (assuming homologous expansion).
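The asymmetry parameters of equation (1) and Table 2 reduce to a simple comparison of mass per equal-solid-angle bin. The following is a schematic illustration (ours), in which the binning in \(\cos(\theta)\) follows the 0.2-wide bins of Table 1 but the mass array is a placeholder rather than the simulation output.

```python
import numpy as np

def asymmetry_index(m_equator: float, m_pole: float) -> float:
    """Zero-centred asymmetry parameter of eq. (1): (M_eq - M_pole) / (M_eq + M_pole)."""
    return (m_equator - m_pole) / (m_equator + m_pole)

# Ejecta mass per equal-solid-angle bin in cos(theta), bins of width 0.2 from -1 to 1.
# Placeholder values standing in for the simulation data.
mass_per_bin = np.full(10, 1.0e-3)

upsilon_pos = asymmetry_index(m_equator=mass_per_bin[5],   # 0.0 < cos(theta) < 0.2
                              m_pole=mass_per_bin[9])       # 0.8 < cos(theta) < 1.0
upsilon_neg = asymmetry_index(m_equator=mass_per_bin[4],   # -0.2 < cos(theta) < 0.0
                              m_pole=mass_per_bin[0])       # -1.0 < cos(theta) < -0.8
```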
Higher masses of Sr, Y and Zr are synthesised near the poles than near the equator (due to the higher \(Y_{e}\) near the poles). Sneppen et al. (2023) suggest that the line shape of the spectral feature in AT2017gfo indicates a near spherical distribution of Sr, which is not shown by the distribution of Sr synthesised in our model. Using the asymmetry parameter, \(\Upsilon_{\rm M}^{\rm Sr}\), listed in Table 2, the mass distribution of Sr shows a moderate level of asymmetry. Within the velocity range \(0.15\rm c<v<0.45c\), the distribution of Sr with polar angle is very asymmetric (e.g., \(\Upsilon_{\rm M,v}^{\rm Sr}=-0.71\)). ### Radiative transfer simulation Given the asymmetry of the ejecta model, we aim to test whether this level of asymmetry can be consistent with the observational constraints inferred by Sneppen et al. (2023) for AT2017gfo, which appear to favour near spherical symmetry. For this, we analyse the 3D radiative transfer kilonova simulation, 3D AD2, carried out by Shingles et al. (2023) to identify whether, if subjected to a similar analysis as data from actual observations, it can match (or be ruled out by) constraints on its apparent sphericity, such as the Y parameter used by Sneppen et al. (2023). This simulation was carried out using the multi-dimensional, time-dependent Monte Carlo radiative transfer code arts(Sim, 2007; Kromer et al., 2010; Shingles et al., 2020; Shingles et al., 2023, based on the methods of Lacy, 2002, 2003, 2005). We note that in arts there is no photosphere defined in the simulation. We can therefore infer the apparent photosphere from the synthetic observables (using the same methods as for observations) without the constraint of a photospheric boundary being imposed in the simulation. To reduce Monte Carlo noise in the orientation-dependent synthetic observables, escaping packets of radiation are binned into ten equal solid-angle bins, defined by polar angle. \begin{table} \begin{tabular}{c c c c c} \hline \hline \(\mu_{\rm ij}\) (pole) & \(\mu_{\rm ij}\) (equator) & \(\Upsilon_{\rm M}\) & \(\Upsilon_{\rm M,v}\) & \(\Upsilon_{\rm M}^{\rm Sr}\) & \(\Upsilon_{\rm M,v}^{\rm Sr}\) \\ \([\cos(\theta)]\) & \([\cos(\theta)]\) & & & \\ \hline \([0.8,1.01]\) (+ve pole) & \([0.0,0.2]\) (eq.) & \(0.22\) & \(-0.094\) & \(-0.31\) & \(-0.71\) \\ \([-1.0,-0.8]\) (\(-\)ve pole) & \([-0.2,0.0]\) (eq.) & \(0.33\) & \(0.037\) & \(-0.29\) & \(-0.66\) \\ \hline \hline \end{tabular} \end{table} Table 2: Asymmetry parameter comparing mass ejected within equal solid-angle bins near the equator and at the poles using the values in Table 1. The range in polar-angle of each bin, \(\mu_{\rm ij}\), (with a width of \(\cos(\theta)=0.2\)) is listed. \begin{table} \begin{tabular}{c c c c c} \hline \hline \(\mu_{\rm ij}\) (pole) & \(\mu_{\rm ij}\) (equator) & \(\Upsilon_{\rm M}\) & \(\Upsilon_{\rm M,v}\) & \(\Upsilon_{\rm M}^{\rm Sr}\) & \(\Upsilon_{\rm M,v}^{\rm Sr}\) \\ \([\cos(\theta)]\) & \([\cos(\theta)]\) & & & \\ \hline \([0.8,1.01]\) (+ve pole) & \([0.0,0.2]\) (eq.) & \(0.22\) & \(-0.094\) & \(-0.31\) & \(-0.71\) \\ \([-1.0,-0.8]\) (\(-\)ve pole) & \([-0.2,0.0]\) (eq.) & \(0.33\) & \(0.037\) & \(-0.29\) & \(-0.66\) \\ \hline \hline \end{tabular} \end{table} Table 1: Mass ejected within equal solid-angle bins, defined by polar angle, at the poles and around the equator. We define \(\mu_{\rm ij}\) to be the range in polar-angle of each bin in the ejecta, which each have a width of \(\cos(\theta)=0.2\). 
Included is the total mass, M, within the solid-angle and the mass, M\({}_{v}\), within the velocity range (assuming homologous expansion) \(0.15\rm c<v<0.45c\), which is approximately within the spectral line-forming region during the first day of the kilonova. This is also shown for the mass of Sr, M\({}^{\rm Sr}\), and the mass of Sr within the velocity range, M\({}_{\rm v}^{\rm Sr}\). Figure 2: Total mass (a) and mass of Sr (b) ejected into polar angle bins, where each angle bin has an equal solid-angle with a width of \(\cos(\theta)=0.2\). The colour scale indicates the mass lying within each radial zone. Note a lower cut has been placed on mass to highlight the model structure. ## 3 Results ### Synthetic observables The synthetic light curves and spectra for this simulation are presented by Shingles et al. (2023). In this section, we focus on the observer orientation dependence and the level of isotropy shown by the synthetic observables. #### 3.1.1 Observer orientation variation in bolometric luminosity As a first quantification of the degree to which the simulation predicts observer orientation dependencies we compare the bolometric light curves in different directions. As presented by Shingles et al. (2023), the light curves in the lines of sight at the poles are brighter than those at the equator, due to lower densities and lower \(Y_{e}\) near the poles (see Collins et al., 2023). In Figure 3 we plot the angle-dependent bolometric light curves as isotropic-equivalent luminosities (i.e., from the simulation we record the energy emitted per second into each solid angle bin to obtain light curves in erg s\({}^{-1}\) sr\({}^{-1}\) for each orientation; we then scale these to an equivalent isotropic luminosity by multiplication by the full-sphere solid angle of 4\(\pi\)-sr). Note that the solid-angle bins around the poles encompass the inferred observer angle of AT2017gfo (between 19\({}^{\circ}\) and 25\({}^{\circ}\); Mooley et al., 2022). We define a zero-centred asymmetry index \[Y_{\rm bol}=\frac{L_{\rm eq}-L_{\rm pole}}{L_{\rm eq}+L_{\rm pole}} \tag{2}\] (similar to Sneppen et al., 2023b, but not directly comparable), where \(L_{\rm pole}\) is the luminosity at either pole and \(L_{\rm eq}\) is the luminosity near the equator. The luminosity emitted into the solid-angle bins above (\(0<\cos(\theta)<0.2\)) and below (\(-0.2<\cos(\theta)<0\)) the equator is almost identical - see Figure 3. The maximum deviation of \(Y_{\rm bol}\) between the poles and the equator is \(\Upsilon_{\rm bol}\approx-0.18\) (Figure 3). This clearly demonstrates that the synthetic observables for this ejecta model are not isotropic, however, according to this simple asymmetry metric, the synthetic observables are not as asymmetric as the mass distribution of the ejecta. We discuss this further in Section 3.1.2. #### 3.1.2 Observer orientation variation in band-limited light curves We show the \(griz\) light curves for this simulation in Figure 4. Similarly to the bolometric light curves, the band-limited light curves are not isotropic, but also do not show a very strong observer orientation dependence. In all directions, the light curves initially peak in the bluer bands, and then become brighter in the redder bands, showing similar behaviour to the observed blue to red colour evolution of AT2017gfo. This was also found for this ejecta model in an approximate sense by Collins et al. (2023). 
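The isotropic-equivalent light curves and the bolometric asymmetry index of equation (2) follow directly from the energy binned per solid angle, as described above. A short sketch of this bookkeeping (ours, with placeholder arrays; the \(4\pi\) scaling is the one quoted in the text):

```python
import numpy as np

n_bins = 10                                  # equal solid-angle bins in cos(theta)
E_dot_per_bin = np.full(n_bins, 1.0e39)      # erg/s escaping into each bin (placeholder values)
solid_angle_per_bin = 4.0 * np.pi / n_bins   # sr

L_per_sr = E_dot_per_bin / solid_angle_per_bin   # erg s^-1 sr^-1
L_iso = 4.0 * np.pi * L_per_sr                   # isotropic-equivalent luminosity

def upsilon_bol(L_eq: float, L_pole: float) -> float:
    """Zero-centred asymmetry index of eq. (2)."""
    return (L_eq - L_pole) / (L_eq + L_pole)

# e.g. comparing the 0 < cos(theta) < 0.2 bin with the 0.8 < cos(theta) < 1 bin:
print(upsilon_bol(L_iso[5], L_iso[9]))
```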
Despite having a composition with a higher lanthanide fraction at the equator than at the poles, we do not predict a significantly redder colour at the equator than the poles (see Figure 2). Although, we note that the atomic data considered in this simulation does not include actinides (see Shingles et al., 2023 for details of the atomic data), and therefore the opacity may be underestimated. This suggests that in a 3D simulation, a high lanthanide fraction at the equator does not necessarily lead to a significantly redder spectral energy distribution (SED) viewing from the equator than viewing from the poles. The reason for this behaviour can be seen in Figure 5, where we show the location in the ejecta (in velocity space, assuming homologous expansion) where radiation last interacted before being emitted towards an observer viewing from a polar or an equatorial direction. The radiation viewed from a given direction has been emitted from a broad range of ejecta, both parallel and perpendicular to the line of sight. Viewed from an equatorial direction, radiation is emitted from high opacity, lanthanide rich regions of the ejecta near the equator, but in addition to this, radiation is also emitted towards an observer at the equator from lower opacity regions (with lower lanthanide fractions) of the ejecta near the poles. We therefore find that the asymmetry of the ejecta (in mass distribution and the variation in \(Y_{e}\)) does not strongly influence the anisotropy of the light curves for varying observer orientations, since an observer does not view radiation emitted from only one region of the ejecta. #### 3.1.3 Observer orientation variation in spectra As discussed by Shingles et al. (2023), the model predicts that the spectra would not appear the same to an observer viewing towards the poles as to an observer near the equator. We show the viewing-angle dependent spectra in Figure 6. The model spectra show phases that resemble the observed spectra of AT2017gfo in the direction of the poles (see Shingles et al., 2023), however, at earlier times than those observed. The more rapid evolution is likely due to the lower mass of our model compared to the inferred mass of AT2017gfo (see discussion by Shingles et al., 2023). The spectra in the directions of the poles show a feature resembling a P-Cygni profile, which in the spectra of AT2017gfo has been suggested to be Sr ii. As indicated by the band-limited light curves, the SEDs at the poles and equator peak at similar wavelengths, however, at the equator the spectra are relatively featureless and fainter than the directions near the poles (Figure 6). Therefore, our model spectra are significantly dependent on observer orientation. We now examine whether determination of the asymmetry parameter, \(\Upsilon\), for the inferred photospheric velocities from these synthetic spectra can yield results consistent with observational constraints. Figure 3: Bolometric light curves averaged over azimuthal angle, showing the variation with polar angle. Also shown is the zero-centred asymmetry index, \(\Upsilon_{\rm bol}\) comparing the luminosity at the poles to the luminosity at the equator. In the lower plot, the blue line shows the negative pole compared to the equator (\(-1<\cos(\theta)<-0.8\) and \(-0.2<\cos(\theta)<0\)) and orange shows the positive pole compared to the equator (\(0.8<\cos(\theta)<1\) and \(0<\cos(\theta)<0.2\)). ### Inferring photospheric velocities In this section we apply the same method as Sneppen et al. 
(2023) to our simulated spectra to determine the level of symmetry that would be inferred. We use our model spectra at 0.4 days and 0.6 days, since these resemble the spectra of AT2017gfo at 1.43 days and 2.43 days, which were analysed by Sneppen et al. (2023). We refer to the model spectrum at 0.4 days as epoch 1 and at 0.6 days as epoch 2. #### 3.2.1 Photospheric velocity from P-Cygni feature The most prominent feature in the spectra of AT2017gfo has been suggested to be a Sr ii P-Cygni feature. Our simulated spectra show a similar feature (Figure 6), as discussed by Shingles et al. (2023). We infer \(v_{\parallel}\) from the simulated feature, assuming it can be modelled as a simple P-Cygni feature dominated by Sr ii, however, we note that in the simulation the feature is actually a blend of features, predominantly Sr ii, Y ii and Zr ii. Figure 7 shows the P-Cygni profiles used to infer \(v_{\parallel}\). The P-Cygni profiles were generated using a line profile calculator3 based on the Elementary Supernova Model of Jeffery & Branch (1990). We assume the Sr ii triplet lines to be of similar strength, and that one line (which we choose to be the mid-wavelength line at 10327.31 A) is representative of the triplet. The P-Cygni profile is characterised by the rest wavelength, the optical depth and the velocity of the ejecta. The velocities measured from the P-Cygni feature are listed in Table 3. Since the spectra in the equatorial lines of sight do not show any clear features, a velocity cannot be obtained in the same way from these synthetic spectra. Footnote 3: Available from [https://github.com/unocbauer/public-astro-tools](https://github.com/unocbauer/public-astro-tools). For the P-Cygni profile considering an elliptical photosphere we use the version of this modified by Sneppen et al. (2023), see their methods section), available from [https://github.com/Sneppen/Kilonova-analysis](https://github.com/Sneppen/Kilonova-analysis). #### 3.2.2 Photospheric velocity from EPM Following Sneppen et al. (2023), we infer \(v_{\perp}\) using the expanding photosphere method (EPM). Assuming the emitting region to be a sphere of radius \(R_{\rm ph}\), emitting as a blackbody \(B(\lambda,T^{\prime})\), at wavelength, \(\lambda\), where \(T^{\prime}\) is the inferred temperature in the co-moving frame of the ejecta, the inferred luminosity, \(L_{\lambda}^{\rm BB}\), is given by \[L_{\lambda}^{\rm BB}=4\pi R_{\rm ph}^{2}\pi B(\lambda,T^{\prime}), \tag{3}\] where the blackbody flux emitted in the co-moving frame of the ejecta has been transformed into the rest frame of the observer (see Figure 4: Angle-dependent band-limited light curves. Figure 5: Radial velocity and polar angle of the location in the ejecta (\(\mu_{\rm ej}\)) where radiation last interacted (\(v_{\parallel}\)) before escaping towards an observer in the direction, \(\mu_{\rm obs}\), (arriving at the observer at 0.4 days) viewing towards the poles (a and b) and viewing towards the equator (c). I.e., we construct a 2D histogram of the location and radial velocity where Monte Carlo packets of radiation escaping in a given direction last interacted with the ejecta, and weight each bin by the energy represented by the escaping packets. The arrow indicates the direction, \(\mu_{\rm obs}\), of the observer. Sneppen et al., 2023; Sneppen, 2023). 
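The velocities \(v_{\parallel}\) quoted in Table 3 come from fitting the full P-Cygni profile with the line-profile calculator described above. A much cruder shortcut, shown in the Python sketch below purely for orientation, is to read a line-of-sight velocity from the relativistic Doppler blueshift of the absorption minimum, taking the mid-wavelength Sr ii triplet line at 10327.31 Å as the representative rest wavelength, as in the text. The example wavelength of 7250 Å is illustrative, and this is not the fitting procedure used in the analysis.

```python
LAMBDA_REST_SR2 = 10327.31  # representative Sr II triplet line [Angstrom]

def beta_from_absorption_minimum(lam_min, lam_rest=LAMBDA_REST_SR2):
    """Line-of-sight expansion velocity (in units of c) implied by the
    relativistic Doppler shift of a blueshifted absorption minimum."""
    r = lam_min / lam_rest
    return (1.0 - r**2) / (1.0 + r**2)

# An absorption minimum near 7250 Angstrom corresponds to roughly 0.34c,
# comparable to the epoch-1 polar value of v_parallel in Table 3.
print(f"v_parallel ~ {beta_from_absorption_minimum(7250.0):.2f} c")
```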
The EPM assumes that \(L_{\lambda}^{\rm BB}\) can be equated to the pseudo-bolometric luminosity, \[L^{\rm bol}=4\pi D_{\rm L}^{2}F_{\lambda}, \tag{4}\] where \(F_{\lambda}\) is the flux and \(D_{\rm L}\) is the luminosity distance. The EPM assumes homologous expansion, \[R_{\rm ph}=v_{\rm ph}t, \tag{5}\] where \(v_{\rm ph}\) is the velocity of the photosphere and \(t\) is the time since explosion. Using the EPM to infer \(v_{\rm ph}\), we obtain \(v_{\perp}\) from the inferred blackbody temperature (by matching blackbody distributions to the synthetic spectra - see Figure 7) and the luminosity obtained from the simulation. The velocities inferred are listed in Table 3. We focus on fitting the blackbody distributions to the UV flux, rather than to the IR since the simulated spectra do not appear to show strong absorption features in the UV, but do appear to show a flux deficit in the IR (unlike AT2017gfo). If a blackbody is fit to the IR flux, the peak of the blackbody is much too blue compared to the synthetic spectra. At epoch 1, the values of \(v_{\perp}\) obtained from the EPM are similar to the velocities, \(v_{\parallel}\), inferred from the simulated spectral feature (e.g., 0.34c compared to 0.33c at the positive pole). At epoch 2, however, the velocities, \(v_{\perp}\), inferred using the EPM are much higher than those inferred from the spectral feature, \(v_{\parallel}\), (e.g., 0.38c compared to 0.26c at the positive pole). Additionally, the inferred photospheric velocity from the EPM increases from epoch 1 to epoch 2, in contradiction to the velocities from the spectral feature, the ejecta velocity radiation was emitted from (see Section 3.3), and the expectation that the photosphere would most likely recede with time. At our simulated epoch 2, the spectrum is not well matched by a blackbody distribution at wavelengths redder than \(\sim 10000\) A (unlike AT2017gfo). This likely explains why the EPM does not produce reasonable values of the photospheric velocity for our simulation at epoch 2. To an observer, it would be apparent that the spectra are not well represented by a blackbody at this time, and that the photospheric velocity inferred from the EPM is not realistic. Dessart & Hillier (2005) noted that the EPM is best used at early times when the spectrum is closest to a blackbody. It is possible that the synthetic spectra show poorer agreement with a blackbody compared to AT2017gfo because the atomic data is not complete, for example, we do not include actinides. Therefore opacity could be missing from the simulation. Additionally, we only consider dynamical ejecta, which may also be responsible for the larger deviation from a blackbody. #### 3.2.3 Geometry implied by \(v_{\parallel}\) and \(v_{\perp}\) Following Sneppen et al. (2023b), we use the asymmetry index, \[Y_{v,\rm ph}=\frac{v_{\perp}-v_{\parallel}}{v_{\perp}+v_{\parallel}}, \tag{6}\] Figure 6: Simulated spectra in polar and equatorial directions (\(\mu_{\rm obs}\) is listed in the figure) at 0.4 days (upper) and 0.6 days (lower) compared to the spectra of AT2017gfo at 1.43 days and 2.42 days, respectively. 
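Given \(v_{\parallel}\) and \(v_{\perp}\), the asymmetry index of Equation 6 is simple arithmetic. The Python sketch below evaluates it for the four polar lines of sight using the velocities of Table 3; the EPM step that produces \(v_{\perp}\), including the relativistic transformation of the co-moving-frame blackbody, is not reproduced here.

```python
def y_v_ph(v_perp, v_par):
    """Asymmetry index of Equation 6, built from the photospheric velocities."""
    return (v_perp - v_par) / (v_perp + v_par)

# (v_parallel, v_perp) in units of c, copied from Table 3.
table3 = {
    ("epoch 1", "+ve pole"): (0.34, 0.33),
    ("epoch 1", "-ve pole"): (0.25, 0.27),
    ("epoch 2", "+ve pole"): (0.26, 0.38),
    ("epoch 2", "-ve pole"): (0.23, 0.31),
}

for (epoch, pole), (v_par, v_perp) in table3.items():
    print(f"{epoch}, {pole}: Y_v,ph = {y_v_ph(v_perp, v_par):+.3f}")
# epoch 1 gives -0.015 and +0.038 (near-spherical); epoch 2 gives ~+0.19 and
# ~+0.15, reflecting the unreliable EPM velocities at that time (see text).
```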
\begin{table} \begin{tabular}{c c c c c c c c} \hline Epoch & Time & \(\mu_{\rm obs}\) & \(T^{\prime}\) & \(v_{\parallel}\) (P-Cygni) & \(v_{\perp}\) (EPM) & \(Y_{v,\rm ph}\) & \(L_{\lambda}^{\rm BB}\) (EPM) & \(L_{\lambda}^{\rm bol}\) \\ & [d] & [\(\cos(\theta)\)] & [K] & [c] & [c] & & [erg s\({}^{-1}\)] & [erg s\({}^{-1}\)] \\ \hline 1 & 0.4 & [\(0.8,1.01\) (\(\pm\) pole) & 4150 & 0.34 & 0.33 & \(-0.015\) & \(1.54\times 10^{41}\) & \(1.43\times 10^{41}\) \\ 1 & 0.4 & [\(-1.0,-0.8\)] (\(\sim\)ve pole) & 4800 & 0.25 & 0.27 & 0.038 & \(1.26\times 10^{41}\) & \(1.46\times 10^{41}\) \\ 1 & 0.4 & [\(0.0,0.2\)] (eq.) & 4000 & & 0.32 & & & \(1.12\times 10^{41}\) \\ 1 & 0.4 & [\(-0.2,0.0\)] (eq.) & 4000 & & 0.32 & & & \(1.14\times 10^{41}\) \\ 2 & 0.6 & [\(0.8,1.0\)] (\(\pm\)ve pole) & 3000 & 0.26 & 0.38 & 0.19 & \(4.69\times 10^{40}\) & \(1.02\times 10^{41}\) \\ 2 & 0.6 & [\(-1.0,-0.8\)] (\(\sim\)ve pole) & 3350 & 0.23 & 0.31 & 0.15 & \(5.42\times 10^{40}\) & \(9.90\times 10^{40}\) \\ 2 & 0.6 & [\(0.0,0.2\)] (eq.) & 2750 & & 0.40 & & & \(7.65\times 10^{40}\) \\ 2 & 0.6 & [\(-0.2,0.0\)] (eq.) & 2750 & & 0.41 & & & \(7.80\times 10^{40}\) \\ \hline \end{tabular} \end{table} Table 3: Quantities for inferring the geometry of our simulation for an observer viewing the simulation from a direction, \(\mu_{\rm obs}\), where we give the range in polar-angle of \(\mu_{\rm obs}\). The parameters listed include the following. Temperature in the co-moving frame of the ejecta, \(T^{\prime}\), inferred from fitting a blackbody \(B(\lambda,T^{\prime})\), transformed into the rest frame of the observer, to the simulated spectra (see Figure 7). Inferred photospheric velocity, \(v_{\parallel}\), from the P-Cygni feature, and the luminosity that would be inferred for these photospheric velocities using the expanding photosphere method (\(L_{\lambda}^{\rm BB}\)), which can be compared to the simulated luminosity \(L^{\rm bol}\). Perpendicular velocity, \(v_{\perp}\), inferred from the simulated luminosity (\(L^{\rm bol}\)) using the EPM. Asymmetry index, \(Y_{v,\rm ph}\), inferred from \(v_{\parallel}\) and \(v_{\perp}\). Note that by 0.6 days the EPM does not predict reliable velocities for our simulation, which is the reason for the higher level of inferred asymmetry at this time. to quantify the degree of sphericity implied by our synthetic spectra, which we show in Table 3. At epoch 1, a high level of symmetry is inferred from both polar directions. Indeed, the inferred symmetry at the positive pole is within the uncertainty of the sphericity inferred by Sneppen et al. (2023b) for AT2017gfo (\(\Upsilon=0.00\pm 0.02\)). This demonstrates that aspherical ejecta can lead to directions that can appear near-symmetrical to an observer, as quantified by the \(\Upsilon_{\nu,\rm ph}\) measurement. At epoch 2, however, the inferred values of \(\Upsilon_{\nu,\rm ph}\) are much higher. As discussed in Section 3.2.2, this is likely because the simulated spectra are no longer well matched by a blackbody and the inferred perpendicular velocities are much higher than expected. This was not the case for AT2017gfo, during the epochs analysed by Sneppen et al. (2023b), where the spectra continued to resemble a blackbody until later times than in our simulation. #### 3.2.4 Distance estimate The EPM can be used to infer the luminosity distance, \(\rm D_{L}\), by equating Equations 3 and 4. We test how accurately the distance can be inferred for our simulated spectra. 
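The Python sketch below illustrates that distance estimate under the same simplifying assumption used in the text (a spherical photosphere with \(R_{\rm ph}=v_{\parallel}t\)), but at a single wavelength and without the relativistic frame transformation applied in the actual analysis, so it is a schematic of Equations 3-5 rather than the full procedure. The round trip simply checks that a 4150 K photosphere expanding at 0.34c (the epoch-1 polar values of Table 3), placed at exactly 1 Mpc, is recovered at 1 Mpc.

```python
import numpy as np

H = 6.62607015e-27      # Planck constant [erg s]
C = 2.99792458e10       # speed of light [cm s^-1]
K_B = 1.380649e-16      # Boltzmann constant [erg K^-1]
MPC_CM = 3.0857e24      # 1 Mpc in cm
DAY_S = 86400.0

def planck_lambda(lam_cm, temp_k):
    """Blackbody intensity B_lambda [erg s^-1 cm^-2 cm^-1 sr^-1]."""
    return (2.0 * H * C**2 / lam_cm**5) / np.expm1(H * C / (lam_cm * K_B * temp_k))

def epm_distance(v_par_c, t_days, temp_k, lam_cm, flux_lambda):
    """Luminosity distance from equating Equations 3 and 4, with a spherical
    photosphere at R_ph = v_parallel * t (Equation 5)."""
    r_ph = v_par_c * C * t_days * DAY_S
    return r_ph * np.sqrt(np.pi * planck_lambda(lam_cm, temp_k) / flux_lambda)

# Round-trip check: generate the flux a 4150 K, 0.34c photosphere at 0.4 d
# would produce at 1 Mpc (at 1 micron), then recover the distance.
r_ph = 0.34 * C * 0.4 * DAY_S
lam = 1.0e-4  # 1 micron in cm
flux = np.pi * planck_lambda(lam, 4150.0) * (r_ph / MPC_CM) ** 2
print(epm_distance(0.34, 0.4, 4150.0, lam, flux) / MPC_CM)  # -> 1.0
```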
We set the distance to the simulated spectra as 1 Mpc, and the inferred distances using the EPM are listed in Table 4. For this calculation, we assume a spherical photosphere (\(v_{\parallel}=v_{\perp}\)) and use the velocity, \(v_{\parallel}\), inferred from the P-Cygni profile (Table 3) to determine \(\rm R_{\rm ph}\). We use the temperatures inferred from the blackbody distributions matching the simulated spectra (Table 3), and the flux from the simulated spectra. At epoch 1, the distance can be inferred to a good degree of accuracy (within 4-7 per cent), however, at epoch 2 when the spectra are no longer well matched by a blackbody the distance estimate is much more uncertain (>25 per cent error) and underestimated. For each epoch, the direction with the more spherical value of \(\Upsilon_{\nu,\rm ph}\) (Table 3) gives a closer estimate of distance to the actual distance, possibly indicating that testing the sphericity implied by \(v_{\parallel}\) and \(v_{\perp}\) Figure 7: Model spectra at epoch 1 (0.4 days; left panels) and epoch 2 (0.6 days; right panels) for polar (upper and middle panels) and equatorial (lower panel) observer directions (\(\mu_{\rm ph}\), listed in each panel). Dashed lines show the blackbody distribution (in the co-moving frame of the ejecta transformed into the rest frame of the observer, scaled to match the brightness of the synthetic spectra) best matching the spectra, where the ejecta temperature, \(\rm T^{\prime}\), and the parallel photospheric velocity, \(v_{\parallel}\) are listed in each panel. The blackbody distribution is modified to include a P-Cygni profile for \(\rm Sr\,\rm u\), considering spherical, prolate or oblate ejecta. For the spectra at the equator, indicative photospheric velocities are chosen for the relativistic correction to the blackbody distribution, since photospheric velocity cannot be inferred from a spectral feature at the equator. could be a test for how good a distance estimate can be obtained, but this should be investigated for more models in future. Both the P-Cygni profile analysis and the EPM assume that the photosphere is a sharp boundary at a single velocity. We discuss in Section 3.3 that in our simulation radiation escapes from a broad range of ejecta velocities, which is likely also the case for AT2017gfo, indicating that the photosphere is not a sharp boundary. However, even with this simple assumption an accurate distance estimate can be obtained while the spectra resemble a blackbody. ### Isotropy of spectrum forming region #### 3.3.1 Last-interaction velocities To give an indication of how well the asymmetry parameter, \(\Upsilon_{\rm v,ph}\), from the inferred photospheric velocities represents the symmetry of radiation leaving the simulation, we extract from the artis calculation the ejecta radial velocity (\(v_{\rm i}\)) at which each Monte Carlo packet last interacted with the ejecta before escaping towards an observer. artis does not impose any photospheric boundary condition on the simulation, but the distribution of \(v_{\rm i}\) provides an indication of where radiation-matter interactions are occurring and thus the location of the spectrum forming region (see Figure 5). The mean, energy weighted radial velocity at which radiation packets last interacted with the ejecta (\(\bar{v}_{\rm i}\)) is listed in Table 5 for radiation escaping towards an observer at a polar or equatorial orientation (i.e., packets of radiation escaping into the given solid-angle bin, \(\mu_{\rm obs}\)) at epochs 1 and 2. 
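The quantities \(\bar{v}_{\rm i}\) and \(\sigma\) reported in Table 5 are energy-weighted statistics over the escaping Monte Carlo packets; in practice they are extracted from the artis output (e.g., with artistools), but the underlying operation is just a weighted average per observer bin, sketched generically in Python below. The packet arrays are randomly generated placeholders rather than simulation data, and the energy weighting of the standard deviation is an assumption of this sketch.

```python
import numpy as np

def weighted_mean_and_std(v_last, energy):
    """Energy-weighted mean and standard deviation of the last-interaction
    radial velocities (in units of c) of packets escaping into one observer
    solid-angle bin, as listed in Table 5."""
    w = energy / energy.sum()
    mean = float(np.sum(w * v_last))
    std = float(np.sqrt(np.sum(w * (v_last - mean) ** 2)))
    return mean, std

# Placeholder packet data for one polar observer bin (not simulation output):
# last-interaction radial velocities in units of c and packet energies in erg.
rng = np.random.default_rng(1)
v_last = rng.normal(0.35, 0.10, size=20_000).clip(0.0, 0.7)
energy = rng.uniform(0.5, 1.5, size=20_000)

print(weighted_mean_and_std(v_last, energy))  # ~ (0.35, 0.10), cf. Table 5
```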
The range of \(v_{\rm i}\) of packets escaping in a given direction can be seen in Figures 5 and 8 for epoch 1, giving an indication of the extent of the spectrum forming region. We also show in Figures 8 and 9 the range of ejecta velocities where the last absorption process underwent by a packet was with the Sr ii triplet (\(v_{\rm i}^{\rm Sr}\)), with wavelengths of 10036.65, 10327.31 and 10914.87 A, immediately before being re-emitted towards an observer in a given direction. The mean, energy weighted radial velocities where radiation was last absorbed by the Sr ii triplet, \(v_{\rm i}^{\rm Sr}\), are listed in Table 5. As discussed by Shingles et al. (2023), Sr ii is not the only species responsible for shaping the predicted feature. The Sr ii triplet absorption is, however, at the velocities required to match the feature. The values of \(\bar{v}_{\rm i}^{\rm Sr}\) are higher than the values of \(\bar{v}_{\rm i}\) (e.g., 0.42c compared to 0.35c, see Table 5). This is broadly consistent with the general principles of the simple P-Cygni model adopted to fit this feature, i.e., photon interactions in the Sr line are generally occurring in a spatially extended line-forming region that extends above the region in which the pseudo-continuum forms. As expected, the values of \(\bar{v}_{\rm i}\) and \(\bar{v}_{\rm i}^{\rm Sr}\) decrease from epoch 1 to 2, indicating that the spectrum-forming region recedes from epoch 1 to 2 in the simulation, verifying that the velocities inferred at epoch 2 using the EPM are too high. #### 3.3.2 Symmetry of last-interaction velocities To quantify the level of symmetry shown by the mean last-interaction velocities, we compare \(\bar{v}_{\rm i}\) of radiation that escaped towards an observer at a polar orientation (\(\bar{v}_{\rm i,pole}\)) to \(\bar{v}_{\rm i}\) of radiation escaping towards an observer at an equatorial orientation (\(\bar{v}_{\rm i,eq}\)), using \[\Upsilon_{\rm v,i}=\frac{\bar{v}_{\rm i,eq}-\bar{v}_{\rm i,pole}}{\bar{v}_{ \rm i,eq}+\bar{v}_{\rm i,pole}}, \tag{7}\] which is listed in Table 6. At both epochs, \(\Upsilon_{\rm v,i}\) indicates a high degree of symmetry, with relatively low values for \(\Upsilon_{\rm v,i}\). However, \(\Upsilon_{\rm v,i}\) indicates a slightly lower level of symmetry than determined in Section 3.2.3 from the inferred photospheric velocities, \(\Upsilon_{\rm v,ph}\), (at epoch 1), where \(\Upsilon_{\rm v,i}>\Upsilon_{\rm v,ph}\) \begin{table} \begin{tabular}{c c c c} \hline Epoch & Time & \(\mu_{\rm obs}\) & D\({}_{\rm L}\) \\ & [d] & [\(\cos\,(\theta)\)] & [Mpc] \\ \hline 1 & 0.4 & [0.8, 1.0] (+ve pole) & 1.04 \\ 1 & 0.4 & [\(-1.0\), \(-0.8\)] (\(-\)ve pole) & 0.93 \\ 2 & 0.6 & [\(0.8\), 1.0] (+ve pole) & 0.68 \\ 2 & 0.6 & [\(-1.0\), \(-0.8\)] (\(-\)ve pole) & 0.74 \\ \hline \end{tabular} \end{table} Table 4: Luminosity distance, D\({}_{\rm L}\), estimated using the EPM in the direction \(\mu_{\rm obs}\). We set the distance to our simulated spectra as 1 Mpc. The photospheric velocity considered is that inferred from the P-Cygni profile (Table 3). Figure 8: Histograms showing the last-interaction velocities, (a) \(v_{\rm i}\) and (b) \(v_{\rm i}^{\rm Sr}\), weighted by the energy represented by the escaping packets of radiation, escaping towards an observer at the pole (\(\mu_{\rm obs}\); \(0.8<\cos\,(\theta)<1.0\)) or equator (\(\mu_{\rm obs}\): \(0<\cos(\theta)<0.2\)) at 0.4 days. \(\Upsilon_{\rm v,\,i}\) indicates that the level of symmetry increases from epoch 1 to 2 in this model. 
In agreement with \(\Upsilon_{\rm v,\,ph}\) inferred from spectra at either pole at epoch 1, a higher degree of symmetry (as indicated by \(\Upsilon_{\rm v,\,i}\)) is found at the positive pole (\(0.8<\cos(\theta)<1\)), than at the negative pole (\(-1<\cos(\theta)<-0.8\)). The level of symmetry inferred from radiation escaping at all wavelengths \(\Upsilon_{\rm v,\,i}\), is similar to that shown by only radiation that was last absorbed by the Sr ii triplet, \(\Upsilon_{\rm v,\,i}^{\rm Sr}\). Even though the distribution of Sr in the ejecta is highly asymmetrical (as quantified by \(\Upsilon_{\rm M}^{\rm Sr}\) and \(\Upsilon_{\rm M,\,v}^{\rm Sr}\)), \(\Upsilon_{\rm v,\,i}^{\rm Sr}\) is highly spherical. The high level of symmetry shown by the mean last-interaction velocities, as quantified by \(\Upsilon_{\rm v,\,i}\), supports the high level of symmetry inferred from \(\nu_{\rm j}\) and \(\nu_{\perp}\) at epoch 1, measured from the P-Cygni feature and EPM. This demonstrates that even when the mass distribution and elemental abundances in the ejecta are asymmetric, the line forming region can be relatively close to spherical. ### P-Cygni profile analysis #### 3.4.1 Inferring geometry from P-Cygni profile Following Sneppen et al. (2023b), we analyse the shape of the P-Cygni profile in our synthetic spectra to explore how well it can be used to constrain the geometry of our simulation. Using the simple P-Cygni model, we show the comparison of P-Cygni features with spherical and elliptical photospheres4 to our simulated spectra in Figure 7. We focus on matching the wavelengths of the absorption component of the P-Cygni profile and the blue side of the emission component, given that redward of the emission feature the spectra are not well matched by the blackbody distribution (which was not the case for AT2017gfo). We also note that the red side of the emission component is strongly blended with Ce iii in our simulation (see Shingles et al. 2023, figure 4). Across both epochs, the feature in the simulated spectra at the negative pole is best matched by a P-Cygni feature with a spherical photosphere, and the spectral feature at the positive pole is best matched by a P-Cygni feature with a prolate photosphere. This is surprising since the positive pole was inferred to show a higher degree of symmetry in Section 3.2.3 (Table 3). Since opposite poles can not be fit by assuming the same geometry for the photosphere, this demonstrates that assuming a single photospheric geometry for the entire model is likely too simple an approximation. The range of \(\nu_{\rm i}\) (Figure 8) shows that radiation is emitted from a broad distribution of ejecta, which may not be well captured by assuming the photosphere can be modelled by a single velocity. We note, however, that it was possible for AT2017gfo to obtain a better fit with a P-Cygni profile than for our synthetic spectra (see Sneppen et al. 2023b) and this analysis should be tested for models more closely resembling a blackbody in future. Footnote 4: See Sneppen et al. 2023b for details of the modifications to the line profile calculator to represent an elliptical photosphere. Note that we did not apply the MCMC fitting used by Sneppen et al. (2023b) to fit the P-Cygni profile, but rather varied parameters manually to match the P-Cygni feature. 
The distributions of Sr, Y and Zr, which are predominantly responsible for shaping the simulated spectral feature, do not show a spherical distribution since significantly lower masses of these elements are ejected at the equator (Figure 10). Since there is a direction for our asymmetric ejecta model at which the spectral feature is best matched by assuming a spherical photosphere, this demonstrates that \begin{table} \begin{tabular}{c c c c c c} \hline Epoch & Time & \(\mu_{\rm obs}\) (pole) & \(\mu_{\rm obs}\) (equator) & \(\Upsilon_{\rm v,\,i}\) & \(\Upsilon_{\rm v,\,i}^{\rm Sr}\) \\ & [d] & [\(\cos(\theta)\)] & [\(\cos(\theta)\)] & (All wavelengths) & (Sr ii triplet absorption) \\ \hline 1 & 0.4 & [0.8, 1.0] (+ve pole) & [0.0, 0.2] (eq.) & 0.028 & 0.023 \\ 1 & 0.4 & [\(-1.0,-0.8\)] (\(-\)ve pole) & [\(-0.2,0.0\)] (eq.) & 0.072 & 0.075 \\ 2 & 0.6 & [0.8, 1.0] (+ve pole) & [0.0, 0.2] (eq.) & 0.015 & 0.013 \\ 2 & 0.6 & [\(-1.0,-0.8\)] (\(-\)ve pole) & [\(-0.2,0.0\)] (eq.) & 0.063 & 0.057 \\ \hline \end{tabular} \end{table} Table 6: Asymmetry index (Equation 7) for the mean last-interaction velocities (\(\bar{\rm v}_{\rm i}\)), listed in Table 5, of packets escaping into the solid-angle \(\mu_{\rm obs}\), in a polar orientation (\(\bar{\rm v}_{\rm i,pole}\)) and an equatorial orientation (\(\bar{\rm v}_{\rm i,eq}\)). This is shown for packets of radiation escaping at all wavelengths, and for radiation that was last absorbed by the Sr ii triplet. \begin{table} \begin{tabular}{c c c c c c} \hline Epoch & Time & \(\mu_{\rm obs}\) & \(\bar{\rm v}_{\rm i}\) & \(\sigma_{\rm v,\,i}\) & \(\bar{\rm v}_{\rm i}^{\rm Sr}\) & \(\sigma_{\rm v,\,i}^{\rm Sr}\) \\ & [d] & [\(\cos(\theta)\)] & (all wavelengths) [c] & [c] & (Sr ii triplet absorption) [c] & [c] \\ \hline 1 & 0.4 & [0.8, 1.0] (+ve pole) & 0.35 & 0.10 & 0.42 & 0.09 \\ 1 & 0.4 & [\(-1.0,-0.8\)] (\(-\)ve pole) & 0.32 & 0.09 & 0.37 & 0.08 \\ 1 & 0.4 & [0.0, 0.2] (eq.) & 0.37 & 0.10 & 0.44 & 0.08 \\ 1 & 0.4 & [\(-0.2,0.0\)] (eq.) & 0.37 & 0.10 & 0.43 & 0.08 \\ 2 & 0.6 & [0.8, 1.0] (+ve pole) & 0.33 & 0.10 & 0.37 & 0.08 \\ 2 & 0.6 & [\(-1.0,-0.8\)] (\(-\)ve pole) & 0.30 & 0.09 & 0.33 & 0.08 \\ 2 & 0.6 & [0.0, 0.2] (eq.) & 0.34 & 0.10 & 0.38 & 0.08 \\ 2 & 0.6 & [\(-0.2,0.0\)] (eq.) & 0.34 & 0.10 & 0.37 & 0.09 \\ \hline \end{tabular} \end{table} Table 5: Mean ejecta radial velocities of the last interaction underwent by a Monte Carlo packet of radiation (\(\bar{\rm v}_{\rm i}\)) before escaping towards an observer in the direction, \(\mu_{\rm obs}\). \(\bar{\rm v}_{\rm i}\) gives an indication of the spectrum-forming region in the ejecta. We show this for packets of radiation escaping at all wavelengths and for packets of radiation last absorbed by the Sr ii triplet (\(\bar{\rm v}_{\rm i}^{\rm Sr}\)) before being re-emitted and escaping towards an observer. Also listed is the standard deviation \(\sigma\) in \(\nu_{\rm i}\) and \(\nu_{\rm i}^{\rm Sr}\). the underlying ejecta does not necessarily have to be symmetrical for the P-Cygni profile shape to appear consistent with a spherical model. #### 3.4.2 P-Cygni profile shape for spherically symmetric ejecta Having shown that an aspherical model can produce line profiles that appear consistent with spherical ejecta, we now explore whether the line shape predicted from a model with spherical ejecta is also well matched if fit with a simple P-Cygni model. Shingles et al. 
(2023) present a model where they enforce spherically symmetric ejecta by constructing a 1D, spherically averaged version of the 3D model we consider in this paper (1D AD2 from Shingles et al., 2023). Both the mass distribution and elemental abundances are spherically averaged. Imposing spherical symmetry on this model results in spectra and light curves that do not resemble those predicted for any observer direction from the full 3D simulation. The 3D structures in this model are important in shaping the spectra. Only when the 3D structure is included is the model able to produce synthetic observables that resemble AT2017gfo for this model. We note, however, that 1D simulations can produce spectra resembling AT2017gfo (e.g., Watson et al., 2019; Gillanders et al., 2022). As with our 3D model, we fit the spectral feature produced by the 1D spherically symmetric simulation at \(\sim 6000-8000\) A with P-Cygni profiles assuming a spherical, prolate or oblate photosphere (Figure 10). Note that in this 1D simulation, the 'photosphere' (and spectrum-forming region) must be spherical. The best match is with the P-Cygni profile assuming a prolate photosphere. The emission component (on the blue side) increases too steeply to be matched by a spherical or oblate photosphere. However, given that the continuum is not well described by a blackbody, the exact contribution of the P-Cygni profile is more uncertain and less constrained than in the 3D case. Since we know that this simulation is spherical, this indicates that the geometry of the photosphere assumed to calculate a simple P-Cygni profile may not be a good test for spherically symmetric ejecta. In both the 1D spherical simulation and the 3D asymmetric simulation, the spectral feature is actually a blend of features (primarily Sr ii, Y ii, Zr ii and Ce iii - see Shingles et al., 2023), and the simple Sr ii P-Cygni calculation here is likely unable to capture the complex interplay of these spectral features. ## 4 Conclusions We have analysed a 3D radiative transfer simulation of a kilonova carried out by Shingles et al. (2023) to determine whether the simulation is compatible with the inferred symmetry constraints suggested for AT2017gfo by Sneppen et al. (2023). We have shown that although the ejecta from the neutron star merger model have moderate asymmetries in the mass ejected per solid-angle (e.g., Y\({}_{\rm M}\)=0.33), the synthetic light curves produced show a lower level of asymmetry (e.g. \(\rm Y_{bol}=-0.12\) at 0.4 days) than the level of asymmetry in the mass-distribution of the ejecta. The mass ejected within the velocity range that is approximately within the line forming region for the first day of the kilonova shows a high degree of symmetry (e.g., \(\rm Y_{M,v}=0.037\) at the negative pole), which may lead to a higher level of symmetry in the synthetic observables than suggested by \(\rm Y_{M}\), however, the distribution of elements synthe Figure 10: Spectrum at 0.4 days from spherically symmetric ejecta, compared to P-Cygni profiles (assuming the feature can be fit by a simple Sr ii P-Cygni feature) with a spherical, oblate or prolate photosphere. The dashed line shows a blackbody distribution. To plot the P-Cygni profiles, we modify the blackbody with the calculated line profile. However, we note that in the 1D spherical simulation, the spectra are not well represented by a blackbody. 
Figure 9: Radial velocity and polar angle of the location in the ejecta (\(\mu_{\rm dbs}\)) where radiation was last absorbed by the Sr ii triplet (\(\rm v_{s}^{\rm Sr}\)) before being re-emitted towards an observer in the direction, \(\mu_{\rm dbs}\). (arriving at the observer at 0.4 days) viewing towards the poles (a and b) and towards the equator (c). The arrow indicates the direction, \(\mu_{\rm dbs}\), of the observer. sised in the ejecta in this velocity region is very asymmetric (e.g., \(\Upsilon_{\rm M,\,v}^{\rm Sr}=-0.71\) at the positive pole). Radiation observed in a given line of sight is emitted from a broad range of regions in the ejecta, which has the effect of decreasing the observed anisotropy. For example, the Sr ii triplet absorption is highly spherical (e.g., \(\Upsilon_{\rm v,\,i}^{\rm Sr}=0.023\) at the positive pole) despite the asymmetric distribution of Sr in the ejecta. However, the spectra are not predicted to appear the same at the poles as at the equator. The equatorial spectra are relatively featureless in comparison to the spectra produced near the poles (as discussed by Shingles et al. 2023). Following Sneppen et al. (2023b), we quantify the level of symmetry that would be inferred from our simulation from the photospheric velocities, \(v_{\parallel}\) and \(v_{\perp}\), obtained via simple fitting of the synthetic spectra (i.e., adopting similar methods as can be readily applied to real observations). At the first epoch considered (0.4 days), the values inferred for \(v_{\perp}\) are similar to those obtained for \(v_{\parallel}\), indicating a high degree of sphericity, as quantified by \(\Upsilon_{\rm v,ph}\). On the pole, \(\Upsilon_{\rm v,ph}\) is within the uncertainty inferred for AT2017gfo by Sneppen et al. (2023b). This demonstrates that the synthetic observables can appear consistent with spherical ejecta when viewed from certain directions, even when the ejecta are asymmetric. At the second epoch (0.6 days) the synthetic spectra are no longer well represented by a blackbody, and the inferred values of \(v_{\perp}\) are too high. However, this is not what was found by Sneppen et al. (2023b) for the observations of AT2017gfo, where the spectra continue to resemble a blackbody until later times and the inferred photospheric velocities from AT2017gfo using the EPM were similar to those obtained from the Sr ii P-Cygni feature over the two epochs considered by Sneppen et al. (2023b). The high degree of symmetry determined from the inferred photospheric velocities, quantified by \(\Upsilon_{\rm v,\,ph}\), at epoch 1 is supported by the level of symmetry in the mean last-interaction velocities extracted from the simulation, quantified by \(\Upsilon_{\rm v,\,i}\). The level of symmetry indicated by \(\Upsilon_{\rm v,\,i}\) increases from epoch 1 to 2. At epoch 1, while the spectra are well represented by a blackbody, the EPM can be used to infer the distance to our synthetic spectra to a good degree of accuracy (4-7 per cent). At epoch 2, however, the distance is underestimated (>25 per cent). In our simulation, where a higher degree of sphericity, \(\Upsilon_{\rm v,\,ph}\), was inferred from the synthetic spectra, a more accurate estimate of the distance \(\rm D_{L}\) was obtained. This may suggest that \(\Upsilon_{\rm v,\,ph}\) could be used as a test for how accurate a distance estimate can be obtained from the EPM, however, this should be investigated with future simulations. 
We compare simple P-Cygni profile models, assuming the photosphere to be spherical or elliptical, to our simulated spectral feature. The shape of the spectral feature produced at one pole is best matched by a P-Cygni profile assuming a spherical photosphere. However, from the opposite pole the shape was best matched by a P-Cygni model assuming a prolate photosphere. Additionally, we found that for a spectrum from a simulation with spherically symmetric ejecta, the feature was best matched by a P-Cygni profile assuming a prolate photosphere. These simulations suggest that fitting the profile shape alone may not be a robust test of spherical symmetry, although we note that the P-Cygni fits to our simulations are less certain than for AT2017gfo, due to the poorer match to a blackbody redward of the P-Cygni profile. In our 3D simulation, there are lines of sight for which it could be inferred that the kilonova is highly spherical using the methods suggested by Sneppen et al. (2023b). The level of symmetry of the last-interaction velocities (\(\Upsilon_{\rm v,\,i}\)) of radiation escaping from our simulation is highly spherical, despite the asymmetric ejecta. This indicates that the combination of \(v_{\parallel}\) and \(v_{\perp}\) can indicate sphericity of the escaping radiation, however, this should not be interpreted to mean that the ejecta are necessarily spherically symmetric. Not all directions in the simulation would appear as spherical as inferred for AT2017gfo. If our simulation is representative of a real kilonova event, then we would expect that a high level of symmetry would not be inferred for all future observations (e.g., at the equator, where we do not predict a spectral feature from which to infer \(v_{\parallel}\)). However, if all future observations appear as spherical as AT2017gfo, then this could suggest that this simulation is too anisotropic. More observations and simulations are required to understand the geometry of kilonovae. ## Acknowledgements ECE, AB, OJ, and VV acknowledge support by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program under grant agreement No. 759253. EEC, AB, SAS, AS and DW acknowledge support by the European Research Council (ERC) under the European Union's research and innovation program (ERC Grant HEAVYMETAL No. 101071865). LJS, AF, GL, GMP, and ZX acknowledge support by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation program (ERC Advanced Grant KLANOVA No. 885281). AB, CEC, AF, OJ, GL, GMP, LJS, and ZX acknowledge support by Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) - Project-ID 279384907 - SFB 1245 and MA 4248/3-1. AB and VV acknowledge support by DFG - Project-ID 138713538 - SFB 881 ("The Milky Way System", subproject A10). AB, CEC, TS, AF, GL, GMP, OJ, LJS, and ZX acknowledge support by the State of Hesse within the Cluster Project ELEMENTS. The work of SAS was supported by the Science and Technology Facilities Council [grant numbers ST/P000312/1, ST/T000198/1, ST/X00094X/1]. The Cosmic Dawn Center is funded by the Danish National Research Foundation under grant number 140. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time on the GCS Supercomputer JUWELS at Jülich Supercomputing Centre (JSC).
This work was performed using the Cambridge Service for Data Driven Discovery (CSD3), part of which is operated by the University of Cambridge Research Computing on behalf of the STFC DiRAC HPC Facility (www.dirac.ac.uk). The DiRAC component of CSD3 was funded by BEIS capital funding via STFC capital grants ST/P0020307/1 and ST/R002452/1 and STFC operations grant ST/R00689X/1. DiRAC is part of the National e-Infrastructure. CEC and LJS are grateful for computational support by the VIRGO cluster at GSI. NumPy and SciPy (Oliphant 2007), Matplotlib (Hunter 2007) and artistools5 (ARTIS Collaboration et al. 2023) were used for data processing and plotting. Footnote 5: [https://github.com/artis-mcrt/artistools/](https://github.com/artis-mcrt/artistools/) ## Data availability The data underlying this article will be shared on reasonable request to the corresponding author.
2309.04673
On the relative Morrison-Kawamata cone conjecture (II)
Assuming the Morrison-Kawamata cone conjecture for the generic fiber of a Calabi-Yau fibration and the abundance conjecture, we show (1) the finiteness of minimal models, (2) the existence of a weak rational polyhedral fundamental domain under the action of birational automorphism groups, and (3) the finiteness of varieties as targets of contractions. As an application, the finiteness of minimal models and the weak Morrison-Kawamata cone conjecture in relative dimensions $\leq 2$ are established.
Zhan Li
2023-09-09T03:40:41Z
http://arxiv.org/abs/2309.04673v1
# On the relative Morrison-Kawamata cone conjecture (II) ###### Abstract. Assuming the Morrison-Kawamata cone conjecture for the generic fiber of a Calabi-Yau fibration and the abundance conjecture, we show (1) the finiteness of minimal models, (2) the existence of a weak rational polyhedral fundamental domain under the action of birational automorphism groups, and (3) the finiteness of varieties as targets of contractions. As an application, the finiteness of minimal models and the weak Morrison-Kawamata cone conjecture in relative dimensions \(\leq 2\) are established. 2020 Mathematics Subject Classification: 14E30 ###### Contents * 1 Introduction * 2 Preliminaries * 3 Geometry of convex cones * 4 Groups schemes associated to log Calabi-Yau varieties * 5 Relative and generic cone conjectures * 6 Finiteness of contractions, minimal models, and the existence of weak fundamental domains ## 1. Introduction Compared with varieties of general type and Fano varieties, the birational geometry of Calabi-Yau varieties (those with trivial canonical divisors) poses the most challenge to study. The standard conjecture of the minimal model program predicts that a variety with intermediate Kodaira dimension birationally admits a Calabi-Yau fibration. The birational geometry of Calabi-Yau fibrations is largely prescribed by the Morrison-Kawamata cone conjecture [11, 12, 13, 14]. Let \(\Gamma_{B}\) and \(\Gamma_{A}\) be the images of the pseudo-automorphism group \(\operatorname{PsAut}(X/S,\Delta)\) and the automorphism group \(\operatorname{Aut}(X/S,\Delta)\) under the natural group homomorphism \(\operatorname{PsAut}(X/S,\Delta)\to\operatorname{GL}(N^{1}(X/S)_{\mathbb{R}})\), respectively. **Conjecture 1.1** (Morrison-Kawamata cone conjecture).: _Let \((X,\Delta)\to S\) be a klt Calabi-Yau fiber space._ 1. _The cone_ \(\operatorname{Mov}(X/S)_{+}\) _has a (weak) rational polyhedral fundamental domain under the action of_ \(\Gamma_{B}\)_, and there are finitely many minimal models of_ \((X/S,\Delta)\) _up to isomorphisms._ 2. _The cone_ \(\operatorname{Amp}(X/S)_{+}\) _has a (weak) rational polyhedral fundamental domain under the action of_ \(\Gamma_{A}\)_, and there are finitely many contractions from_ \(X/S\) _up to isomorphisms._ Note that there are various versions of the Morrison-Kawamata cone conjecture (see [1]), and the above version is most relevant to this paper. Besides, we only study the first part of the conjecture in this paper as it is the part that concerns the birational geometry of Calabi-Yau fibrations. For simplicity, we refer to the first part of this conjecture as the (weak) cone conjecture for movable cones, or simply the cone conjecture. In our recent work [10], we approach the cone conjecture from the perspective of Shokurov polytopes. We propose a more tractable conjecture and investigate its relationship with the cone conjecture. **Conjecture 1.2** ([10]).: _Let \(f:(X,\Delta)\to S\) be a klt Calabi-Yau fiber space._ 1. _There exists a polyhedral cone_ \(P_{M}\subset\operatorname{Eff}(X/S)\) _such that_ \[\operatorname{PsAut}(X/S,\Delta)\cdot P_{M}\supset\operatorname{Mov}(X/S).\] 2. 
_There exists a polyhedral cone_ \(P_{A}\subset\operatorname{Eff}(X/S)\) _such that_ \[\operatorname{Aut}(X/S,\Delta)\cdot P_{A}\supset\operatorname{Amp}(X/S).\] Using results of [14] and assuming standard conjectures of the minimal model program (MMP), [10] showed that Conjecture 1.2 is almost equivalent to the cone conjecture when \(R^{1}f_{*}\mathcal{O}_{X}=0\) (i.e., trivial relative irregularity). In fact, if \(S\) is a point, then they are indeed equivalent. Furthermore, assuming trivial relative irregularity, we establish the weak Morrison-Kawamata cone conjecture and the finiteness of minimal models, assuming the abundance conjecture and the cone conjecture for the generic fiber. However, from the Beauville-Bogomolov decomposition of varieties with trivial canonical divisors, varieties with trivial irregularities and non-trivial irregularities may have drastic geometry. In fact, people used to reserve Calabi-Yau varieties only for varieties with trivial canonical divisors and \(h^{i}(\mathcal{O}_{X})=0,1\leq i\leq\dim X\). Nonetheless, such an exclusion is neither reasonable from a birational geometry perspective nor practical from a technical standpoint (the cone conjecture is sensitive to finite covers). This paper tackles the general case without assuming trivial relative irregularity. Our starting point is the following result. **Theorem 1.3**.: _Let \(f:(X,\Delta)\to S\) be a terminal Calabi-Yau fiber space. Assume that good minimal models exist for effective klt pairs in dimension \(\dim(X/S)\). If there exists a polyhedral cone \(P_{\eta}\subset\operatorname{Eff}(X_{\eta})\) such that_ \[\operatorname{PsAut}(X_{\eta},\Delta_{\eta})\cdot P_{\eta}\supset \operatorname{Mov}(X_{\eta}),\] _then there exists a rational polyhedral cone \(Q\subset\operatorname{Mov}(X/S)\) such that_ \[\operatorname{PsAut}(X,\Delta)\cdot(Q\cap N^{1}(X/S)_{\mathbb{Q}})= \operatorname{Mov}(X/S)_{\mathbb{Q}}.\] In combination with techniques related to the Shokurov polytope, we deduce the following result: **Theorem 1.4**.: _Let \(f:(X,\Delta)\to S\) be a terminal Calabi-Yau fiber space. Assume that good minimal models exist for effective klt pairs in dimension \(\dim(X/S)\). If there exists a rational polyhedral cone \(P_{\eta}\subset\operatorname{Eff}(X_{\eta})\) such that_ \[\operatorname{PsAut}(X_{\eta},\Delta_{\eta})\cdot P_{\eta}\supset \operatorname{Mov}(X_{\eta}),\] _then \((X/S,\Delta)\) has finite minimal models up to isomorphisms._ In the proof of Theorem 1.4, we need to treat minimal models that are not birational to \(X\) in codimension \(1\). To this end, we study the finiteness of contractions from \(X\). Surprisingly, we establish that the finiteness of (targets of) the contractions can be derived from the movable cone conjecture. This reinforces the fundamental nature of Conjecture 1.2, which encompasses both the finiteness of models and contractions. **Theorem 1.5**.: _Let \(f:(X,\Delta)\to S\) be a klt Calabi-Yau fiber space. Assume that good minimal models exist for effective klt pairs in dimension \(\dim(X/S)\). If there exists a rational polyhedral cone \(Q\subset\operatorname{Eff}(X/S)\) such that_ \[\operatorname{PsAut}(X/S,\Delta)\cdot Q\supset\operatorname{Mov}(X/S),\] _then the set \(\{Z\mid X\to Z/S\text{ is a contraction}\}\) is finite._ **Corollary 1.6**.: _Let \(f:(X,\Delta)\to S\) be a terminal Calabi-Yau fiber space. Assume that good minimal models exist for effective klt pairs in dimension \(\dim(X/S)\). 
If there exists a polyhedral cone \(P_{\eta}\subset\operatorname{Eff}(X_{\eta})\) such that_ \[\operatorname{PsAut}(X_{\eta})\cdot P_{\eta}\supset\operatorname{Mov}(X_{\eta }),\] _then the set \(\{Z\mid X\to Z/S\text{ is a contraction}\}\) is finite._ As previously mentioned, the cone conjecture is essential in the study of the birational geometry of varieties with intermediate Kodaira dimensions \(\kappa(X)\). As an illustrative example, we prove that \(X\) only admits finitely many minimal models given \(\dim X-\kappa(X)\leq 2\). It is noteworthy that such an \(X\) may no longer be Calabi-Yau varieties. **Corollary 1.7**.: _Let \(X\) be a normal projective variety with canonical singularities such that \(\dim X-\kappa(X)\leq 2\). Then \(X\) has finitely many minimal models._ Another consequence of Theorem 1.3 is the existence of a weak rational polyhedral fundamental domain under the action of the pseudo-automorphism group. **Theorem 1.8**.: _Let \(f:(X,\Delta)\to S\) be a terminal Calabi-Yau fiber space. Assume that good minimal models exist for effective klt pairs in dimension \(\dim(X/S)\). If there exists a polyhedral cone \(P_{\eta}\subset\operatorname{Eff}(X_{\eta})\) such that_ \[\operatorname{PsAut}(X_{\eta},\Delta_{\eta})\cdot P_{\eta}\supset\operatorname {Mov}(X_{\eta}),\] _then \(\operatorname{Mov}(X/S)_{+}\) admits a weak rational polyhedral fundamental domain under the action of \(\Gamma_{B}\)._ Applying the above results to the fibrations with \(\dim(X/S)\leq 2\), we obtain the following: **Corollary 1.9** (=Corollary 6.3+Corollary 6.4).: _Let \(f:X\to S\) be a terminal Calabi-Yau fiber space. Suppose that \(\dim(X/S)\leq 2\), then_ 1. \(X/S\) _has finitely many minimal models, and_ 2. \(\operatorname{Mov}(X/S)_{+}\) _admits a weak rational polyhedral fundamental domain under the action of_ \(\Gamma_{B}\)_._ This result generalizes both [10] and [11] where the finiteness of minimal models are established for threefolds with \(\dim(X/S)\leq 2\) and for elliptic fibrations, respectively. To elucidate the main idea behind the proof of Theorem 1.3, we will provide a simplified overview. Let us assume that \(X\to S\) is a Calabi-Yau fibration, and denote its generic fiber as \(X_{\eta}\). Suppose that \(X_{\eta}\) satisfies the cone conjecture. In particular, there is a rational polyhedral cone \(P_{\eta}\subset\operatorname{Eff}(X_{\eta})\) such that \(\operatorname{PsAut}(X_{\eta})\cdot P_{\eta}\supset\operatorname{Mov}(X_{\eta})\). Our goal is to lift \(P_{\eta}\) to a rational polyhedral cone \(P\subset\operatorname{Eff}(X/S)\) such that \(\operatorname{PsAut}(X/S)\cdot P\supset\operatorname{Mov}(X/S)\). This is essential to meet the requirements of Conjecture 1.2. If \(R^{1}f_{*}\mathcal{O}_{X}=0\), then for Cartier divisors \(D,B\) on \(X\) such that \(D_{\eta}\equiv B_{\eta}\), we have \(D_{\eta}\sim_{\mathbb{Q}}B_{\eta}\) and thus \(D\equiv B/S\) modulo a vertical divisor. However, without \(R^{1}f_{*}\mathcal{O}_{X}=0\), it may happen that \(D\not\equiv B/S\) even after modulo vertical divisors. In other words, there may exist two numerically different divisors (even modulo vertical divisors) on \(X\) that look the same in \(N^{1}(X_{\eta})\). 
The idea to overcome this difficulty, as had already appeared in [10] in special cases, is to use the natural automorphism group \(\operatorname{Aut}^{0}(X_{\eta})\) to ensure that, as long as \(D_{\eta}\equiv B_{\eta}\), \[D\in\operatorname{Aut}^{0}(X_{\eta})\cdot P_{B}\mod(\text{vertical divisors})\] where \(P_{B}\subset N^{1}(X/S)\) is a rational polyhedral cone associated with \(B\). This process needs to be "globalized" across all divisors on \(P_{\eta}\) uniformly, resulting in an enlarged rational polyhedral cone that fulfills our objective. Nonetheless, overcoming these challenges is not straightforward. (1) We are unable to show \(\operatorname{PsAut}(X/S)\cdot P\supset\operatorname{Mov}(X/S)\); instead, a weaker statement is sufficient for our purposes, namely that \(\operatorname{PsAut}(X/S)\cdot P\) contains the rational elements of \(\operatorname{Mov}(X/S)\). This adaptation requires modifications to results concerning the geometry of convex cones, as found in [11] and [10]. (2) We need to vastly generalize the aforementioned construction that leverages the numerical equivalence to linear equivalence through the action of \(\operatorname{Aut}^{0}(X_{\eta})\). This crucial step draws upon profound results on algebraic groups associated with Calabi-Yau fiber spaces. We discuss the contents of this paper. Section 2 furnishes essential background materials and introduces the notation employed throughout the paper. Section 3 develops the convex geometry needed in this paper. The main result is Proposition 3.8 which is of independent interest. Section 4 studies the group schemes associated with Calabi-Yau fibrations. The main result is Theorem 4.9. The entirety of Section 5 is devoted to the proof of Theorem 1.3, the core of this paper. Section 6 employs the established theorems to prove the remaining results, including those on the finiteness of contractions and minimal models, and the existence of weak fundamental domains. ### Acknowledgments The author is grateful for enlightening discussions with Yong Hu, Jinsong Xu, and Yifei Zhu. We also recognize the substantial influence of [17] on our work, as it served as our starting point and source of inspiration. It is worth noting that while Shokurov's polytope technique is not explicitly stated in [17], it seems that certain aspects of the proof in [17] bear a close resemblance to the proof of Shokurov's polytope. The author acknowledges partial support from grants provided by the Shenzhen municipality and the Southern University of Science and Technology. ## 2. Preliminaries Let \(f:X\to S\) be a projective morphism between normal quasi-projective varieties over an algebraically closed field of characteristic \(0\). Then \(f\) is called a fibration if \(f\) has connected fibers. We write \(X/S\) to mean that \(X\) is over \(S\). For \(\mathbb{K}=\mathbb{Z},\mathbb{Q},\mathbb{R}\) and two \(\mathbb{K}\)-Weil divisors \(A,B\) on \(X\), \(A\sim_{\mathbb{K}}B/S\) means that \(A\) and \(B\) are \(\mathbb{K}\)-linearly equivalent over \(S\). We also refer to a \(\mathbb{K}\)-divisor as simply a divisor. If \(A,B\) are \(\mathbb{R}\)-Cartier divisors, then \(A\equiv B/S\) means that \(A\) and \(B\) are numerically equivalent over \(S\). We use \(\operatorname{Supp}E\) to denote the support of the divisor \(E\). A divisor \(E\) on \(X\) is called a vertical divisor (over \(S\)) if \(f(\operatorname{Supp}E)\neq S\). 
A vertical divisor \(E\) is called a very exceptional divisor if for any prime divisor \(P\) on \(S\), over the generic point of \(P\), we have \(\operatorname{Supp}f^{*}P\not\subset\operatorname{Supp}E\) (see [1, Definition 3.1]). If \(f\) is a birational morphism, then the notion of a very exceptional divisor coincides with that of an exceptional divisor. Let \(X\) be a normal complex variety and \(\Delta\) be an \(\mathbb{R}\)-divisor on \(X\). Then \((X,\Delta)\) is called a log pair. We assume that \(K_{X}+\Delta\) is \(\mathbb{R}\)-Cartier for a log pair \((X,\Delta)\). We call \(f:(X,\Delta)\to S\) a Calabi-Yau fibration or a Calabi-Yau fiber space if \(X\) is \(\mathbb{Q}\)-factorial, \(X\to S\) is a fibration, and \(K_{X}+\Delta\sim_{\mathbb{R}}0/S\). When \((X,\Delta)\) has lc singularities (see Section 2.2), then \(K_{X}+\Delta\sim_{\mathbb{R}}0/S\) is equivalent to the weaker condition \(K_{X}+\Delta\equiv 0/S\) by [10, Corollary 1.4]. Note that for an effective \(\mathbb{R}\)-divisor \(\Delta\) such that \((X/S,\Delta)\) is klt and satisfies \(K_{X}+\Delta\sim_{\mathbb{R}}0/S\), then there exists a \(\mathbb{Q}\)-divisor \(\Delta^{\prime}\) such that \((X/S,\Delta^{\prime})\) is klt with \(K_{X}+\Delta^{\prime}\sim_{\mathbb{Q}}0/S\) and \(\operatorname{Supp}\Delta=\operatorname{Supp}\Delta^{\prime}\). Therefore, in terms of the cone conjecture, there is no difference to state it for \(\mathbb{Q}\)-linear equivalent or \(\mathbb{R}\)-linear equivalent to \(0\). ### Cones Let \(V\) be a finite-dimensional real vector space. A set \(C\subset V\) is called a cone if for any \(x\in C\) and \(\lambda\in\mathbb{R}_{>0}\), we have \(\lambda\cdot x\in C\). We use \(\operatorname{Int}(C)\) to denote the relative interior of \(C\) and call \(\operatorname{Int}(C)\) the open cone. By convention, the origin is an open cone. A cone is called a polyhedral cone (resp. rational polyhedral cone) if it is a closed convex cone generated by finite vectors (resp. rational vectors). If \(S\subset V\) is a subset, then \(\operatorname{Conv}(S)\) denotes the convex hull of \(S\), and \(\operatorname{Cone}(S)\) denotes the closed convex cone generated by \(S\). As we are only concerned about convex cones in this paper, we also call them cones. Let \(\operatorname{Pic}(X/S)\) be the relative Picard group. Let \[N^{1}(X/S)_{\mathbb{Z}}\coloneqq\operatorname{Pic}(X/S)/\equiv\] be the lattice. Set \(\operatorname{Pic}(X/S)_{\mathbb{K}}\coloneqq\operatorname{Pic}(X/S)\otimes_ {\mathbb{Z}}\mathbb{K}\) and \(N^{1}(X/S)_{\mathbb{K}}\coloneqq N^{1}(X/S)_{\mathbb{Z}}\otimes_{\mathbb{Z}} \mathbb{K}\) for \(\mathbb{K}=\mathbb{Q}\) or \(\mathbb{R}\). If \(D\) is an \(\mathbb{R}\)-Cartier divisor, then \([D]\in N^{1}(X/S)_{\mathbb{R}}\) denotes the corresponding divisor class. To abuse the terminology, we also call \([D]\) an \(\mathbb{R}\)-Cartier divisor. A \(\mathbb{Q}\)-Cartier divisor \(D\) on \(X/S\) is called movable\(/S\) if there exists \(m\in\mathbb{Z}_{>0}\) such that the base locus of the linear system \(|mD/S|\) has codimension \(>1\). An \(\mathbb{R}\)-Cartier divisor is called movable if it is a positive \(\mathbb{R}\)-linear combination of movable Cartier divisors. We list relevant cones inside \(N^{1}(X/S)_{\mathbb{R}}\) which appear in the paper (the notation is slightly different from [12]): 1. \(\operatorname{Eff}(X/S)\): the cone generated by effective Cartier divisors; 2. \(\operatorname{Eff}(X/S)\): the closure of \(\operatorname{Eff}(X/S)\); 3. 
\(\operatorname{Mov}(X/S)\): the cone generated by movable divisors; 4. \(\operatorname{Mov}(X/S)\): the closure of \(\operatorname{Mov}(X/S)\); 5. \(\operatorname{Mov}(X/S)_{+}\coloneqq\operatorname{Conv}(\overline{\operatorname {Mov}}(X/S)\cap N^{1}(X/S)_{\mathbb{Q}})\) (see Definition 3.1); 6. \(\operatorname{Amp}(X/S)\): the cone generated by ample divisors; 7. \(\overline{\operatorname{Amp}}(X/S)\): the closure of \(\operatorname{Amp}(X/S)\) (i.e., the nef cone); To denote the rational elements in the corresponding cones, we set 1. \(\operatorname{Eff}(X/S)_{\mathbb{Q}}\coloneqq\operatorname{Eff}(X/S)\cap N^{ 1}(X/S)_{\mathbb{Q}}\); 2. \(\operatorname{Mov}(X/S)_{\mathbb{Q}}\coloneqq\operatorname{Mov}(X/S)\cap N^{ 1}(X/S)_{\mathbb{Q}}\); 3. \(\operatorname{Amp}(X/S)_{\mathbb{Q}}\coloneqq\operatorname{Amp}(X/S)\cap N^{ 1}(X/S)_{\mathbb{Q}}\); Moreover, for simplicity, we set 1. \(N^{1}(X/S)\coloneqq N^{1}(X/S)_{\mathbb{R}}\). Let \(K(S)\) be the field of rational functions of \(S\), and \(\overline{K(S)}\) be the algebraic closure of \(K(S)\). Set \(\eta\coloneqq\operatorname{Spec}K(S)\) and \(\bar{\eta}\coloneqq\operatorname{Spec}\overline{K(S)}\). Then \(X_{\eta}\) and \(X_{\bar{\eta}}\) denote the generic fiber and the geometric fiber of \(X\to S\), respectively. The above cones still make sense for \(X_{\eta}/\eta\) and \(X_{\bar{\eta}}/\bar{\eta}\). Recall that for a birational map \(g:X\dashrightarrow Y/S\), if \(D\) is an \(\mathbb{R}\)-Cartier divisor on \(X\), then the pushforward of \(D\), denoted as \(g_{*}D\), is defined as follows. Let \(p:W\to X,q:W\to X\) be birational morphisms such that \(g\circ p=q\), then \(g_{*}D\coloneqq q_{*}(p^{*}D)\). This is independent of the choice of \(p\) and \(q\). Let \(\Delta\) be a divisor on a \(\mathbb{Q}\)-factorial variety \(X\). We use \(\operatorname{Bir}(X/S,\Delta)\) to denote the birational automorphism group of \((X,\Delta)\) over \(S\). Specifically, \(\operatorname{Bir}(X/S,\Delta)\) consists of birational maps \(g:X\dashrightarrow X/S\) such that \(g_{*}\operatorname{Supp}\Delta=\operatorname{Supp}\Delta\). A birational map is called a pseudo-automorphism if it is isomorphic in codimension \(1\). Let \(\operatorname{PsAut}(X/S,\Delta)\) be the subgroup of \(\operatorname{Bir}(X/S,\Delta)\) consisting of pseudo-automorphisms. Let \(\operatorname{Aut}(X/S,\Delta)\) be the subgroup of \(\operatorname{Bir}(X/S,\Delta)\) consisting of automorphisms of \(X/S\). For a field \(K\), if \(X\) is a variety over \(K\) and \(\Delta\) is a divisor on \(X\), then we will omit \(\operatorname{Spec}K\) and use \(\operatorname{Bir}(X,\Delta),\operatorname{PsAut}(X,\Delta)\) and \(\operatorname{Aut}(X,\Delta)\) to denote the birational automorphism group, the pseudo-automorphism group and the automorphism group of \(X/K\), respectively. See [10, Example 2.1] for an example of a birational map which is not a pseudo-automorphism. On the other hand, it is well-known that if \((X/S,\Delta)\) has terminal singularities and \(K_{X}+\Delta\) is \(\operatorname{nef}/S\), then any birational map is a pseudo-automorphism (see Lemma 2.6). Let \(g\in\operatorname{Bir}(X/S,\Delta)\) and \(D\) be an \(\mathbb{R}\)-Cartier divisor on a \(\mathbb{Q}\)-factorial variety \(X\). 
Because the push-forward map \(g_{*}\) preserves numerical equivalence classes, there is a linear map (recall that \(N^{1}(X/S)\) denotes \(N^{1}(X/S)_{\mathbb{R}}\) under our convention) \[g_{*}:N^{1}(X/S)\to N^{1}(X/S),\quad[D]\mapsto[g_{*}D].\] However, if \(g\in\operatorname{Bir}(X/S)\) is not isomorphic in codimension \(1\), then for a \([D]\in\operatorname{Mov}(X/S)\), \([g_{*}D]\) may not be in \(\operatorname{Mov}(X/S)\). Moreover, \((g,[D])\mapsto[g_{*}D]\) is not a group action of \(\operatorname{Bir}(X/S,\Delta)\) on \(N^{1}(X/S)\). For instance, if \(D\) is a divisor contracted by \(g\), then \(g_{*}^{-1}(g_{*}[D])=0\neq(g^{-1}\circ g)_{*}[D]\). On the other hand, it can be verified that \[\operatorname{PsAut}(X/S,\Delta)\times N^{1}(X/S) \to N^{1}(X/S)\] \[(g,[D]) \mapsto[g_{*}D],\] is a group action. Note that if \(g\in\operatorname{Aut}(X/S,\Delta)\), then \(g_{*}D=(g^{-1})^{*}D\). Although for any \(g\in\operatorname{PsAut}(X/S)\), we can still define the pullback map on an \(\mathbb{R}\)-Cartier divisor, in order to make \(\operatorname{PsAut}(X/S,\Delta)\) acting on \(N^{1}(X/S)_{\mathbb{R}}\) from the left, we use the pushforward map. We use \(g\cdot D\) and \(g\cdot[D]\) to denote \(g_{*}D\) and \([g_{*}D]\), respectively. Let \(\Gamma_{B}\) and \(\Gamma_{A}\) be the images of \(\operatorname{PsAut}(X/S,\Delta)\) and \(\operatorname{Aut}(X/S,\Delta)\) under the natural group homomorphism \[\iota:\operatorname{PsAut}(X/S,\Delta)\to\operatorname{GL}(N^{1}(X/S)).\] Because \(\Gamma_{B},\Gamma_{A}\subset\operatorname{GL}(N^{1}(X/S)_{\mathbb{Z}})\), \(\Gamma_{B}\) and \(\Gamma_{A}\) are discrete subgroups. By abusing the notation, we also write \(g\) for \(\iota(g)\in\Gamma_{B}\), and denote \(\iota(g)([D])\) by \(g\cdot[D]\). Then the cones \(\operatorname{Mov}(X/S)_{\mathbb{Q}},\overline{\operatorname{Mov}}(X/S)\) and \(\operatorname{Mov}(X/S)_{+}\) are all invariant under the action of \(\operatorname{PsAut}(X/S,\Delta)\). Similarly, \(\operatorname{Amp}(X/S)_{\mathbb{Q}},\operatorname{Amp}(X/S)\) and \(\operatorname{Amp}(X/S)_{+}\) are all invariant under the action of \(\operatorname{Aut}(X/S,\Delta)\). We use \(\operatorname{\mathbf{Aut}}(X/S,\Delta)\) and \(\operatorname{\mathbf{Pic}}(X/S)\) to denote the group schemes that represent the automorphism functor and the Picard functor, respectively. See Section 5 for details. ### Minimal models of varieties Let \((X,\Delta)\) be a log pair. For a divisor \(D\) over \(X\), if \(f:Y\to X\) is a birational morphism from a smooth variety \(Y\) such that \(D\) is a divisor on \(Y\), then the log discrepancy of \(D\) with respect to \((X,\Delta)\) is defined to be \[a(D;X,\Delta)\coloneqq\operatorname{mult}_{D}(K_{Y}-f^{*}(K_{X}+\Delta))+1.\] This definition is independent of the choice of \(Y\). A log pair \((X,\Delta)\) (or its singularity) is called sub-klt (resp. sub-lc) if the log discrepancy of any divisor over \(X\) is \(>0\) (resp. \(\geq 0\)). If \(\Delta\geq 0\), then a sub-klt (resp. sub-lc) pair \((X,\Delta)\) is called klt (resp. lc). If \(\Delta\geq 0\) and the log discrepancy of any exceptional divisor over \(X\) is \(>1\), then \((X,\Delta)\) is said to have terminal singularities. A fibration/fiber space \((X,\Delta)\to S\) is called a klt (resp. terminal) fibration/fiber space if \((X,\Delta)\) is klt (resp. terminal). Let \(X\to S\) be a projective morphism of normal quasi-projective varieties. Suppose that \((X,\Delta)\) is klt. 
Let \(\phi:X\dasharrow Y/S\) be a birational contraction (i.e., \(\phi\) does not extract divisors) of normal quasi-projective varieties over \(S\), where \(Y\) is projective over \(S\). We write \(\Delta_{Y}\coloneqq\phi_{*}\Delta\) for the strict transform of \(\Delta\). Then \((Y/S,\Delta_{Y})\) is a minimal model of \((X/S,\Delta)\) if \(K_{Y}+\Delta_{Y}\) is \(\operatorname{nef}/S\) and \(a(D;X,\Delta)\geq a(D;Y,\Delta_{Y})\) for any divisor \(D\) over \(X\). Note that a minimal model here is called a weak log canonical model in [1]. A minimal model \((Y/S,\Delta_{Y})\) of \((X/S,\Delta)\) is called a good minimal model of \((X/S,\Delta)\) if \(K_{Y}+\Delta_{Y}\) is semi-\(\text{ample}/S\). It is well-known that the existence of a good minimal model of \((X/S,\Delta)\) implies that any minimal model of \((X/S,\Delta)\) is a good minimal model (for example, see [1, Remark 2.7]). By saying that "good minimal models of effective klt pairs exist in dimension \(n\)", we mean that for any projective variety \(X\) of dimension \(n\) over an algebraically closed field of characteristic \(0\), if \((X,\Delta)\) is klt and the Kodaira dimension \(\kappa(K_{X}+\Delta)\geq 0\), then \((X,\Delta)\) has a good minimal model. The following lemma allows to lift a minimal model so that it is isomorphic to the original variety in codimension \(1\). **Lemma 2.1** ([1, Lemma 2.2]).: _Let \((X/S,\Delta)\) be a klt pair with \([K_{X}+\Delta]\in\overline{\operatorname{Mov}}(X/S)\). Suppose that \(g:(X/S,\Delta)\dashrightarrow(Y/S,\Delta_{Y})\) is a minimal model of \((X/S,\Delta)\). Then \((X/S,\Delta)\) admits a minimal model \((Y^{\prime}/S,\Delta_{Y^{\prime}})\) such that_ 1. \(Y^{\prime}\) _is_ \(\mathbb{Q}\)_-factorial,_ 2. \(X,Y^{\prime}\) _are isomorphic in codimension_ \(1\)_, and_ 3. _there exists a morphism_ \(\nu:Y^{\prime}\to Y/S\) _such that_ \(K_{Y^{\prime}}+\Delta_{Y^{\prime}}=\nu^{*}(K_{Y}+\Delta_{Y})\)_._ **Theorem 2.2** ([13, Theorem 2.12]).: _Let \(f:X\to S\) be a surjective projective morphism and \((X,\Delta)\) a klt pair such that for a very general closed point \(s\in S\), the fiber \((X_{s},\Delta_{s}=\Delta|_{X_{s}})\) has a good minimal model. Then \((X,\Delta)\) has a good minimal model over \(S\)._ [13, Theorem 2.12] states for a \(\mathbb{Q}\)-divisor \(\Delta\). However, it still holds for an \(\mathbb{R}\)-divisor \(\Delta\): in the proof of [13, Theorem 2.12], one only needs to replace \(\operatorname{Proj}_{S}\oplus_{m\in\mathbb{Z}_{>0}}R^{0}f_{*}\mathcal{O}_{X}(m (K_{X}+\Delta))\) by the canonical model of \((X/S,\Delta)\) whose existence is known for effective klt pairs by [12]. Indeed, because \(\kappa(K_{X_{s}}+\Delta_{s})\geq 0\) for a very general \(s\in S\) by assumption, \(K_{X}+\Delta\sim_{\mathbb{R}}E/S\) with \(E\geq 0\) by [12, Theorem 3.15]. Let \(V\) be a finite-dimensional \(\mathbb{R}\)-vector space. A polytope (resp. rational polytope) \(P\subset V\) is the convex hull of finite points (resp. rational points) in \(V\). In particular, a polytope is always closed and bounded. We use \(\operatorname{Int}(P)\) to denote the relative interior of \(P\) and call \(\operatorname{Int}(P)\) the open polytope. By convention, a single point is an open polytope. Therefore, \(\mathbb{R}_{>0}\cdot P\) is an open polyhedral cone iff \(P\) is an open polytope. See [12] for the proof of the following results: **Theorem 2.3** ([11, Theorem 3.4]).: _Let \(X\) be a \(\mathbb{Q}\)-factorial variety and \(f:X\to S\) be a fibration. 
Assume that good minimal models exist for effective klt pairs in dimension \(\dim(X/S)\). Let \(D_{i},i=1,\dots,k\) be effective \(\mathbb{Q}\)-divisors on \(X\). Suppose that \(P\subset\oplus_{i=1}^{k}[0,1)D_{i}\) is a rational polytope such that for any \(\Delta\in P\), \((X,\Delta)\) is klt and \(\kappa(K_{F}+\Delta|_{F})\geq 0\), where \(F\) is a general fiber of \(f\)._ _Then \(P\) can be decomposed into a disjoint union of finitely many open rational polytopes \(P=\sqcup_{i=1}^{m}Q_{i}^{\circ}\) such that for any \(B,D\in Q_{i}^{\circ}\), if \((Y/S,B_{Y})\) is a minimal model of \((X/S,B)\), then \((Y/S,D_{Y})\) is also a minimal model of \((X/S,D)\)._ **Theorem 2.4** ([12, Theorem 2.6]).: _Let \((X,\Delta)\to S\) be a klt Calabi-Yau fiber space. Assume that good minimal models of effective klt pairs exist in dimension \(\dim(X/S)\). Let \(P\subset\operatorname{Eff}(X/S)_{\mathbb{Q}}\) be a rational polyhedral cone. Then \(P\) is a finite union of open rational polyhedral cones \(P=\sqcup_{i=0}^{m}P_{i}^{\circ}\) such that whenever_ 1. \(B,D\) _are effective divisors with_ \([B],[D]\in P_{i}^{\circ}\)_, and_ 2. \((X,\Delta+\epsilon B),(X,\Delta+\epsilon D)\) _are klt for some_ \(\epsilon\in\mathbb{R}_{>0}\)_,_ _then if \((Y/S,\Delta_{Y}+\epsilon B_{Y})\) is a minimal model of \((X/S,\Delta+\epsilon B)\), then \((Y/S,\Delta_{Y}+\epsilon D_{Y})\) is a minimal model of \((X/S,\Delta+\epsilon D)\)._ We need the following consequence of Theorem 2.4. **Lemma 2.5**.: _Let \(f:(X,\Delta)\to S\) be a klt Calabi-Yau fiber space. Assume that good minimal models exist for effective klt pairs in dimension \(\dim(X/S)\). Let \(Q\subset\operatorname{Eff}(X/S)\) be a rational polyhedral cone, and \(\Gamma\subset\operatorname{PsAut}(X/S,\Delta)\) be a subgroup. Then there exists a rational polyhedral subcone \(P\subset Q\cap\operatorname{Mov}(X/S)\) such that_ 1. \(\Gamma\cdot(P\cap N^{1}(X/S)_{\mathbb{Q}})=(\Gamma\cdot Q)\cap\operatorname{Mov }(X/S)_{\mathbb{Q}}\)_, and_ 2. \(\Gamma\cdot P=(\Gamma\cdot Q)\cap\operatorname{Mov}(X/S)\)_._ Proof.: We only show the first item as the second item can be proved by the same argument. The argument below is similar to the proof of [12, Lemma 5.2]. Let \(Q=\sqcup_{i=1}^{\prime m}Q_{i}^{\circ}\) be the decomposition as in Theorem 2.4 such that for effective divisors \(B,D\) with \([B],[D]\in Q_{i}^{\circ}\) and \(1\gg\epsilon>0\), the klt pairs \((X,\Delta+\epsilon B)\) and \((X,\Delta+\epsilon D)\) share same minimal models. We claim that if \(Q_{i}^{\circ}\cap\operatorname{Mov}(X/S)_{\mathbb{Q}}\neq\emptyset\), then \((Q_{i}\cap N^{1}(X/S)_{\mathbb{Q}})\subset\operatorname{Mov}(X/S)_{\mathbb{Q}}\). Let \(h:X\dashrightarrow Y/S\) be a minimal model of \((X,\Delta+\epsilon D)\) for some \([D]\in Q_{i}^{\circ}\cap\operatorname{Mov}(X/S)_{\mathbb{Q}}\neq\emptyset\) and \(1\gg\epsilon>0\). By Lemma 2.1, we can assume that \(h\) is isomorphic in codimension \(1\). Then for any \(B\geq 0\) such that \([B]\in Q_{i}\cap N^{1}(X/S)_{\mathbb{Q}}\), \(h\) is also a minimal model for \((X,\Delta+\epsilon^{\prime}B)\) with \(1\gg\epsilon^{\prime}>0\). By Theorem 2.2, \(K_{Y}+\Delta_{Y}+\epsilon^{\prime}B_{Y}^{\prime}\sim_{\mathbb{Q}}\epsilon^{ \prime}B_{Y}/S\) is semi-ample over \(S\). Thus, \(B\), as the strict transform of \(B_{Y}\), is movable over \(S\). 
Set \[P\coloneqq\operatorname{Cone}(\cup_{i}Q_{i}\mid(Q_{i}^{\circ}\cap\operatorname {Mov}(X/S)_{\mathbb{Q}}\neq\emptyset)\subset Q.\] Then \(P\) is a rational polyhedral cone and \(P\subset\operatorname{Mov}(X/S)\) by the previous claim. If \([D]\in(\Gamma\cdot Q)\cap\operatorname{Mov}(X/S)_{\mathbb{Q}}\), then there exists \(\gamma\in\Gamma\) such that \(\gamma\cdot[D]\in Q\) and thus lies in \(Q_{i}^{\circ}\) for some \(i\). In particular, \([D]\in\gamma^{-1}\cdot P\). This shows \(\Gamma\cdot(P\cap N^{1}(X/S)_{\mathbb{Q}})\supset(\Gamma\cdot Q)\cap \operatorname{Mov}(X/S)_{\mathbb{Q}}\). The inverse inclusion follows from \(P\subset Q\). The following result is well-known. **Lemma 2.6**.: _If \((X/S,\Delta)\) has terminal singularities and \(K_{X}+\Delta\) is nef\(/S\), then \(\operatorname{Bir}(X)=\operatorname{PsAut}(X)\)._ Proof.: Replacing \((X/S,\Delta)\) by a small \(\mathbb{Q}\)-factorial modification, we can assume that \(X\) is \(\mathbb{Q}\)-factorial. Let \(g:X\dashrightarrow X\) be birational and \(p,q:W\to X\) be birational resolutions such that \(g=q\circ p^{-1}\). Let \(\Delta_{W}\) be the strict transform of \(\Delta\). Let \(\operatorname{Exc}(-)\) denote the exceptional locus of a birational morphism. As \((X/S,\Delta)\) has \(\mathbb{Q}\)-factorial terminal singularities, \(K_{W}+\Delta_{W}=p^{*}(K_{X}+\Delta)+E\) and \(K_{W}+\Delta_{W}=q^{*}(K_{X}+\Delta)+F\) with \(E,F\geq 0\) and \(\operatorname{Supp}E=\operatorname{Exc}(p),\operatorname{Supp}F=\operatorname {Exc}(q)\). Thus \(p^{*}(K_{X}+\Delta)=q^{*}(K_{X}+\Delta)\) by the negativity lemma, and \(\operatorname{Exc}(p)=\operatorname{Exc}(q)\). Hence \[X\backslash p(\operatorname{Exc}(p))\stackrel{{ p^{-1}}}{{ \simeq}}W\backslash\operatorname{Exc}(p)=W\backslash\operatorname{Exc}(q) \stackrel{{ q}}{{\simeq}}X\backslash q(\operatorname{Exc}(q)),\] and \(g\) is isomorphic in codimension \(1\). ## 3. Geometry of convex cones Let \(V(\mathbb{Z})\) be a lattice and \(V(\mathbb{Q})\coloneqq V(\mathbb{Z})\otimes_{\mathbb{Z}}\mathbb{Q}\), \(V\coloneqq V(\mathbb{Q})\otimes_{\mathbb{Q}}\mathbb{R}\). A cone \(C\subset V\) is non-degenerate if it does not contain an affine line. This is equivalent to saying that its closure \(\bar{C}\) does not contain a non-trivial vector space. In the following, we assume that \(\Gamma\) is a group and \(\rho:\Gamma\to\operatorname{GL}(\operatorname{V})\) is a group homomorphism. The group \(\Gamma\) acts on \(V\) through \(\rho\). For a \(\gamma\in\Gamma\) and an \(x\in V\), we write \(\gamma\cdot x\) or \(\gamma x\) for the action. For a set \(S\subset V\), set \(\Gamma\cdot S\coloneqq\{\gamma\cdot x\mid\gamma\in\Gamma,x\in S\}\). Suppose that this action leaves a convex cone \(C\) and some lattice in \(V(\mathbb{Q})\) invariant. We assume that \(\dim C=\dim V\). The following definition slightly generalizes [10, Proposition-Definition 4.1]. **Definition 3.1**.: _Under the above notation and assumptions._ 1. _Suppose that_ \(C\subset V\) _is an open convex cone (may be degenerate). Let_ \[C_{+}\coloneqq\operatorname{Conv}(\bar{C}\cap V(\mathbb{Q}))\] _be the convex hull of rational points in_ \(\bar{C}\)_._ 2. _We say that_ \((C_{+},\Gamma)\) _is of polyhedral type if there is a polyhedral cone_ \(\Pi\subset C_{+}\) _such that_ \(\Gamma\cdot\Pi\supset C\)_._ For a set \(S\subset V\), \([S]\) denotes the convex hull of \(S\) in \(V\). A point of a convex set that makes up a face by itself is also called an extreme point. 
**Proposition 3.2** ([14, Proposition-Definition 4.1]).: _Under the above notation and assumptions. If \(C\) is an open non-degenerate cone, then the following conditions are equivalent:_ 1. _there exists a polyhedral cone_ \(\Pi\subset C_{+}\) _with_ \(\Gamma\cdot\Pi=C_{+}\)_;_ 2. _there exists a polyhedral cone_ \(\Pi\subset C_{+}\) _with_ \(\Gamma\cdot\Pi\supset C\)_;_ 3. _there exists a polyhedral cone_ \(\Pi\subset C_{+}\) _with_ \(\Gamma\cdot\Pi\supset C\cap V(\mathbb{Q})\)_;_ 4. _For every_ \(\Gamma\)_-invariant lattice_ \(L\subset V(\mathbb{Q})\)_,_ \(\Gamma\) _has finitely many orbits in the set of extreme points of_ \([C\cap L]\)_._ _Moreover, in case (2), we necessarily have \(\Gamma\cdot\Pi=C_{+}\)._ Proof.: [14, Proposition-Definition 4.1] has shown that (1) (2) (4) are equivalent. It is straightforward to get (3) from (2). Hence it suffices to show that (3) implies (4). This can be shown by the same argument as [14, Proposition-Definition 4.1] in showing (2) implies (4). We copy the argument below (with minor modifications and some explanations) for the convenience of the reader. We prove that (3) implies (4). Without loss of generality, we may assume that \(\Pi\) is rationally polyhedral. Let \(S\) denote the set of extreme points of \([C\cap L]\). In Step 1 of the proof of [14, Theorem 2.2], it is shown that \([C\cap L]=[C\cap L]+\bar{C}\). Then [14, Lemma 1.6] implies that every extreme point of \([C\cap L]\) belongs to \(C\cap L\subset V(\mathbb{Q})\). As \(S\) is \(\Gamma\)-invariant and \(\Gamma\cdot\Pi\supset C\cap V(\mathbb{Q})\supset S\), to show (4), it suffices to show that \(S\cap\Pi\) is finite. Let \(v_{1},\cdots,v_{r}\) denote the set of primitive integral generators of the extremal rays of \(\Pi\). Any \(e\in S\cap\Pi\) has the property that \[e-v_{i}\notin C\cap\Pi \tag{3.0.1}\] for all \(i\). Otherwise, suppose that \(e-v_{i}\in C\cap\Pi\cap L\). By \(v_{i}\in\Pi\subset C_{+}\) and \(e\in C\) with \(C\) open, we have \(e+v_{i}\in C\cap L\). Hence \[e=\frac{1}{2}(e-v_{i})+\frac{1}{2}(e+v_{i})\in[C\cap L].\] This contradicts that \(e\) is an extreme point of \([C\cap L]\). Now (3.0.1) implies that if we write \(e=\sum_{i=1}^{r}\lambda_{i}v_{i}\) with \(\lambda_{i}\geq 0\), then \(\lambda_{i}\leq 1\) for all \(i\). So \(S\cap\Pi\) is contained in a compact set and hence finite. **Definition 3.3**.: _Let \(\rho:\Gamma\hookrightarrow\operatorname{GL}(V)\) be an injective group homomorphism and \(C\subset V\) be a cone (may not necessarily be open). Let \(\Pi\subset C\) be a (rational) polyhedral cone. Suppose that \(\Gamma\) acts on \(C\). Then \(\Pi\) is called a weak (rational) polyhedral fundamental domain for \(C\) under the action \(\Gamma\) if_ 1. \(\Gamma\cdot\Pi=C\)_, and_ 2. _for each_ \(\gamma\in\Gamma\)_, either_ \(\gamma\Pi=\Pi\) _or_ \(\gamma\Pi\cap\operatorname{Int}(\Pi)=\emptyset\)_._ _Moreover, let \(\Gamma_{\Pi}\coloneqq\{\gamma\in\Gamma\mid\gamma\Pi=\Pi\}\). If \(\Gamma_{\Pi}=\{\operatorname{Id}\}\), then \(\Pi\) is called a (rational) polyhedral fundamental domain._ See [12, Lemma 3.5] for the following application of [14, Theorem 3.8 & Application 4.14]. **Lemma 3.4** ([14, Theorem 3.8 & Application 4.14]).: _Under the notation and assumptions of Definition 3.1. Suppose that \(\rho:\Gamma\hookrightarrow\operatorname{GL}(V)\) is injective. Let \((C_{+},\Gamma)\) be of polyhedral type with \(C\) non-degenerate. 
Then under the action of \(\Gamma\), \(C_{+}\) admits a rational polyhedral fundamental domain._ For a possibly degenerate open convex cone \(C\), let \(W\subset\overline{C}\) be the maximal \(\mathbb{R}\)-linear vector space. We say that \(W\) is defined over \(\mathbb{Q}\) if \(W=W(\mathbb{Q})\otimes_{\mathbb{Q}}\mathbb{R}\) where \(W(\mathbb{Q})=W\cap V(\mathbb{Q})\). In this case, \(V/W=(V(\mathbb{Q})/W(\mathbb{Q}))\otimes_{\mathbb{Q}}\mathbb{R}\) has a nature lattice structure, and we denote everything in \(V/W\) by \(\tilde{(-)}\). For example, \(\widetilde{(C_{+})}\) is the image of \(C_{+}\) under the projection \(p:V\to V/W\). By the maximality, \(W\) is \(\Gamma\)-invariant, and thus \(V/W,\widetilde{C}\) admit natural \(\Gamma\)-actions. **Lemma 3.5** ([12, Lemma 3.7]).: _Under the above notation and assumptions,_ 1. \(\widetilde{C}=\tilde{\overline{C}}\)_,_ 2. \((\tilde{C})_{+}=\widetilde{(C_{+})}\)_, which is denoted by_ \(\tilde{C}_{+}\)_, and_ 3. _if_ \((C_{+},\Gamma)\) _is of polyhedral type, then_ \((\tilde{C}_{+},\Gamma)\) _is still of polyhedral type. More precisely, if_ \(\Pi\subset C_{+}\) _is a polyhedral cone with_ \(\Gamma\cdot\Pi\supset C\)_, then_ \(\tilde{\Pi}\subset\tilde{C}_{+}\) _and_ \(\Gamma\cdot\tilde{\Pi}\supset\tilde{C}\)_._ The following result can be shown by the same argument as [12, Lemma 3.7]. **Proposition 3.6**.: _Let \(W\subset\overline{C}\) be the maximal vector space. Suppose that \(W\) is defined over \(\mathbb{Q}\). Let \(\tilde{\Gamma}\) be the image of the natural group homomorphism \(\Gamma\to\operatorname{GL}(V/W)\). If \((\tilde{C}_{+},\tilde{\Gamma})\) is of polyhedral type, then there is a rational polyhedral cone \(\Pi\subset C_{+}\) such that \(\Gamma\cdot\Pi=C_{+}\), and for each \(\gamma\in\Gamma\), either \(\gamma\Pi\cap\operatorname{Int}(\Pi)=\emptyset\) or \(\gamma\Pi=\Pi\). Moreover,_ \[\{\gamma\in\Gamma\mid\gamma\Pi=\Pi\}=\{\gamma\in\Gamma\mid\gamma\text{ acts trivially on }V/W\}.\] Proof.: By Lemma 3.4, there is a rational polyhedral cone \(\tilde{\Pi}\) as a fundamental domain of \(\tilde{C}_{+}\) under the action of \(\tilde{\Gamma}\). By Lemma 3.5 (2), let \(\Pi^{\prime}\subset C_{+}\) be a rational polyhedral cone such that \(p(\Pi^{\prime})=\tilde{\Pi}\), where \(p:V\to V/W\). Let \(\Pi\coloneqq\Pi^{\prime}+W\) which is a rational polyhedral cone. As \(W\) is defined over \(\mathbb{Q}\), we have \(W\subset C_{+}\) and \(\Pi\subset C_{+}\). As \(\gamma(\Pi^{\prime}+W)=(\gamma\Pi^{\prime})+W\), we have \(\Gamma\cdot\Pi=C_{+}\) by Lemma 3.5 (2). If \(\gamma\tilde{\Pi}\cap\operatorname{Int}(\tilde{\Pi})=\emptyset\), then \(\gamma\Pi\cap\operatorname{Int}(\Pi)=\emptyset\) as \(\operatorname{Int}(\Pi)\) maps to \(\operatorname{Int}(\tilde{\Pi})\). If \(\gamma\tilde{\Pi}=\tilde{\Pi}\), then we claim that \(\gamma\Pi=\Pi\). In fact, for some \(a\in\Pi^{\prime}\), we have \(\widetilde{(\gamma\cdot a)}=\gamma\cdot\tilde{a}\in\tilde{\Pi}\) and thus \(\gamma\cdot a=b+w\) for some \(b\in\Pi^{\prime},w\in W\). Thus \(\gamma\Pi\subset\Pi\). Similarly, \(\gamma^{-1}\Pi\subset\Pi\). This shows the claim. Moreover, \(\gamma\Pi=\Pi\) iff \(\gamma\) acts trivially on \(\tilde{\Pi}\) iff \(\gamma\) acts trivially on \(V/W\) because \(\tilde{\Pi}\) is a fundamental domain under the action of \(\tilde{\Gamma}\). There are natural maps between Neron-Severi spaces of fibrations and generic/geometric fibers. **Lemma 3.7**.: _Let \(X\to S\) be a fiber space, and \(U\subset S\) be an open set. Then there exist the following natural maps_ 1. 
\(N^{1}(X/S)\to N^{1}(X_{\eta})\) _with_ \([D]\mapsto[D_{\eta}]\)_,_ 2. \(N^{1}(X/S)\to N^{1}(X_{U}/U)\) _with_ \([D]\mapsto[D_{U}]\)_,_ 3. \(N^{1}(X/S)\to N^{1}(X_{\tilde{\eta}})\) _with_ \([D]\mapsto[D_{\tilde{\eta}}]\)_, and_ 4. \(N^{1}(X_{\eta})\to N^{1}(X_{\tilde{\eta}})\) _with_ \([D]\mapsto[D_{\tilde{\eta}}]\)_._ _Moreover, for any sufficiently small \(U\), we have \(N^{1}(X_{U}/U)\simeq N^{1}(X_{\eta})\), and they both map to \(N^{1}(X_{\tilde{\eta}})\) injectively._ Proof.: The (2) follows from the definition directly, the remaining items can be proved by a similar method as [12, Proposition 4.3]. Hence we just sketch the proof of (4) below. First, as \(N^{1}(X_{\eta})\) is defined over \(\mathbb{Q}\), it suffices to show that for a Cartier divisor \(D\) on \(X_{\eta}\) such that \(D\equiv 0/\eta\), we have \(D_{\bar{\eta}}\equiv 0/\bar{\eta}\). Let \(\tilde{C}\) be a curve on \(X_{\bar{\eta}}\). Then \(\tilde{C}\) is defined over a finite extension of \(K(S)\). In other words, there exist a field extension \(F/K(S)\) with \([F:K(S)]=d<\infty\) and a curve \(C\) on \(X_{F}=X\times_{\eta}\operatorname{Spec}F\) such that \(C_{\bar{\eta}}=\tilde{C}\). By the property of flat base extension, we have \[\chi(\tilde{C},mD_{\bar{\eta}})=\chi(C,mD_{F})\text{ for all }m\in\mathbb{Z},\] where \(D_{F}=D\times_{\eta}\operatorname{Spec}F\). By the definition of the intersection, we have \(D_{\bar{\eta}}\cdot\tilde{C}=D_{F}\cdot C\). Let \(C^{\prime}\) be the image of \(C\) under the natural morphism \(X_{F}\to X_{\eta}\). Then \(C^{\prime}\) is a curve on \(X_{\eta}\). By the same proof of the projection formula (see [1, Proposition 1.10]), we have \(D_{F}\cdot C=d(D\cdot C^{\prime})\). This shows the claim. In [13, Proposition 4.3], we showed that \(N^{1}(X_{U}/U)\to N^{1}(X_{\bar{\eta}})\) is injective for any sufficiently small \(U\). As \(N^{1}(X_{U}/U)\to N^{1}(X_{\eta})\) is surjective and \(N^{1}(X_{U}/U)\to N^{1}(X_{\bar{\eta}})\) factors through \(N^{1}(X_{U}/U)\to N^{1}(X_{\eta})\), we see that \(N^{1}(X_{U}/U)\to N^{1}(X_{\eta})\) is injective. Hence \(N^{1}(X_{U}/U)\simeq N^{1}(X_{\eta})\), and they both map to \(N^{1}(X_{\bar{\eta}})\) injectively. The following proposition is useful to study Calabi-Yau fibrations with non-trivial \(R^{1}f_{*}\mathcal{O}_{X}\). It partially answers a question in [13, Question 4.5]. **Proposition 3.8**.: _Let \((X,\Delta)\to S\) be a klt Calabi-Yau fiber space. Let \(E\) and \(M\) be the maximal vector spaces in \(\overline{\operatorname{Eff}}(X/S)\) and \(\overline{\operatorname{Mov}}(X/S)\), respectively. Then \(E\) and \(M\) are defined over \(\mathbb{Q}\)._ Proof.: By Lemma 3.7 (3), there exists the natural map \[\theta:N^{1}(X/S)\to N^{1}(X_{\bar{\eta}}),\quad[D]\mapsto[D_{\bar{\eta}}].\] We claim that \(\operatorname{Ker}(\theta)=E\). Hence, \(E\) is defined over \(\mathbb{Q}\). For \([D]\in\operatorname{Ker}(\theta)\), we have \(D_{\bar{\eta}}\equiv 0\) on \(X_{\bar{\eta}}\), hence \(D_{\bar{\eta}}+tA_{\bar{\eta}}\) is big for any ample/\(S\) divisor \(A\) on \(X\) and \(t\in\mathbb{R}_{>0}\). Thus \(D+tA\) is also big over \(S\). Take \(t\to 0\), we have \([D]\in\overline{\operatorname{Eff}}(X/S)\). For the same reason, \([-D]\in\overline{\operatorname{Eff}}(X/S)\). Hence \([D]\in E\). Conversely, if \([D]\in E\), then \(\pm[D_{\bar{\eta}}]\in\overline{\operatorname{Eff}}(X_{\bar{\eta}})\). Elements in \(\overline{\operatorname{Eff}}(X_{\bar{\eta}})\) intersect with movable curves non-negatively (see [1]). 
As movable curves form a full dimensional cone in \(N_{1}(X_{\bar{\eta}})\), we have \(D_{\bar{\eta}}\equiv 0\). To show that \(M\) is defined over \(\mathbb{Q}\). Suppose that \(X^{\prime}\to S\) is a fiber space such that \(X^{\prime}\) is \(\mathbb{Q}\)-factorial and \(X\dasharrow X^{\prime}/S\) is isomorphic in codimension 1. If \(D\) is a divisor on \(X\), then let \(D^{\prime}\) be the push-forward of \(D\) on \(X^{\prime}\). If \(C\) is an irreducible curve on \(X^{\prime}\), then we say that its class covers a divisor if the Zariski closure \[\overline{\bigcup_{[C^{\prime}]=[C]\in N_{1}(X^{\prime}/S)}C^{\prime}}\] in \(X^{\prime}\) has codimension \(\leq 1\). We claim that \[M=E\cap\{[D]\mid D^{\prime}\cdot C=0,\text{ where }X\dasharrow X^{\prime}/S \text{ is isomorphic in codimension 1},\] \[X^{\prime}\text{ is }\mathbb{Q}\text{-factorial, and }C\text{ is a curve whose class covers a divisor}\}. \tag{3.0.0}\] For "\(\subset\)", if \([D]\in M\), then \(\pm[D^{\prime}]\in\overline{\operatorname{Mov}}(X^{\prime}/S)\), and thus \(\pm D^{\prime}\cdot C\geq 0\) for any curve \(C\) whose class covers a divisor in \(X^{\prime}\). Hence, \(D^{\prime}\cdot C=0\). For "\(\supset\)", let \([D]\) belong to the right-hand side of (3.0.0). As \([D]\in E\) and \(K_{X}+\Delta\equiv 0/S\), we can run a \(D\)-MMP with scaling of an ample divisor \(A\) over \(S\) (e.g. [1]). If this MMP consists of divisorial contractions or Mori fibrations, then let \(X^{\prime}\) be the first variety where this occurs. Hence \(X\dasharrow X^{\prime}/S\) is isomorphic in codimension 1 and there is a curve \(C\) on \(X^{\prime}\) whose class covers a divisor such that \(D^{\prime}\cdot C<0\). This contradicts the choice of \(D\). Hence the MMP only consists of flips. If this MMP terminates with \(X^{\prime}\), then \(D^{\prime}\) is nef\(/S\), and thus \([D]\in\overline{\operatorname{Mov}}(X/S)\). If this MMP does not terminate, then the nef threshold \(\lambda_{i}\) in the scaling must approach \(0\) by [5], thus \([D]=\lim_{i\to\infty}[D+\lambda_{i}A]\in\overline{\operatorname{Mov}}(X/S)\). The same argument shows that \([-D]\in\overline{\operatorname{Mov}}(X/S)\). Therefore, \([D]\in M\). As \(N^{1}(X/S)\to N^{1}(X^{\prime}/S)\) is a linear map defined over \(\mathbb{Q}\), and the curve class \([C]\) in \(N_{1}(X^{\prime}/S)\) is an integral element, the vector space \[\{[D]\mid D^{\prime}\cdot C=0,\text{ where }X\dashrightarrow X^{\prime}/S \text{ is isomorphic in codimension 1},\] \[X^{\prime}\text{ is }\mathbb{Q}\text{-factorial, and }C\text{ is a curve whose class covers a divisor}\}.\] is defined over \(\mathbb{Q}\). As \(E\) is defined over \(\mathbb{Q}\), \(M\) is also defined over \(\mathbb{Q}\). ## 4. Groups schemes associated to log Calabi-Yau varieties The main result of this section is Theorem 4.9 that rectifies the numerical equivalence to linear equivalence with the help of automorphism groups. Let \(X\) be a scheme over a scheme \(S\) and \(\operatorname{Aut}(X/S)\) be the set of automorphism of \(X\) over \(S\). Let \(\mathcal{A}ut_{X/S}\) be the automorphism functor defined by \[\mathcal{A}ut_{X/S}(T)\coloneqq\operatorname{Aut}(X_{T}/T),\] where \(T\) is a scheme over \(S\) and \(X_{T}=X\times_{S}T\). If \(S\) is noetherian and \(X\to S\) is a flat projective morphism, then \(\mathcal{A}ut_{X/S}\) is representable by a group scheme \(\operatorname{\mathbf{Aut}}(X/S)\) which is locally of finite type over \(S\) (e.g. see [12, SS5.6.2]). 
In particular, we have \(\operatorname{\mathbf{Aut}}(X_{\eta}/\eta)\) and \(\operatorname{\mathbf{Aut}}(X_{\bar{\eta}}/\bar{\eta})\) that represent the automorphism functors on the generic and geometric fibers, respectively. For simplicity, we write \(\operatorname{\mathbf{Aut}}(X_{\eta})\) for \(\operatorname{\mathbf{Aut}}(X_{\eta}/\eta)\) and \(\operatorname{\mathbf{Aut}}(X_{\bar{\eta}})\) for \(\operatorname{\mathbf{Aut}}(X_{\bar{\eta}}/\bar{\eta})\). For a \(k\)-scheme \(V\), and a field \(K\) over \(k\), define \(V_{K}\coloneqq V\times_{\operatorname{Spec}k}\operatorname{Spec}K\), and \[V(\operatorname{Spec}K)\coloneqq\operatorname{Hom}_{\operatorname{Spec}K}( \operatorname{Spec}K,V_{K})\] as the set of \((\operatorname{Spec}K)\)-point of \(V\). Thus, \(\operatorname{\mathbf{Aut}}(X_{\eta})(\eta)=\operatorname{Aut}(X_{\eta})\). We also use \(V(K)\) to denote \(V(\operatorname{Spec}K)\) when there is no ambiguity. We use \([g]\in V_{K}\) to denote the \((\operatorname{Spec}K)\)-point of \(V_{K}\) that corresponds to \(g\in V(K)\). Thus, \(\operatorname{\mathbf{Aut}}(X_{\eta})(\eta)=\operatorname{Aut}(X_{\eta})\). Let \(\Delta\) be a divisor on \(X/S\), and \(\Delta_{T}\coloneqq\Delta\times_{S}T\) for a base extension \(T\to S\). Define \(\mathcal{A}ut_{X/S,\Delta}\) as the sub-functor of \(\mathcal{A}ut_{X/S}\) such that \[\mathcal{A}ut_{X/S,\Delta}(T)\coloneqq\{g\in\operatorname{Aut}(X_{T}/T)\}\mid g (\operatorname{Supp}\Delta_{T})=\operatorname{Supp}\Delta_{T}\}.\] If \(S\) is noetherian and \(X\to S\) is a flat projective morphism, then \(\mathcal{A}ut_{X/S,\Delta}\) is also representable by a subgroup scheme \(\operatorname{\mathbf{Aut}}(X/S,\Delta)\). When \(S=\operatorname{Spec}K\) with \(K\) a field, we use \(\operatorname{\mathbf{Aut}}^{0}(X/S,\Delta)\) to denote the connected component of \(\operatorname{\mathbf{Aut}}(X/S,\Delta)\) that contains the identity element (see [14, SS9.5]), and set \[\operatorname{Aut}^{0}(X/S,\Delta)\coloneqq\operatorname{\mathbf{Aut}}^{0}(X/S,\Delta)(K).\] Let \(X\) be a scheme over a scheme \(S\). Following [14, Definition 9.2.2], we define the relative Picard functor \(\mathcal{P}ic_{X/S}\) to be \[\mathcal{P}ic_{X/S}(T)\coloneqq\operatorname{Pic}(X_{T})/\operatorname{Pic}(T).\] Denote the associated sheaf in the etale topology by \[\mathcal{P}ic_{X/S,(\operatorname{\acute{e}t})}.\] See [12, SS2.3.7] for the sheafification of a functor. By [14, Theorem 9.4.8], if \(X\to S\) is projective Zariski locally over \(S\), and is flat with integral geometric fibers, then \(\mathcal{P}ic_{X/S,(\operatorname{\acute{e}t})}\) is representable by a separated scheme which is locally of finite type over \(S\), denoted by \(\operatorname{\mathbf{Pic}}(X/S)\). When \(S=\operatorname{Spec}K\) with \(K\) a field, we use \(\operatorname{\mathbf{Pic}}^{0}(X/S)\) to denote the connected component of \(\operatorname{\mathbf{Pic}}(X/S)\) that contains the identity element. When there is no risk of ambiguity, \(S\) will be omitted. In particular, for a fibration \(X\to S\), \(\mathcal{P}ic_{X_{\eta}/\eta,(\text{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{ \rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm}}}}}}}}}}}}}}}}})}\) and \(\mathcal{P}ic_{X_{\bar{\eta}}/\bar{\eta},(\text{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{ \rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{ }}}}}}}}}}}}}}} })}\) are representable by \(\mathbf{Pic}(X_{\eta}/\eta)\) and \(\mathbf{Pic}(X_{\bar{\eta}}/\bar{\eta})\), respectively. 
One should notice that although \[\text{Hom}_{\bar{\eta}}(\bar{\eta},\mathbf{Pic}(X_{\bar{\eta}}/\bar{\eta}))= \mathcal{P}ic_{X_{\bar{\eta}}/\bar{\eta},(\text{\rm{\rm{\rm{\rm{\rm{\rm{ \rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{{ }}}}}}}}}}}}}}})}(\bar{\eta})= \mathcal{P}ic_{X_{\bar{\eta}}/\bar{\eta},(\text{\rm{\rm{\rm{\rm{\rm{\rm{\rm{ \rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{{\rm{ }}}}}}}}}}}}}}}}})}}(\bar{\eta})= \text{Pic}(X_{\bar{\eta}}),\] the \(\text{Hom}_{\eta}(\eta,\mathbf{Pic}_{X_{\eta}/\eta})=\mathcal{P}ic_{X_{\eta}/ \eta,(\text{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{\rm{ \rm{\rm{\rm{{\rm{\rm{{ }}}}}}}}}}}}}}}}})}}(\eta)\) may contain more elements than \(\text{Pic}(X_{\eta})\) due to the sheafification (see [10, Exercise 9.2.4]). The results presented in the rest of this section will be applied to the generic fiber of a Calabi-Yau fibration. Hence, from now on until the end of the section, we work with a projective log pair over a field of characteristic \(0\) (but may not be algebraically closed). The following theorem is base on [1, 1] (see [11, 12] for similar results). **Theorem 4.1** ([11]).: _Let \((X,\Delta)\) be a projective log pair with klt singularities over an algebraically closed field of characteristic \(0\). Assume that the log canonical divisor \(K_{X}+\Delta\sim_{\mathbb{R}}0\). Then_ 1. ([11, theorem 4.5])_. The connected component of the group scheme_ \(\mathbf{Aut}(X,\Delta)\) _containing the identity morphism,_ \(B\coloneqq\mathbf{Aut}^{0}(X,\Delta)\)_, is an abelian variety of dimension_ \(h^{1}(X,\mathcal{O}_{X})\)_._ 2. ([11, Theorem 4.5])_. The Albanese morphism_ \(X\to A\) _is isomorphic to the homogeneous fibration induced by the action of_ \(B\) _on_ \(X\)_, thus_ \(A=B/K\) _for some finite subgroup_ \(K\subset B\)_._ 3. ([11, Theorem 1.13])_. There exists an etale group homomorphism_ \(\mathbf{Aut}^{0}(X,\Delta)\to A(X)\) _(not canonical) such that_ \[X\simeq\mathbf{Aut}^{0}(X,\Delta)\times^{K}F,\] _where_ \(K=\text{Ker}(\mathbf{Aut}^{0}(X,\Delta)\to A(X))\) _and_ \(F\) _is the fiber of the Albanese morphism_ \(\operatorname{alb}_{X}\) _over the identity element. Here_ \(\mathbf{Aut}^{0}(X,\Delta)\times^{K}F\) _denotes the quotient of_ \(\mathbf{Aut}^{0}(X,\Delta)\times F\) _by_ \(K\) _under the action_ \((xg,g^{-1}y)\)_, where_ \([g]\in K\) _and_ \((x,y)\in\mathbf{Aut}^{0}(X,\Delta)\times F\) _are closed points._ 4. ([11, Theorem 3.1 (2)])_. The Albanese morphism_ \(\operatorname{alb}_{X}:X\to A(X)\) _is identified with the natural morphism_ \[\mathbf{Aut}^{0}(X,\Delta)\times^{K}F\to\mathbf{Aut}^{0}(X,\Delta)/K.\] **Remark 4.2**.: _The original statement of [11, Theorem 4.5] is for \(\mathbb{Q}\)-divisors, but it still holds for \(\mathbb{R}\)-divisors. In fact, a klt pair \((X,\Delta)\) satisfying \(K_{X}+\Delta\sim_{\mathbb{R}}0\) implies the existence of a \(\mathbb{Q}\)-divisor \(\Delta^{\prime}\) such that \((X,\Delta^{\prime})\) is klt with \(K_{X}+\Delta^{\prime}\sim_{\mathbb{Q}}0\) and \(\operatorname{Supp}\Delta=\operatorname{Supp}\Delta^{\prime}\)._ The following is a version of a result of Blanchard about holomorphic transformation groups (see [1]). Blanchard's lemma will be used repetitively in the sequel and will be generalized in Lemma 4.7. **Lemma 4.3** (Blanchard's Lemma ([1, Proposition 4.2.1])).: _Let \(f:X\to Y\) be a proper morphism of varieties over \(k\) which is an algebraically closed field of characteristic \(0\). Suppose that \(f_{*}\mathcal{O}_{X}=\mathcal{O}_{Y}\). 
Let \(G\) be a connected group scheme acting on \(X\). Then there exists a unique \(G\)-action on \(Y\) such that \(f\) is \(G\)-equivariant._ _In particular, there exists a natural group homomorphism_ \[\nu:\mathbf{Aut}^{0}(X)\to\mathbf{Aut}^{0}(Y),\quad[g]\mapsto[g_{Y}] \tag{4.0.1}\] _such that \(g_{Y}\circ f=f\circ g\)._ By our convention, if \(f:X\to Y\) is a fibration, then \(f_{*}\mathcal{O}_{X}=\mathcal{O}_{Y}\). To simplify the expression, for a map \(\mu:A\to B\), if \(A_{0}\subset A\) or \(\mu(A)\subset B_{0}\subset B\), we still use \(\mu\) to denote the natural restriction maps \(A_{0}\to B\), \(A\to B_{0}\), etc. **Lemma 4.4**.: _Let \((X,\Delta)\) be a projective log Calabi-Yau pair with klt singularities over an algebraically closed field of characteristic \(0\), and \(D\) be a nef and big Cartier divisor on \(X\). Let \(G\subset\mathbf{Aut}^{0}(X)\) be a connected sub-abelian variety, then_ \[\vartheta_{D}:G\to\mathbf{Pic}^{0}(X),\quad[g]\mapsto[g^{*}D-D] \tag{4.0.2}\] _is a homomorphism of algebraic groups whose kernel is finite._ Proof.: By the universal property of the Picard functor, there is a morphism \[\vartheta_{D}:G\to\mathbf{Pic}(X),\quad[g]\mapsto[g^{*}D-D].\] As \(G\) is connected and contains the identity element, \(\vartheta_{D}(G)\subset\mathbf{Pic}^{0}(X)\). As \(\vartheta_{D}([\mathrm{Id}])=[0]\), \(\vartheta_{D}\) is a group homomorphism (see [12, Page 41, Corollary 1]). By the base-point free theorem, \(D\) is semi-ample. Let \(h:X\to Y\) be the birational morphism induced by \(D\). Let \(D=h^{*}H\) with \(H\) an ample Cartier divisor on \(Y\). Moreover, \(K_{X}+\Delta=h^{*}(K_{Y}+\Delta_{Y})\) where \(\Delta_{Y}=h_{*}\Delta\). Thus \((Y,\Delta_{Y})\) is still a projective Calabi-Yau pair with klt singularities. In particular, \(Y\) has rational singularities and thus \(h^{1}(Y,\mathcal{O}_{Y})=h^{1}(X,\mathcal{O}_{X})\). By Blanchard's lemma, there is a morphism \(\mathbf{Aut}^{0}(X,\Delta)\to\mathbf{Aut}^{0}(Y)\). It is straightforward to see that this is an injective map. Let \(G_{Y}\) be the image of \(G\) under this homomorphism. As the base field is of characteristic \(0\), both \(\mathbf{Pic}^{0}(X)\) and \(\mathbf{Pic}^{0}(Y)\) are smooth varieties ([12, Page 95, Theorem]). Hence \(\dim\mathbf{Pic}^{0}(X)=h^{1}(X,\mathcal{O}_{X})=h^{1}(Y,\mathcal{O}_{Y})= \dim\mathbf{Pic}^{0}(Y)\) by [13, Corollary 9.5]. Thus the natural injective morphism of algebraic groups \[\mathbf{Pic}^{0}(Y)\to\mathbf{Pic}^{0}(X),\quad[L]\mapsto[h^{*}L]\] is an isomorphism. We have the following commutative diagram, where \(\vartheta_{H}([g])=[g^{*}H-H]\) with \(H\) an ample divisor on \(Y\): To show that \(\vartheta_{D}\) has a finite kernel, it suffices to show that \(\vartheta_{H}\) has a finite kernel. This can be done similarly to [10, Theorem 4.6]. As \(\vartheta_{kH}=(\vartheta_{H})^{\otimes k}\), it suffices to show that \(\vartheta_{kH}\) has a finite kernel. Assume that \(kH\) is very ample, and \(Y\to\mathbb{P}^{n}\) is the closed embedding induced by \(|kH|\). If \(\vartheta_{kH}([g])=[0]\), then \(g^{*}(kH)\sim kH\). Hence \(g\) induces an isomorphism \(H^{0}(Y,\mathcal{O}_{Y}(g^{*}(kH)))\simeq H^{0}(Y,\mathcal{O}_{Y}(kH))\). Therefore, \(g\) corresponds to an automorphism of \(\mathbb{P}^{n}\) which leaves \(Y\) invariant. In other words, we can identify \(g\) with a closed point in \(\mathbf{Aut}(\mathbb{P}^{n},Y)\), where \(\mathbf{Aut}(\mathbb{P}^{n},Y)\) is the group scheme that represents the automorphisms of \(\mathbb{P}^{n}\) leaving \(Y\) invariant. 
Let \(j\) be this natural group homomorphism \[j:\vartheta_{kH}^{-1}([0])\to\mathbf{Aut}(\mathbb{P}^{n},Y).\] If \(j([g])=[h]\), then \(h|_{Y}=g\). Thus \(j\) is injective. Since \(\mathbf{Aut}(\mathbb{P}^{n},Y)\) is a subgroup of \(\mathbf{Aut}(\mathbb{P}^{n})\), \(\vartheta_{kH}^{-1}([0])\) is a linear group scheme. On the other hand, \(G_{Y}\) is an abelian variety as \(G\) is an abelian variety. Hence \(\vartheta_{kH}^{-1}([0])\) is a subgroup scheme of an abelian variety. Therefore, \(\vartheta_{kH}^{-1}([0])\) must be a finite group. This completes the argument. Lemma 4.4 is well-known when \(D\) is ample, and in this case, the Calabi-Yau condition is not necessary. We partially generalize Lemma 4.4 to a semi-ample divisor \(D\). **Theorem 4.5**.: _Let \((X,\Delta)\) be a projective klt Calabi-Yau pair over an algebraically closed field of characteristic \(0\). Let \(f:X\to Y\) be a fibration to a normal projective variety. Let \(H\) be a nef and big Cartier divisor on \(Y\)._ 1. _The natural morphism_ \[\vartheta_{H}\circ\nu:\mathbf{Aut}^{0}(X,\Delta)\to\mathbf{Aut}^{0}(Y)\to\mathbf{ Pic}^{0}(Y),\quad[g]\mapsto[g_{Y}]\mapsto[g_{Y}^{*}H-H]\] _is surjective._ 2. _The image of_ \[\vartheta_{f^{*}H}:\mathbf{Aut}^{0}(X,\Delta)\to\mathbf{Pic}^{0}(X),\quad[g] \mapsto[g^{*}(f^{*}H)-(f^{*}H)]\] _is_ \(f^{*}\mathbf{Pic}^{0}(X)\coloneqq\{[f^{*}L]\in\mathbf{Pic}^{0}(X)\mid[L]\in \mathbf{Pic}^{0}(Y)\}\)_._ Proof.: We proceed with the argument in several steps. Step 1. Let \(\operatorname{alb}_{X}:X\to A(X)\) and \(\operatorname{alb}_{Y}:Y\to A(Y)\) be the Albanese morphisms. By the universal property of Albanese morphisms, there exists \(g:A(X)\to A(Y)\) such that the following diagram commutes (4.0.3) By the canonical bundle formula [1], there exists a \(\Delta_{Y}\) such that \((Y,\Delta_{Y})\) is still a klt Calabi-Yau pair. By Theorem 4.1 (2), \(\operatorname{alb}_{X}\) and \(\operatorname{alb}_{Y}\) are both fibrations, and thus \(g\) is also a fibration. By Blanchard's lemma again, there exists a group homomorphism \[\tau:\mathbf{Aut}^{0}(A(X))\to\mathbf{Aut}^{0}(A(Y)).\] We claim that \(\tau\) is surjective. Let \(\beta\in A(Y)\) be a closed point, and \([\beta]\in\mathbf{Aut}^{0}(A(Y))\) be the corresponding morphism \([\beta]:y\mapsto y+\beta\) for each \(y\in A(Y)\). Take a closed point \(y_{0}\in A(Y)\), and let \(x_{0},x_{0}^{\prime}\in A(X)\) be closed points such that \(g(x_{0})=y_{0},g(x_{0}^{\prime})=y_{0}+\beta\), and \(\alpha=x_{0}^{\prime}-x_{0}\). Then we have \(\tau([\alpha])=[\beta]\). In fact, as \(g\) is a homomorphism of abelian varieties, \[g([\alpha](x))=g(x+\alpha)=g(x)+g(x_{0}^{\prime})-g(x_{0})=g(x)+\beta=[\beta]( g(x)).\] Hence \(\tau([\alpha])=[\beta]\) by the construction of \(\tau\). Step 2. By the universal property of the Albanese morphisms, an isomorphism of \(X\) induces an isomorphism of \(A(X)\). Hence there exist group homomorphisms \[\iota_{X}:\mathbf{Aut}^{0}(X)\to\mathbf{Aut}^{0}(A(X)),\quad\iota_{Y}: \mathbf{Aut}^{0}(Y)\to\mathbf{Aut}^{0}(A(Y)).\] Moreover, these morphisms sit in the following diagram (4.0.4) To check the commutativity of this diagram, it suffices to check the commutativity of the diagram below with \(\phi\in\mathbf{Aut}^{0}(X)\), This follows from (4.0.3) and the surjectivity of \(\operatorname{alb}_{X},\operatorname{alb}_{Y}\). 
By Theorem 4.1 (3), there is an etale group homomorphism \(\mathbf{Aut}^{0}(X,\Delta)\to A(X)\) (not canonical) such that \[X\simeq\mathbf{Aut}^{0}(X,\Delta)\times^{K}F,\] where \(K=\operatorname{Ker}(\mathbf{Aut}^{0}(X,\Delta)\to A(X))\) and \(F\) is the fiber of \(\operatorname{alb}_{X}\) over the identity element. By Theorem 4.1 (4), the Albanese morphism \(\operatorname{alb}_{X}:X\to A(X)\) is identified with the natural morphism \[\mathbf{Aut}^{0}(X,\Delta)\times^{K}F\to\mathbf{Aut}^{0}(X,\Delta)/K.\] Therefore, the natural homomorphism \[\mathbf{Aut}^{0}(X,\Delta)\stackrel{{\iota_{X}}}{{\longrightarrow }}\mathbf{Aut}^{0}(A(X))\simeq\mathbf{Aut}^{0}(\mathbf{Aut}^{0}(X,\Delta)/K) \simeq\mathbf{Aut}^{0}(X,\Delta)/K\] is just the quotient map. In particular, \[\iota_{X}(\mathbf{Aut}^{0}(X,\Delta))=\mathbf{Aut}^{0}(A(X)). \tag{4.0.5}\] By Step 1, \(\tau:\mathbf{Aut}^{0}(A(X))\to\mathbf{Aut}^{0}(A(Y))\) is surjective, hence, by (4.0.4), we have \[\iota_{Y}\circ\nu(\mathbf{Aut}^{0}(X,\Delta))=\tau\circ\iota_{X}(\mathbf{Aut} ^{0}(X,\Delta))=\tau(\mathbf{Aut}^{0}(A(X)))=\mathbf{Aut}^{0}(A(Y)).\] Let \[G_{Y}\coloneqq\nu(\mathbf{Aut}^{0}(X,\Delta))\subset\mathbf{Aut}^{0}(Y).\] It is an abelian variety as \(\mathbf{Aut}^{0}(X,\Delta)\) is an abelian variety by Theorem 4.1 (1). Then \[\dim G_{Y}\geq\dim\mathbf{Aut}^{0}(A(Y))=\dim A(Y).\] Step 3. By Lemma 4.4, the group homomorphism \[G_{Y}\stackrel{{\vartheta_{H}}}{{\longrightarrow}}\mathbf{Pic}^ {0}(Y)\] has a finite kernel. Thus \[\dim\vartheta_{H}(G_{Y})=\dim G_{Y}\geq\dim A(Y)=\dim\mathbf{Pic}^{0}(Y).\] Hence \(\vartheta_{H}\) is surjective. This shows the first claim. It is straightforward to check that the following diagram commutes, Combining it with the first claim, we have the second claim. Recall that for a fibration \(X\to S\), we set \(\eta\in S\) as the generic point and \(X_{\eta}\) as the generic fiber. By our convention, \(\mathbf{Aut}(X_{\eta})=\mathbf{Aut}(X_{\eta}/\eta)\) and \(\mathbf{Pic}(X_{\eta})=\mathbf{Pic}(X_{\eta}/\eta)\), and each \(g\in\operatorname{Aut}(X/S)\) acts on \(L\in\operatorname{Pic}(X/S)\) by \(g\cdot L\coloneqq g_{*}L\), where the pushforward \(g_{*}L\) is the same as \((g^{-1})^{*}L\). **Lemma 4.6**.: _Let \(X\to S\) be a fibration. There exists a natural morphism_ \[\alpha:\mathbf{Aut}(X_{\eta})\times_{\eta}\mathbf{Pic}(X_{\eta})\to\mathbf{Pic }(X_{\eta})\] _such that \(\mathbf{Aut}(X_{\eta})\) is a left action on \(\mathbf{Pic}(X_{\eta})\) as an \(\eta\)-group scheme. The same claim also holds for \(\mathbf{Aut}(X_{\bar{\eta}})\) and \(\mathbf{Pic}(X_{\bar{\eta}})\)._ Proof.: We only establish the first claim as the second claim can be shown similarly. Let \(T\) be a locally noetherian \(\eta\)-scheme, then by the definition of \(\mathbf{Aut}(X_{\eta})\) and \(\mathbf{Pic}(X_{\eta})\), we have \[\operatorname{Hom}_{\eta}(T,\mathbf{Aut}(X_{\eta}))\simeq\mathcal{Aut}_{X_{\eta }}(T)=\operatorname{Aut}(X_{T}/T)\] and \[\operatorname{Hom}_{\eta}(T,\mathbf{Pic}(X_{\eta})\simeq\mathcal{P}ic_{X_{ \eta},(\operatorname{\acute{e}t})}(T).\] For \(L\in\mathcal{P}ic_{X_{\eta},(\operatorname{\acute{e}t})}(T)\), by the definition of the sheafification of the Picard functor in the etale topology [23, Definition 2.63], there exist an etale covering \(\{\sigma_{i}:U_{i}\to T\}\) and an \(L_{i}\in\operatorname{Pic}(X_{U_{i}})/\operatorname{Pic}(U_{i})\) such that \(L_{i}=\sigma_{i}^{*}L\). 
Note that in the above expression, we identify \(\operatorname{Pic}(X_{U_{i}})/\operatorname{Pic}(U_{i})\) as a subset of \(\mathcal{P}ic_{X_{\eta},(\operatorname{\acute{e}t})}(T)\) by [23, Theorem 2.64(iii)] as \(\mathcal{P}ic_{X_{\eta}}\) is a separated functor (see [23, Definition 2.37]) in the etale site. Let \(g\in\operatorname{Aut}(X_{T}/T)\) and \(g_{i}\in\operatorname{Aut}(X_{U_{i}}/U_{i})\) be the pull-back of \(g\). Note that for \(F_{i}\in\operatorname{Pic}(U_{i})\), if \(\pi_{i}:X_{U_{i}}\to U_{i}\) is the natural morphism, then \((g_{i})_{*}(\pi^{*}F_{i})\in\pi_{i}^{*}(\operatorname{Pic}(U_{i}))\) as \(\pi_{i}\) is a fibration. Therefore, \(g_{i}\) naturally acts on \(L_{i}\) by pushing forward the line bundle in \(\operatorname{Pic}(X_{U_{i}})\) that represents \(L_{i}\). By abusing the notation, we write this action by \((g_{i})_{*}L_{i}\). Then \((g_{i})_{*}L_{i}|_{U_{i}\times_{T}U_{j}}=(g_{j})_{*}L_{j}|_{U_{i}\times_{T}U_{ j}}\) for each \(i,j\). As \(\mathcal{P}ic_{X_{\eta},(\operatorname{\acute{e}t})}\) is a sheaf (see [23, Definition 2.37]), there exists a unique \(\tilde{L}\in\mathcal{P}ic_{X_{\eta},(\operatorname{\acute{e}t})}(T)\) such that \(\tilde{L}|_{U_{i}}=(g_{i})_{*}L_{i}\). We write \(g_{*}L\) for \(\tilde{L}\). By the definition of the fiber product, \[\operatorname{Hom}_{\eta}(T,\mathbf{Aut}(X_{\eta}))\times\operatorname{Hom}_ {\eta}(T,\mathbf{Pic}(X_{\eta}))\simeq\operatorname{Hom}_{\eta}(T,\mathbf{Aut }(X_{\eta})\times_{\eta}\mathbf{Pic}(X_{\eta})).\] Therefore, the above discussion gives natural maps \[\begin{split}&\operatorname{Hom}_{\eta}(T,\mathbf{Aut}(X_{\eta}) \times_{\eta}\mathbf{Pic}(X_{\eta}))\\ &\simeq\operatorname{Hom}_{\eta}(T,\mathbf{Aut}(X_{\eta}))\times \operatorname{Hom}_{\eta}(T,\mathbf{Pic}(X_{\eta}))\xrightarrow{\tau} \operatorname{Hom}_{\eta}(T,\mathbf{Pic}(X_{\eta}))\end{split} \tag{4.0.0}\] with \(\tau(g,L)=g_{*}L\). Take \(T=\mathbf{Aut}(X_{\eta})\times_{\eta}\mathbf{Pic}(X_{\eta})\), then the identity morphism in \(\operatorname{Hom}_{\eta}(\mathbf{Aut}(X_{\eta})\times_{\eta}\mathbf{Pic}(X_{ \eta}),\mathbf{Aut}(X_{\eta})\times_{\eta}\mathbf{Pic}(X_{\eta}))\) gives the natural morphism \[\alpha:\mathbf{Aut}(X_{\eta})\times_{\eta}\mathbf{Pic}(X_{\eta})\to\mathbf{ Pic}(X_{\eta}/\eta).\] Next, we show that \(\alpha\) is a group scheme action. Let \(m:\mathbf{Aut}(X_{\eta})\times_{\eta}\mathbf{Aut}(X_{\eta})\to\mathbf{Aut}(X_{ \eta})\) be the multiplication morphism for the group scheme \(\mathbf{Aut}(X_{\eta})\), and \(\operatorname{Id}_{A},\operatorname{Id}_{P}\) be the identity morphisms on \(\mathbf{Aut}(X_{\eta}),\mathbf{Pic}(X_{\eta})\), respectively. For a morphism \(\beta:A\to B\) over \(\eta\), we use \(\hat{\beta}:\operatorname{Hom}_{\eta}(T,A)\to\operatorname{Hom}_{\eta}(T,B)\) to denote the morphism obtained by applying \(\operatorname{Hom}_{\eta}(T,-)\) to \(\beta\). For \(g,h\in\operatorname{Aut}(X_{T})\), we have \(g_{*}(h_{*}L)=(g\circ h)_{*}L\). Therefore, \[\operatorname{Hom}_{\eta}(T,\mathbf{Aut}(X_{\eta})\times_{\eta}\mathbf{Aut}(X _{\eta})\times_{\eta}\mathbf{Pic}(X_{\eta}))\] is the same as \[\operatorname{Hom}_{\eta}(T,\mathbf{Aut}(X_{\eta})\times_{\eta}\mathbf{Aut}(X _{\eta})\times_{\eta}\mathbf{Pic}(X_{\eta}))\] Let \(T=\mathbf{Aut}(X_{\eta})\times_{\eta}\mathbf{Aut}(X_{\eta})\times_{\eta} \mathbf{Pic}(X_{\eta})\), then the image of \(\operatorname{Id}_{T}:T\to T\) under \(\hat{\alpha}\circ(m\times\operatorname{Id}_{P})^{\wedge}\) is the same as \(\alpha\circ(m\times\operatorname{Id}_{P})\). 
Similarly, the image of \(\operatorname{Id}_{T}\) under \(\hat{\alpha}\circ(\operatorname{Id}_{A}\times\alpha)^{\wedge}\) is the same as \(\alpha\circ(\operatorname{Id}_{A}\times\alpha)\). Thus \(\alpha\circ(m\times\operatorname{Id}_{P})=\alpha\circ(\operatorname{Id}_{A} \times\alpha)\). Besides, it is straightforward to check that \[\alpha(e,-):\{e\}\times_{\eta}\mathbf{Pic}(X_{\eta})\to\mathbf{Pic}(X_{\eta})\] is an isomorphism, where \(e\) is the identity element of the group scheme \(\mathbf{Aut}(X_{\eta})\). Hence \(\alpha\) is a group scheme action. The following generalizes Blanchard's lemma (see Lemma 4.3) for fibrations over a possibly non-algebraically closed field of characteristic \(0\). In the sequel, we will use \((-)_{F}\) to denote the base extension of an object (schemes, morphisms, etc.) to the field \(F\). **Lemma 4.7**.: _Let \(f:X\to Y/K\) be a proper morphism of schemes over a field \(K\) of characteristic \(0\). Let \(\bar{K}\) be the algebraic closure of \(K\), and \(f_{\bar{K}}:X_{\bar{K}}\to Y_{\bar{K}}\) be the base extension of \(f\) to \(\bar{K}\). Suppose that \(f_{\bar{K}*}\mathcal{O}_{X_{\bar{K}}}=\mathcal{O}_{Y_{\bar{K}}}\), then there exists a natural group homomorphism_ \[\nu:\mathbf{Aut}^{0}(X)\to\mathbf{Aut}^{0}(Y)\] _such that its base extension to \(\bar{K}\) is exactly the group homomorphism (4.0.1) in Blanchard's lemma._ _Moreover, the morphism \(\nu\) commutes with the base extension for an algebraic extension \(F/K\) in the following sense: if_ \[\tau:\mathbf{Aut}^{0}(X_{F})\to\mathbf{Aut}^{0}(Y_{F}) \tag{4.0.7}\] _is the natural group homomorphism, then \(\tau=\nu_{F}\) is the base extension of \(\nu\) to \(F\)._ Proof.: Let \[\mu:\mathbf{Aut}^{0}(X_{\bar{K}})\to\mathbf{Aut}^{0}(Y_{\bar{K}}),\quad[g] \mapsto[g_{Y}]\] be the natural group homomorphism (4.0.1) in Blanchard's lemma. Then \(\operatorname{Gal}(\bar{K}/K)\) naturally acts on both \(\mathbf{Aut}^{0}(X_{\bar{K}})\) and \(\mathbf{Aut}^{0}(Y_{\bar{K}})\). We claim that \(\mu\) is \(\operatorname{Gal}(\bar{K}/K)\)-invariant, that is, for any \([g]\in\mathbf{Aut}^{0}(X_{\bar{K}})\) and \(\sigma\in\operatorname{Gal}(\bar{K}/K)\), we have \(\mu(\sigma\cdot[g])=\sigma\cdot\mu([g])\). Note that we have a commutative diagram where the commutativity of the left square follows from Blanchard's lemma and the commutativity of the right square follows from that \(f\) is defined over \(K\). By the uniqueness in Blanchard's lemma, \(\mu([\sigma\circ g])=[\sigma\circ g_{Y}]\). As \([\sigma\circ g]=\sigma\cdot[g]\) and \([\sigma\circ g_{Y}]=\sigma\cdot[g_{Y}]\), we have \[\mu(\sigma\cdot[g])=[\sigma\circ g_{Y}]=\sigma\cdot[g_{Y}]=\sigma\cdot\mu([g ]).\] This shows that \(\mu\) is \(\operatorname{Gal}(\bar{K}/K)\)-invariant. As \[(\mathbf{Aut}^{0}(X))_{\bar{K}}=\mathbf{Aut}^{0}(X_{\bar{K}})\text{ and }( \mathbf{Aut}^{0}(Y))_{\bar{K}}=\mathbf{Aut}^{0}(Y_{\bar{K}})\] by [10, Lemma 9.5.1 (3)]. By the Galois descent for proper morphisms (See [11, Corollary 11.2.9]), there exists a morphism \[\nu:\mathbf{Aut}^{0}(X)\to\mathbf{Aut}^{0}(Y)\] such that its base extension to \(\bar{K}\) is exactly \(\mu\). Moreover, \(\nu\) is a group homomorphism because \(\mu\) is a group homomorphism. To see that \(\nu\) commutes with an algebraic base extension \(F/K\), without loss of generality, we can assume that \(F\subset\bar{K}\). The same argument as above shows that \(\mu\) is \(\operatorname{Gal}(\bar{K}/F)\)-invariant and thus descents to a morphism \(\mathbf{Aut}^{0}(X_{F})\to\mathbf{Aut}^{0}(Y_{F})\). 
By \(\mu=\nu_{\bar{K}}=(\nu_{F})_{\bar{K}}\), it must descent to \(\nu_{F}\). Recall that \(\operatorname{Aut}^{0}(X_{\eta},\Delta_{\eta})\coloneqq\mathbf{Aut}^{0}(X_{ \eta},\Delta_{\eta})(\eta)\). The following lemma will be used in Section 5. **Lemma 4.8**.: _Let \(\phi:X\dashrightarrow Y/S\) be a birational map between \(\mathbb{Q}\)-factorial projective varieties that are isomorphic in codimension \(1\). Let \(\Delta\) be a divisor on \(X\). Then there exists an isomorphism of groups_ \[\operatorname{Aut}^{0}(X_{\eta},\Delta_{\eta})\to\operatorname{Aut}^{0}(Y_{ \eta},\Delta_{Y,\eta}),\quad g\mapsto g_{Y},\] _where \(g_{Y}=\phi\circ g\circ\phi^{-1}\)._ Proof.: First, we show that there exists an isomorphism \[\mathbf{Aut}^{0}(X_{\bar{\eta}},\Delta_{\bar{\eta}})\to\mathbf{Aut}^{0}(Y_{\bar{ \eta}},\Delta_{Y,\bar{\eta}}),\quad[\bar{g}]\mapsto[\bar{g}_{Y}],\] where \(\bar{g}_{Y}=\bar{\phi}\circ\bar{g}\circ\bar{\phi}^{-1}\) with \(\bar{\phi}:X_{\bar{\eta}}\to X_{\bar{\eta}}\) induced from \(\phi\). Let \(\bar{H}_{Y}\) be an ample divisor on \(Y_{\bar{\eta}}\) and \(\bar{H}\) be its strict transform on \(X_{\bar{\eta}}\). Then because \(\bar{g}\) and Id lie in the same connected component of \(\mathbf{Aut}(X_{\bar{\eta}},\Delta_{\bar{\eta}})\), we have \(\bar{g}\cdot\bar{H}\equiv\bar{H}\) on \(X_{\bar{\eta}}\) and thus \(\bar{g}_{Y}\cdot\bar{H}_{Y}\equiv\bar{H}_{Y}\) on \(Y_{\bar{\eta}}\). In particular, \(\bar{g}_{Y}\cdot\bar{H}_{Y}\) is ample. As \(\bar{\phi}\) is isomorphic in codimension 1, so is \(\bar{g}_{Y}\). Hence the birational map \(\bar{g}_{Y}\) is an isomorphism. Thus we have a natural map \[\varphi:\mathbf{Aut}^{0}(X_{\bar{\eta}},\Delta_{\bar{\eta}})\to\mathbf{Aut}(Y_ {\bar{\eta}},\Delta_{Y,\bar{\eta}}),\quad[\bar{g}]\mapsto[\bar{g}_{Y}].\] As \(\mathbf{Aut}^{0}(X_{\bar{\eta}},\Delta_{\bar{\eta}})\) is a connected group variety containing the identity, we have \(\varphi(\mathbf{Aut}^{0}(X_{\bar{\eta}},\Delta_{\bar{\eta}}))\subset\mathbf{ Aut}^{0}(Y_{\bar{\eta}},\Delta_{Y,\bar{\eta}})\). The same argument on \(\phi^{-1}\) gives the desired isomorphism. By [10, Lemma 9.5.1 (3)], we have \[\mathbf{Aut}^{0}(X_{\bar{\eta}},\Delta_{\bar{\eta}})=\mathbf{Aut}^{0}(X_{\eta },\Delta_{\eta})_{\bar{\eta}}\text{ and }\mathbf{Aut}^{0}(Y_{\bar{\eta}},\Delta_{Y,\bar{ \eta}})=\mathbf{Aut}^{0}(Y_{\eta},\Delta_{Y,\eta})_{\bar{\eta}}. \tag{4.0.8}\] Hence, for any \(g\in\operatorname{Aut}^{0}(X_{\eta},\Delta_{\eta})\), we have \([\bar{g}]\in\mathbf{Aut}^{0}(X_{\bar{\eta}},\Delta_{\bar{\eta}})\), where \(\bar{g}\) is the base extension of \(g\) on \(X_{\bar{\eta}}\). Thus, \([\bar{g}_{Y}]\in\mathbf{Aut}^{0}(Y_{\bar{\eta}},\Delta_{Y,\bar{\eta}})\) by the previous argument. From this, we claim that the birational map \(g_{Y}=\phi\circ g\circ\phi^{-1}\) is indeed an isomorphism. Let \(H\) be an ample divisor on \(X_{\eta}\) and \(\bar{H}\) be the base extension of \(H\) on \(X_{\bar{\eta}}\). Thus \[(g_{Y}\cdot H)_{\bar{\eta}}=\bar{g}_{Y}\cdot\bar{H}.\] We claim that \(g_{Y}\cdot H\) is also ample. Replacing \(H\) by a multiple, we can assume that \(g_{Y}\cdot H\) is Cartier. 
If \(\mathcal{F}\) is a coherent sheaf on \(X_{\eta}\) with \(\bar{\mathcal{F}}\) the base extension of \(\mathcal{F}\) on \(X_{\bar{\eta}}\), then as \(\bar{\eta}\to\eta\) is a flat base extension, by [10, III Proposition 9.3], we have \[H^{i}(X_{\eta},\mathcal{F}\otimes\mathcal{O}_{X_{\eta}}(m(g_{Y}\cdot H)))_{ \bar{\eta}}\simeq H^{i}(X_{\bar{\eta}},\bar{\mathcal{F}}\otimes\mathcal{O}_{X_ {\bar{\eta}}}(m(\bar{g}_{Y}\cdot\bar{H}))).\] Therefore, \(g_{Y}\cdot H\) is ample by the cohomological criterion of ampleness (see [10, III Proposition 5.3]). Then \(g_{Y}\) is an isomorphism by the same reason as before. Thus \(g_{Y}\in\operatorname{Aut}(Y_{\eta},\Delta_{\eta})\). Since \([\bar{g}_{Y}]\in\mathbf{Aut}^{0}(Y_{\bar{\eta}},\Delta_{Y,\bar{\eta}})\), we have \(g_{Y}\in\operatorname{Aut}^{0}(Y_{\eta},\Delta_{Y,\eta})\) by (4.0.8). Applying the same argument to \(Y\dashrightarrow X/S\), it is straightforward to see that \(\operatorname{Aut}^{0}(X_{\eta},\Delta_{\eta})\to\operatorname{Aut}^{0}(Y_{ \eta},\Delta_{Y,\eta})\) is an isomorphism of groups. The following theorem is the main result of this section. **Theorem 4.9**.: _Let \(f:(X,\Delta)\to S\) be a klt Calabi-Yau fiber space, and \(\pi:X\to Y/S\) be a fibration over \(S\). Suppose that \(H\) is a nef and big\(/S\) Cartier divisor on \(Y\), and \(B=\pi^{*}H\). Then for any Cartier divisor \(\xi\) on \(Y_{\eta}\) such that \(\xi\equiv 0\), there exist \(h\in\operatorname{Aut}^{0}(X_{\eta},\Delta_{\eta})\) and \(\ell\in\mathbb{Z}_{>0}\) such that_ \[h\cdot B_{\eta}-B_{\eta}\sim\ell\pi_{\eta}^{*}\xi,\] _where \(\pi_{\eta}\) is the base extension of \(\pi\) to \(\eta\)._ Proof.: As \(\pi:X\to Y\) is a fibration between normal varieties, we have \(\pi_{*}\mathcal{O}_{X}=\mathcal{O}_{Y}\). Since \(\bar{\eta}\to S\) is a flat base extension, \(r:Y_{\bar{\eta}}\to Y\) is also a flat morphism. Let \(s:X_{\bar{\eta}}\to X\) and \(\pi_{\bar{\eta}}:X_{\bar{\eta}}\to Y_{\bar{\eta}}\) be the natural morphisms. As cohomology commutes with a flat base extension, we have \[\pi_{\bar{\eta}*}(s^{*}\mathcal{O}_{X})=r^{*}(\pi_{*}\mathcal{O}_{X}).\] By \(s^{*}\mathcal{O}_{X}=\mathcal{O}_{X_{\bar{\eta}}},r^{*}\mathcal{O}_{Y}=\mathcal{ O}_{Y_{\bar{\eta}}}\) and \(\pi_{*}\mathcal{O}_{X}=\mathcal{O}_{Y}\), we have \(\pi_{\bar{\eta}*}\mathcal{O}_{X_{\bar{\eta}}}=\mathcal{O}_{Y_{\bar{\eta}}}\). Hence the assumptions of Lemma 4.7 are satisfied. By Lemma 4.6, we have a natural group scheme action \[\mathbf{Aut}(Y_{\eta})\times_{\eta}\mathbf{Pic}(Y_{\eta})\to\mathbf{Pic}(Y_{ \eta}).\] As \([H_{\eta}]\in\mathbf{Pic}(Y_{\eta})\) is an \(\eta\)-point, we have the corresponding morphism \[\alpha_{H_{\eta}}:\mathbf{Aut}(Y_{\eta})\simeq\mathbf{Aut}(Y_{\eta})\times_{\eta }\left\{[H_{\eta}]\right\}\to\mathbf{Pic}(Y_{\eta}).\] Let \[(+):\mathbf{Pic}(Y_{\eta})\times_{\eta}\mathbf{Pic}(Y_{\eta})\to\mathbf{Pic}(Y_ {\eta})\] be the addition of abelian schemes. 
As \([-H_{\eta}]\in\mathbf{Pic}(Y_{\eta})\) is an \(\eta\)-point, we have the morphism which is "adding by \([-H_{\eta}]\)" \[(+)_{[-H_{\eta}]}:\mathbf{Pic}(Y_{\eta})\simeq\mathbf{Pic}(Y_{\eta})\times_{\eta}\left\{[-H_{\eta}]\right\}\stackrel{{(+)}}{{\longrightarrow}}\mathbf{Pic}(Y_{\eta}).\] By Lemma 4.7, we have the natural group homomorphism \[\nu:\mathbf{Aut}^{0}(X_{\eta},\Delta_{\eta})\hookrightarrow\mathbf{Aut}^{0}(X_{\eta})\to\mathbf{Aut}^{0}(Y_{\eta}).\] Composing the above morphisms, we get \[(+)_{[-H_{\eta}]}\circ\alpha_{H_{\eta}}\circ\nu:\mathbf{Aut}^{0}(X_{\eta},\Delta_{\eta})\to\mathbf{Aut}^{0}(Y_{\eta})\to\mathbf{Pic}(Y_{\eta})\to\mathbf{Pic}(Y_{\eta}).\] As \([\mathrm{Id}]\mapsto[0]\), and \(\mathbf{Aut}^{0}(X_{\eta},\Delta_{\eta})\) is connected, the image of \(\mathbf{Aut}^{0}(X_{\eta},\Delta_{\eta})\) is contained in \(\mathbf{Pic}^{0}(Y_{\eta})\). We denote this morphism by \[\Theta_{H_{\eta}}:\mathbf{Aut}^{0}(X_{\eta},\Delta_{\eta})\to\mathbf{Pic}^{0}(Y_{\eta}).\] By [10, Lemma 9.5.1 (3)], we have \[(\mathbf{Aut}^{0}(X_{\eta},\Delta_{\eta}))_{\bar{\eta}}=\mathbf{Aut}^{0}(X_{\bar{\eta}},\Delta_{\bar{\eta}})\text{ and }(\mathbf{Pic}^{0}(Y_{\eta}))_{\bar{\eta}}=\mathbf{Pic}^{0}(Y_{\bar{\eta}}).\] Then it is straightforward to see that \[(\Theta_{H_{\eta}})_{\bar{\eta}}:(\mathbf{Aut}^{0}(X_{\eta},\Delta_{\eta}))_{\bar{\eta}}\to(\mathbf{Pic}^{0}(Y_{\eta}))_{\bar{\eta}},\quad[\bar{g}]\mapsto[\bar{g}^{*}_{Y_{\bar{\eta}}}H_{\bar{\eta}}-H_{\bar{\eta}}]\] with \([g_{Y_{\bar{\eta}}}]=\nu_{\bar{\eta}}([\bar{g}])\) is exactly the morphism in Theorem 4.5 (1). Thus \((\Theta_{H_{\eta}})_{\bar{\eta}}\) is surjective by Theorem 4.5 (1). Therefore, there exists a finite Galois extension \(F/K(S)\) such that \[\mathbf{Aut}^{0}(X_{F},\Delta_{F})\cap(\Theta_{H_{F}})^{-1}([\xi_{F}])\] contains \((\mathrm{Spec}\,F)\)-points, where \(X_{F},\Delta_{F},\Theta_{H_{F}}\) and \(\xi_{F}\) correspond to \(X_{\eta},\Delta_{\eta},\Theta_{H_{\eta}}\) and \(\xi\) respectively after the base extension to \(F\). Take any \((\mathrm{Spec}\,F)\)-point \[\alpha\in\left(\mathbf{Aut}^{0}(X_{F},\Delta_{F})\cap(\Theta_{H_{F}})^{-1}([\xi_{F}])\right)(F).\] In what follows, we will construct a \(g\in\mathrm{Aut}^{0}(X_{\eta},\Delta_{\eta})\) from \(\alpha\). The addition of abelian groups \[(+)_{F}:\mathbf{Aut}^{0}(X_{F},\Delta_{F})\times_{\mathrm{Spec}\,F}\mathbf{Aut}^{0}(X_{F},\Delta_{F})\to\mathbf{Aut}^{0}(X_{F},\Delta_{F})\] admits a \(\mathrm{Gal}(F/K(S))\)-action. Then \((+)_{F}\) is \(\mathrm{Gal}(F/K(S))\)-invariant because \((+)_{F}\) is obtained by the base extension of \(\mathbf{Aut}^{0}(X_{\eta},\Delta_{\eta})\times_{\eta}\mathbf{Aut}^{0}(X_{\eta},\Delta_{\eta})\to\mathbf{Aut}^{0}(X_{\eta},\Delta_{\eta})\) to \(F\), which is defined over \(K(S)\). Therefore, \[\sum_{i=1}^{\ell}\sigma\cdot\alpha_{i}=\sigma\cdot\left(\sum_{i=1}^{\ell}\alpha_{i}\right)\in\mathrm{Aut}^{0}(X_{F},\Delta_{F}) \tag{4.0.9}\] for any \(\sigma\in\mathrm{Gal}(F/K(S))\) and \(\alpha_{i}\in\mathrm{Aut}^{0}(X_{F},\Delta_{F})\). We emphasize that (4.0.9) is the summation in the abelian group \(\mathrm{Aut}^{0}(X_{F},\Delta_{F})\). Let \[\tilde{g}\coloneqq\sum_{\sigma\in\mathrm{Gal}(F/K(S))}\sigma\cdot\alpha\in\mathrm{Aut}^{0}(X_{F},\Delta_{F}). \tag{4.0.10}\] Then \([\tilde{g}]\in\mathbf{Aut}^{0}(X_{F},\Delta_{F})\) is a \(\operatorname{Gal}(F/K(S))\)-invariant (\(\operatorname{Spec}F\))-point by (4.0.9). Thus \([\tilde{g}]\) descends to an \(\eta\)-point by Galois descent (see [10, Proposition 11.2.8]). 
To be precise, this means that there exists an \(\eta\)-point \([g]\in\mathbf{Aut}^{0}(X_{\eta},\Delta_{\eta})\) such that after the base extension to \(F\), we have \[[\tilde{g}]=[g_{F}]\in\mathbf{Aut}^{0}(X_{F},\Delta_{F})=\mathbf{Aut}^{0}(X_{\eta},\Delta_{\eta})_{F}.\] As usual, we still use \(g\) to denote the automorphism corresponding to \([g]\in\mathbf{Aut}^{0}(X_{\eta},\Delta_{\eta})\). Because \(H_{\eta}\) is defined over \(\eta\), we have \(\Theta_{H_{F}}([\sigma\cdot\alpha])=\Theta_{H_{F}}([\alpha])=[\xi_{F}]\). Thus, by (4.0.10), \[\Theta_{H_{F}}([\tilde{g}])=[\ell\xi_{F}], \tag{4.0.11}\] where \(\ell=|\operatorname{Gal}(F/K(S))|\). Let \(g_{Y_{\eta}}\in\operatorname{Aut}^{0}(Y_{\eta})\) be such that \([g_{Y_{\eta}}]=\nu([g])\) and \(\tilde{g}_{Y_{F}}\in\operatorname{Aut}^{0}(Y_{F})\) be such that \([\tilde{g}_{Y_{F}}]=\nu_{F}([\tilde{g}])\). By Lemma 4.7, we have \(\nu_{F}([\tilde{g}])=\nu([g])_{F}\), and thus \[\tilde{g}_{Y_{F}}=(g_{Y_{\eta}})_{F}.\] By (4.0.11), \[\Theta_{H_{F}}([\tilde{g}])=[\tilde{g}_{Y_{F}}^{*}(H_{F})-H_{F}]=[\ell\xi_{F}]\in\mathbf{Pic}^{0}(Y_{F}).\] That is, \[[\tilde{g}_{Y_{F}}^{*}(H_{F})-H_{F}-\ell\xi_{F}]=0\in\mathbf{Pic}^{0}(Y_{F}).\] Because \[[g_{Y_{\eta}}^{*}H_{\eta}-H_{\eta}-\ell\xi]_{F}=[\tilde{g}_{Y_{F}}^{*}(H_{F})-H_{F}-\ell\xi_{F}]=0\in\mathbf{Pic}^{0}(Y_{F})=\mathbf{Pic}^{0}(Y_{\eta})_{F},\] we see \([g_{Y_{\eta}}^{*}H_{\eta}-H_{\eta}-\ell\xi]=0\in\mathbf{Pic}^{0}(Y_{\eta})\), and thus \[g_{Y_{\eta}}^{*}(H_{\eta})-H_{\eta}\sim\ell\xi.\] As \(\pi_{\eta}\circ g=g_{Y_{\eta}}\circ\pi_{\eta}\) (because it holds after base extension to \(\bar{\eta}\) by Lemma 4.7) and \(B_{\eta}=\pi_{\eta}^{*}(H_{\eta})\), we have \[g^{*}B_{\eta}-B_{\eta}=\pi_{\eta}^{*}(g_{Y_{\eta}}^{*}(H_{\eta})-H_{\eta})\sim\ell\pi_{\eta}^{*}\xi.\] Since \(g^{*}B_{\eta}=g^{-1}\cdot(B_{\eta})\), where \(g^{-1}\in\operatorname{Aut}^{0}(X_{\eta},\Delta_{\eta})\), we have \[g^{-1}\cdot B_{\eta}-B_{\eta}\sim\ell\pi_{\eta}^{*}\xi\] as desired. **Remark 4.10**.: _A similar construction of \(g\) appeared in [11] for an ample divisor \(B\). We thank Yong Hu for clarifying the argument in [11]._ ## 5. Relative and generic cone conjectures In this section, we study the relationship between the cone conjecture of a Calabi-Yau fibration and that of its generic fiber. Recall that a polyhedral cone is closed by definition and \(\Gamma_{B}\) is the image of \(\operatorname{PsAut}(X/S,\Delta)\) under the group homomorphism \(\operatorname{PsAut}(X/S,\Delta)\to\operatorname{GL}(N^{1}(X/S)_{\mathbb{R}})\). By Definition 3.1, we set \[\operatorname{Mov}(X/S)_{+}\coloneqq\operatorname{Conv}(\overline{\operatorname{Mov}}(X/S)\cap N^{1}(X/S)_{\mathbb{Q}}).\] In the sequel, we need the following lemma: **Lemma 5.1**.: _Let \(\xi\) be a Cartier divisor on \(X_{\eta}\). Then \(\xi\) is movable on \(X_{\eta}\) if and only if \(\xi_{\bar{\eta}}\) is movable on \(X_{\bar{\eta}}\)._ Proof.: As \(\bar{\eta}\to\eta\) is a flat base extension, for any \(m\in\mathbb{Z}\), we have the natural isomorphism \(H^{0}(X_{\eta},m\xi)_{\bar{\eta}}\simeq H^{0}(X_{\bar{\eta}},m\xi_{\bar{\eta}})\) by [10, III Proposition 9.3]. Hence \(|m\xi|_{\bar{\eta}}=|m\xi_{\bar{\eta}}|\) and \[(\operatorname{Bs}|m\xi|)_{\bar{\eta}}=\operatorname{Bs}|m\xi_{\bar{\eta}}|,\] where \(\operatorname{Bs}|-|\) denotes the base locus of the linear system. Then the claim follows from the definition of movable divisors. Now we are ready to prove the first main result of this paper. 
Proof of Theorem 1.3.: Possibly enlarging \(P_{\eta}\), we can further assume that \(P_{\eta}\) is a rational polyhedral cone. By Lemma 2.6, \(\operatorname{PsAut}(X/S,\Delta)=\operatorname{PsAut}(X_{\eta},\Delta_{\eta})\), and hence we use the same notation to denote the corresponding birational maps. The argument proceeds in several steps. Step 1. In this step, we lift \(P_{\eta}\) to \(\operatorname{Mov}(X/S)\) after some modifications on \(P_{\eta}\). For any \([\xi]\in P_{\eta}\) with \(\xi\geq 0\), there exists a unique \(D\geq 0\) on \(X\) such that \(D_{\eta}=\xi\) and \(\operatorname{Supp}D\) does not have vertical components. We claim that if \(\xi\) is a movable divisor, then \(D\) is also movable on \(X/S\). Possibly replacing \(\xi\) by a multiple, we can assume that \(\operatorname{codim}\operatorname{Bs}|\xi|\geq 2\). If \(\xi\sim\xi^{\prime}\) on \(X_{\eta}\), then \(\xi-\xi^{\prime}=\operatorname{div}(\alpha)\) for some \(\alpha\in K(X)\). If \(D^{\prime}\) is a divisor such that \(D^{\prime}_{\eta}=\xi^{\prime}\), then \(D-D^{\prime}-\operatorname{div}(\alpha)\) is a vertical divisor. Note that if \(E\) is a prime vertical divisor, then there exists a vertical divisor \(E^{\prime}\geq 0\) such that \(-E\sim_{\mathbb{Q}}E^{\prime}/S\). Hence, \(D\sim_{\mathbb{Q}}D^{\prime}+F/S\) with \(F\geq 0\) a vertical divisor. As \(\operatorname{codim}\operatorname{Bs}|\xi|\geq 2\), there exist \(\xi^{i}\geq 0\), \(i=1,\dots,k\), such that \(\xi\sim\xi^{i}\) and \[\operatorname{codim}_{X_{\eta}}(\operatorname{Supp}\xi\cap\operatorname{Supp}\xi^{1}\cap\dots\cap\operatorname{Supp}\xi^{k})\geq 2.\] Then the above construction gives divisors \(D^{i}\geq 0,i=1,\dots,k\) such that \(D^{i}_{\eta}=\xi^{i}\) and \(D\sim_{\mathbb{Q}}D^{i}/S\) (note that \(\operatorname{Supp}D^{i}\) may have vertical components). Because \(\operatorname{Supp}D\) does not have vertical components, \[\operatorname{codim}_{X}(\operatorname{Supp}D\cap\operatorname{Supp}D^{1}\cap\dots\cap\operatorname{Supp}D^{k})\geq 2.\] Hence \(D\) is a movable divisor on \(X/S\), and this shows the claim. By Lemma 3.7 (4), there exists a natural injective map \[\mu:N^{1}(X_{\eta})\to N^{1}(X_{\bar{\eta}}),\quad[\xi]\mapsto[\bar{\xi}] \tag{5.0.1}\] that sends \(\operatorname{Mov}(X_{\eta})\) to \(\operatorname{Mov}(X_{\bar{\eta}})\) (but may not be surjective). Let \(P_{\bar{\eta}}\coloneqq\mu(P_{\eta})\) and \[G\coloneqq\{g_{\bar{\eta}}\mid g\in\operatorname{PsAut}(X_{\eta},\Delta_{\eta})\}\subset\operatorname{PsAut}(X_{\bar{\eta}},\Delta_{\bar{\eta}})\] be the subgroup. By Lemma 2.5 (2), we can assume that there are effective divisors \(\xi^{i},i=1,\dots,k\) on \(X_{\eta}\) such that 1. each \(\xi^{i}_{\bar{\eta}}\) is movable, 2. \(\Pi_{\bar{\eta}}=\operatorname{Cone}([\xi^{i}_{\bar{\eta}}]\mid i=1,\dots,k) \subset P_{\bar{\eta}}\), and 3. \(G\cdot\Pi_{\bar{\eta}}=(G\cdot P_{\bar{\eta}})\cap\operatorname{Mov}(X_{\bar{\eta}})\). By Lemma 5.1, each \(\xi^{i}\) is also movable. Let \(D^{i}\) be the unique divisor on \(X\) such that \(D^{i}_{\eta}=\xi^{i}\) and \(\operatorname{Supp}D^{i}\) does not have vertical components. Then \(D^{i}\) is movable on \(X/S\) by Step 1. Let \[\Pi\coloneqq\operatorname{Cone}([D^{i}]\mid i=1,\dots,k)\subset\operatorname{Mov}(X/S) \tag{5.0.2}\] be a rational polyhedral cone. Let \(\Pi_{\eta}\) be the image of \(\Pi\) under the natural map \(N^{1}(X/S)\to N^{1}(X_{\eta})\) (see Lemma 3.7 (1)). We claim that \[\operatorname{PsAut}(X_{\eta},\Delta_{\eta})\cdot\Pi_{\eta}=\operatorname{Mov}(X_{\eta}). 
\tag{5.0.3}\] In fact, by \(\operatorname{PsAut}(X_{\eta},\Delta_{\eta})\cdot P_{\eta}\supset\operatorname{Mov}(X_{\eta})\) and \(G\cdot\Pi_{\bar{\eta}}=(G\cdot P_{\bar{\eta}})\cap\operatorname{Mov}(X_{\bar{\eta}})\), we have \[G\cdot\Pi_{\bar{\eta}}\supset\mu(\operatorname{Mov}(X_{\eta})).\] As \(\mu\) is injective by Lemma 3.7, we have \(\operatorname{PsAut}(X_{\eta},\Delta_{\eta})\cdot\Pi_{\eta}\supset\operatorname{Mov}(X_{\eta})\). As \(\Pi_{\eta}\subset\operatorname{Mov}(X_{\eta})\) by construction, we have the desired equality. Replacing \(P\) by \(\Pi\), we can assume that \[P=\operatorname{Cone}([D^{i}]\mid i=1,\dots,k)\subset\operatorname{Mov}(X/S). \tag{5.0.4}\] Step 2. Let \(V\subset\operatorname{Eff}(X/S)\) be the vector space generated by vertical divisors. Our goal is to enlarge \(P\) to a rational polyhedral cone \(\Pi\) in such a way that a rational polyhedral subcone \(Q\subset\Pi+V\) fulfills the requirement of the theorem. First, we show that it suffices to construct a rational polyhedral cone \(\Pi\subset\operatorname{Eff}(X/S)\) so that for any \([B]\in\operatorname{Mov}(X/S)_{\mathbb{Q}}\), if \([B_{\eta}]\in P_{\eta}\), then \([B]\in(\operatorname{Aut}^{0}(X/S,\Delta)\cdot\Pi)+V\). Indeed, for any \([D]\in\operatorname{Mov}(X/S)_{\mathbb{Q}}\), by the definition of \(P_{\eta}\), there exists \(g\in\operatorname{PsAut}(X_{\eta},\Delta_{\eta})\) and \([B_{\eta}]\in P_{\eta}\) such that \(g\cdot[B_{\eta}]=[D_{\eta}]\). Thus \([(g^{-1}\cdot D)_{\eta}]\in P_{\eta}\). By the above claim, \([g^{-1}\cdot D]\in(\operatorname{Aut}^{0}(X/S,\Delta)\cdot\Pi)+V\). As \(\operatorname{Aut}^{0}(X/S,\Delta)\subset\operatorname{PsAut}(X/S,\Delta)\) and \(\operatorname{PsAut}(X/S,\Delta)\cdot V=V\), we have \([D]\in\operatorname{PsAut}(X/S,\Delta)\cdot(\Pi+V)\). This shows \(\operatorname{PsAut}(X/S,\Delta)\cdot(\Pi+V)\supset\operatorname{Mov}(X/S)_{\mathbb{Q}}\). By Lemma 2.5 (1), there exists a rational polyhedral cone \(Q\subset\Pi+V\) such that \[\operatorname{PsAut}(X/S,\Delta)\cdot(Q\cap N^{1}(X/S)_{\mathbb{Q}})=\operatorname{Mov}(X/S)_{\mathbb{Q}}.\] In the subsequent steps, we will proceed to construct a \(\Pi\) to satisfy the desired property in Step 2. Step 3. To begin, we analyze the property in Step 2 for a single effective \(\mathbb{Q}\)-Cartier divisor \(B\) such that \([B]\in\operatorname{Mov}(X/S)_{\mathbb{Q}}\) and \([B_{\eta}]\in P_{\eta}\). We will construct a rational polyhedral cone \(\Pi_{B}^{h_{i}}\) such that if \(D\) is an effective divisor with \(D_{\eta}\equiv B_{\eta}\), then \[[D]\in(\operatorname{Aut}^{0}(X/S,\Delta)\cdot\Pi_{B}^{h_{i}})+V.\] It suffices to do this for an \(mB\) with \(m\in\mathbb{Z}_{>0}\). Hence, replacing \(B\) by a multiple, \(B\) can be assumed to be a Cartier divisor. By Theorem 2.2 and Lemma 2.1, for some \(1\gg\epsilon>0\), \((X/S,\Delta+\epsilon B)\) has a minimal model \(\phi:X\dashrightarrow Y/S\) such that \(\phi\) is isomorphic in codimension \(1\) and \(B_{Y}\) is semi-ample\(/S\). Let \(\tau:Y\to Z/S\) be the contraction induced by \(B_{Y}\). Possibly replacing \(B\) by a multiple, we can further assume that \(B_{Y}=\tau^{*}H\) for an ample divisor \(H\) on \(Z/S\). Because \(D_{Y,\eta}\equiv B_{Y,\eta}\) on \(Y_{\eta}\), taking a sufficiently small open set \(U\subset S\), we see that \(D_{Y}|_{U}\equiv B_{Y}|_{U}\) is nef on \(Y_{U}/U\) by Lemma 3.7. Thus, \(D_{Y}|_{U}\) is semi-ample\(/U\) by Theorem 2.2. In particular, \(D_{Y}|_{U}=\tau_{U}^{*}H^{\prime}\) for an ample divisor \(H^{\prime}\) on \(Z_{U}/U\). 
This implies \[D_{Y,\eta}-B_{Y,\eta}=\tau_{\eta}^{*}(H_{\eta}-H_{\eta}^{\prime}), \tag{5.0.5}\] and \(H_{\eta}-H_{\eta}^{\prime}\equiv 0\) on \(Z_{\eta}\). According to Theorem 4.9, there exist \(g_{Y}\in\operatorname{Aut}^{0}(Y_{\eta},\Delta_{Y,\eta})\) and \(k\in\mathbb{Z}_{>0}\) such that \[g_{Y}\cdot B_{Y,\eta}-B_{Y,\eta}\sim k(D_{Y,\eta}-B_{Y,\eta}).\] As \(X_{\eta}\dashrightarrow Y_{\eta}\) is isomorphic in codimension 1, we have \[\operatorname{Aut}^{0}(X_{\eta},\Delta_{\eta})\simeq\operatorname{Aut}^{0}(Y_{ \eta},\Delta_{Y\eta}),\quad g\mapsto g_{Y}\coloneqq\phi\circ g\circ\phi^{-1}\] by Lemma 4.8. Hence \[g\cdot B_{\eta}-B_{\eta}\sim k(D_{\eta}-B_{\eta}),\] which implies \[g\cdot B-B\sim k(D-B)\mod(\text{vertical divisors}). \tag{5.0.6}\] We claim that under the natural map (see Lemma 3.7 (1)) \[\pi:\operatorname{Eff}(X/S)/V\hookrightarrow N^{1}(X/S)/V\to N^{1}(X_{ \eta}),\] the preimage of \([B_{\eta}]\in N^{1}(X_{\eta})\) is \[\pi^{-1}([B_{\eta}])=[B]+N, \tag{5.0.7}\] where \(N\) is a finite-dimensional vector space. In fact, replacing \(X,B_{\eta}\) by \(Y,B_{Y,\eta}\), respectively, we can assume that \(B_{\eta}\) is semi-ample over \(\eta\), and \(h:X_{\eta}\to Z_{\eta}\) is the contraction defined by \(B_{\eta}\). We will show that \[\pi^{-1}([B_{\eta}])=[B]+\operatorname{Span}_{\mathbb{R}}\{[h^{*}\xi]\mid\xi \equiv 0\text{ on }Z_{\eta}\}/V, \tag{5.0.8}\] where, by abusing notation, \([h^{*}\xi]\) denotes the divisor class \([\Xi]\in N^{1}(X/S)/V\) such that \(\Xi\) is a divisor on \(X\) satisfying \(h^{*}\xi\sim_{\mathbb{Q}}\Xi_{\eta}\). This \([\Xi]\) depends uniquely on \(\xi\). Suppose that \(D\geq 0\) is a \(\mathbb{Q}\)-Cartier divisor with \(D_{\eta}\equiv B_{\eta}\) on \(X_{\eta}\) (i.e., \(\pi([D])=[B_{\eta}]\)). By the same argument as before (see (5.0.5)), we have \([D]\in[B]+\operatorname{Span}_{\mathbb{R}}\{[h^{*}\xi]\mid\xi\equiv 0\text{ on }Z_{\eta}\}/V\). Conversely, any \(\mathbb{Q}\)-Cartier divisor \(D\) satisfying \([D]\in[B]+\operatorname{Span}_{\mathbb{R}}\{[h^{*}\xi]\mid\xi\equiv 0\text{ on }Z_{\eta}\}/V\) is \(\mathbb{Q}\)-linearly equivalent to an effective divisor (as \(B_{\eta}\) is the pullback of an ample divisor from \(Z_{\eta}\)). This shows the claim. Let \(D_{i}\geq 0,i=1,\cdots,t\) be Cartier divisors on \(X\) such that \(D_{i,\eta}\equiv B_{\eta}\) and \[[D_{i}]-[B],\quad i=1,\cdots,t\] generate \(N\) (they may not necessarily be a basis), where \(N\) is defined in (5.0.7). By (5.0.6), let \(h_{i}\in\operatorname{Aut}^{0}(X_{\eta},\Delta_{\eta})\) and \(k_{i}\in\mathbb{Z}_{>0}\) such that \[h_{i}\cdot B-B\sim k_{i}(D_{i}-B)\mod(\text{vertical divisors}),\quad i=1, \cdots,t.\] Define a rational polytope \[Q_{B}^{h_{i}}\coloneqq[B]+\sum_{i=1}^{t}[0,1][h_{i}\cdot B-B]\subset N^{1}(X/ S). \tag{5.0.9}\] Then \(\Pi_{B}^{h_{i}}\) is defined to be the rational polyhedral cone \[\Pi_{B}^{h_{i}}\coloneqq\operatorname{Cone}(Q_{B}^{h_{i}}).\] Let \(\tilde{Q}_{B}^{h_{i}}\) and \(\tilde{\Pi}_{B}^{h_{i}}\subset N^{1}(X/S)/V\) be the images of \(Q_{B}^{h_{i}}\) and \(\Pi_{B}^{h_{i}}\) under the natural quotient map \(N^{1}(X/S)\to N^{1}(X/S)/V\), respectively. We claim that \[\langle h_{i}\mid i=1,\cdots t\rangle\cdot\tilde{\Pi}_{B}^{h_{i}}\supset\pi^{- 1}([B_{\eta}]), \tag{5.0.10}\] where \(\langle h_{i}\mid i=1,\cdots t\rangle\subset\operatorname{Aut}^{0}(X/S,\Delta)\) is the subgroup generated by \(h_{i},i=1,\cdots t\). In particular, (5.0.10) implies that \(\Pi_{B}^{h_{i}}\) satisfies the desired property stated at the beginning of Step 3. 
As in the proof of Theorem 4.9, there exists a natural group homomorphism \[\operatorname{\mathbf{Aut}}^{0}(Y_{\eta},\Delta_{Y,\eta})\xrightarrow{\vartheta_{B_{Y,\eta}}}\operatorname{\mathbf{Pic}}^{0}(Y_{\eta}).\] The corresponding map on \(\eta\)-points \[\operatorname{Aut}^{0}(Y_{\eta},\Delta_{Y,\eta})\to\operatorname{Pic}^{0}(Y_{\eta}),\quad h_{Y}\mapsto h_{Y}^{*}B_{Y,\eta}-B_{Y,\eta}\] is also a group homomorphism of abelian groups. As \(\operatorname{Aut}^{0}(X_{\eta},\Delta_{\eta})\simeq\operatorname{Aut}^{0}(Y_{\eta},\Delta_{Y,\eta})\) by Lemma 4.8, \[\operatorname{Aut}^{0}(X_{\eta},\Delta_{\eta})\to\operatorname{Pic}^{0}(X_{\eta}),\quad h\mapsto h^{*}B_{\eta}-B_{\eta}\] is also a group homomorphism of abelian groups. Because \[h\cdot B_{\eta}-B_{\eta}=(h^{-1})^{*}\cdot B_{\eta}-B_{\eta},\] we see that \[\operatorname{Aut}^{0}(X_{\eta},\Delta_{\eta})\to\operatorname{Pic}(X_{\eta}),\quad h\mapsto h\cdot B_{\eta}-B_{\eta} \tag{5.0.11}\] is still a group homomorphism of abelian groups. If \(\{[D_{i}]-[B]\mid i=1,\cdots,s\}\) is a basis of \(N\), then \(\langle h_{i}\mid i=1,\cdots,s\rangle\) acts on \([B]+N\) by translation according to (5.0.11). Hence, the polytope \(\tilde{Q}_{B}^{h_{i}}\) tiles \[[B]+N=\pi^{-1}([B_{\eta}])\] under the action of \(\langle h_{i}\mid i=1,\cdots,s\rangle\). See Figure 1 below. This establishes (5.0.10). In summary, the above argument shows that as long as \[[D_{i}]-[B],\quad i=1,\cdots,t\] generate \(N\), we have \[\operatorname{Aut}^{0}(X/S,\Delta)\cdot\tilde{\Pi}_{B}^{h_{i}}\supset\pi^{-1}([B_{\eta}]). \tag{5.0.12}\] Step 4. In this step, we globalize the construction in Step 3 for all divisors in \(P_{\eta}\) uniformly. For each \(g\in\operatorname{Aut}^{0}(X_{\eta},\Delta_{\eta})\) and a Cartier divisor \(D\) on \(X\), set \[g\star D\coloneqq[g\cdot D-D]\in N^{1}(X/S)_{\mathbb{Z}}/(V\cap N^{1}(X/S)_{\mathbb{Z}}). \tag{5.0.13}\] This action extends to a linear map \[G_{g}:\bigoplus_{i=1}^{k}\mathbb{Q}\cdot D^{i}\to N^{1}(X/S)_{\mathbb{Q}}/(V\cap N^{1}(X/S)_{\mathbb{Q}})\] as follows: for \(D\coloneqq\sum_{i=1}^{k}a_{i}D^{i}\) with \(a_{i}\in\mathbb{Q}\), set \[G_{g}(\sum_{i=1}^{k}a_{i}D^{i})=\sum_{i=1}^{k}a_{i}(g\star D^{i})\in N^{1}(X/S)_{\mathbb{Q}}/(V\cap N^{1}(X/S)_{\mathbb{Q}}).\] Since \(\operatorname{Aut}^{0}(X/S,\Delta)=\operatorname{Aut}^{0}(X_{\eta},\Delta_{\eta})\) is an abelian group and \[\operatorname{Aut}^{0}(X_{\eta},\Delta_{\eta})\to\operatorname{Pic}(X_{\eta}),\quad h\mapsto h\star D_{\eta} \tag{5.0.14}\] is a group homomorphism for a Cartier divisor \(D\) (see (5.0.11)), we have \[G_{g+h}=G_{g}+G_{h}.\] This follows from \((G_{g+h}(D))_{\eta}=G_{g+h}(D_{\eta})\) and \[G_{g+h}(D_{\eta})=G_{g}(D_{\eta})+G_{h}(D_{\eta})\in\operatorname{Pic}(X_{\eta}).\] Hence \(G_{g+h}=G_{g}+G_{h}\). Let \(\operatorname{Aut}^{0}(X/S,\Delta)_{\mathbb{Q}}\coloneqq\operatorname{Aut}^{0}(X/S,\Delta)\otimes_{\mathbb{Z}}\mathbb{Q}\) be a \(\mathbb{Q}\)-vector space. For \(\tau=\sum_{j}r_{j}g_{j}\in\operatorname{Aut}^{0}(X/S,\Delta)_{\mathbb{Q}}\), set \[G_{\tau}(D)=\sum_{j}r_{j}(g_{j}\star D)\in N^{1}(X/S)/V.\] We need to show that this is well-defined. In fact, suppose that \(\tau=\sum_{t}r_{t}^{\prime}g_{t}^{\prime}\in\operatorname{Aut}^{0}(X/S,\Delta)_{\mathbb{Q}}\). Choose \(m\in\mathbb{Z}_{>0}\) such that \(mr_{j},mr_{t}^{\prime}\in\mathbb{Z}\) for all \(j,t\). Then \(m\tau\in\operatorname{Aut}^{0}(X/S,\Delta)\). 
Hence, by the group homomorphism (5.0.14), \[(mr_{j}g_{j})\star D^{i}=mr_{j}(g_{j}\star D^{i}),\quad(mr_{t}^{\prime}g_{t}^ {\prime})\star D^{i}=mr_{t}^{\prime}(g_{t}^{\prime}\star D^{i}),\text{ and}\] \[G_{m\tau}(D)=\sum_{i=1}^{k}a_{i}((m\tau)\star D^{i})=\sum_{i=1}^{k}a_{i}\left( \sum_{j}mr_{j}(g_{j}\star D^{i})\right)=\sum_{i=1}^{k}a_{i}\left(\sum_{t}mr_{t }^{\prime}(g_{t}^{\prime}\star D^{i})\right).\] Therefore, \(G_{\tau}\) is independent of the choice of the expression of \(\tau\). Thus, we obtain a map \[\operatorname{Aut}^{0}(X/S,\Delta)_{\mathbb{Q}}\times\bigoplus_{i=1}^{k} \mathbb{Q}\cdot D^{i}\to N^{1}(X/S)_{\mathbb{Q}}/V,\quad(\tau,D)\mapsto G_{ \tau}(D),\] and the natural corresponding map \[j:\operatorname{Aut}^{0}(X/S,\Delta)_{\mathbb{Q}}\to\operatorname{Hom}_{ \mathbb{Q}}(\bigoplus_{i=1}^{k}\mathbb{Q}\cdot D^{i},N^{1}(X/S)_{\mathbb{Q}} /V)\] is linear. Choose \(g_{1},\cdots,g_{\ell}\in\operatorname{Aut}^{0}(X/S,\Delta)\) such that \(\{G_{g_{1}},\cdots,G_{g_{\ell}}\}\) is a basis of \(j(\operatorname{Aut}^{0}(X/S,\Delta)_{\mathbb{Q}})\). By (5.0.9), for each \(D_{i}\), there is a rational polytope \(Q_{D_{i}}^{g_{j}}\) associated to \(g_{1},\cdots,g_{\ell}\). Let \[\Lambda\coloneqq\sum_{1\leq i\leq k}[0,1]Q_{D_{i}}^{g_{j}}\subset N^{1}(X/S) \tag{5.0.15}\] be a rational polytope. We claim that the rational polyhedral cone \[\Pi\coloneqq\operatorname{Cone}(\Lambda) \tag{5.0.16}\] satisfies the requirement. Recall, this means that for any \([D]\in\operatorname{Mov}(X/S)_{\mathbb{Q}}\), if \([D_{\eta}]\in P_{\eta}\), then \([D]\in\left(\operatorname{Aut}^{0}(X/S,\Delta)\cdot\Pi\right)+V\). Let \[B=\sum_{i}r_{i}D^{i},\quad r_{i}\in[0,1]\cap\mathbb{Q}. \tag{5.0.17}\] By Step 3, for each \(B\), there exists \(\{h_{i}\in\operatorname{Aut}^{0}(X/S,\Delta)\mid 1\leq i\leq t\}\) such that \[\{[h_{i}\cdot B-B]\mid i=1,\cdots,t\}\] generates \(N\). By the choice of \(g_{i}\), we have \[\operatorname{Span}_{\mathbb{Q}}\{G_{g_{i}}\mid 1\leq i\leq\ell\}\supset \operatorname{Span}_{\mathbb{Q}}\{G_{h_{i}}\mid 1\leq i\leq t\}.\] Thus \[\operatorname{Span}_{\mathbb{Q}}\{G_{g_{i}}(B)\mid 1\leq i\leq\ell\}\supset \operatorname{Span}_{\mathbb{Q}}\{G_{h_{i}}(B)\mid 1\leq i\leq t\}.\] That is, \[\{[g_{j}\cdot B-B]\mid j=1,\cdots,\ell\}\] generates \(N\). Therefore, as shown by Step 3, we have \[\operatorname{Aut}^{0}(X/S,\Delta)\cdot\tilde{\Pi}_{B}^{g_{i}}\supset\pi^{-1} ([B_{\eta}]).\] Hence, to show the claim for \(\Pi\), it suffices to show \(\Pi\supset\Pi_{B}^{g_{i}}\) for any \(B=\sum_{i}r_{i}D^{i},r_{i}\in\mathbb{Q}_{\geq 0}\). In fact, if \([D]\in\operatorname{Mov}(X/S)_{\mathbb{Q}}\) satisfies \([D_{\eta}]\in P_{\eta}\), then there exists a \(B=\sum_{i}r_{i}D^{i},r_{i}\in\mathbb{Q}_{\geq 0}\) such that \(D_{\eta}\equiv B_{\eta}\), that is, \([D]\in\pi^{-1}([B_{\eta}])\). By (5.0.17), \[g_{j}\cdot B=\sum_{i}r_{i}(g_{j}\cdot D^{i}),\] and thus for \(0\leq\mu_{j}\leq 1,1\leq j\leq\ell\), we have \[B+\sum_{j}\mu_{j}(g_{j}\cdot B-B)=\sum_{i}r_{i}\left(D^{i}+\sum_{j}\mu_{j}(g_{ j}\cdot D^{i}-D^{i})\right).\] By the construction of \(Q_{B}^{g_{i}}\) (see (5.0.9)) and \(\Lambda\) (see (5.0.15)), we have \(\Pi_{B}^{g_{i}}\subset\Pi\). Finally, by Lemma 2.5 (1), there exists a rational polyhedral cone \(Q\subset\Lambda\cap\operatorname{Mov}(X/S)\) such that \(\operatorname{PsAut}(X,\Delta)\cdot(Q\cap N^{1}(X/S)_{\mathbb{Q}})= \operatorname{Mov}(X/S)_{\mathbb{Q}}\). This completes the proof. 
**Remark 5.2**.: _Let \(V,W\) be \(\mathbb{Q}\)-vector spaces, and \(O\subset\operatorname{Hom}_{\mathbb{Q}}(V,W)\) be a subspace. Fix an affine space \(L\subset V\). Suppose that for each \(v\in L\), we have \(\operatorname{Span}_{\mathbb{Q}}\{Gv\mid G\in O\}=W\). This does not imply that the same property holds for \(L_{\mathbb{R}}\). That is, there may exist \(v\in L_{\mathbb{R}}-L\) such that \(\operatorname{Span}_{\mathbb{R}}\{Gv\mid G\in O\}\neq W_{\mathbb{R}}\). We have the following example:_ **Example 5.3**.: _Let \(V=W=\mathbb{Q}^{2}\), \(O=\operatorname{Span}_{\mathbb{Q}}\{G_{1}=\begin{pmatrix}1&2\\ 3&4\end{pmatrix},G_{2}=\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\}\), and \(L=\{x=-1\}\). Then one can check_ \[\dim_{\mathbb{R}}\operatorname{Span}_{\mathbb{R}}\{Gv\mid G\in O\}=\left\{ \begin{array}{ll}1&v=(-1,\frac{\pm\sqrt{33}-3}{4}),\\ 2&v\in L\text{ and }v\neq(-1,\frac{\pm\sqrt{33}-3}{4}).\end{array}\right.\] _Thus, we cannot obtain \(\operatorname{Aut}^{0}(X/S,\Delta)\cdot Q=\operatorname{Mov}(X/S)\) from \(\operatorname{Aut}^{0}(X/S,\Delta)\cdot(Q\cap N^{1}(X/S)_{\mathbb{Q}})= \operatorname{Mov}(X/S)_{\mathbb{Q}}\)._ ## 6. Finiteness of contractions, minimal models, and the existence of weak fundamental domains ### Finiteness of contractions and minimal models The contractions of varieties are governed by the nef cones rather than movable cones. Nevertheless, the finiteness of (targets of) the contractions can be derived from the movable cone conjecture as stated in Theorem 1.5. Proof of Theorem 1.5.: Note that \[C:=\operatorname{Cone}\{g^{*}H\mid g:X\to Z/S,H\text{ is an ample}/S\text{ Cartier divisor on }Z\}\subset\operatorname{Mov}(X/S).\] By Lemma 2.5 (1), there exists a rational polyhedral cone \(Q\subset\operatorname{Mov}(X/S)\) such that \[\operatorname{PsAut}(X/S,\Delta)\cdot(Q\cap N^{1}(X/S)_{\mathbb{Q}})= \operatorname{Mov}(X/S)_{\mathbb{Q}}\supset C\cap N^{1}(X/S)_{\mathbb{Q}}.\] Let \(Q=\sqcup_{i=1}^{m}Q_{i}^{\circ}\) be the decomposition as in Theorem 2.4 such that for effective divisors \(B,D\) with \([B],[D]\in Q_{i}^{\circ}\) and \(1\gg\epsilon>0\), the klt pairs \((X,\Delta+\epsilon B)\) and \((X,\Delta+\epsilon D)\) share same minimal models. Let \(\sigma_{i}:X\dashrightarrow Y_{i}\) be a minimal model that corresponds to \(Q_{i}^{\circ}\). By Lemma 2.1, we can further assume that \(\sigma_{i}\) is isomorphic in codimension 1. Let \(Q_{i}=\overline{Q_{i}^{\circ}}\) be the closure of \(Q_{i}^{\circ}\), then the strict transform of the rational polyhedral cone \(Q_{i}\), denoted by \(Q_{i}^{Y_{i}}\), is contained in \(\overline{\operatorname{Amp}}(Y_{i}/S)\). Hence, there are finitely many contractions \(h_{i}^{i_{k}}:Y_{i}\to Z_{i}^{i_{k}}/S\) corresponding to the faces of \(Q_{i}^{Y_{i}}\). We claim that if \(g:X\to Z/S\) is a contraction, then \(Z\) must be isomorphic to one of \(Z_{i}^{i_{k}}\). Let \(H\) be an ample divisor on \(Z\). Then \(g^{*}H\in C\). Hence there exists \(\tau\in\operatorname{PsAut}(X/S,\Delta)\) such that \(\tau_{*}(g^{*}H)\in Q\). Suppose that \(\tau_{*}(g^{*}H)\in Q_{i}^{\circ}\), then \(\sigma_{i*}(\tau_{*}(g^{*}H))\) is nef\(/S\) on \(Y_{i}\). By Theorem 2.2, \(\sigma_{i*}(\tau_{*}(g^{*}H))\) is semi-ample\(/S\). Thus there exists some \(i_{k}\) such that \[Z_{i}^{i_{k}}=\operatorname{Proj}_{S}R(Y_{i},n\sigma_{i*}(\tau_{*}(g^{*}H)))\] for a fixed \(n\in\mathbb{Z}_{>0}\) such that \(n\sigma_{i*}(\tau_{*}(g^{*}H))\) is Cartier. 
Here \[R(Y_{i},n\sigma_{i*}(\tau_{*}(g^{*}H)))\coloneqq\bigoplus_{m\in\mathbb{Z}_{\geq 0}}H^{0}(Y_{i},\mathcal{O}_{Y_{i}}(mn\sigma_{i*}(\tau_{*}(g^{*}H)))).\] Similarly, \(Z=\operatorname{Proj}_{S}R(X,ng^{*}H)\). As \[\sigma_{i}\circ\tau:X\dashrightarrow X\dashrightarrow Y_{i}\] is a composition of birational maps that are isomorphic in codimension 1, we have \[R(X,ng^{*}H)\simeq R(X,n\tau_{*}g^{*}H)\simeq R(Y_{i},n\sigma_{i*}(\tau_{*}(g^{*}H))).\] Therefore, \(Z\simeq Z_{i}^{i_{k}}/S\). **Remark 6.1**.: _Theorem 1.5 is weaker than the consequence of the cone conjecture for nef cones, which predicts that the contractions \(\{X\to Z/S\}\) are finite up to isomorphisms on \(X/S\). In the proof of Theorem 1.5, we actually showed that \(\{X\to Z/S\}\) are finite up to pseudo-automorphisms on \(X/S\)._ Proof of Corollary 1.6.: By Theorem 1.3, there exists a rational polyhedral cone \(Q\subset\operatorname{Mov}(X/S)\) such that \(\operatorname{PsAut}(X/S,\Delta)\cdot Q\supset\operatorname{Mov}(X/S)_{\mathbb{Q}}\). Then the claim follows from Theorem 1.5. Theorem 1.3 does not imply the existence of a weak fundamental domain directly (this will be addressed in Section 6.2 using deep results on the geometry of convex cones). But it at least implies the finiteness of models assuming that good minimal models exist. Proof of Theorem 1.4.: By Theorem 1.3, there exists a rational polyhedral cone \(Q\subset\operatorname{Mov}(X/S)\) such that \(\operatorname{PsAut}(X,\Delta)\cdot Q\supset\operatorname{Mov}(X/S)_{\mathbb{Q}}\). Let \(Q=\sqcup_{i=1}^{m}Q_{i}^{\circ}\) be the decomposition as in Theorem 2.4 such that for effective divisors \(B,D\) with \([B],[D]\in Q_{i}^{\circ}\) and \(1\gg\epsilon>0\), the klt pairs \((X,\Delta+\epsilon B)\) and \((X,\Delta+\epsilon D)\) share the same minimal models. Let \(\sigma_{i}:X\dashrightarrow Y^{i}\) be a minimal model that corresponds to \(Q_{i}^{\circ}\). By Lemma 2.1, there exist a birational map \(g_{i}:X\dashrightarrow W^{i}/S\) that is isomorphic in codimension 1 and a morphism \(\nu:W^{i}\to Y^{i}\) such that \(\nu\circ g_{i}=\sigma_{i}\). First, we show that for any \(\mathbb{Q}\)-factorial variety \(W\) such that \(X\dashrightarrow W/S\) is isomorphic in codimension 1, there exists \(i\), \(1\leq i\leq m\), such that \(W\simeq W^{i}\). In fact, let \(H\) be an ample\(/S\) Cartier divisor on \(W\). Let \(H_{X}\) be the strict transform of \(H\) on \(X\). Then there exists \(\tau\in\operatorname{PsAut}(X,\Delta)\) such that \(\tau_{*}H_{X}\in Q\) and thus \(\tau_{*}H_{X}\in Q_{i}^{\circ}\) for some \(i,1\leq i\leq m\). Then by the definition of \(Q_{i}^{\circ}\) and \(W^{i}\), we see that \(g_{i*}(\tau_{*}H_{X})\) is nef\(/S\). Note that the nef\(/S\) divisor \(g_{i*}(\tau_{*}H_{X})\) is also the strict transform of the ample\(/S\) divisor \(H\) through the birational maps \[W\dashrightarrow X\stackrel{{\tau}}{{\dashrightarrow}}X\stackrel{{ g_{i}}}{{\dashrightarrow}}W^{i}/S\] that are all isomorphic in codimension \(1\). As \(W,W^{i}\) are both \(\mathbb{Q}\)-factorial varieties, we have \(W\simeq W^{i}/S\). Next, as \(g_{i}\) is isomorphic in codimension \(1\), if \(P_{\eta}^{i}\) is the strict transform of \(P_{\eta}\) on \(W_{\eta}^{i}\), then we still have \[\operatorname{PsAut}(W_{\eta}^{i},\Delta_{W^{i},\eta})\cdot P_{\eta}^{i}\supset\operatorname{Mov}(W_{\eta}^{i}).\] Therefore, by Corollary 1.6, \[\{Z\mid W^{i}\to Z/S\text{ is a contraction, }1\leq i\leq m\}\] is a finite set. By the first part, \(Y^{i}\) must belong to this finite set. 
**Remark 6.2**.: _In practice, for a klt Calabi-Yau fibration \(f:(X,\Delta)\to S\), one can first take a terminalization \((\tilde{X}/S,\tilde{\Delta})\) of \((X/S,\Delta)\). Then because a minimal model of \((X/S,\Delta)\) is also a minimal model of \((\tilde{X}/S,\tilde{\Delta})\), one can apply Theorem 1.4 to \((\tilde{X}/S,\tilde{\Delta})\) to obtain the finiteness of minimal models of \((X/S,\Delta)\)._ The following result generalizes both [10] and [11], where the finiteness of minimal models is established for threefolds with \(\dim(X/S)\leq 2\) and for elliptic fibrations, respectively. **Corollary 6.3**.: _Let \(f:X\to S\) be a canonical Calabi-Yau fibration. Suppose that \(\dim(X/S)\leq 2\). Then \(X/S\) has finitely many minimal models._ Proof.: Let \(\nu:\tilde{X}\to X/S\) be a terminalization of \(X\). Then \(\tilde{X}\) is a \(\mathbb{Q}\)-factorial terminal variety and \(K_{\tilde{X}}=\nu^{*}K_{X}\). As a minimal model of \(X/S\) is also a minimal model of \(\tilde{X}/S\), replacing \(X\) by \(\tilde{X}\), we can assume that \(f:X\to S\) is a \(\mathbb{Q}\)-factorial terminal Calabi-Yau fibration (see Remark 6.2). When \(\dim(X_{\eta})\leq 2\), there exists a rational polyhedral cone \(P_{\eta}\subset\operatorname{Eff}(X_{\eta})\) such that \[\operatorname{PsAut}(X_{\eta})\cdot P_{\eta}=\overline{\operatorname{Mov}}(X_{\eta})\cap\operatorname{Eff}(X_{\eta})\supset\operatorname{Mov}(X_{\eta}).\] In fact, this automatically holds when \(\dim(X_{\eta})\leq 1\), and follows from the Morrison-Kawamata cone conjecture for surfaces when \(\dim(X_{\eta})=2\) (see [10, Remark 2.2]). Besides, it is known that good minimal models exist for effective klt pairs in dimension \(\leq 3\). Then the claim follows from Theorem 1.4. Corollary 6.3 is also expected to hold for klt pairs. For this purpose, it suffices to show the Morrison-Kawamata cone conjecture for terminal log Calabi-Yau surfaces over a non-algebraically closed field of characteristic \(0\). This case will be addressed elsewhere. Using Corollary 6.3, we can show that if \(\dim X-\kappa(X)\leq 2\), then \(X\) admits only finitely many minimal models. Here \(\kappa(X)\) is the Kodaira dimension of \(X\). Note that such an \(X\) may not necessarily be a Calabi-Yau variety. It is straightforward to generalize this result to the relative setting; however, we just state the absolute case for the sake of simplicity. Proof of Corollary 1.7.: By [1], \(X\) admits a canonical model \(f:X\dashrightarrow S\), where \(S=\operatorname{Proj}\oplus_{m\in\mathbb{Z}_{\geq 0}}H^{0}(X,mnK_{X})\) for some \(n\in\mathbb{Z}_{>0}\) such that \(nK_{X}\) is Cartier. Moreover, \(\dim S=\kappa(X)\). Let \(p_{1}:W\to X\) be a birational morphism from a smooth variety \(W\) such that \(q_{1}\coloneqq f\circ p_{1}:W\to S\) is a morphism. Moreover, \(q_{1}\) is a fibration by the definition of the canonical model. Then \(K_{W}=p_{1}^{*}K_{X}+E\) with \(E\geq 0\) a \(p_{1}\)-exceptional divisor. If \(X\dashrightarrow Y\) is a minimal model of \(X\), then it is a standard fact that \(Y\) is also a minimal model of \(W\) under the natural map \(W\to X\dashrightarrow Y\). As \(\dim(W/S)=\dim X-\kappa(X)\leq 2\), by [11, Theorem 0.2], \(W\) has a good minimal model \(\theta:W\dashrightarrow Y/\operatorname{Spec}\mathbb{C}\). Thus, the semi-ample divisor \(K_{Y}\) induces the natural morphism \(Y\to S\) as \(S\) is also the canonical model of \(Y\). Moreover, \(Y\) has canonical singularities and \(K_{Y}\sim_{\mathbb{Q}}0/S\). 
Let \(\mu:W\dashrightarrow Y^{\prime}\) be another minimal model of \(W\). Suppose that \(h:T\to W\), \(p:T\to Y\) and \(q:T\to Y^{\prime}\) are birational morphisms such that \[\theta\circ h=p,\quad\mu\circ h=q.\] Then \[h^{*}K_{W}=p^{*}K_{Y}+E,\quad h^{*}K_{W}=q^{*}K_{Y^{\prime}}+F,\] where \(E\geq 0\) and \(F\geq 0\) are \(p\)-exceptional and \(q\)-exceptional divisors, respectively. As \(K_{Y}\) and \(K_{Y^{\prime}}\) are nef, \(E\) is the negative part in the Nakayama-Zariski decomposition of \(p^{*}K_{Y}+E\). Similarly, \(F\) is the negative part in the Nakayama-Zariski decomposition of \(q^{*}K_{Y^{\prime}}+F\). Therefore, \(E=F\) and \[p^{*}K_{Y}=q^{*}K_{Y^{\prime}}. \tag{6.1.1}\] In particular, \(K_{Y^{\prime}}\) is also semi-ample, and it induces the natural morphism \(Y^{\prime}\to S\). Let \(\tau:\tilde{Y}\to Y\) be a \(\mathbb{Q}\)-factorial terminalization of \(Y\). We claim that \(\tilde{\mu}:\tilde{Y}\dashrightarrow Y^{\prime}\) does not extract divisors. Thus \(Y^{\prime}\) is a minimal model of \(\tilde{Y}\) and also a minimal model of \(\tilde{Y}\) over \(S\). Let \(\tilde{p}:\tilde{T}\to\tilde{Y}\) and \(q^{\prime}:\tilde{T}\to Y^{\prime}\) be birational morphisms from a smooth variety \(\tilde{T}\) such that \(\tilde{\mu}\circ\tilde{p}=q^{\prime}\). By (6.1.1), \[\tilde{p}^{*}K_{\tilde{Y}}=\tilde{p}^{*}(\tau^{*}K_{Y})=q^{\prime*}K_{Y^{\prime}}. \tag{6.1.2}\] Write \(K_{\tilde{T}}=\tilde{p}^{*}K_{\tilde{Y}}+\tilde{E}\) and \(K_{\tilde{T}}=q^{\prime*}K_{Y^{\prime}}+E^{\prime}\). Then \(\tilde{E}\) and \(E^{\prime}\) are \(\tilde{p}\)-exceptional and \(q^{\prime}\)-exceptional divisors, respectively. Moreover, as \(\tilde{Y}\) has \(\mathbb{Q}\)-factorial terminal singularities, \(\operatorname{Supp}\tilde{E}=\operatorname{Exc}(\tilde{p})\). By (6.1.2), \(\tilde{E}=E^{\prime}\). Thus any \(\tilde{p}\)-exceptional divisor is also \(q^{\prime}\)-exceptional. This shows that \(\tilde{\mu}:\tilde{Y}\dashrightarrow Y^{\prime}\) does not extract divisors. It follows from Corollary 6.3 that the terminal Calabi-Yau fibration \(\tilde{Y}\to S\) has only finitely many minimal models. This completes the proof. ### Existence of weak rational polyhedral fundamental domains Finally, we address the existence of weak rational polyhedral fundamental domains for \(\operatorname{Mov}(X/S)_{+}\). Since \(\operatorname{Mov}(X/S)_{+}\not\subset\operatorname{Eff}(X/S)\) in general, the existence of weak rational polyhedral fundamental domains does not imply the finiteness of minimal models by the Shokurov polytope argument. However, when \(R^{1}f_{*}\mathcal{O}_{X}=0\), we do have \(\operatorname{Mov}(X/S)_{+}=\operatorname{Mov}(X/S)\subset\operatorname{Eff}(X/S)\) by [12, Proposition 5.3]. Proof of Theorem 1.8.: Let \(W\) be the maximal vector space in \(\overline{\operatorname{Mov}}(X/S)\). By Proposition 3.8, \(W\) is defined over \(\mathbb{Q}\). For any set \(T\subset N^{1}(X/S)\), let \(\widetilde{T}\) be the image of \(T\) under the quotient map \(N^{1}(X/S)\to N^{1}(X/S)/W\). By Theorem 1.3, there exists a rational polyhedral cone \(Q\subset\operatorname{Mov}(X/S)\) such that \[\operatorname{PsAut}(X,\Delta)\cdot(Q\cap N^{1}(X/S)_{\mathbb{Q}})=\operatorname{Mov}(X/S)_{\mathbb{Q}}.\] Therefore, \[\operatorname{PsAut}(X,\Delta)\cdot\widetilde{Q}\supset\widetilde{\operatorname{Mov}(X/S)}_{\mathbb{Q}}.\] Note that \(\widetilde{\operatorname{Mov}(X/S)}_{\mathbb{R}}\) is non-degenerate and \(\widetilde{Q}\subset\widetilde{\operatorname{Mov}(X/S)}_{+}\). 
By Proposition 3.2 (3), we see that \((\widetilde{\operatorname{Mov}(X/S)}_{+},\tilde{\Gamma}_{B})\) is of polyhedral type, where \(\tilde{\Gamma}_{B}\) is the image of \(\operatorname{PsAut}(X/S,\Delta)\) in \(\operatorname{GL}(N^{1}(X/S)_{\mathbb{R}}/W)\). By Proposition 3.6, there is a rational polyhedral cone \(\Pi\subset\operatorname{Mov}(X/S)_{+}\) such that \(\Gamma_{B}\cdot\Pi=\operatorname{Mov}(X/S)_{+}\), and for each \(\gamma\in\Gamma_{B}\), either \(\gamma\Pi\cap\operatorname{Int}(\Pi)=\emptyset\) or \(\gamma\Pi=\Pi\). Thus, \(\Pi\) is a weak rational polyhedral fundamental domain under the action of \(\Gamma_{B}\). **Corollary 6.4**.: _Let \(f:X\to S\) be a terminal Calabi-Yau fibration. Suppose that \(\dim(X/S)\leq 2\). Then \(\operatorname{Mov}(X/S)_{+}\) admits a weak rational polyhedral fundamental domain under the action of \(\Gamma_{B}\)._ Proof.: As mentioned in the proof of Corollary 6.3, when \(\dim(X_{\eta})\leq 2\), there exists a rational polyhedral cone \(P_{\eta}\subset\operatorname{Eff}(X_{\eta})\) such that \[\operatorname{PsAut}(X_{\eta})\cdot P_{\eta}=\overline{\operatorname{Mov}}(X_{\eta})\cap\operatorname{Eff}(X_{\eta})\supset\operatorname{Mov}(X_{\eta}).\] Besides, it is known that good minimal models exist for effective klt pairs in dimension \(\leq 3\). Then the claim follows from Theorem 1.8.
2309.14460
Online Active Learning For Sound Event Detection
Data collection and annotation is a laborious, time-consuming prerequisite for supervised machine learning tasks. Online Active Learning (OAL) is a paradigm that addresses this issue by simultaneously minimizing the amount of annotation required to train a classifier and adapting to changes in the data over the duration of the data collection process. Prior work has indicated that fluctuating class distributions and data drift are still common problems for OAL. This work presents new loss functions that address these challenges when OAL is applied to Sound Event Detection (SED). Experimental results from the SONYC dataset and two Voice-Type Discrimination (VTD) corpora indicate that OAL can reduce the time and effort required to train SED classifiers by a factor of 5 for SONYC, and that the new methods presented here successfully resolve issues present in existing OAL methods.
Mark Lindsey, Ankit Shah, Francis Kubala, Richard M. Stern
2023-09-25T18:48:36Z
http://arxiv.org/abs/2309.14460v1
# Online Active Learning for Sound Event Detection

###### Abstract

Data collection and annotation is a laborious, time-consuming prerequisite for supervised machine learning tasks. Online Active Learning (OAL) is a paradigm that addresses this issue by simultaneously minimizing the amount of annotation required to train a classifier and adapting to changes in the data over the duration of the data collection process. Prior work has indicated that fluctuating class distributions and data drift are still common problems for OAL. This work presents new loss functions that address these challenges when OAL is applied to Sound Event Detection (SED). Experimental results from the SONYC dataset and two Voice-Type Discrimination (VTD) corpora indicate that OAL can reduce the time and effort required to train SED classifiers by a factor of 5 for SONYC, and that the new methods presented here successfully resolve issues present in existing OAL methods.

Mark Lindsey, Ankit Shah, Richard M. Stern (Carnegie Mellon University, 5000 Forbes Ave, Pittsburgh, PA 15213), and Francis Kubala. This work is supported by a corporate gift from Probity, Inc.

Keywords: Active learning, online learning, sound event detection, data drift

## 1 Introduction

Data annotation has always been a bottleneck for supervised training of machine learning models. This issue is particularly prevalent for tasks like Sound Event Detection (SED), which can be cognitively taxing to annotate. The annotation requirement also prohibits potential practitioners from using the model while data collection is still in process. Even after enough data has been collected to achieve reasonable base performance, additional data annotation is required to adapt the model to specific environments. Various alternative paradigms have been developed to address the issues posed by data annotation. These paradigms include self-supervised training, unsupervised methods, and active learning. Active Learning (AL) reduces the number of annotations required to train a model by employing an algorithm that actively identifies data points that would be the most informative if the label were known [1]. While AL can reduce the absolute amount of annotation required for training, typical AL approaches are not designed to start training before data collection is complete or to facilitate adaptation. However, a subfield of AL, referred to as Online Active Learning (OAL), takes AL a step further by adding an online learning component (see Fig. 1). This addition allows training to begin before all the data has been collected. Thus, OAL serves as a method to reduce even more of the data annotation bottleneck than AL. The addition of online learning poses new challenges that do not exist for AL. Possibly the most significant of these problems is handling data drift over time. Data drift in online learning scenarios requires the classifier to adapt in order to maintain reasonable performance. For OAL, the query selection strategy must also be aware of or robust to data drift, as query selection is crucial to classifier adaptation. Data drift also poses a particular challenge for detection tasks where it is critical to avoid missed detections. Many such detection tasks use the Detection Cost Function (DCF) as the evaluation metric. DCF is a weighted combination of False Negative Rate (FNR) and False Positive Rate (FPR) in which FN errors are more costly than FP errors (three times more costly in this work). 
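For reference, a DCF of this form can be computed as in the short sketch below. Only the 3:1 cost ratio between misses and false alarms comes from the description above; normalizing by the total weight is an illustrative assumption of the sketch.

```python
def detection_cost(fnr: float, fpr: float, fn_weight: float = 3.0, fp_weight: float = 1.0) -> float:
    """Weighted detection cost. The 3:1 FN:FP cost ratio follows the text;
    dividing by the total weight is an assumption made for illustration."""
    return (fn_weight * fnr + fp_weight * fpr) / (fn_weight + fp_weight)

# Example: detection_cost(0.10, 0.20) == 0.125
```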
Typical loss functions, like cross-entropy loss, attempt to optimize overall classification accuracy regardless of the types of errors the classifier makes. Such loss functions do not automatically take into account class imbalance or weighted error types and must be manually adjusted based on prior knowledge of the problem. This work introduces an OAL training scheme that specifically seeks to reduce the required annotation for SED. This work also presents new loss functions intended to handle varying class distributions in all paradigms, including OAL, to optimize the DCF. SED experiments are performed on the SONYC dataset and two Voice-Type Discrimination (VTD) corpora, showing that OAL can both reduce the required annotations by a factor of 5 and start training much earlier while achieving comparable DCF. Experiments performed with the new loss functions show that they can reduce the DCF and FNR for fully-supervised and AL training by up to 30% relative to the cross-entropy loss results. However, the same pattern does not hold when the loss functions are used in OAL training.

Figure 1: AL (top) vs. OAL (bottom). For OAL, the current session is updated every step.

## 2 Related Work

Work on OAL is relatively sparse but is gaining traction. Much of the research for batch-based OAL is focused on using drift detection algorithms [2] to determine when to make adjustments to the model or the AL training parameters [3]. These adjustments include increasing or decreasing AL query density [4], weighting long-term and short-term models [5], or introducing completely new models [6] based on perceived data drift. Despite the temporal nature of audio, most published OAL methods are not applied to audio tasks, likely in part because of a lack of online audio datasets. The authors believe that this work is the first to apply OAL to SED. Methods for training with imbalanced data are represented well in the literature. Examples include focal loss [7], which down-weights samples that are easily classified in training, and losses that use dynamic reweighting based on known or estimated class distributions [8, 9]. Some AL methods also address class imbalance because of the strong effect it can have on performance [10]. All of these approaches differ from the loss functions presented here, which directly optimize the DCF regardless of class distribution.

## 3 Methodology

This paper includes two primary methodological contributions: 1) the application of OAL to SED, and 2) loss functions that optimize the DCF for imbalanced class distributions.

### 3.1 OAL for SED

Typical datasets are not structured in a way that is usable for OAL training. This section describes how to convert pre-existing datasets with temporal information into OAL datasets, as well as the basic steps to apply OAL to the SED task. Datasets with spatial markers should be organized into groups of samples from the same _environment_. In the case of audio data, an environment refers to a sensor in a fixed location. Within each environment, data should be put in chronological order based on time of occurrence. From there, the samples can be grouped into _sessions_ (or batches) of \(L\) abutting samples. Additionally, a _bootstrap corpus_ is formed to initialize the classifier. A bootstrap corpus of size \(N\) is composed of the first \(N/2\) occurrences of each class. With the data organized in this manner, Algorithm 1 can be applied to simultaneously train a classifier and make predictions about the data from each environment using OAL. 
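As a concrete illustration of this preprocessing (before the OAL loop of Algorithm 1 below), the sketch assumes records that carry a sensor identifier, a timestamp, and a label. These field names, and the choice to exclude bootstrap samples from later sessions, are assumptions of the sketch; the grouping by environment, the chronological ordering, the session length \(L\), and the "first \(N/2\) occurrences of each class" rule follow the description above.

```python
from collections import defaultdict

def build_oal_dataset(records, session_len=30, bootstrap_size=8):
    """Organize timestamped, sensor-tagged records for OAL training.

    For each environment (a fixed sensor), return a bootstrap corpus made of
    the first N/2 occurrences of each class, plus the remaining samples grouped
    into abutting sessions of L samples. Field names are illustrative."""
    per_env = defaultdict(list)
    for rec in records:                                   # group by environment
        per_env[rec["sensor_id"]].append(rec)

    datasets = {}
    for env, samples in per_env.items():
        samples.sort(key=lambda r: r["timestamp"])        # chronological order

        quota = bootstrap_size // 2                       # N/2 per class (binary task)
        seen = defaultdict(int)
        bootstrap, rest = [], []
        for rec in samples:
            if seen[rec["label"]] < quota:
                seen[rec["label"]] += 1
                bootstrap.append(rec)                     # bootstrap corpus
            else:
                rest.append(rec)                          # assumption: not reused in sessions

        sessions = [rest[i:i + session_len]               # sessions of L abutting samples
                    for i in range(0, len(rest), session_len)]
        datasets[env] = {"bootstrap": bootstrap, "sessions": sessions}
    return datasets
```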
These steps are also illustrated and contrasted with regular AL [11] in Fig. 1.

```
Input: classifier \(\Theta\), query selection strategy \(\Phi\), bootstrap corpus \(C\), OAL sessions \(\mathcal{S}\), query budget \(B\), adaptation data pool \(\mathcal{A}\)
Initialize \(\Theta\) with all samples from \(C\)
Add all samples from \(C\) to \(\mathcal{A}\)
while not all sessions in \(\mathcal{S}\) have been seen do
    Load new session \(\mathcal{S}_{i}\) from \(\mathcal{S}\)
    Run \(\Phi\) on \(\mathcal{S}_{i}\) to select the \(B\) most informative data \(X_{j}\)
    Obtain the labels \(y_{j}\) for the queried samples
    Add the labeled data \((X_{j},y_{j})\) to \(\mathcal{A}\)
    Update \(\Theta\) with all samples from \(\mathcal{A}\)
    Predict \(\hat{y}_{j}\) for the unlabeled samples in \(\mathcal{S}_{i}\)
endwhile
```
**Algorithm 1** Online Active Learning for SED

### 3.2 DCF-based Loss Functions

New loss functions based on DCF are presented here to address the challenges of class imbalance and weighted error metrics, which are common in OAL. The error rates that define DCF are typically calculated using a non-differentiable argmax operation, so the traditional formulation of DCF itself is not a viable neural network loss function. The following approximations for FNR and FPR were made to ensure that the DCF is differentiable and directly optimizable by a neural network. Here, \(y_{i}\) is the label (0 or 1) and \(\hat{y}_{i}\) is the posterior probability of sample \(i\) being part of the target class. \[\hat{p}_{fn}=\frac{\sum_{i}(1-\hat{y}_{i})\cdot y_{i}}{\sum_{i}y_{i}} \tag{1}\] \[\hat{p}_{fp}=\frac{\sum_{i}\hat{y}_{i}\cdot(1-y_{i})}{\sum_{i}(1-y_{i})} \tag{2}\] Eqs. 1 and 2 replace the argmax function with expectations to calculate the error rates in a differentiable manner. As such, this loss function is referred to as the expected DCF (e-DCF) loss. To approximate the true DCF more closely, a differentiable argmax function (d-argmax) can be used [12]. D-argmax is implemented as shown in Eq. 3 with Softmax function \(\sigma\) and multiplier \(\lambda\). Here, \(x\) refers to a class index and \(f(x)\) is the posterior probability that a given sample belongs to that class. In the limit, the term \(\sigma(\lambda f(x))\) reduces to 0 if \(x\) is not the most likely class, or 1 if it is the most likely. \[\text{d-argmax}_{x}\,f(x)=\lim_{\lambda\rightarrow\infty}\sum_{x}x\sigma(\lambda f(x))\approx\sum_{x}x\sigma(\lambda f(x)) \tag{3}\] For the purpose of implementation, a large-valued \(\lambda\) can be used in place of the infinite limit as an approximation. With this adjustment, the \(\hat{y}_{i}\) terms in Eqs. 1 and 2 can be replaced with \(\sigma(\lambda\hat{y}_{i})\), i.e., the Softmax of the posterior probability of the target class scaled by \(\lambda\). Compared to \(\hat{y}_{i}\), this term will be much closer to 1 or 0, depending on the predicted class. This implementation of the DCF loss is referred to as differentiable DCF (d-DCF).

## 4 Experiments

### 4.1 Datasets

All experiments were performed on data from the SONYC Urban Sound Tagging (SONYC-UST) dataset [13] and the SRI and Lionbridge (LB) VTD corpora.

#### 4.1.1 SONYC Dataset

The SONYC dataset consists of 18,515 audio clips from 61 microphones placed in different locations around New York City. A predefined split reserves 13,538 of these samples for training, 4,308 for validation, and 669 for testing. Each clip is 10 seconds in duration and recorded with a sample rate of 48 kHz. 
Annotations for eight coarse-grained classes and 23 fine-grained classes are provided for each sample. Only the coarse-grained classes are considered in this work. Information regarding the time and place of recording and sensor ID is also provided as part of the annotations.

#### 4.1.2 VTD Corpora

VTD is the task of detecting live speech, i.e., speech that is produced spontaneously within the recording environment. As such, VTD can be viewed as SED with a single target class: live speech. The SRI Corpus comprises 1,617 hours of speech, recorded in 4 different rooms by microphones in 5 locations in each room. One microphone from each room was used in this work. Individual recordings range from 3.5 to 14 hours in duration. 11.8% of the audio is target audio, and the distractors consist of TV, radio, traffic, room noises, and pre-recorded audio from the Linguistic Data Consortium. The LB Corpus contains 2,966 hours of audio, recorded in 3 rooms by 7 microphones in each room. Again, only one microphone from each room was used. Individual recording durations range from 1.8 to 8.3 hours. Target live speech is present in 23.8% of the audio. Distractors include TV, podcasts, and ambient noise. Annotations for these two corpora are provided as start and stop timestamps for target class audio. For the purposes of this work, the annotations are reduced to a binary indication of the presence of target audio for every abutting five-second audio frame. Both VTD corpora will be released publicly in the coming months.

### 4.2 Classifier Architecture and Hyperparameters

The neural network used for all experiments is the contrastive classifier depicted in Fig. 2. As indicated, it is trained with a combined contrastive loss and classification loss. The only difference between the classifier used on the SONYC data and the VTD data is the input features. The input features for the SONYC data are either Wav2CLIP or CLAP embeddings (512 dimensions each) [14, 15, 16] or a concatenation of the two. In comparison, the VTD data is represented by WavLM embeddings (1024 dimensions) [17], so the classifier used for VTD is larger in most cases. Footnote 2: [https://github.com/descriptinc/lyrebird-wav2clip](https://github.com/descriptinc/lyrebird-wav2clip) Footnote 3: [https://huggingface.co/laion/clap-htsat-fused](https://huggingface.co/laion/clap-htsat-fused) Footnote 4: [https://huggingface.co/microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) All other hyperparameters used during training are the same across experiments. The learning rate is fixed at \(10^{-4}\) using an Adam optimizer, weight decay is set at \(10^{-5}\), and early stopping is applied after five epochs of no improvement. The contrastive loss [18] is described by Eq. 4, where \(x_{1}\) and \(x_{2}\) are the embedding vectors of the input samples being contrasted, \(y\) denotes whether the samples come from the same class (1) or different classes (0), \(d\) is the Euclidean distance between the sample embeddings, and \(m\) is the margin (set to 1 here). The total loss is equal to the average of the contrastive loss and the classification loss. \[\mathcal{L}_{c}(\mathbf{x_{1}},\mathbf{x_{2}},y)=y\cdot d^{2}+(1-y)\cdot\max(m-d,0)^{2} \tag{4}\]

### 4.3 Preliminary Experiments

Before testing the new methods presented in this work, the basic experimental setup was compared to existing methods to ensure that performance was in a reasonable range. The systems submitted to the DCASE2020 Challenge [19] serve as benchmarks. 
The evaluation metric used for these experiments is the Area Under the Precision-Recall Curve (AUPRC), since this is the metric reported by the challenge. The default data splits are also retained to make a viable comparison to the challenge baseline. The performance of the contrastive classifier was evaluated using Wav2CLIP, CLAP, and both embeddings concatenated as input features. Only fully-supervised training was used for these experiments. The input features with the best overall AUPRC are used in the subsequent experiments.

### 4.4 OAL Experiments

The OAL experiments are run as explained in Section 3.1 and compared to fully-supervised training. Experiments are performed on the SONYC dataset. The fully-supervised experiments are performed in the same way as described for the preliminary experiments. For the OAL experiments, an environment was defined as a single sensor that contributes 10 minutes or more of audio to the dataset. This criterion results in 47 valid environments in the dataset. A session in each of these environments was defined as 30 consecutive ten-second samples or 5 minutes of audio. In each session, a budget of 5 labeled samples was allowed to be queried for classifier training, and the bootstrap corpus for each environment was defined to contain 8 samples total. The query strategy of choice was uncertainty sampling based on negative energy [20, 21]. The target class for these experiments is human speech. These experiments are intended to show that the OAL setup can achieve performance similar to fully-supervised training. These experiments are also intended to show the extent to which required data collection and annotation can be reduced. The DCF is used to evaluate the performance of the systems in these experiments. Note that the DCFs will not be exactly comparable across supervised and OAL experiments since the OAL experiments do not use the same test split as the other experiments. However, the DCF should still provide a rough estimate of model performance. The reduction in required data is measured in two ways: 1) the number of labeled samples used to train the classifier and 2) the number of samples required to be in the collection before training can begin.

### 4.5 Loss Function Experiments

To evaluate the DCF-based loss functions, the contrastive classifier was trained with a cross-entropy loss and both DCF loss functions using fully-supervised training, AL, and OAL. The DCF loss was compared to unweighted cross-entropy, and cross-entropy with a weighting ratio of 4:1 for the target class compared to the non-target class. The fully-supervised experiments were performed on the default data splits of the SONYC dataset. The AL experiments also used the same test splits, but the training set was adjusted to be used as the unlabeled data pool from which samples were queried every AL step.

Figure 2: The contrastive classifier architecture.

The AL experiments basically followed the same procedure as illustrated in the top panel of Fig. 1. All supervised experiments and AL experiments were repeated 5 times. The OAL experiments were run one time on both datasets in the same manner as outlined in Section 4.4. All loss function experiments are evaluated in terms of DCF, FNR, and FPR. All AL and OAL experiments were done with the negative energy-based query strategy. Again, the target class for the experiments using SONYC data is human speech, which is present in 36% of the dataset. 
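To make the training objectives used in these experiments concrete, the following PyTorch-style sketch implements the contrastive loss of Eq. 4 and the two DCF-based classification losses of Section 3.2 (Eqs. 1-3). The 3:1 FN:FP weighting follows the DCF described in Section 1; the normalization by the total weight, the value of \(\lambda\), the small \(\epsilon\) added for numerical safety, and the tensor layout (column 1 of the posteriors holding the target class) are assumptions of the sketch rather than details taken from the paper.

```python
import torch
import torch.nn.functional as F

def soft_error_rates(y_prob, y_true, eps=1e-8):
    """Differentiable FNR/FPR approximations (Eqs. 1 and 2).
    y_prob: posterior of the target class, shape (batch,); y_true: {0, 1} labels."""
    y_true = y_true.float()
    p_fn = torch.sum((1.0 - y_prob) * y_true) / (torch.sum(y_true) + eps)
    p_fp = torch.sum(y_prob * (1.0 - y_true)) / (torch.sum(1.0 - y_true) + eps)
    return p_fn, p_fp

def e_dcf_loss(y_prob, y_true, fn_weight=3.0, fp_weight=1.0):
    """Expected-DCF loss: weighted combination of the soft error rates."""
    p_fn, p_fp = soft_error_rates(y_prob, y_true)
    return (fn_weight * p_fn + fp_weight * p_fp) / (fn_weight + fp_weight)

def d_dcf_loss(posteriors, y_true, lam=100.0, fn_weight=3.0, fp_weight=1.0):
    """Differentiable-DCF loss: sharpen the class posteriors with a scaled
    Softmax (the d-argmax of Eq. 3, using a large finite lambda) and feed the
    target-class column into the e-DCF computation."""
    sharp = F.softmax(lam * posteriors, dim=-1)[:, 1]
    return e_dcf_loss(sharp, y_true, fn_weight, fp_weight)

def contrastive_loss(x1, x2, same_class, margin=1.0):
    """Contrastive loss of Eq. 4, averaged over a batch of embedding pairs."""
    d = torch.norm(x1 - x2, dim=-1)
    same_class = same_class.float()
    return torch.mean(same_class * d ** 2 +
                      (1.0 - same_class) * torch.clamp(margin - d, min=0.0) ** 2)

# Total training loss, as described in Section 4.2: the average of the
# contrastive loss and the (DCF-based or cross-entropy) classification loss, e.g.
# total = 0.5 * (contrastive_loss(x1, x2, y_pair) + d_dcf_loss(post, y_frame))
```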
## 5 Results and Discussion

### Preliminary Results

The results of the contrastive classifier with the three sets of input features are compared to the challenge baseline in Table 1. Note that these numbers represent the average AUPRC across all 8 coarse-grained classes. This number is referred to as "Macro-AUPRC" in the DCASE2020 Challenge. It is clear from the table that the best-performing system is the contrastive classifier with CLAP embeddings. In light of these results, CLAP embeddings are used as the input feature of choice for the OAL and loss function experiments.

### OAL Results

Table 2 compares the results of OAL and fully-supervised training for SONYC. While fully-supervised training shows better performance across all error-related metrics, OAL training achieves competitive results with far fewer labeled samples. This suggests that OAL could be a practical approach for reducing the required annotations, even if it comes at the cost of somewhat higher error rates. In this case, OAL used only 3,291 samples, which equates to 18% of the training and validation set used for fully-supervised training. The most significant advantage of OAL is made evident by the last metric in the table. For fully-supervised training, all 18,515 samples comprising the training, validation, and test sets must be collected, and all training and validation samples must be labeled before classifier training begins. This starkly contrasts with the mere 30 unlabeled samples required to start training the same classifier with OAL. Results on the VTD data are also encouraging. Using only 40 seconds of labeled audio per hour (i.e., 1.1% of the data), a DCF of 0.0718 is achieved (see Table 4). Note that model training can begin after only one hour of collected data.

### Loss Function Results

Table 3 shows the advantage of DCF-based loss functions over cross-entropy for fully supervised and AL training. This advantage can be seen most prominently in terms of FNR, which was reduced by 30.6% for full supervision and 26.3% for AL when the d-DCF loss replaces cross-entropy. Such an improvement in FNR over the weighted cross-entropy indicates that DCF-based loss functions do indeed optimize DCF well, even when classes are imbalanced. However, Table 4 shows that DCF-based loss functions do not have the same advantage in all cases. For OAL on either SONYC or VTD data, cross-entropy is better than the DCF loss functions in every error metric. It is unclear why this is the case, but it may be a consequence of dealing with very small amounts of data in the under-represented SONYC environments.

## 6 Conclusions

This work addresses the problem of reducing the cost of data annotation for SED by training classifiers using OAL. New loss functions intended to handle class imbalance and weighted error metrics like DCF are also introduced and evaluated. Experiments on the SONYC dataset show that OAL can effectively reduce the number of annotations required by a factor of 5 and allow training to begin after collecting only 30 samples instead of the whole dataset. OAL for the VTD dataset can begin after one hour of data is collected and achieves a DCF of 0.0718 while only requiring labels from 1.1% of the whole dataset. The DCF-inspired loss functions yield major reductions in DCF (up to 20% relative) and FNR (up to 30% relative) for fully-supervised and AL training. Future work might include making improvements to the OAL setup or developing loss functions that improve performance for OAL.
\begin{table} \begin{tabular}{c|c} Classifier & AUPRC \(\uparrow\) \\ \hline DCASE2020 baseline & 0.5100 \\ \hline Wav2CLIP & 0.4147 \\ CLAP & **0.5208** \\ Wav2CLIP+CLAP & 0.5158 \\ \end{tabular} \end{table} Table 1: Macro-AUPRC comparison for fully-supervised training

\begin{table} \begin{tabular}{c|c|c|c|c} Dataset & Loss Fn & DCF \(\downarrow\) & FNR \(\downarrow\) & FPR \(\downarrow\) \\ \hline & XENT (4:1) & **0.2117** & **0.1983** & **0.2517** \\ SONYC & e-DCF & 0.2129 & 0.1996 & 0.2530 \\ & d-DCF & 0.3126 & 0.3018 & 0.3451 \\ \hline & XENT (4:1) & **0.0718** & **0.0884** & **0.0219** \\ VTD & e-DCF & 0.0934 & 0.1159 & 0.0260 \\ & d-DCF & 0.1166 & 0.1423 & 0.0397 \\ \end{tabular} \end{table} Table 4: Error rates for weighted cross-entropy (XENT), e-DCF, and d-DCF losses for OAL experiments on the SONYC and VTD datasets

\begin{table} \begin{tabular}{c|c|c|c|c} Training & Loss Fn & DCF \(\downarrow\) & FNR \(\downarrow\) & FPR \(\downarrow\) \\ \hline & XENT (1:1) & 0.1946 & 0.2169 & **0.1277** \\ Supervised & XENT (4:1) & 0.1714 & 0.1661 & 0.1871 \\ & e-DCF & **0.1467** & 0.1329 & 0.1883 \\ & d-DCF & 0.1557 & **0.1153** & 0.2768 \\ \hline & XENT (1:1) & 0.2077 & 0.2332 & 0.1311 \\ AL & XENT (4:1) & 0.1999 & 0.2280 & **0.1154** \\ & e-DCF & 0.1720 & 0.1805 & 0.1468 \\ & d-DCF & **0.1602** & **0.1681** & 0.1367 \\ \end{tabular} \end{table} Table 3: Error rates for unweighted and weighted cross-entropy (XENT), e-DCF, and d-DCF losses for fully-supervised and AL experiments on the SONYC dataset, averaged over five runs
2309.12011
Origin of electrical noise near charge neutrality in dual gated graphene device
This letter investigates low frequency 1/f noise in hBN encapsulated graphene device in a dual gated geometry. The noise study is performed as a function of top gate carrier density (nTG) at different back gate densities (nBG). The noise at low nBG is found to be independent of top gate carrier density. With increasing nBG, noise value increases and a noise peak is observed near charge inhomogeneity of the device. Further increase in nBG leads to decrease in noise magnitude. The shape of the noise is found to be closely related to charge inhomogeneity region of the device. Moreover, the noise and conductivity data near charge neutrality shows clear evidence of noise emanating from combination of charge number and mobility fluctuation
Aaryan Mehra, Roshan Jesus Mathew, Chandan Kumar
2023-09-21T12:28:32Z
http://arxiv.org/abs/2309.12011v1
# Origin of electrical noise near charge neutrality in dual gated graphene device

###### Abstract

This letter investigates low frequency \(1/f\) noise in hBN encapsulated graphene device in a dual gated geometry. The noise study is performed as a function of top gate carrier density (\(n_{TG}\)) at different back gate densities (\(n_{BG}\)). The noise at low \(n_{BG}\) is found to be independent of top gate carrier density. With increasing \(n_{BG}\), noise value increases and a noise peak is observed near charge inhomogeneity of the device. Further increase in \(n_{BG}\) leads to decrease in noise magnitude. The shape of the noise is found to be closely related to charge inhomogeneity region of the device. Moreover, the noise and conductivity data near charge neutrality shows clear evidence of noise emanating from combination of charge number and mobility fluctuation.
2309.11518
Ad-load Balancing via Off-policy Learning in a Content Marketplace
Ad-load balancing is a critical challenge in online advertising systems, particularly in the context of social media platforms, where the goal is to maximize user engagement and revenue while maintaining a satisfactory user experience. This requires the optimization of conflicting objectives, such as user satisfaction and ads revenue. Traditional approaches to ad-load balancing rely on static allocation policies, which fail to adapt to changing user preferences and contextual factors. In this paper, we present an approach that leverages off-policy learning and evaluation from logged bandit feedback. We start by presenting a motivating analysis of the ad-load balancing problem, highlighting the conflicting objectives between user satisfaction and ads revenue. We emphasize the nuances that arise due to user heterogeneity and the dependence on the user's position within a session. Based on this analysis, we define the problem as determining the optimal ad-load for a particular feed fetch. To tackle this problem, we propose an off-policy learning framework that leverages unbiased estimators such as Inverse Propensity Scoring (IPS) and Doubly Robust (DR) to learn and estimate the policy values using offline collected stochastic data. We present insights from online A/B experiments deployed at scale across over 80 million users generating over 200 million sessions, where we find statistically significant improvements in both user satisfaction metrics and ads revenue for the platform.
Hitesh Sagtani, Madan Jhawar, Rishabh Mehrotra, Olivier Jeunen
2023-09-19T09:17:07Z
http://arxiv.org/abs/2309.11518v2
# Ad-load Balancing via Off-policy Learning in a Content Marketplace ###### Abstract. Ad-load balancing is a critical challenge in online advertising systems, particularly in the context of social media platforms, where the goal is to maximize user engagement and revenue while maintaining a satisfactory user experience. This requires the optimization of conflicting objectives, such as user satisfaction and ads revenue. Traditional approaches to ad-load balancing rely on static allocation policies, which fail to adapt to changing user preferences and contextual factors. In this paper, we present an approach that leverages off-policy learning and evaluation from logged bandit feedback. We start by presenting a motivating analysis of the ad-load balancing problem, highlighting the conflicting objectives between user satisfaction and ads revenue. We emphasize the nuances that arise due to user heterogeneity and the dependence on the user's position within a session. Based on this analysis, we define the problem as determining the optimal ad-load for a particular feed fetch. To tackle this problem, we propose an off-policy learning framework that leverages unbiased estimators such as Inverse Propensity Scoring (IPS) and Doubly Robust (DR) to learn and estimate the policy values using offline collected stochastic data. We present insights from online A/B experiments deployed at scale across over 80 million users generating over 200 million sessions, where we find statistically significant improvements in both user satisfaction metrics and ads revenue for the platform. ## 1. Introduction Ad-load balancing plays a critical role in content marketplaces, including prominent platforms like Facebook, Instagram, Sharechat and Youtube, where we need to determine the optimal ad-load during a user session. On one hand, maximizing user satisfaction is crucial to provide a positive user experience and ensure long-term engagement. On the other hand, advertising revenue is a key factor for the sustainability and profitability of the platform. The challenge lies in finding the right balance between these objectives. In this paper, we propose a novel approach to ad-load balancing that leverages off-policy learning using unbiased estimators. In our approach, we define the problem as determining the optimal ad-load for a particular feed fetch, considering the conflicting objectives of user satisfaction and ads revenue. However, the complexities of the problem go beyond the mere trade-off between these two metrics. User heterogeneity is an important factor to consider, as different cohorts of users may have varying ad-tolerance levels. Some users may be more accepting of ads, while others may be more sensitive to excessive ad exposure. Furthermore, the impact of other contextual signals adds another layer of complexity. The satisfaction drop caused by increased ad-load may vary depending on _where_ in the session the user finds themselves. Even though this problem is at the heart of online content marketplaces, it has received little attention in the research literature so far. Indeed, publicly available work does not discuss how advances in machine learning can be leveraged for this common use-case. To address this gap, we propose an off-policy bandit formulation for ad-load balancing in the context of a social media platform. Our approach leverages user, content, and ads-related features as contextual information within the bandit framework. 
We explore different ways to model the action space, going from mere ad _volume_ to the _position_ of the ads in the feed. The contextual bandit paradigm allows us to make informed decisions that balance user satisfaction and advertising revenue, considering user heterogeneity and session context. We leverage counterfactual optimization techniques on offline collected stochastic data, making use of unbiased estimators based on Inverse Propensity Scoring (IPS) and Doubly Robust (DR) methods (Share and Youtube, 2018). This enables us to evaluate the effectiveness of various ad-load allocation policies in improving both user satisfaction and ads revenue reliably, from offline data. Finally, we present insights from A/B experiments into the behavior of the selected policies. We observe directional alignment between our off- and online experiments, in that the top candidates based on offline policy value estimation lead to statistically significant improvements in _both_ user satisfaction metrics and advertising revenue compared to homogeneous and static allocation policies, highlighting the value that personalisation can bring. In summary, the contributions of our work include the following: 1. We introduce the "_ad-load balancing_" problem, which is at the heart of online content marketplaces but has received little attention in the research literature. 2. We motivate the problem and its nuances through insights obtained from real-world data (SS3). 3. We propose an off-policy contextual bandit framework to tackle the problem via counterfactual optimisation (SS4). 4. We present results from live experiments that highlight the effectiveness of our approach, leading to online improvements in both user- and advertiser-focused objectives (SS5). ## 2. Related Work _Ad load personalisation_. Online advertising is at the heart of the modern web, and a very lively research area as a result (Bradbury et al., 2017). The majority of existing work typically focuses on the advertising stack itself, from bidding (Bradbury et al., 2017; Share and Youtube, 2018; Share and Youtube, 2019) and response modeling (Share and Youtube, 2019; Share and Youtube, 2019) to auctions (Share and Youtube, 2019; Share and Youtube, 2019). Even though the trade-offs and tensions between advertising load and other platform objectives are widely reported and apparent, works that directly tackle this problem are scarce. Works exist to tackle ad fatigue from repeated impressions due to the same ad (Bradbury et al., 2017), or general "banner blindness" as a result from ad overload (Sagtani et al., 2017). Other seminal work leverages counterfactual reasoning techniques to gain insights into search engine advertising (Bahdan et al., 2017). To the best of our knowledge, our work is the first that provides a practical and effective method to deal with personalised ad load balancing in online content marketplaces. _Multi-objective optimisation._ Advertising problems in content marketplaces inherently deal with multiple stakeholders that each have multiple, possibly conflicting, objectives. Abdollahpouri et al. provide an overview of the problems that typically arise in multi-stakeholder recommender systems, and common approaches to solving them (Abollahpouri et al., 2017). Zheng and Wang discuss multi-objective optimisation methods proposed in the research literature (Zheng and Wang, 2018). Mehrotra et al. 
propose a bandit-based multi-objective optimisation method geared towards music streaming platforms (Mehrotra et al., 2019), adopting a Generalised Gini Index (GGI) aggregation function to balance multiple objectives (Bahdan et al., 2017). Our work complements this existing research literature by showing how simple scalarisation techniques, which are commonly used in multi-objective contextual bandit use-cases, can be effective in solving the personalised ad load balancing problem.

_Counterfactual learning and evaluation._ As machine learning use-cases in modern web platforms move from making _predictions_ to making _decisions_, a parallel shift is happening from _supervised_ learning approaches towards those that leverage _bandit_ feedback (Heng et al., 2018). Due to the challenges that arise with real-time learning, these systems typically operate in a batch fashion (Zheng and Wang, 2018). This line of work is closely related to the offline Reinforcement Learning (RL) literature (Zheng and Wang, 2018), but practical applications often rely on the _bandit_ assumption (i.e. the Markov Decision Process consists of a single timestep or, analogously, there is no _state_) (Bahdan et al., 2017; Li et al., 2018). We leverage recent advances in this field for our use-case, and provide a more detailed overview in Section 4.3.

## 3. Nuances of Ad-Load Balancing

In this section, we provide a motivating analysis of the ad-load balancing problem and highlight the challenges it entails: starting with the trade-off between user satisfaction and ads objectives, then discussing heterogeneity in users' tolerance towards ads. To gain an understanding of these challenges, we consider metrics such as retention, user engagement, time spent, ad impressions, and ad clicks to quantify the trade-off in our proposed framework.

### Data Context

We collected user data from ShareChat, a widely popular multilingual social media application with over 180 million monthly active users in India, supporting 18 regional languages. Our dataset consists of feed-level data from a randomly selected sample of 5 million users and approximately 250 million feeds. Typically, a social media platform presents consecutive feed fetches with 10 posts. In order to reduce the action space for off-policy modeling (and hence, the variance of the estimators), we propose to treat a single feed as multiple independent smaller feeds. This strategy is further elaborated in Section 4.2.1. Specifically, we suggest treating a single feed of 10 posts as two independent feeds, each with 5 posts, with the possibility of having 0, 1, or 2 ads.

### Trade-Off between ads and user satisfaction

To better motivate the need for ad-load balancing, we analyze the global trade-off between ads and user satisfaction metrics. We consider _satisfaction_ metrics such as engagement, video play (a binary metric indicating whether a particular video type has been played beyond a specific threshold value), feed depth scrolled (representing the number of successive feed fetches by a user), as well as user _dissatisfaction_ metrics like feed abandonment and session abandonment. In terms of advertising, we examine metrics related to ad views and clicks. Figure 1 shows how such short-term engagement signals are correlated to one another, and how user satisfaction and dissatisfaction signals are negatively correlated.
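For concreteness, the correlation analysis summarised in Figure 1 amounts to computing pairwise Pearson coefficients over feed-level signals; the short pandas sketch below illustrates this, where the file path and column names are hypothetical placeholders rather than the platform's actual schema.

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

# Hypothetical feed-level log export; the columns stand in for the signals described above.
signals = ["engagement", "video_play", "feed_depth", "feed_abandonment",
           "session_abandonment", "ad_impressions", "ad_clicks"]
df = pd.read_parquet("feed_level_logs.parquet")

corr = df[signals].corr(method="pearson")  # pairwise Pearson correlation matrix
sns.heatmap(corr, annot=True, fmt=".2f", cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Correlation between user satisfaction and ads signals")
plt.tight_layout()
plt.show()
```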
Figure 1. Heatmap visualising Pearson's correlation coefficient among the user-focused objectives we consider.

Figure 2. Visualising the inverse relation between user satisfaction & ads objectives, w.r.t. advertisement positions.

**Trade-off with respect to the different ad positions**: Figure 2 presents the normalized values of satisfaction, dissatisfaction, and advertising metrics for different ad-slots in a single feed fetch. We observe that increasing the number of ads from 0 to 2 results in decreased satisfaction, increased user dissatisfaction, but also higher ads impressions, clicks, and ultimately (short-term) revenue. Similar effects are observed within ad-slots of 1 ad, when a change in its position from 6 to 2 leads to variations in user metrics due to early exposure to ads at the top of the feed. This phenomenon arises because the view probability decreases for posts further down the feed. To further support these arguments, we present the abandonment analysis in the subsequent section.

**Analysing Feed abandonment**: A key user dissatisfaction metric is given by _abandonment_ events. Indeed, if a user leaves the feed, we can interpret this as a negative signal. Note that this would not always be the case, as all users, including satisfied ones, leave the feed at some point. Nevertheless, for the purposes of this analysis we can interpret it as such. To observe the causal effect that advertisements have on abandonment probabilities, we ran an online test with a static advertising policy and uniformly randomly ranked content. Indeed, this ensures that the effects of enjoyable content are uniform over the positions, and any observed effects come from other sources (we effectively perform an _intervention_ (Zhou et al., 2017)). As such, if we observe increased abandonment probabilities at the fixed advertisement positions in the feed, this indicates a negative _causal_ effect of advertisements on user satisfaction. Figure 3 visualises insights from this experiment, with the position on the feed on the x-axis and the abandonment probability on the y-axis. Note that we present _normalized_ values of probabilities in the figure (by dividing the actual probability with some constant) to remove commercially sensitive information. We observe clear negative effects that we would wish to alleviate with more intelligent allocation policies.

### Heterogeneity across users

In the previous section, we observed the global trade-off between user satisfaction (SAT) and ads metrics. However, it is important to acknowledge that different user cohorts have varying tolerance levels for ad-load, resulting in user heterogeneity. Figure 4 illustrates the changes in SAT and ads metrics when the ad-load is increased for users in every alternate feed fetch by 1 additional ad. We primarily consider language and fatigue score (Zhou et al., 2017) to cohort users. The fatigue score was proposed as a surrogate metric for user inactivity on the platform. A lower fatigue score indicates highly active users, while a higher fatigue score suggests inactive users. For both user cohorts, we observe an increase in ads impressions after increasing the ad-load in alternate feed fetches. The left section of Figure 4 demonstrates that the decrease in user satisfaction metrics is less pronounced for Tamil and Telugu users compared to Hindi and Kannada users. Similarly, for users with lower fatigue scores, the decrease in user satisfaction metrics is less significant compared to users with higher fatigue scores.
Additionally, we note that ads impressions _decrease_ as fatigue score increases, as more active users tend to perform more feed scrolling, resulting in increased impression opportunities. These findings highlight the heterogeneity in ad-tolerance among user cohorts and emphasize the importance of incorporating user context to determine the appropriate ad-load.

## 4. Problem Formulation

Contextual and personalised ad-load balancing is an important and common problem, but data-driven approaches have not been reported in the research literature. As we effectively aim to maximise cumulative _rewards_ by assigning the right _actions_ to _contexts_, the problem setting matches the contextual bandit paradigm. On-policy bandit approaches that learn online are ill-suited for our setting, as they might show unpredictable behaviour once deployed (Zhou et al., 2017). Instead, the logged data mentioned in Section 3.1 allows us to leverage advances in off-policy learning for our use-case. We leverage three well-known families of methods: the Direct Method (DM), Inverse-Propensity-Weighted (IPW) empirical risk minimisation (ERM), and a doubly robust variant (Bahdan et al., 2017; Li et al., 2017). In what follows, we introduce these methods, and explore various options when designing the action space and the reward function.

### Propensity score validation

We collect data from a uniform random logging policy in the form (\(x\), \(a\), \(r_{a}\), \(p_{a}\)), where \(x\sim D(x)\) is the observed context, \(a\in A\) is the action drawn from a uniform random distribution over the action space, \(r_{a}\) is the corresponding reward we observe and \(p_{a}\) is the propensity score of selecting that action. Off-policy evaluation and learning approaches that rely on importance sampling or IPW require _full support_ of the logging policy (Li et al., 2017). This implies that the probability with which the logging policy selects actions should be non-zero for every possible context-action pair: \(p_{a}>0\;\forall a\in A\). This requirement is easily verified for the uniform random policy, as \(p_{a}=1/|A|\) where \(A\) is the action space. Aside from this explicit assumption, practical applications often prohibit effective unbiased reward estimation due to a variety of reasons (Li et al., 2017). Mehrotra et al. (Mehrotra et al., 2017) and Li et al. (Li et al., 2017) propose two simple tests that validate the logged propensity scores, which we explicitly adopt here.

**Arithmetic Mean Test**: To assess the accuracy of the randomized data collection process, we examine the frequency of a specific action \(a\) in the data and compare it to the expected number of occurrences based on the logged propensity scores. Our analysis reveals that the observed difference is not statistically significant, suggesting that there are no errors in the randomized data collection process.

Figure 3. Static advertisement positions lead to increased feed abandonment probabilities (normalised at position 1) right after the ads.

**Harmonic Mean Test**: We verify that the mean of the random variable in Equation 1 below is close to \(2\).

\[\mathbb{E}_{a}\left[\frac{\mathbb{I}(a=a^{*})}{p_{a^{*}}}+\frac{\mathbb{I}(a\neq a^{*})}{1-p_{a^{*}}}\right]\approx 2 \tag{1}\]

Note that the control variate test proposed by London and Joachims is not directly applicable in our setting as it requires a fixed target policy, which we are yet to learn (London and Joachims, 2018). Additionally, note that this type of logging policy is devoid of either observed or unobserved confounding by design (London and Joachims, 2018).

### Off-policy bandit formulation

#### 4.2.1. **Designing the action space**

Two natural choices arise when considering _actions_ in the ad-load balancing problem:

**Volume of advertisements**. We can consider fixed ad positions in the feed, and model the number of ads we wish to show in a given feed fetch: \(0,1,2\). A clear advantage of this approach is its simplicity, but it lacks the expressiveness to explicitly tackle the position effects we observe in Figure 2.

**Volume and position of advertisements**. To overcome the limitation of the first approach, we model an action as _both_ the number of ads as well as the position of the ads in the feed fetch. This accounts for the heterogeneous distribution in the expected rewards for different ad positions. One challenge with this approach is the combinatorial explosion of the size of the action space. In general, for a feed of length \(n\) and a fixed number of \(i\) ads in the feed with \(i\leq n\), the total number of arms is given by the binomial coefficient \(\binom{n}{i}\). As such, the total number of possible combinations over all possible ad loads and positions is given by \(\sum_{i=0}^{n}\binom{n}{i}=2^{n}\). With exponentially many actions, the propensity of each action becomes \(1/2^{n}\) when following a uniform logging policy. Although the reward estimates from IPW estimators are still unbiased, their variance quickly becomes problematic. To deal with this effectively, we include two further considerations:

1. The standard length of a feed fetch \(n\) is \(10\), which would imply an action space of \(1024\). We treat a feed of length \(10\) as two independent feeds of length \(5\) each, allowing us to reduce the size of the action space by a factor of \(2^{5}=32\), down to \(32\).
2. We analyzed the effect of increasing the gap between consecutive ads on user satisfaction metrics. When the gap between consecutive ads was \(\leq 3\) or an ad was placed at the \(1^{st}\) position, we observed significant satisfaction losses, which are undesirable for the platform. Therefore, we removed such combinations from the action space.

#### 4.2.2. **Designing the reward function**

The ad-load balancing problem is inherently multi-stakeholder and multi-objective. Indeed, _users_ and _advertisers_ have their own objectives which can be conflicting. We define an explicit reward metric for both:

**User Satisfaction** is not straightforward to measure from implicit feedback. The core hypothesis many platforms make is that _retention_ reflects users' satisfaction with the system. We model this with a binary label: "_do users come back to the system tomorrow?_" and call it _D1 Retention_. Because retention is a long-term and delayed signal, adopting it directly as a reward can be challenging. As such, we consider feed-level user (dis-)satisfaction signals mentioned in Table 1 to model a reward that serves as a proxy metric for retention. Let \(\delta_{1}\) be the user's _D1 Retention_, \(\phi(x,a)\) be the user SAT metric, and \(x_{i}\) be the value of the \(i^{th}\) (dis-)satisfaction signal with weight \(w_{i}\). We adopt linear scalarization to estimate _D1 retention_ from these different signals using Equation 2, where \(\rho\) indicates Pearson's correlation:

\[\phi^{\star}=\operatorname*{arg\,max}_{\phi}\rho(\delta_{1},\phi(x,a)),\text{ with }\phi(x,a)=\sum_{i}w_{i}\cdot x_{i}. \tag{2}\]
The objective in Equation 2 is maximised by learning a linear model via gradient ascent on empirical logged data. The optimal weights obtained for each signal \(x_{i}\) in the user SAT metric \(\phi(x,a)\) are provided in Table 1, along with corresponding signal descriptions. Notably, positive signals hold positive weights, while negative signals carry negative weights in the final user SAT reward.

\begin{table} \begin{tabular}{l c l} \hline \hline **Signal** & **Weight** & **Description** \\ \hline _User (dis-)satisfaction signals_ & & \\ \hline Engagements & 0.5995 & User’s engagement signals such as likes, sharing on other platforms \& downloads \\ Video play & 0.6235 & A binary metric indicating whether a particular video type has been played \\ Percentage video watch & 0.3464 & A continuous variable indicating the fraction of the video watched by the user. \\ Feed depth scrolled & 0.3213 & The number of successive feed fetches by a user \\ Video skip & -0.1432 & A binary label, positive if a video watch is less than 2 seconds \\ Discounted feed abandonment & -0.3742 & Did the user quit the feed, but enter another feed without quitting the session after the ad? \\ Discounted session abandonment & -1.2345 & Did the user quit the session entirely after the ad? \\ \hline _Ads objective signals_ & & \\ \hline Impression & 0.2234 & Did the user see the ad in case of a CPM campaign? \\ Clicks & 0.5135 & Did the user click on the ad in case of a CPC campaign? \\ Install & 0.7823 & Did the user install the app in case of a CPI campaign? \\ \hline \end{tabular} \end{table} Table 1. Optimally learned weights for user (dis-)satisfaction metrics and advertising objectives in our reward function.

Figure 4. Visualising user-level heterogeneity to advertising effects, across language and fatigue score.

Finally, we also emphasize certain considerations to keep in mind while designing the reward for abandonment signals below.

_Discounting & Attribution of Abandonment Dissatisfaction Signal:_ Usually, users scroll through feeds continuously until they either switch to a different feed or exit the session. We define \(rank_{i}\) as the user's current feed fetch count in a series of consecutive feed fetches, and \(rank_{d}\) as the total number of feeds scrolled consecutively, with \(rank_{i}\leq rank_{d}\); the user abandons the feed at \(rank_{d}\). As users leave a sequence of feeds at varying points, dissatisfaction from leaving the first feed is more pronounced than leaving, say, the \(10^{th}\) feed. This is because users have already consumed around 50 posts by the \(10^{th}\) feed. So, the abandonment cost should be lower at higher feed depths. Additionally, some of the abandonment cost at a particular feed depth may or may not be attributed to the previous feed depths. If \(\lambda\) is the final feed abandonment signal, we account for both discounting and attribution as:

\[\lambda=\text{Discounted Signal}\cdot\text{Signal Attribution} \tag{3}\]
\[\phantom{\lambda}=\frac{1}{\log(1+rank_{d})}\cdot\alpha^{rank_{d}-rank_{i}} \tag{4}\]

where \(0<\alpha\leq 1\) is a hyper-parameter controlling the degree of attribution. If \(\alpha=1\), full cost is attributed to the previous feeds, and if \(\alpha\to 0^{+}\), almost no cost is attributed to the previous feed fetches. Similar to feed abandonment, we apply a discount to the session abandonment signal depending on the amount of time the user has already spent in the session.

**Advertising objectives** can be manifold but are typically focused on revenue. Nevertheless, similar to true user satisfaction, advertising revenue is a long-term metric that is not easily plugged in directly as a reward. As short-term proxies, we adopt a scalarised version of advertising impressions, clicks and installs that maximises the Pearson correlation with revenue. Let \(\mathit{Rev}\) be _revenue_, \(\psi(x,a)\) be the ads objective, and \(x_{i}\) be the value of the \(i^{th}\) ads objective signal with weight \(w_{i}\). We adopt linear scalarization to estimate _revenue_ from these different signals using Equation 5, where \(\rho\) indicates Pearson's correlation coefficient:

\[\psi^{\star}=\operatorname*{arg\,max}_{\psi}\rho(\text{Rev},\psi(x,a)),\text{ with }\psi(x,a)=\sum_{i}w_{i}\cdot x_{i}. \tag{5}\]

Similar to the user SAT optimization, Equation 5 is maximised by learning a linear model via gradient ascent on empirical logged data. The optimal weights obtained for each signal \(x_{i}\) in the ads objective \(\psi(x,a)\) are provided in Table 1, along with corresponding signal descriptions.

**Final reward** Let \(\phi^{\star}(x,a)\) and \(\psi^{\star}(x,a)\) be the optimal user SAT metric and advertising objective respectively, obtained from the methods above; then the final reward is a weighted combination of both:

\[R=\beta\cdot\phi^{\star}(x,a)+(1-\beta)\cdot\psi^{\star}(x,a),\text{ with }\beta\in[0,1]. \tag{6}\]

#### 4.2.3. **Designing a context representation**

We use user, content and advertisement attributes as contextual signals. Some of the features, along with their descriptions, are listed in Table 2. As part of the feature pre-processing we removed features with high multicollinearity. Figure 5 visualises a heatmap of the correlation of important features with our user SAT reward signals. Note that _Like_, _Shares_ and _favs_ (downloads) are the different types of engagement signals users can perform on a post. "Dot" features highlighted in the correlation heatmap are the dot product between the pre-trained embeddings of users and posts on a particular signal, and "per view" attributes are the counter attributes for a user/post in a specific time window. For example, \(UserLike/View\_1day\) means the total number of likes divided by the total number of views of a user on the platform in a 1-day window. From the plot, we observe that these features have high correlation with individual reward signals for engagements (e.g. 0.18 for "likes"), and hence the utility of including such features in our dataset.

\begin{table} \begin{tabular}{l l l} \hline \hline **Attribute Type** & **Name** & **Description** \\ \hline \multirow{5}{*}{User (Dynamic)} & Current Interactions & Counters of hourly and daily interactions like engagement, time spent, and views. \\ & Activity & Login counters like average inactivity last week, logins yesterday, etc. \\ & Historical Interactions & Counters over the last 3, 5 and 7 day window. \\ & Embeddings & Dot product between pre-computed user embedding and average of post embedding in the feed. \\ & Fatigue Score & Described in section 3.3 – proposed by (Kumar et al., 2017) \\ \hline \multirow{3}{*}{User (Static)} & Platform Age & Number of days since user signed up on the platform. \\ & Language & One-hot encoding of the user language. \\ & Location & One-hot encoding of state, district, and city. \\ \hline \multirow{3}{*}{Content} & Genre & A genre affinity score and the number of posts in the feed. \\ & Distinct Genres & Taken as proxy for diversity, as the number of distinct genres in the feed. \\ & Post Age & Average age of posts in the feed. \\ \hline \multirow{6}{*}{Advertisements} & Previous Ad Slots & Ad-slots in the previous feed request. \\ & Ad Gap & Number of posts between the last ad in the previous feed and the first ad in the current feed. \\ & Average Ad Load & Average number of ads per feed in last 3, 5 feed fetches \\ & Total Ad Impressions & Number of ad impressions for this user in the current session. \\ & IAB Category & Interactive Advertising Bureau (IAB) content taxonomy relating to the ad. \\ & Clicks \& Impressions & The user advertising impressions and clicks in several time windows. \\ \hline \hline \end{tabular} \end{table} Table 2. Type and description of some of the most important features used to represent contextual signals in our bandit setup.

Figure 6 illustrates the correlation between advertisement features and both the advertising objective and the user dissatisfaction signals. Alongside traditional counter features (impressions/clicks on an advertisement in various time windows) and IAB category features, real-time features such as the ad-slots displayed in the last feed fetch and the average ad-load in the current session are important; their significance is highlighted by their strong correlation with clicks. A higher number of ads (an ad-heavy feed) correlates positively with feed and session abandonment, but also with more advertising clicks. This makes the trade-off inherent to our problem apparent.

### Off-policy learning and evaluation

The research literature has conventionally focused on so-called _on-policy_ bandit approaches, where a policy is deployed and allowed to learn and update in real-time. Although the advantages of this approach are apparent, it should be clear that it brings significant challenges. Indeed, not only does it require significant engineering bandwidth to set up the proper infrastructure, but we would also essentially have no way of properly vetting the ad load balancing policy before deployment. For these reasons, _off-policy_ bandits are generally preferred in real-world applications (Beng et al., 2017; Chen et al., 2018). Off-policy learning is often called _counterfactual_ learning, as it aims to optimise a policy for a counterfactual estimate of the reward that policy would have collected. A _policy_ is a contextual probability distribution, often obtained through a parametric model. We will denote such a parametric policy with the shorthand notation \(\pi_{\theta}(a|x)\equiv\mathsf{P}(A=a|X=x;\Pi=\pi_{\theta})\). We wish to learn the parameters \(\theta^{\star}\) that maximise the estimated expected value we obtain under the policy \(\pi_{\theta^{\star}}\) on logged data \(\mathcal{D}\):

\[\theta^{\star}=\operatorname*{arg\,max}_{\theta\in\Theta}V(\pi_{\theta},\mathcal{D}). \tag{7}\]

Several families of estimators exist for \(V\).

_The Direct Method (DM)._ So-called _value_-based methods leverage supervised learning methods for reward estimation. In the direct method, the parameters \(\theta\) are used to learn a model for the reward an action yields, given a context: \(\widehat{r}(a|x)\approx\mathbb{E}[R|X=x,A=a]\). The reward model \(\widehat{r}(a|x)\) can then be used to estimate the policy value:

Footnote 1: These are often referred to as Q-learning in the reinforcement learning literature.

\[\widehat{V}_{\text{DM}}(\pi_{\theta},\mathcal{D})=\sum_{(x,a,r_{a},p_{a})\in\mathcal{D}}\sum_{a^{\prime}\in A}\pi_{\theta}(a^{\prime}|x)\cdot\widehat{r}(a^{\prime}|x). \tag{8}\]
From Eq. 8, we can observe that the optimal policy for a given reward regressor \(\widehat{r}_{\theta}(a|x)\) is the deterministic policy that chooses the action with the highest reward estimate: \(\operatorname*{arg\,max}_{a^{\prime}\in A}\widehat{r}(a^{\prime}|x)\). As this foregoes the need to sample from a policy, such methods are widely used in practice. The parameters \(\theta\) are then used for \(\widehat{r}\), as \(\pi\) reduces to the simple decision rule laid out above. Although value-based methods generally have low variance, they are biased estimators. Advances exist in the research literature to improve on such methods, typically by adopting some form of _pessimism_ in the reward model (Hendle et al., 2017; Chen et al., 2018).

_Inverse Propensity Weighting (IPW)._ Policy-based methods directly learn a parametric policy, foregoing the need to learn an explicit reward model. Inverse Propensity Weighting (IPW) is the workhorse behind this line of work (Chen et al., 2018). Using the logged propensities \(p_{a}\), we can obtain an unbiased estimator of the reward that a new policy \(\pi\) would have obtained on a logged dataset \(\mathcal{D}\):

\[\widehat{V}_{\text{IPW}}(\pi_{\theta},\mathcal{D})=\sum_{(x,a,r_{a},p_{a})\in\mathcal{D}}r_{a}\cdot\frac{\pi_{\theta}(a|x)}{p_{a}}. \tag{9}\]

Although it is easy to see that Equation 9 is unbiased, it typically leads to excessive variance. Extensions of this estimator typically aim to handle that directly: by capping (Hendle et al., 2017) or self-normalising (Zhou et al., 2018; Chen et al., 2018) the weights, or adding regularisation terms to the objective (Chen et al., 2018). Similarly to the value-based family, these extensions can be interpreted as mathematical _pessimism_ (Chen et al., 2018). Recent work has shown that _gradient boosting_ techniques and models (such as Gradient Boosted Decision Trees (GBDTs) (Kipper and Ghahramani, 2017)) are amenable to such policy learning objectives (Chen et al., 2018).

_Doubly Robust (DR)._ The value- and policy-based families both have their own advantages and characteristics. Dudik et al. introduce an estimator that leverages _both_ a reward model \(\widehat{r}\) _and_ the logging propensities \(p_{a}\): the Doubly Robust estimator (Dudik et al., 2017). It derives its name from a desirable property, in that it is unbiased if _either_ the logged propensities _or_ the reward model are.

Figure 5. Heatmap of Pearson's correlation coefficients between features and satisfaction signals.

Figure 6. Heatmap of Pearson's correlation coefficients between features and dissatisfaction / advertising signals.

\[\widehat{V}_{\text{DR}}(\pi_{\theta},\mathcal{D})=\sum_{(x,a,r_{a},p_{a})\in\mathcal{D}}\left((r_{a}-\widehat{r}(a|x))\cdot\frac{\pi_{\theta}(a|x)}{p_{a}}+\sum_{a^{\prime}\in A}\pi_{\theta}(a^{\prime}|x)\cdot\widehat{r}(a^{\prime}|x)\right) \tag{10}\]

DR extends DM with an IPW term that weights the error of the reward regressor. Extensions exist that learn \(\widehat{r}\) to minimise DR's variance (Dwork et al., 2018), optimise the DM-IPS balance (Zhu et al., 2019), or introduce further transformations that bound the expected error (Zhu et al., 2019). Other work has shown that DR can still be outperformed by either IPW or standalone DM (Zhu et al., 2019). In other words, there is _"no free lunch"_.
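As an illustrative reference, the three estimators in Eqs. 8-10 can be computed from logged data in a few lines of numpy. The array names and shapes below are assumptions made for the example (and empirical means are used in place of the sums above); this is a sketch, not the production implementation.

```python
import numpy as np

def estimate_policy_values(pi, actions, rewards, propensities, r_hat):
    """Off-policy value estimates for a target policy from logged bandit feedback.

    pi:           target policy probabilities per context, shape (N, |A|)
    actions:      logged actions (integer indices), shape (N,)
    rewards:      logged rewards, shape (N,)
    propensities: logging propensities of the logged actions, shape (N,)
    r_hat:        reward-model predictions for every action, shape (N, |A|)
    """
    idx = np.arange(len(actions))
    dm = (pi * r_hat).sum(axis=1).mean()                    # Eq. 8: Direct Method
    w = pi[idx, actions] / propensities                     # importance weights
    ipw = (w * rewards).mean()                              # Eq. 9: Inverse Propensity Weighting
    dr = (w * (rewards - r_hat[idx, actions])).mean() + dm  # Eq. 10: Doubly Robust
    return {"DM": dm, "IPW": ipw, "DR": dr}
```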
Experimentation In order to empirically validate the effectiveness of our proposed framework through experimentation, we would require stochastic logged data with user and advertising context, advertising positions with associated propensity values, as well as user feedback on both ads and non-ads content. To the best of our knowledge and at the time of writing, no such datasets are publicly available. Hence, we need to resort to proprietary datasets. We note that our proposed framework is general, and makes few assumptions about the nature of the problem setting. As a result, we believe that our introduced framework and results can extend to other platforms with similar or different feed models (Beng et al., 2019; Zhu et al., 2019). ### Policy Design To achieve a balanced trade-off between user satisfaction and ads objectives, we propose various policy designs, including both baseline and counterfactual-based approaches. #### 5.1.1. **Baseline Policies** We begin by defining a range of baseline policies, including heuristic-based and learned strategies to jointly optimize user satisfaction (SAT) and ads objectives. 1. **Optimizing User SAT**: The first policy we investigate optimises only user satisfaction by setting \(\beta=1\) in equation \(6\). Optimizing such a policy leads to no ads for every feed fetch, aligning with offline observations where ad-free slots yield highest user SAT metrics (Figure 2). 2. **Optimizing Ads Objective**: We solely optimize the ads objective by setting \(\beta=0\) in equation 6. While optimizing for the user SAT gives no ads at all, this approach maximize the ads with minimal user SAT. Our experiments consistently yield an ads position of "2 and 6", in line with offline ad reward patterns (Figure 2). 3. **Random Policy**: We move from single-objective optimization to policies considering both user SAT and ads objectives. We start with a random policy which selects an arm from the action space using a uniform random distribution. 4. **Static Policy**: We define static policy as a function of offset and _post_gap_ (post_gap is the number of non-ad posts between consecutive ads), wherein offset is the position of the ad on the very first feed fetch by the users after which ads are displayed with a fixed _post_gap_. For instance, a static policy of \((3,5)\) positions the first ad at "3", maintaining a consistent gap of 5 non-ad posts. 5. **User Fatigue Based Policy**: The baselines we have looked so far are either hard coded heuristics w.r.t. advertising _load_ and _positions_, or they optimize for single objective. We leverage a method based on the fatigue score (learned using GBDT model) (Zhu et al., 2019) to optimize both the user SAT and ads-objective. Let \(\phi_{u}\) represents the user's fatigue score, \(\theta\) represents the default number of ads shown to users in a feed on the platform, and \(\theta_{u}\) represents the updated number of ads shown to the user. Then, the fatigue score-based policy can be formulated as: \[\theta_{u}=\begin{cases}\theta+1,&\text{if }\phi_{u}<\alpha\\ \theta,&\text{if }\alpha<\phi_{u}<\beta\\ \theta-1,&\text{if }\phi_{u}>\beta\end{cases}\] where \(0<\alpha<\beta<1\) are the thresholds for low and high fatigue users because of the ads respectively. Users with fatigue scores because of ads below \(\alpha\) experience an increased ad load, while scores above \(\beta\) result in a reduced ad load. 
As this baseline optimises both user SAT and advertising objectives in a personalised manner, it is a much stronger baseline than (1)-(4). This is reflected in our empirical results. #### 5.1.2. **Policies based on counterfactual estimators** In addition to the aforementioned baselines, we optimize policies parameterised by Multi-Layer Perceptrons (MLPs) and Gradient-Boosted Decision Trees (GBDTs) using the unbiased estimators (IPW and DR, see Section 4.3) from logged data. We vary the trade-off parameter \(\beta\) in the reward function from Equation 6. Additionally, alongside these propensity-based methods, we train MLPs and GBDTs to predict the reward directly, known as the Direct Method. ### Offline Experiments The Open Bandit Pipeline library (Zhu et al., 2019) was used for training and evaluating these policies. For hyperparameter tuning of the supervised models, we performed a randomized grid search over various combinations and selected the parameters that yielded the best performance on the validation set. We report the optimal hyperparameters for our GBDT and MLP models in the supplementary material. Figure 7 illustrates the percentage loss in satisfaction (SAT) and ads objectives for a particular policy compared to the optimal policy described in Section 5.1.1 for each objective in isolation. The losses are calibrated on a scale of 0-100 for ads and 100-0 for no ads. The _offsets_ of the static policies are highlighted in the figure; all use a "post gap" \(=5\). From the plot, we observe that the baseline policies exhibit higher SAT and ads loss compared to the counterfactual policies, indicating their Pareto inefficiency. Among the baselines, the fatigue score-based policy performs best, exhibiting the lowest loss in both SAT and ads reward. This result highlights the potential of personalization in attaining better outcomes over static policies while simultaneously optimizing both objectives. Among the counterfactual-based policies, varying the \(\beta\) value (from Equation 6) from 0.7 to 0.9 decreases the user SAT loss and increases the ads objective loss. This suggests that \(\beta\) serves as a trade-off parameter between SAT and ads objectives, with all points lying on the same Pareto front. For \(\beta=0.8\), both GBDT and MLP models exhibit similar losses in both SAT and ads objectives. Additionally, we observed that policies trained with the DM objective were Pareto inefficient compared to policies trained with the IPW and DR objectives. The convergence of IPW and DR indicates the reliability of our propensity scores in estimating rewards. ### Deployment & Online A/B Experiment Based on the offline results, we conducted A/B tests for three policies as outlined in Table 3. Online experiments were conducted over a span of two weeks, involving approximately 5 million daily active users. At the start of the experiment, a uniform random policy was applied to 1% of randomly selected active users in the variant, with their data logged to retrain the offline-learned policies. For the remaining users, the learned policies were deployed. To gradually transition to the learned policies for all users, the percentage of users exposed to the random policy was progressively reduced over the course of several days. The re-trained model was stored in a cloud storage bucket, assigned an incremented version, and deployed to the service that loaded the updated model.
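A minimal sketch of this serving split (the exploration fraction follows the 1% described above; the learned-policy interface and logging structure are illustrative assumptions):

```python
import random

EXPLORE_FRACTION = 0.01   # share of traffic served by the uniform random logging policy
logged_data = []          # (context, action, propensity) tuples kept for off-policy retraining

def serve_feed_ads(context, action_space, learned_policy):
    """Pick an ad-slot arm for one feed fetch and log the propensity needed for retraining."""
    if random.random() < EXPLORE_FRACTION:
        action = random.choice(action_space)
        propensity = 1.0 / len(action_space)   # uniform logging propensity
    else:
        action = learned_policy(context)       # deterministic learned (GBDT) policy
        propensity = 1.0
    logged_data.append((context, action, propensity))
    return action
```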
The learned policy models were deployed in a Kubernetes cluster, with each pod having around 64 vCPUs, 240 GB of RAM, and NVIDIA T4 GPUs; horizontal auto-scaling was enabled based on the request rate and CPU utilization of the pods. The online p99 inference latency was approximately 25 milliseconds during peak traffic. All the variant policies used GBDTs to parameterise the learned policy, and the doubly robust estimator as their objective. The control group was shown a personalized policy that only considers the "fatigue score" to increase or lower the ad-load (the _Fatigue_ baseline in Figure 7) (Friedman et al., 2017). We tracked a number of user SAT and revenue metrics, including retention, time spent, video plays, ad impressions, and ad clicks. Variant-1, with \(\beta=0.7\), exhibited negative satisfaction metrics but demonstrated a high ads objective, indicating a trade-off that favored ads and was not suitable for the platform. In contrast, both variant-2 and variant-3 showed positive SAT and ads objectives, suggesting that optimizing for multiple objectives based on context enabled us to capture user heterogeneity and achieve gains in all objectives without adversely affecting others. When comparing variant-2 and variant-3, we observed that increasing the \(\beta\) value from 0.8 to 0.9 resulted in higher user satisfaction but a lower ads objective. The analysis of variant-2 provides valuable insights into the effectiveness of the policy. By examining the distribution of ad-slots based on ad-position and user features, we gained a deeper understanding of its performance. In terms of feed depth, we observed that for the first feed, the policy shifted nearly half of the ads to the \(4^{th}\) position, while distributing the remaining ads between positions "3" and "2". This adjustment, moving the first ad from position "3" to "4" in most cases, reduced the ad-load and ultimately led to an improvement in user satisfaction (SAT). For the next 3-4 feeds, the ad-load increased, compensating for the revenue loss in the first feed. Another interesting finding emerged when analyzing the policy's behavior across consecutive feeds. It exhibited an alternating pattern of high and low ad-loads. For instance, if the user had no ads in the last feed, the policy predominantly displayed the first ad at the \(2^{nd}\) position. Conversely, if the ad count in the last feed was 2, the ads were primarily shown at positions "5", "6", or not at all. However, when there was only 1 ad in the last feed, the policy displayed ads at various positions, indicating a mixed approach. Additionally, we examined the relationship between ad display and fatigue score. Notably, as the fatigue score increased, the policy reduced the frequency of ad displays and shifted towards lower ad-loads. Conversely, for users with very high fatigue scores (\(\geq 0.85\)), the policy displayed higher ad-loads. Further analysis revealed that users with high fatigue scores tended to have a higher churn rate, meaning they were more likely to discontinue using the platform. As a result, the policy optimized the ads objective by increasing ad exposure to these users, as their SAT improvement was minimal. Taking into account the insights gained from the A/B test and the analysis of variant-2, the policy was deployed in production, serving 100% of traffic across 180 million monthly active users. ## 6. Discussion & Future Work
In conclusion, we have presented the "_ad-load balancing_" problem, emphasizing the trade-off between user satisfaction and ads objectives, as well as user-level heterogeneity. Through the use of an off-policy learning framework and unbiased estimators, we have successfully learned effective policies to tackle this challenge. Our approach has resulted in improvements in both user satisfaction and ads revenue metrics for the platform. While linear scalarization helps decide the trade-off between user satisfaction and ads objectives, we envision future extensions of this work that explore more sophisticated scalarization functions for more fine-grained control over these objectives.
\begin{table} \begin{tabular}{l l l l l l l l l l l} \hline \hline & & & \multicolumn{5}{c|}{SAT Metrics} & \multicolumn{3}{c}{Ads Metrics} \\ Variant & \(\beta\) & Objective & D1 Retention & Time spent & Views & Engagements & Video play & Impressions & Clicks & Revenue \\ \hline variant-1 & 0.7 & \(DR\) & -0.01\({}^{*}\) & -0.01\({}^{*}\) & -0.1 & -0.24 & -0.07\({}^{*}\) & 1.14 & 1.91 & 1.65 \\ variant-2 & 0.8 & \(DR\) & 0.04\({}^{*}\) & 0.14 & 0.02\({}^{*}\) & -0.09\({}^{*}\) & 0.19 & 0.79 & 2.05 & 1.45 \\ variant-3 & 0.9 & \(DR\) & 0.08 & 0.22 & 0.15 & -0.02\({}^{*}\) & 0.31 & 0.28 & 0.52 & 0.2 \\ \hline \hline \end{tabular} \end{table} Table 3. Online A/B: % change in user SAT and ads metrics w.r.t. control. All results are statistically significant (two-tailed t-test at p < 0.05 after Bonferroni correction) except for those marked by \({}^{*}\).
Figure 7. Offline evaluation, comparing % loss in satisfaction and advertising objectives w.r.t. baseline policies.
2309.16277
Collisionless Shock Acceleration of protons in a plasma slab produced in a gas jet by the collision of two laser-driven hydrodynamic shockwaves
We recently proposed a new technique of plasma tailoring by laser-driven hydrodynamic shockwaves generated on both sides of a gas jet [J.-R. Marquès et al., Phys. Plasmas 28, 023103 (2021)]. In the continuation of this numerical work, we studied experimentally the influence of the tailoring on proton acceleration driven by a high-intensity picosecond-laser, in three cases: without tailoring, by tailoring only the entrance side of the ps-laser, or both sides of the gas jet. Without tailoring the acceleration is transverse to the laser axis, with a low-energy exponential spectrum, produced by Coulomb explosion. When the front side of the gas jet is tailored, a forward acceleration appears, that is significantly enhanced when both the front and back sides of the plasma are tailored. This forward acceleration produces higher energy protons, with a peaked spectrum, and is in good agreement with the mechanism of Collisionless Shock Acceleration (CSA). The spatio-temporal evolution of the plasma profile was characterized by optical shadowgraphy of a probe beam. The refraction and absorption of this beam was simulated by post-processing 3D hydrodynamic simulations of the plasma tailoring. Comparison with the experimental results allowed to estimate the thickness and near-critical density of the plasma slab produced by tailoring both sides of the gas jet. These parameters are in good agreement with those required for CSA.
J. -R. Marquès, L. Lancia, P. Loiseau, P. Forestier-Colleoni, M. Tarisien, E. Atukpor, V. Bagnoud, C. Brabetz, F. Consoli, J. Domange, F. Hannachi, P. Nicolaï, M. Salvadori, B. Zielbauer
2023-09-28T09:19:53Z
http://arxiv.org/abs/2309.16277v1
Collisionless Shock Acceleration of protons in a plasma slab produced in a gas jet by the collision of two laser-driven hydrodynamic shockwaves. ###### Abstract We recently proposed a new technique of plasma tailoring by laser-driven hydrodynamic shockwaves generated on both sides of a gas jet [J.-R. Marques _et al._, Phys. Plasmas **28**, 023103 (2021)]. In the continuation of this numerical work, we studied experimentally the influence of the tailoring on proton acceleration driven by a high-intensity picosecond-laser, in three cases: without tailoring, by tailoring only the entrance side of the ps-laser, or both sides of the gas jet. Without tailoring the acceleration is transverse to the laser axis, with a low-energy exponential spectrum, produced by Coulomb explosion. When the front side of the gas jet is tailored, a forward acceleration appears, that is significantly enhanced when both the front and back sides of the plasma are tailored. This forward acceleration produces higher energy protons, with a peaked spectrum, and is in good agreement with the mechanism of Collisionless Shock Acceleration (CSA). The spatio-temporal evolution of the plasma profile was characterized by optical shadowgraphy of a probe beam. The refraction and absorption of this beam was simulated by post-processing 3D hydrodynamic simulations of the plasma tailoring. Comparison with the experimental results allowed to estimate the thickness and near-critical density of the plasma slab produced by tailoring both sides of the gas jet. These parameters are in good agreement with those required for CSA. ## I Introduction Collisionless shocks are ubiquitous in astrophysical environments [1; 2] such as the Earth's bow shock, solar flares, interplanetary traveling shocks, or supernova remnants (SNRs) [3; 4]. They are believed to be responsible for non-thermal particles [5; 6] and gamma ray bursts [7]. Using scaling laws [8], the physics of collisionless magnetized shocks in SNRs has been investigated experimentally using laser-produced plasmas [9; 10]. Their capability to accelerate particles has been demonstrated [11]. The formation of a collisionless electrostatic shock requires the creation of a localized region of higher pressure within a plasma with electron temperature \(T_{e}\) much larger than the ion temperature \(T_{i}\). As this region of high pressure (defined as the downstream region) expands, it can drive a shock wave into the surrounding lower-pressure plasma (defined as the upstream region). The shock wave front can accelerate upstream ions by reflecting them to twice the shock velocity if the shock potential is larger than the kinetic energy of the incoming ions in the shock-rest frame. Since they are efficient accelerators of particles, there has been a growing interest in exploring laser-driven shocks as compact particle accelerators [12; 13; 14; 15]. Energetic ion beams from compact laser-produced plasmas have potential applications in many fields of science and medicine, such as particle physics [16], fast ignition of fusion targets [17; 18; 19], material science [20], proton radiography [21; 22; 23], radiotherapy [24; 25; 26; 27], and isotope generation for medical applications [28; 29]. 
Several schemes for laser-driven ion acceleration have been proposed: Target Normal Sheath Acceleration [30; 31], Radiation Pressure Acceleration [32; 33; 34; 35], Breakout Afterburner Acceleration [36; 37], that use over-dense targets (solid density foils, liquid jets), or Magnetic Vortex Acceleration [16; 38; 39] and Collisionless Shock Acceleration (CSA), that use near-critical density (NCD) targets (exploded foils, gas jets). The production, at high repetition rate, of high-energy ion beams with a narrow energy spread and a low divergence still remains a challenge, and CSA could potentially offer these properties. Moreover, NCD plasmas can be generated using gas jets, offering several advantages such as to avoid the constraints of target replacement and realignment between the consecutive shots, to avoid the target debris that usually spoil the surrounding optics, or to allow the production of pure proton beams (impurity free, using H\({}_{2}\) gas). Proton acceleration by CSA in a hydrogen gas jet was first demonstrated [15; 40] using CO\({}_{2}\) lasers, the low critical density (\(n_{c}\approx 10^{19}\) cm\({}^{-3}\)) associated with their long wavelength (\(\lambda_{0}=10\)\(\mu\)m) allowing to exploit regular-pressure, mm-scale gas jets. A proton beam of \(\sim\)MeV energy was produced with narrow energy spread (\(\sigma\sim 4\%\)) and low normalized emittance (\(<8\) nm.rad). Until today, the maximum proton energy produced by CSA in a gas jet [41] is 20 MeV. It was obtained from a CO\({}_{2}\) laser composed of several picosecond (ps) pulses, the early pulses serving to ionize the gas jet and to steepen the plasma density profile on the front side of the target by the radiation pressure of the laser, so that the following pulses interacted with a step-like density profile. Despite this promising proton acceleration, the pulse train, inherent to the laser system, varied from shot to shot, making the interactions challenging to reproduce. Instabilities such as laser filamentation and hosing from the leading pulses result in variable density profiles, which in turn lead to fluctuations in the resultant ion beam. A solution to generate a steep plasma density profile in a more stable manner is to tailor the near-critical gas target by a hydrodynamic shockwave (HSW) excited before the arrival of the main ps-pulse. This HSW can be excited by a low energy laser prepulse that can be focused inside the gas jet [42], or on a solid target placed at the entrance side of the gas jet [43]. In both cases it has been demonstrated that for long density profiles (\(>40\)\(\mu\)m), ion beams with broadband energy spectrum were produced, while for shorter plasma lengths (\(<20\)\(\mu\)m), quasimonoenergetic acceleration of protons was observed. Experiments using more widespread solid state ps-lasers (\(\lambda_{0}\approx 1\)\(\mu\)m, \(n_{c}\approx 10^{21}\) cm\({}^{-3}\)) have been performed using high-pressure narrow gas jets tailored by a HSW [44; 45], or exploded \(\mu\)m-size solid foils [46; 47]. Compared to the proof-of-principle experiments with CO\({}_{2}\) lasers, the number of accelerated protons was substantially higher (\(\sim 10^{4}\times\)) thanks to the larger \(n_{c}\) and vector potential of the laser field (\(a_{0}\sim 4\) to 25, versus \(\sim 1\) on the CO\({}_{2}\) experiments). Proton beams with peaks at energies up to 5.5 MeV from tailored gas jets and 54 MeV from exploded foils were obtained. 
Despite the large differences in terms of laser or target types, this set of experiments showed that the shape and maximum energy of the ion distributions strongly depend on the profile and peak value of the plasma density. Ion beams with peaked energy distribution and similar velocity of ion species with different charge-to-mass ratios, consistent with CSA, were observed only for plasmas with a steep density gradient at the laser entrance side, and of NCD. To obtain a narrow energy spread by CSA [12; 13; 14] it is crucial to have a uniform shock velocity and ion reflection, which implies a uniform electron temperature profile, only achieved by a quick recirculation of the heated electrons due to the space-charge fields at the front and at the back of the target. Therefore, the plasma width \(L\) should be limited, which for a moderate Mach number shocks (\(M\gtrsim 1\)), reads \(L_{opt}\sim\lambda_{0}(m_{i}/m_{e})^{1/2}\). Considering a hydrogen plasma and \(\lambda_{0}=1\)\(\mu\)m yields \(L_{opt}\sim 40\)\(\mu\)m, and therefore relatively sharp gradients on both sides of the target. To generate such thin NCD plasmas, we recently proposed a new tailoring method [48; 49] based on a narrow gas jet coupled with two parallel nanosecond (ns) heating-lasers that propagate perpendicular to the main ps-laser. These ns-beams are focused on both sides of the gas jet. The sudden and localized ionization and heating occuring at their foci generates two cylindrical HSWs. The expansion of each wave has two effects: i) the part that expands towards vacuum expels the wing of the gas jet, ii) while the part propagating towards the center of the jet compresses the plasma. The collision at the jet center of the two waves generates a thin plasma slab with steep gradients. Our hydrodynamic and ion Fokker-Planck simulations indicated that with present high-density gas jets [50], this method allows the production of a thin (\(\gtrsim 10\)\(\mu\)m) plasma slab, with an adjustable density up to \(2\times 10^{22}\) cm\({}^{-3}\) (\(20\)\(n_{c}\) for a \(1\)\(\mu\)m laser). This tailoring scheme can be implemented at high repetition rate, in a debris free environment, and with different types of gas. Such thin-NCD targets attracted a lot of attention recently not only for use in CSA or other ion acceleration mechanisms [51], but also for brilliant gamma-ray and electron-positron pair production [52; 53; 54]. In this paper we present experimental results on proton acceleration in a H\({}_{2}\) gas jet laser-tailored by the technique recently proposed in our previous numerical work [48; 49]. We study the influence of plasma tailoring on the energy and angular distributions of the accelerated protons, for three cases: without tailoring, by tailoring only the entrance side of the ps-beam, or both sides of the gas jet. We observed the transition from a transverse (to the ps-beam direction) acceleration with a low energy broad spectrum produced by Coulomb explosion, to a forward acceleration with a higher energy peaked spectrum, in good agreement with CSA. The spatio-temporal evolution of the plasma profile was characterized with optical shadowgraphy of a probe laser beam. These shadowgraphy were simulated by a post-processing (ray-tracing) of three-dimensional hydrodynamic simulations of the plasma tailoring, and compared with the experimental observations to estimate the size and density of the resulting plasma slab. 
## II Experimental setup The work was performed on two laser facilities, PHELIX at GSI (Germany) and PICO2000 at LULI (France), both delivering laser pulses at a fundamental wavelength \(\lambda_{0}\sim 1\)\(\mu\)m. The plasma was generated from a narrow, high density, supersonic jet of hydrogen gas expelled from a 250 \(\mu\)m output diameter nozzle. The two parallel ns-beams for plasma tailoring were focused 300 \(\mu\)m from each side of the jet center. The ps-laser beam driving the proton acceleration was propagating at \(90^{\circ}\) from the ns-beams and focused at the center of the gas jet. This experimental arrangement is sketched in Ref. [48] (Fig. 1). The output of the gas nozzle was placed 400 \(\mu\)m above the propagation plane of the ps and ns laser beams. The gas density profile in this plane was \(n_{atom}(r)\) (cm\({}^{-3}\) bar\({}^{-1}\)) \(\sim 6.2\times 10^{17}[e^{-(r/186)^{2}}+0.63e^{-(r/132)}]\). The backing pressure was adjustable up to 1000 bars, allowing, without laser tailoring, to reach \(n_{e}\sim n_{c}\) (the critical plasma density, \(n_{c}({\rm{cm}^{-3}})\sim 1.11\times 10^{21}\lambda_{0}^{-2}(\mu{\rm{m}})\)). The ps-beam was focused to a Gaussian-like focal spot of 10 \(\mu\)m FWHM (in intensity). The temporal profile was Gaussian with a Full Width at Half Maximum (FWHM) of 0.5 ps at GSI and 1 ps at LULI, and a contrast on the ns scale (Amplified Spontaneous Emission, ASE) of \(10^{11}\) and \(10^{7}\) respectively. The peak intensity was respectively \(8\times 10^{19}\) and \(2\times 10^{19}\) W/cm\({}^{2}\), corresponding to a normalized vector potential of \(a_{0}\sim 8\) and 4. The Rayleigh length of the ps-beam was \(z_{R}\gtrsim 250\)\(\mu\)m, of the order of or greater than the width of the gas jet. The ns-beams had a square-like temporal profile of \(\sim 1\) ns duration. Each ns-beam was focused to a spot of FWHM \(\sim\) 67 \(\mu\)m (GSI) and 42 \(\mu\)m (LULI) containing up to 15 J and 6 J respectively. In both experiments the hydrodynamic evolution of the plasma was monitored by two-dimensional shadowgraphy and interferometry of a probe laser beam of 0.5 \(\mu\)m wavelength propagating at \(90^{\circ}\) from the ps-beam, parallel to the ns-beams. This probe beam was collimated, with a diameter covering the entire interaction region. At GSI it was generated by the leakage of a dielectric mirror in the ps-beam path, sent to a delay-line and frequency doubled, leading to a jitter-free probe of 0.5 ps duration. Its transverse intensity (shadowgraphy) and phase (interferometry) profiles at the exit of the plasma were recorded on 16-bit CCD cameras. The probe beam at LULI had a Gaussian temporal profile of FWHM \(\sim\) 7 ns. The shadowgraphy and interferometry at a given time (snap-shot) were recorded on Gated Optical Intensifiers (GOI) coupled to 16-bit CCD cameras, with an integration window of 100 ps. The shadowgraphy profile along the ps-beam axis was imaged on the entrance slit of a streak camera. The long duration of the probe beam allowed us to follow the propagation of the shockwaves, from their creation to their collision, and thus to check and adjust the synchronization of the HSW collision with the arrival of the ps-beam. The shadowgraphy and interferometry diagnostics used the same collecting lens, with an angular aperture of \(\pm\) 3.5\({}^{\circ}\) at LULI and \(\pm\) 2\({}^{\circ}\) at GSI, leading to a spatial resolution better than 10 \(\mu\)m.
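As a quick consistency check of these numbers (a sketch; full ionization of the hydrogen is assumed, so that \(n_{e}\) equals the atomic density \(n_{atom}\), and a wavelength of 1.053 \(\mu\)m, typical of Nd:glass systems, is assumed for the ps-beam):

```python
import math

wavelength_um = 1.053                      # assumed ps-beam wavelength (~1 um)
n_c = 1.11e21 / wavelength_um**2           # critical density, cm^-3

def n_atom(r_um, pressure_bar):
    """Atomic density (cm^-3) in the interaction plane, from the measured jet profile."""
    return pressure_bar * 6.2e17 * (math.exp(-(r_um / 186) ** 2)
                                    + 0.63 * math.exp(-(r_um / 132)))

n_e_800 = n_atom(0.0, 800)                 # peak electron density at 800 bars backing pressure
print(f"n_c ~ {n_c:.2e} cm^-3, n_e(800 bars) ~ {n_e_800:.2e} cm^-3 "
      f"= {n_e_800 / n_c:.2f} n_c")        # ~0.8 n_c, as quoted at the start of Sec. III
```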
The focal spots of the ns-beams were recorded on a 12-bit CCD, allowing them to be adjusted on both sides of the gas jet before the shot, and their positions, shapes and transmitted energies to be checked on shot. The position of the ps-beam was checked before the shot on an 8-bit CCD. In addition to these "side-view" diagnostics (shadowgraphy, interferometry, focal spots), the interaction region was also imaged from the "bottom" on a 12-bit CCD. The second harmonic light (0.5 \(\mu\)m) emitted by the ps-beam along its propagation in the plasma was recorded on the "side-view" and "bottom-view" diagnostics, making it possible to identify the regions where the density gradients or the laser intensity are high, or where beam refraction or self-focusing could occur. The energy spectra of the protons accelerated from the H\({}_{2}\) plasma were recorded on image plates (Fuji BAS-MS type) coupled to magnetic spectrometers positioned at \(0^{\circ}\), \(30^{\circ}\) and \(70^{\circ}\) from the ps-laser axis on the LULI experiment, and at \(25^{\circ}\), \(37^{\circ}\) and \(53^{\circ}\) on the GSI experiment. ## III Interaction without plasma tailoring Figure 1 shows two-dimensional (2D) space-resolved shadowgraphies (snapshot) of the probe beam from the plasma generated by the interaction of the ps-beam with the high density H\({}_{2}\) gas jet, at different backing pressures, times, and ps-beams (LULI and GSI). At 800 bars (Fig. 1 a) and c)) the expected peak density on the laser axis, assuming full ionization, is \(n_{e}=8\times 10^{20}\) cm\({}^{-3}\) (0.8 \(n_{c}\)). Fig. 1a) shows that the plasma at \(t=0.2\)\(\pm\) 0.1 ns after the ps-pulse arrival is much larger than the laser focal volume and extends over the whole gas jet volume, on a millimeter scale, along the laser axis (\(x\)) as well as transversely (\(z\)). Shadowgraphies of very similar size and shape are observed at 450 bars (Fig. 1 b)), on the LULI experiment, as well as on the GSI experiment (Fig. 1 d)) for which the probe pulse is shorter (0.5 ps) and the pump pulse has a \(10^{4}\) times better ASE temporal contrast. The millimeter scale of these shadowgraphies is significantly larger than the FWHM of the gas jet, \(\sim\) 270 \(\mu\)m at \(z\) = 0, which will be discussed in section VI. The prompt and localized energy deposition of the ps-pulse along its propagation leads to strong ponderomotive and thermal pressures which transversely expel the plasma from the laser axis, as observed in Fig. 1 c), 1.7 ns after the ps-pulse. Time-resolved (streak camera) shadowgraphies along one dimension in space, the ps-beam axis (\(x\), for \(z=0\)), are presented in Fig. 2 for two gas jet pressures. Despite the factor two in pressure, these shadowgraphies are very similar. For \(t<0\) the probe beam is undisturbed, indicating that the ASE contrast is high enough to avoid ionization of the gas before the ps-pulse arrival. For 0 \(<\)\(t<\) 1 ns, the shadowgraphy extends symmetrically from the focal position up to \(x\sim\pm\) 400 \(\mu\)m, as observed on the snapshot in Fig. 1a) and b). At these edges, for the 800 bars case (Fig. 2-a)), the plasma density is \(\sim 2\times 10^{19}\) cm\({}^{-3}\) (0.02 \(n_{c}\)). The bright line observed in Fig. 2a) at \(t=0\), near the gas jet center (-200 \(<x<\) +100 \(\mu\)m), is second harmonic emission produced by the ps-pulse, usually occurring in regions of strong laser intensity and/or density gradients. 
For 0 \(<t<\) 1.5 ns, the longitudinal expansion of the right edge of the plasma (\(x>\) 500 \(\mu\)m) can be observed. At \(t\gtrsim\) 1 ns the plasma expulsion induced by the ponderomotive and thermal pressures has lowered the density on the laser axis, and parts of probe beam start to be transmitted again. As in Fig. 1c), at \(t\sim\) 1.7 ns the plasma has been expelled from the laser axis and the probe beam is not disturbed anymore. Proton spectra from the LULI experiment, measured at 0\({}^{\circ}\), 30\({}^{\circ}\), and 70\({}^{\circ}\) from the laser axis, are presented in Fig. 3 for the same laser-plasma parameters as Fig. 1 and 2. By directly focusing the laser on the gas jet, without plasma tailoring (no ns-beams), the acceleration along the laser axis is very weak (if not null), is larger at 30\({}^{\circ}\), and much more efficient at 70\({}^{\circ}\), almost perpendicularly to the laser axis. The spectrum has two components: an exponential part at low energy, followed by a plateau, extending up to 2 MeV at 70\({}^{\circ}\).Such an energy distribution could be the result of "Coulomb explosion": ions are accelerated by electrostatic forces caused by charge separation induced by the laser ponderomotive pressure in the underdense part of the plasma [55; 56]. The maximum energy that can be gained is the relativistic ponderomotive energy \(U_{p}=m_{e}c^{2}(\gamma-1)\), where \(\gamma=(1+a_{0}^{2}/2)^{1/2}\) is the relativistic factor of the electron quiver motion in the laser field. For the LULI experiment \(a_{0}\sim\) 4, leading to \(U_{p}\sim\) 1 MeV, too low to explain the high energy part of the spectra. In addition, the ponderomotive energy depends only on the laser intensity, while the maximum energies observed at 0\({}^{\circ}\), 30\({}^{\circ}\), and 70\({}^{\circ}\) seem to increase with the plasma density. The plateau structure, the maximum energy value and its increase with the plasma density could be the signature of an acceleration by multiple collisionless shocks formed at high density [12; 57]. ## IV Interaction with a front face tailored plasma The 1D time-resolved (streak camera) and 2D space-resolved (GOI snapshot) shadowgraphy of the plasma tailored by one ns-beam, at the entrance side of the ps-beam, is presented in Fig. 4-a) and -b) respectively, for a gas pressure of 200 bars. The ns-pulse was sent in the gas jet \(\sim\) 2.7 ns before the ps-pulse. It was propagating along the \(y\) axis (perpendicular to the image plane) and focused at \(x\) = -300 \(\mu\)m, \(z\) = 0 (red circle in Fig. 4-b)). The hydrodynamic evolution (on the ps-beam axis) of the plasma can be followed in Fig. 4-a). For \(t<\) -2.5 ns the probe beam is fully transmitted, indicating no pre-plasma. At \(t\sim\) -2.7 ns the ns-beam promptly ionizes and heats the edge of the gas jet. A hydrodynamic shockwave is generated, which starts to push the plasma out of the ns-beam axis. At \(-2.7<t<-2.3\) ns one can observe the plasma motion towards vacuum (\(x<\) -300 \(\mu\)m) and towards the jet center (\(x>\) -300 \(\mu\)m). At later times, the plasma moving towards vacuum becomes too teneous to induce shadowgraphy, while the HSW propagating towards the jet center stays dense and can easily be followed. In contrast with the ps-pulse interaction that ionizes very quickly the whole gas jet volume (Fig. 
1), the ionization by the ns-beam concerns only its focal region, the rest of the jet staying in the gaseous state until the arrival of the HSW (the right side of the jet stays undisturbed).
Figure 1: _Probe beam shadowgraphies from the plasma generated by the interaction of the ps-beam with the high density \(H_{2}\) gas jet, at different backing pressures, different times from the ps-pulse arrival, and for the LULI (a, b, c) or the GSI (d) ps-beam. On top of the images (z \(<\) -400 \(\mu\)m) is the shadow of the nozzle. The ps-beam propagates at \(z=0\) from left to right along the \(x\)-axis (horizontal white dashed line), and is focused (red dashed lines) at the center of the jet (x = 0, y=0)._
Figure 3: _Proton spectra from the LULI experiment, without plasma tailoring, measured at 0\({}^{\circ}\), 30\({}^{\circ}\), and 70\({}^{\circ}\) from the ps-beam axis, and for different gas jet backing pressures. The dashed lines are the noise level of each spectrum. Same laser-plasma parameters as in Fig. 1 and 2._
The shockwave reaches the jet center at \(t=0\). At that time the ps-pulse arrives, crosses the plasma created by the HSW, and then interacts with the high-density non-ionized part of the gas jet (at \(x=0\)), which induces strong second harmonic emission that saturates the camera. The 2D plasma extension at that time can be observed in Fig. 4-b). The focal spot position of the ns-beam is indicated by the red circle. The time integration window of the GOI was \(\sim\) 120 ps, centered on the ps-pulse arrival, so that the snapshot shows the plasma created by both the ns-beam and the ps-beam, as well as the strong second harmonic emission of the ps-pulse. Partly hidden by this flash, a croissant-like shape can be distinguished, which is the plasma generated by the expansion of the HSW. This initially cylindrical wave produces a circular shadowgraphy that has evolved to a croissant-like shape because \(\sim\) half of the HSW expelled the plasma towards vacuum (left on the image), while the part propagating towards the center of the jet ionized and compressed this high density region. The horizontal extension of the croissant is larger on the bottom part of the image (\(z>0\)) because of the conical shape of the gas jet (larger further from the nozzle exit). The blue dashed-lines in Fig. 4-b) show the edges of the shadowgraphy obtained for the case without tailoring (ps-beam only, Fig. 1-a), -b), -d)). On the ns-beam side the plasma edge has been pushed towards the jet center. Proton spectra associated with the shadowgraphy of Fig. 4 are presented in Fig. 5. Tailoring the gas jet on the input side of the ps-beam improved the forward acceleration. The maximum proton energies at \(0^{\circ}\) and \(30^{\circ}\) both increased, while the transverse (\(70^{\circ}\)) acceleration became less efficient. In addition to the thermal exponential, the spectrum at \(0^{\circ}\) (Fig. 5) has a peaked component centered near 1.35 MeV. This indicates the appearance of another acceleration mechanism: Collisionless Shock Acceleration. No acceleration (proton) was measured without the ps-beam. ## V Interaction with a front and back face tailored plasma An example of shadowgraphies of the plasma tailored on the two opposite sides is presented in Fig. 6. The ns-beams arrive 2.5 ns before the ps-pulse. At this time we can see in Fig. 6-a) the prompt generation of the plasmas on both sides of the jet center, at the focal positions of the ns-beams (\(x=\pm\) 300 \(\mu\)m), followed by the propagation of the HSWs.
Only the dense parts converging towards the jet center are visible, the parts expanding towards vacuum being too tenuous to generate a significant shadowgraphy. The central part of the jet stays in the gaseous state at least until \(t\sim\) -1.4 ns, when the shadowgraphies of each HSW start to overlap. The waves collide at the jet center at \(t\sim\) -100 ps, leading to a shadowgraphy with a minimum width of \(\lesssim\) 300 \(\mu\)m, slightly before the ps-pulse arrival (\(t=0\)). This width is much smaller than the one observed in Fig. 2 without plasma tailoring (\(\sim\) 800 \(\mu\)m). As observed in Fig. 2-a) and 5-a), the ps-pulse generates second harmonic light at its arrival in the plasma. After that time, the collision of the HSWs has ended and the plasma starts to expand. At \(t\sim\) 0.6 ns the strong ponderomotive and thermal pressures driven by the ps-pulse have expelled the plasma from the laser axis, and the probe beam is transmitted again. The 2D space-resolved shadowgraphy of the HSWs just before the ps-beam arrival is presented in Fig. 6-b). As already observed in Fig. 4-b), the initially cylindrical expansion of each HSW from the axis of its driving ns-beam (red circles) has led to a croissant-like plasma of high density. The blue dashed-lines in the figure indicate the edges of the shadowgraphy measured without plasma tailoring (Fig. 1). It shows that tailoring the gas jet from the two opposite sides made it possible to i) significantly reduce the plasma density at the edge of the gas jet and, ii) generate a high-density narrow (\(<\) 300 \(\mu\)m) plasma slab along the ps-beam axis.
Figure 5: _Proton spectra from the LULI experiment, associated with the shadowgraphy of Fig. 4, for a plasma tailored at the entrance side of the ps-beam._
Figure 4: _a) Time-resolved along the ps-beam axis (streak camera) and b) 2D space-resolved (GOI snapshot) shadowgraphy of the plasma tailored by one ns-beam, at the entrance side of the ps-beam, for a gas jet backing pressure of 200 bars. The ns-pulse propagates along the y axis (perpendicular to the image plane) and focuses at \(x=\) -300 \(\mu\)m, \(z=\) 0 (red circle). It arrives in the gas jet \(\sim\) 2.7 ns before the ps-pulse. The ps-beam propagates at \(z=0\) from left to right along the \(x\)-axis (horizontal white dashed line in b)), and is focused (red dashed lines in b)) at the center of the jet (x = 0, y=0). The time integration window of the snapshot in b) was \(\sim\) 120 ps, centered on the ps-pulse arrival._
This is also illustrated in Fig. 7 at a slightly higher pressure (300 bars), at two different times: before the collision of the HSWs (Fig. 7-a), and just af
The maximum energy and the shape of the spectra at 70\({}^{\circ}\) is quite similar to the ones obtained without tailoring (Fig. 3). However, despite the shot-to-shot fluctuations, the tendency observed when tailoring only the entrance side (Fig. 5) is enhanced when tailoring the two sides: the acceleration in the forward direction is improved and the energy distribution becomes more peaked. At 30\({}^{\circ}\) the portion of the spectra induced by CSA moved towards higher energies and is now clearly separated from the low energy thermal part. The maximum energy is more than doubled, reaching 3.2 MeV. The number of protons accelerated above 1 MeV is also significantly increased at 0\({}^{\circ}\) and 30\({}^{\circ}\). The comparison with the single HSW case (Fig. 5) demonstrates that the acceleration improvement is not only the result of a better propagation of the ps-pulse at the plasma entrance. Compared to the case without tailoring at 800 bars (Fig. 3), the number of protons accelerated at 70\({}^{\circ}\) is similar or larger despite a lower backing pressure, 200-300 bars. This indicates that the collision of the HSWs has allowed to compensate the initially lower plasma density (the expected total compression factor [48] is \(\gtrsim\) 8), and enabled to reach a value at least equivalent to the one at 800 bars, close to \(n_{c}\). It not only increases the number of protons available in the plasma, but helps to increase the ps-pulse absorption [58; 59], thus the velocity of the collisionless shockwave and the maximum proton energies. Let us note that the lower backing pressure used for the tailoring cases was chosen to avoid the ns-beam refraction at the jet entrance and a better plasma expulsion by the HSW, while preserving the high density at the jet center thanks to the collision of the HSWs [48]. Very similar results were obtained on the GSI experiment. A 2D shadowgraphy at the collision of the two HSWs is presented in Fig. 9-a). The croissant-like shape of the HSWs is more square. This is due to the square shape of the wings of the ns-beam focal spots. Despite this difference, the sharpness and the width (\(\sim\) 200 \(\mu\)m) of the plasma slab are very similar to the LULI experiment. The proton spectrum measured at 53\({}^{\circ}\) and associated to the interaction of the ps-pulse with this plasma slab is presented in Fig. 9-b). Like on the LULI experiment, the spectrum had a low energy exponential distribution without tailoring (\(<\) 2 MeV), evolving to a peaked distribution of higher energy, up to 5.5 MeV, when two-sided tailoring is applied. Compared to the LULI experiment, the laser intensity was a factor \(\sim\) 4 larger on GSI (\(a_{0}\sim\) 8 instead of 4), leading as expected to a factor \(\sim\) 2 increase on the maximum energy of the protons. Figure 8: _Proton spectra from 4 laser shots on the LULI experiment, in the case of a plasma tailored on both sides (as in Fig. 6), and a gas jet backing pressure of 300 bars on shot 1 and 200 bars on shots 2 to 4. The dashed lines are the noise level of each spectrum._ Figure 6: _a) Time-resolved along the ps-beam axis (streak camera) and b) 2D space-resolved (GOI snapshot) shadowgraphy of the plasma tailored on the two opposite sides, for a gas jet backing pressure of 200 bars. The energy in each ns-beam is 4.5 J. The time integration window of the snapshot in b) was \(\sim\) 120 ps, ending just before the ps-pulse arrival. The blue dashed-lines indicate the edges of the shadowgraphy measured without plasma tailoring (Fig. 
1)._ Figure 7: _2D space-resolved (GOI snapshot) shadowgraphies of the plasma tailored on the two opposite sides, for a gas jet backing pressure of 300 bars, at two different times: a) before the collision of the HSWs, and b) just after their collision and the arrival of the ps-pulse. The blue dashed-lines indicate the edges of the shadowgraphy measured without plasma tailoring (Fig. 1)._ Table 1 summarizes the number of accelerated protons and the energy of the proton beam for the LULI spectra of Fig. 3, 5 and 8 associated with the three tailoring scenarios. Despite the relatively strong shot-to-shot fluctuations, it demonstrates that tailoring the plasma from the two opposite sides improves the acceleration towards the laser direction, in terms of number of protons as well as of energy of the proton beam. The optimum acceleration in the forward direction also corresponds to the minimum transverse acceleration (last line in Table 1). Only one shot is presented for the one-side case. Nevertheless, several shots with the one-side tailoring were performed on the GSI experiment. They confirm the evolution of the acceleration observed on the LULI experiment between zero-, one- and two-sided tailoring. The shot-to-shot fluctuations observed on both experiments result mainly from variations on the plasma profile at the ps-pulse arrival, which can significantly affect the quality (uniformity, angle, Mach number) of the collisionless shockwave. These variations originate mainly from fluctuations in the dynamics of the HSWs, which are sensitive to i) the energy in the wing of the ns-beam focal spot (see following section) and ii) the exact position of the focal spot on the edge of the gas jet density profile. For example, from the expression of \(n_{atom}(r)\) given in section II, the density is a factor 3.6 different between \(r=250\) and \(r=350\)\(\mu\)m. This induces an order of magnitude difference in the inverse bremsstrahlung absorption (\(K_{BI}\propto n_{e}^{2}\)) and thus in the amplitude and velocity of each HSW, leading to variations on the time of collision as well as on the position, density, width and shape of the final plasma slab encountered by the ps-pulse. Despite these fluctuations, all the improvements observed in proton spectra are in good agreement with laser-plasma conditions becoming closer to the main criteria for efficient CSA [13; 14], which are i) a near-critical plasma density to get an efficient laser absorption of the ps-pulse and so a strong plasma heating (MeV), ii) a narrow plasma length to favor uniform heating, leading to a uniform shock velocity and thus the production of monoenergetic protons, iii) an exponentially decreasing output density gradient to get a uniform sheath field which preserves the monoenergetic distribution as protons are reflected by the shock. The proton energy expected from CSA [13; 14] is of the order of \(E_{k}[\mathrm{MeV}]\sim 2M^{2}T_{e}[\mathrm{MeV}]\), where \(M\) is the Mach number of the collisionless wave, \(T_{e}[\mathrm{MeV}]\sim 0.078\frac{\eta}{\alpha}a_{0}\tau(\mathrm{ps})/L(\mathrm{mm})\), \(\tau\) the duration of the laser pulse, \(\eta\) its absorption efficiency, and \(\alpha\) is 3/2 for non-relativistic plasmas (\(T_{e}\ll m_{e}c^{2}\)) and 3 in the relativistic case. The absorption efficiency depends on \(a_{0}\) and \(n_{e}\). In our conditions, it is expected [14; 49; 58; 59] to be between 0.25 and 0.5.
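Plugging the experimental parameters into these expressions (a quick sketch; the product \(a_{0}\tau\) is the same on both facilities, and the \(\alpha\), \(\eta\) and \(L\) values are those used in the estimate that follows):

```python
def csa_estimate(a0, tau_ps, L_mm, eta, alpha=2.2, mach=1.6):
    """Electron temperature and reflected-proton energy expected from CSA."""
    T_e = 0.078 * (eta / alpha) * a0 * tau_ps / L_mm   # MeV
    E_k = 2 * mach**2 * T_e                            # MeV
    return T_e, E_k

# LULI-like parameters (a0 ~ 4, tau ~ 1 ps); GSI (a0 ~ 8, tau ~ 0.5 ps) gives the
# same a0*tau product, hence the same T_e range.
for eta, L_mm in [(0.25, 0.1), (0.5, 0.05)]:
    T_e, E_k = csa_estimate(a0=4, tau_ps=1.0, L_mm=L_mm, eta=eta)
    print(f"eta = {eta}, L = {L_mm} mm -> T_e ~ {T_e:.2f} MeV, E_k ~ {E_k:.1f} MeV")
# -> T_e ~ 0.35 to 1.4 MeV and E_k ~ 1.8 to 7.3 MeV, consistent (up to rounding of T_e)
#    with the 1.8-7.2 MeV range quoted below
```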
Taking \(\alpha=2.2\) and assuming \(0.05<L(\mathrm{mm})<0.1\) (smaller than the shadowgraphy size, see following section), \(T_{e}\) is expected to be of the order of 0.35 to 1.4 MeV on both the LULI and the GSI experiments (same \(a_{0}\tau\)). Taking for \(M\) the minimum value required for the shock to reflect protons [14; 60], \(M=M_{cr}\sim\)1.6 leads to expected proton energies of \(1.8<E_{k}(\mathrm{MeV})<7.2\), in good agreement with our measurements. As discussed in section I, the maximum proton energy from CSA driven by a \(\lambda_{0}=1\)\(\mu\)m laser is expected [13; 14] to occur for a plasma length close to \(L_{opt}\sim\) 40 \(\mu\)m. The width of the plasma observed in figures 6, 7 and 9-a) appears to be \(\sim\) 200 to 300 \(\mu\)m. However, the width of the shadowgraphy could significantly overestimate that of the density profile. For example, the gas jet FWHM is \(\sim\) 270 \(\mu\)m, while on figures 1 and 2 the full width of the shadowgraphy is \(\sim\) 600 to 800 \(\mu\)m, representing the very low wing of the density profile, 0.08 to 0.025 of the density at the jet center. The plasma slab at the collision of the HSWs can thus be much narrower than its apparent size on the shadowgraphy. \begin{table} \begin{tabular}{|c||c|c|c||c|c|c||c|} \cline{2-8} \multicolumn{1}{c||}{} & \multicolumn{3}{c||}{**dN/d\(\Omega\)**} & \multicolumn{3}{c||}{**dE/d\(\Omega\)**} & \multicolumn{1}{c||}{**P(bars)**} \\ \hline \multicolumn{1}{|c||}{**Angle (\({}^{\circ}\))**} & \(\mathbf{0}\) & \(\mathbf{30}\) & \(\mathbf{70}\) & \(\mathbf{0}\) & \(\mathbf{30}\) & \(\mathbf{70}\) & / \\ \hline \hline **No tailoring** (Fig. 3) & 0 & 4.1 & 28 & 0 & 4 & 36 & 800 \\ \cline{2-8} & 0 & 0 & 181 & 0 & 0 & 220 & 800 \\ \cline{2-8} & 0 & 0 & 15 & 0 & 0 & 18 & 600 \\ \cline{2-8} & 0 & 0 & 73 & 0 & 0 & 91 & 450 \\ \cline{2-8} & 0 & 0 & 97 & 0 & 0 & 109 & 400 \\ \hline \hline **One side** (Fig. 5) & 2.4 & 6.4 & 5.6 & 2.9 & 7.1 & 6.6 & 200 \\ \hline \hline **Two sides** (Fig. 8) & 0.44 & 5.2 & 131 & 0.25 & 9 & 146 & 300 \\ \cline{2-8} & 1.7 & 19 & 162 & 1.9 & 32 & 203 & 200 \\ \cline{2-8} & 1.1 & 33 & 516 & 1.6 & 64 & 645 & 200 \\ \cline{2-8} & 65 & 5.7 & 99 & 70 & 7.5 & 114 & 200 \\ \hline \end{tabular} \end{table} Table 1: _Number of protons and total energy of the spectra for \(E_{k}>\) 1 MeV, measured at 0\({}^{\circ}\), 30\({}^{\circ}\) or 70\({}^{\circ}\), and for the different configurations of plasma tailoring. Obtained by integrating the spectra of Fig. 3, 5 and 8 between \(E_{k}\) = 1 and 3.5 MeV. These values are expressed in \(10^{8}\) sr\({}^{-1}\) for \(dN/d\Omega\), and \(10^{8}\) MeV sr\({}^{-1}\) for \(dE/d\Omega\). Acceleration in the forward direction is highlighted with colored cells._ Figure 9: _Typical result of the GSI experiment: a) 2D space-resolved (GOI snapshot) shadowgraphy of the plasma tailored on both sides, b) associated proton spectrum measured at 53\({}^{\circ}\). The gas jet backing pressure was 100 bars. The shadowgraphy is 40 ps before the ps-pulse arrival._ ## VI Plasma Tailoring - Comparison with Numerical Simulations The CSA strongly depends on the plasma length \(L\). Our previous numerical study [48] indicated that the width of the density profile at the collision of the two HSWs was below or of the order of tens of microns while, as previously said, the dark part of the shadowgraphy images in Fig. 6 and 9-a) has a width of the order of 200-300 \(\mu\)m.
This dark part originates from light rays that are so refracted by the density gradient that they are not collected by the lens of the imaging system. This width thus depends on the aperture of the diagnostic (\(3.5^{\circ}\) on the LULI experiment, \(2^{\circ}\) on GSI) and can be significantly larger than the FWHM of the density profile. To estimate this difference we performed 3D hydrodynamic simulations, and post-processed the probe beam propagation (refraction and inverse bremsstrahlung absorption) to simulate the shadowgraphy diagnostic. As in our previous study [48], we used the radiation-hydrodynamics code TROLL [61], but in a three-dimensional geometry, which allowed us to use the experimentally measured 3D density profile of the gas jet as input. We also used the experimental temporal profile of the ns-beams. Figure 10-a) shows the temporal evolution of the density profile along the ps-beam axis. As already detailed in Ref. [48], each of the two ns-beams, focused 300 \(\mu\)m from both sides of the gas jet center (\(x=0\)), generates a HSW. Only the part of the cylindrical HSW that converges towards the gas jet center leads to a high density jump, while the part traveling towards vacuum is not visible because of its quick spread and decrease in amplitude. The initial maximum plasma density (at \(x=0\)) is \(n_{e}^{0}=2\times 10^{20}\) cm\({}^{-3}\) (\(n_{c}/5\)), as for the shot presented in Fig. 6-a). The spatial intensity profile of the ns-beams is a Gaussian of 25 \(\mu\)m radius. Fig. 10-b) is the simulation of the shadowgraphy diagnostic. Compared to Fig. 6-a), after 2.5 ns the HSWs still have not reached the gas jet center. Also, at the HSW creation, the experimental shadowgraphy has a large "horizontal" shape while the simulated one is much more localized (narrow). In the experiment, the laser intensity profile was in fact composed not only of the narrow high intensity part, but also of a lower intensity wing: \(I(r)=3\times 10^{14}[e^{-(r/25)^{2}}+0.0172e^{-(r/110)^{2}}]\) W/cm\({}^{2}\). Despite its lower intensity, this large wing contained \(\sim\) 25 % of the laser energy, enough to contribute to the HSW excitation. Fig. 11 shows the evolution of the density profile and of the associated shadowgraphy when the wing of the ns-beams is taken into account, and for the same laser energy (4.5 J in each beam) as the shot in Fig. 6. The simulated shadowgraphy is very similar to the experimental one (Fig. 6-a)), reproducing the initial "horizontal" part followed by two large traces that converge when the two HSWs collide after \(\sim\) 2.3 ns, at the jet center. The effect of the wing of the focal spot on the HSW evolution is also illustrated in Fig. 12, which compares the profiles of a) the laser focal spot, b) the electron temperature, c) the electron density for the case without (left) or with (right) the wing. Even if the intensity is higher without the wing, because the energy deposition is more localized, the HSWs start with a smaller diameter and further from the jet center, preventing them from colliding. With the wing, the HSWs are generated with a larger diameter, enabling them to reach the jet center faster. A point to underline is that the density digging downstream of the HSW seems nevertheless more efficient without the wing (more localized energy deposition), as can also be observed by comparing Fig. 10-a) and 11-a). Fig. 12-c) also shows that the simulations reproduce very well the "croissant-like" shape observed experimentally (Fig. 4, 6, 7 and 9).
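A quick check of that 25 % figure (a sketch; for a 2D Gaussian spot, each component's encircled energy scales as amplitude times width squared):

```python
# I(r) = 3e14 * [ exp(-(r/25)^2) + 0.0172 * exp(-(r/110)^2) ]  W/cm^2
core = 1.0 * 25**2       # relative energy in the narrow, high-intensity part
wing = 0.0172 * 110**2   # relative energy in the low-intensity wing
print(f"wing energy fraction ~ {wing / (core + wing):.0%}")   # ~25%
```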
The influence of the wing is also observed in Fig. 9-a), where the square shape of the GSI ns-beams is retrieved in the shape of the HSWs. Let us note that, since hydrogen is very quickly ionized, full ionization was imposed at the beginning of the simulation and, to avoid an early gas jet expansion, an initial temperature of 300 K was set. This has no effect on the plasma evolution induced by the ns-beams. However, in the regions not affected by these beams or by the HSWs they produce, where the medium should still be in its gas state, its refraction index as well as the inverse bremsstrahlung absorption of the low-intensity probe beam are strongly overestimated. To avoid this artifact, in the post-processing of the shadowgraphy diagnostics the ionization state was corrected using the Saha ionization equation.
Figure 10: _a): Temporal evolution of the density profile along the ps-beam axis from 3D TROLL simulations. The two ns-beams are focused 300 \(\mu\)m from both sides of the gas jet center. The initial maximum plasma density (at \(x\) = 0) is \(n_{e}^{0}=2\times 10^{20}\) cm\({}^{-3}\) (\(n_{c}/5\)). The spatial intensity profile of the ns-beams is \(I(r)=5.4\times 10^{14}e^{-(r/25)^{2}}\), and the energy in each beam is 6 J. b) Post-processing of the shadowgraphy diagnostic from the TROLL outputs. The cyan line-outs are density and shadowgraphy profiles at \(t\) = 2.5 ns._
Figure 11: _Same as Fig. 10 but for a spatial profile of the ns-beams with a low-intensity wing: \(I(r)=3\times 10^{14}[e^{-(r/25)^{2}}+0.0172e^{-(r/110)^{2}}]\) W/cm\({}^{2}\), and 4.5 J in each beam. The cyan line-outs are density and shadowgraphy profiles at \(t\) = 2.3 ns._
The density map presented in Fig. 10-a) includes this correction. Without this correction, the shadowgraphy at the early times is completely dark at the jet center, while the gas is still not ionized and should fully transmit the probe beam (as experimentally observed and mentioned in sections IV and V). The normalized line-outs in Fig. 11-a) and -b) show respectively the density and shadowgraphy profiles at the collision time (\(t=2.3\) ns). The width of the shadowgraphy is \(\sim 150\)\(\mu\)m, much larger than the width of the plasma slab, \(\lesssim 20\)\(\mu\)m. Applying this same factor of difference to the experimental shadowgraphies would indicate that the plasma slab at the collision time had a thickness of the order of 30 to 40 \(\mu\)m, similar to the predicted optimal value \(L_{opt}\) for CSA (section I). From Fig. 11-a), the peak density at the collision time is \(\sim 3n_{c}\). By conservation of the number of particles, if the experimental plasma slab is of the order of, or more than, 2 times thicker than in the simulations, the peak density should be lower by this same factor, of the order of or less than 1.5 \(n_{c}\). ## VII Discussion Even if the thickness of the plasma slab is close to optimum, its density also needs to be adjusted to an optimum value which depends on the laser intensity. To drive a strong collisionless shock and accelerate ions at high energies, a high laser absorption is required (see section V), which implies \(n_{e}\sim\gamma n_{c}\). With \(a_{0}\sim 4\) at LULI and 8 at GSI, this leads to \(n_{e}/n_{c}\sim 3\) and 5.7 respectively. Our previous numerical study [49] also showed that the maximum proton energy drops quickly below this optimum value. The density of the plasma slab was thus probably too low for an optimum proton acceleration.
An easy way to produce plasma slabs of higher density is to increase the backing pressure of the gas jet, as long as the residual density in the wing of the slab does not disturb the propagation of the ps laser beam. The efficiency of the acceleration depends on the intensity of the ps-laser, but also on the size of its focal spot [14]. The shock width (which is close to the laser spot size \(W_{0}\)) has to be large enough such that the plasma, expanding transversely at the sound speed, does not leave the shock region before the acceleration occurs. Assuming an isothermal expansion, this condition yields \(W_{0}\gtrsim L/M\). Taking for \(M\) the minimum value required for the shock to reflect protons, \(M_{cr}=1.6\), and \(L=40\)\(\mu\)m leads to \(W_{0}\gtrsim 25\)\(\mu\)m. The focal spot on the LULI and GSI experiments (FWHM \(\sim 10\)\(\mu\)m) was thus probably too small for an optimum acceleration. With a larger focal spot the shape of the shock will be flatter, favoring forward acceleration at higher energy. The larger surface might also increase the number of accelerated protons. The shot-to-shot fluctuations of the ns-beams, in terms of position, spatial distribution and intensity, generate spatial and temporal fluctuations in the HSWs, and thus in the resulting plasma slab. Coupled to the temporal jitter between the ns and ps pulses (between 100 and 200 ps in our experiment), significant differences in laser-plasma coupling and heating can be induced, which is thought to be at the origin of the shot-to-shot fluctuations of the proton beam. It also indicates that the conditions for efficient CSA are still not fully reached. Hosing instability [62] could also modify the direction of propagation of the ps-pulse, ultimately leading to fluctuations in the direction of the collisionless shock and of the proton beam. This affects more significantly the detection in the forward direction, mainly sensitive to CSA. The clearly peaked spectra observed with the two-sided tailoring indicate that, even if the plasma slab is relatively narrow, the density gradient at its backside is not very sharp, avoiding TNSA, and presents a "slowly" decreasing profile with a small and constant sheath electric field in favor of monoenergetic acceleration [13]. ## VIII Conclusions We recently proposed a new technique of plasma tailoring by laser-driven hydrodynamic shockwaves generated on both sides of a gas jet. In the continuation of this numerical work, we studied experimentally the influence of the tailoring on proton acceleration driven by a high-intensity ps-laser, in three cases: without tailoring, by tailoring only the entrance side of the ps-laser, or both sides of the gas jet. Without tailoring the acceleration is transverse to the laser axis, with a low energy exponential spectrum, produced by Coulomb explosion. When the front side of the gas jet is tailored, a forward acceleration appears, that is significantly enhanced when both the front and back sides of the plasma are tailored. This forward acceleration produces higher energy protons (up to 5.5 MeV), with a peaked spectrum, and is in good agreement with the mechanism of Collisionless Shock Acceleration. The total number of accelerated protons is also enhanced by this two-sided tailoring. Figure 12: _Spatial profiles of a) the laser focal spot, b) the electron temperature, c) the electron density for the case without the wing in the intensity profile (left, Fig. 10) or with (right, Fig. 11).
The spatio-temporal evolution of the plasma profile was characterized by optical shadowgraphy of a probe beam. The refraction and absorption of this beam were simulated by post-processing 3D hydrodynamic simulations of the plasma tailoring. The dynamics of the hydrodynamic shockwaves were well reproduced only if the low-intensity wing in the focal spot of the tailoring beams was taken into account. Comparing the simulated shadowgraphies with the experimental ones allowed us to estimate the thickness (\(\sim\) 30-40 \(\mu\)m) and density (\(\sim\) 1.5 \(n_{c}\)) of the plasma slab produced by tailoring both sides of the gas jet. These values are close to those required to trigger CSA. However, the shot-to-shot fluctuations of the forward proton beam indicate that the set of criteria for an optimum CSA was still not reached. Improving the focal spot quality of the tailoring ns-beams, and increasing the backing pressure of the gas jet as well as the focal spot size of the ps-beam, should make it possible to reach a more stable CSA, generating a proton beam with a larger number of protons, at higher energy, and with a lower spectral width and emittance.

###### Acknowledgements.

The authors would like to thank the LULI staff and the GSI-PHELIX staff for their contribution. This work has received funding from the Federation de recherche PLAS@PAR. Results presented here are partially based on the experiment P189, which was performed at the PHELIX infrastructure at GSI Helmholtzzentrum fuer Schwerionenforschung, Darmstadt (Germany) in the context of FAIR Phase-0. The research leading to the PHELIX-GSI results has received funding from the European Union's Horizon 2020 research and innovation program under Grant Agreement No. 871124 Laserlab-Europe, and by Grant ANR-17-CE30-0026-Pinnacle from Agence Nationale de la Recherche.

## Conflict of interest

The authors have no conflicts to disclose.

## Data availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.
2302.14712
Generating Accurate Virtual Examples For Lifelong Machine Learning
Lifelong machine learning (LML) is an area of machine learning research concerned with human-like persistent and cumulative nature of learning. LML system's objective is consolidating new information into an existing machine learning model without catastrophically disrupting the prior information. Our research addresses this LML retention problem for creating a knowledge consolidation network through task rehearsal without retaining the prior task's training examples. We discovered that the training data reconstruction error from a trained Restricted Boltzmann Machine can be successfully used to generate accurate virtual examples from the reconstructed set of a uniform random set of examples given to the trained model. We also defined a measure for comparing the probability distributions of two datasets given to a trained network model based on their reconstruction mean square errors.
Sazia Mahfuz
2023-02-28T16:23:18Z
http://arxiv.org/abs/2302.14712v1
# Generating Accurate Virtual Examples For Lifelong Machine Learning

###### Abstract

Lifelong machine learning (LML) is an area of machine learning research concerned with human-like persistent and cumulative nature of learning. LML system's objective is consolidating new information into an existing machine learning model without catastrophically disrupting the prior information. Our research addresses this LML retention problem for creating a knowledge consolidation network through task rehearsal without retaining the prior task's training examples. We discovered that the training data reconstruction error from a trained Restricted Boltzmann Machine can be successfully used to generate accurate virtual examples from the reconstructed set of a uniform random set of examples given to the trained model. We also defined a measure for comparing the probability distributions of two datasets given to a trained network model based on their reconstruction mean square errors.

Keywords: artificial intelligence, machine learning, lifelong machine learning, Restricted Boltzmann Machine

## 1 Introduction and Background

Humans learn new knowledge as they grow older while retaining prior knowledge. Similarly, LML systems can retain prior tasks' knowledge over a long time while integrating, or consolidating, new tasks' knowledge periodically [1]. Just like living beings, _consolidation_ enables the integration of new information into the existing learning system. The main challenge in consolidation is _catastrophic forgetting_ and overcoming the _stability-plasticity dilemma_. The process of retaining the old information and yet being able to integrate the new information is known as the _stability-plasticity dilemma_. A neural network learning mechanism capable of stability and plasticity can rehearse examples of prior knowledge to maintain functional stability while slowly changing its representation to accommodate new knowledge [2]. _Catastrophic forgetting_ can be defined as the disruption or loss of the prior training information while integrating new information into a trained model [3]. The challenge is to reduce the effect of catastrophic forgetting, which can be done by rehearsing a subset of the old information, forcing the learning system to retain the structure of the old information [3]. One approach is to store a set of examples from the training set for each task. But the space complexity of this approach grows linearly with the number of tasks, each time lengthening the training time for a new task. An alternate approach is to use the concept of sweep pseudorehearsal discussed by Robins [4]: examples are created by passing randomly created input vectors through the learning system, the generated outputs are recorded for those particular input vectors, and these pairs are randomly included in the training session of the new task. If, from these reconstructed examples, we select only those that adhere to the probability distribution of the training data, then those selected examples are referred to as virtual examples (VEs) [5] in our research. The focus of this research is to develop a method to generate VEs of prior tasks from an existing neural network model such that the examples adhere to the probability distribution of those prior tasks. Our approach was to investigate feasible approaches for generating accurate VEs, and then evaluate the selected methods based on the VEs' adherence to the prior task distribution.
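Sweep pseudorehearsal, as referred to above, is easy to state concretely. The snippet below is a minimal, generic sketch of the idea (not the paper's actual networks or tasks): random inputs are pushed through the network trained on the prior task, the recorded input-output pairs stand in for the discarded prior-task examples, and they are mixed into the training of a consolidated network for the new task. All task mappings and hyperparameters here are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Network trained on a prior task (illustrative mapping: y = sum of the inputs).
X_old = rng.uniform(0, 1, size=(500, 4))
prior_net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
prior_net.fit(X_old, X_old.sum(axis=1))

# Sweep pseudorehearsal: random inputs plus the trained network's own outputs
# stand in for the (discarded) prior-task training examples.
X_pseudo = rng.uniform(0, 1, size=(200, 4))
y_pseudo = prior_net.predict(X_pseudo)

# New task (illustrative mapping: y = product of the inputs), rehearsed together
# with the pseudo-items so the consolidated network retains the prior structure.
X_new = rng.uniform(0, 1, size=(500, 4))
y_new = X_new.prod(axis=1)
consolidated_net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
consolidated_net.fit(np.vstack([X_new, X_pseudo]), np.concatenate([y_new, y_pseudo]))
```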
The research aimed to generate VEs adhering to the input variables' distributions.

## 2 Related Work

Research by Kirkpatrick et al. [6] presented a novel algorithm, elastic weight consolidation (EWC), which decreases weight plasticity and thereby avoids the catastrophic forgetting of old training information during the integration of new information. He et al. [7] proposed a variant of the backpropagation algorithm, "conceptor-aided backprop", where conceptors were used to protect the gradients of prior trained tasks from degradation. Compared with the above approaches, our method used Robins' pseudorehearsal approach to handle catastrophic forgetting.

## 3 Generating VEs Based on a Trained Restricted Boltzmann Machine (RBM) Reconstruction Error

An accurately trained Restricted Boltzmann Machine (RBM) reconstructs the training data with a low error. This observation led us to investigate whether, after one oscillation (that is, simply passing the data through the trained model and recording the reconstruction) of a uniform random set of examples fed into the model, the examples that are closer to the training data distribution are reconstructed more accurately than the other examples. This finding suggested that we can select the VEs based on the reconstruction error. Because not all of the examples generated from the uniform random set adhere to the training data distribution after one oscillation, an approach was taken to select only the examples that satisfy a tolerance level. The Mean Squared Error (MSE) of the training data reconstruction was used as the initial tolerance level. If the sum of squared errors between a uniform random input example and its corresponding reconstruction fell below the defined tolerance value, then the example was considered a VE.

**Selecting the tolerance level** For one-, two-, and four-dimensional input data, the tolerance level was selected using the MSE between the training data and its corresponding reconstruction.

**Success Criterion** To evaluate how well the virtual example distribution adheres to the training data distribution, that is, to evaluate the accuracy of the VEs, we defined a measure using the reconstruction error from a trained model, called the _Autoencoder-based Divergence Measure_. We considered an unsupervised autoencoder approach to measure the difference between two probability distributions. The idea was that if our model had been accurately trained, we could measure the relative degree of similarity between an example set's and the original training set's probability distributions. We defined the measure, called the Autoencoder-based Divergence Measure (ADM), as follows: Let \(MSE_{TRN}\) = the MSE of the training data on the RBM model and \(MSE_{TST}\) = the MSE of the test data given to the trained RBM model; then

\[ADM=\frac{MSE_{TST}}{MSE_{TRN}} \tag{1}\]

We verified that \(0<ADM\leq 1\) signifies the same probability distribution as the training data; \(1<ADM<2\) signifies a similar or partial subspace of the training data; \(ADM\geq 2\) signifies an increasingly different probability distribution from the training data.

**Experiment - Four-dimensional Data:** The four-dimensional synthetic input data had 1000 examples. The input data was selected in such a way that there are various numbers of distinct regions in each of the dimensions within the range of 0 to 1.
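The selection rule and the ADM described above can be written compactly. The following minimal sketch assumes a scikit-learn `BernoulliRBM` (the paper does not state which RBM implementation was used); the one-oscillation reconstruction, the MSE tolerance, and the ADM follow the description above, but the function names, the random-set size, and the per-example error normalization are illustrative assumptions.

```python
import numpy as np
from scipy.special import expit  # logistic sigmoid
from sklearn.neural_network import BernoulliRBM

def reconstruct(rbm, X):
    """One 'oscillation': visible -> hidden probabilities -> visible probabilities."""
    h = expit(X @ rbm.components_.T + rbm.intercept_hidden_)
    return expit(h @ rbm.components_ + rbm.intercept_visible_)

def select_virtual_examples(rbm, X_train, n_random=5000, rng=None):
    """Select VEs from uniform random inputs whose reconstruction error is below
    the tolerance (the training-set reconstruction MSE), and report the ADM."""
    rng = np.random.default_rng(rng)
    # Tolerance: MSE between the training data and its reconstruction.
    mse_trn = np.mean((X_train - reconstruct(rbm, X_train)) ** 2)
    # One oscillation of a uniform random example set.
    X_rand = rng.uniform(0.0, 1.0, size=(n_random, X_train.shape[1]))
    X_rec = reconstruct(rbm, X_rand)
    # Per-example reconstruction error; the paper compares a per-example sum of
    # squared errors against the training MSE, so this normalization is an assumption.
    err = np.mean((X_rand - X_rec) ** 2, axis=1)
    ves = X_rec[err < mse_trn]                      # VEs come from the reconstructed set
    # Autoencoder-based Divergence Measure: ADM = MSE of the VE set / MSE of the training set.
    mse_ve = np.mean((ves - reconstruct(rbm, ves)) ** 2) if len(ves) else np.inf
    return ves, mse_ve / mse_trn

# Illustrative usage on synthetic four-dimensional data in [0, 1] with 1000 examples.
X = np.random.default_rng(0).uniform(0.2, 0.8, size=(1000, 4))
rbm = BernoulliRBM(n_components=8, learning_rate=0.05, n_iter=50, random_state=0).fit(X)
virtual_examples, adm = select_virtual_examples(rbm, X, n_random=5000, rng=1)
print(len(virtual_examples), adm)  # ADM <= 1 suggests the same distribution as X
```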
For this experiment, the tolerance level measured by MSE was 0.000395, which was the MSE between the training data and its corresponding reconstruction after one oscillation. Thus 80 virtual examples were selected from the reconstructed set of 5000 uniform random examples passed to the model trained on the training examples \(x_{1(1T)}\), \(x_{2(1T)}\), \(x_{3(1T)}\), \(x_{4(1T)}\). We calculated the value of ADM as ADM = \(\frac{MSE_{VE}}{MSE_{TRN}}\) = \(\frac{0.000228}{0.000395}\) = 0.5772. This value signified that the virtual examples are from the same probability distribution as the training data. In Figure 1, the blue probability density functions represent the training data. The red probability density functions represent the virtual examples selected from the reconstructed set of 5000 uniform random examples after one oscillation using the trained model.

**Discussion:** The following discusses the merits of the measure based on the experimental results. For one-, two-, and four-dimensional data, the tolerance level measured by MSE was successful in generating virtual examples, as validated by the results of the autoencoder-based divergence measure. The success criterion confirmed that the selected examples adhere to the training data distribution. So we can consider these examples as VEs for our purpose, and we can use this measure for generating accurate VEs.

## 4 Conclusion

The important findings of this research are as follows:

1. We investigated approaches for generating VEs from the reconstruction of a uniform random set of examples passed through a generative RBM model. Empirically, we found that we can generate accurate VEs, validated by the autoencoder-based divergence measure, from uniform random examples based on the reconstruction error measured by MSE.
2. We defined and successfully verified a metric called the autoencoder-based divergence measure for comparing the probability distributions of two given datasets using the reconstruction MSE from the trained RBM model.

## Acknowledgement

I would like to express my sincerest gratitude towards my supervisor Dr. Daniel L. Silver for his consistent, constant support and patience throughout my graduate studies and his valuable comments on writing this summary.
2309.10606
A Novel Hybrid Algorithm for Optimized Solutions in Ocean Renewable Energy Industry: Enhancing Power Take-Off Parameters and Site Selection Procedure of Wave Energy Converters
Ocean renewable energy, particularly wave energy, has emerged as a pivotal component for diversifying the global energy portfolio, reducing dependence on fossil fuels, and mitigating climate change impacts. This study delves into the optimization of power take-off (PTO) parameters and the site selection process for an offshore oscillating surge wave energy converter (OSWEC). However, the intrinsic dynamics of these interactions, coupled with the multi-modal nature of the optimization landscape, make this a daunting challenge. Addressing this, we introduce the novel Hill Climb - Explorative Gray Wolf Optimizer (HC-EGWO). This new methodology blends a local search method with a global optimizer, incorporating dynamic control over exploration and exploitation rates. This balance paves the way for an enhanced exploration of the solution space, ensuring the identification of superior-quality solutions. Further anchoring our approach, a feasibility landscape analysis based on linear water wave theory assumptions and the flap's maximum angular motion is conducted. This ensures the optimized OSWEC consistently operates within safety and efficiency parameters. Our findings hold significant promise for the development of more streamlined OSWEC power take-off systems. They provide insights for selecting the prime offshore site, optimizing power output, and bolstering the overall adoption of ocean renewable energy sources. Impressively, by employing the HC-EGWO method, we achieved an upswing of up to 3.31% in power output compared to other methods. This substantial increment underscores the efficacy of our proposed optimization approach. Conclusively, the outcomes offer invaluable knowledge for deploying OSWECs in the South Caspian Sea, where unique environmental conditions intersect with considerable energy potential.
Hossein Mehdipour, Erfan Amini, Seyed Taghi Naeeni, Mehdi Neshat
2023-09-19T13:30:17Z
http://arxiv.org/abs/2309.10606v1
A Novel Hybrid Algorithm for Optimized Solutions in Ocean Renewable Energy Industry: Enhancing Power Take-Off Parameters and Site Selection Procedure of Wave Energy Converters ###### Abstract Ocean renewable energy, particularly wave energy, has emerged as a pivotal component for diversifying the global energy portfolio, reducing dependence on fossil fuels, and mitigating climate change impacts. This study delves into the optimization of power take-off (PTO) parameters and the site selection process for an offshore oscillating surge wave energy converter (OSWEC). The intricate interplay between waves and the energy conversion device results in nonlinear and transient responses in wave energy converters (WECs). Hence, there's a pressing need to optimize PTO parameters, ensuring that the device maximizes power absorption while evading potential damage or instability. However, the intrinsic dynamics of these interactions, coupled with the multi-modal nature of the optimization landscape, make this a daunting challenge. Addressing this, we introduce the novel Hill Climb - Explorative Gray Wolf Optimizer (HC-EGWO). This new methodology blends a local search method with a global optimizer, incorporating dynamic control over exploration and exploitation rates. This balance paves the way for an enhanced exploration of the solution space, ensuring the identification of superior-quality solutions. Further anchoring our approach, a feasibility landscape analysis based on linear water wave theory assumptions and the flap's maximum angular motion is conducted. This ensures the optimized OSWEC consistently operates within safety and efficiency parameters. Our findings hold significant promise for the development of more streamlined OSWEC power take-off systems. They provide insights for selecting the prime offshore site, optimizing power output, and bolstering the overall adoption of ocean renewable energy sources. Impressively, by employing the HC-EGWO method, we achieved an upswing of up to 3.31% in power output compared to other methods. This substantial increment underscores the efficacy of our proposed optimization approach. Conclusively, the outcomes offer invaluable knowledge for deploying OSWECs in the South Caspian Sea, where unique environmental conditions intersect with considerable energy potential. keywords: Ocean renewable energy, oscillating surge wave energy converter, power take-off optimization, Site selection, Meta-heuristic, Swarm intelligence algorithms ## 1 Introduction The importance of ocean renewable energy cannot be overstated, as it offers a promising means to diversify the global energy portfolio, reduce dependence on fossil fuels, and mitigate the impacts of climate change [1]. Due to the vastness and untapped potential of the world's oceans, harnessing their power for sustainable electricity generation is critical for meeting the rising energy demands of an ever-growing global population. Furthermore, ocean energy resources such as tidal, wave, and ocean thermal energy conversion (OTEC) exhibit lower variability and higher predictability compared to other renewable sources like wind and solar [2]. However, it is the ocean wave energy sector that has witnessed the most substantial advancements in recent years, with numerous ocean wave energy converters (WECs) under development and testing [3, 4]. These devices capture and transform the kinetic and potential energy present in ocean waves into electricity [5]. 
Moreover, their environmental and economic feasibility have also been investigated. There are several methods for WEC classification. The first one is based on location. The WEC can be located at the shoreline or offshore. Offshore devices can harvest greater amounts of energy. The following criterion is how the device operates; it can be divided into submerged pressure differential, oscillating water column, overtopping device, or oscillating surge wave energy converter [6], which is the most popular one [7]. In this research, an offshore OSWEC device is investigated. The vigorous surge motion, cost-effective installation, and minimal environmental impact have made OSWECs a preferable choice[8]. Numerous studies have investigated the potential of Oscillating Surge Wave Energy Converters (OSWECs) as a viable wave energy conversion technology; for instance, Ghasemipour et al. inspected nearshore regions of the southern coast of Iran for the feasibility of such devices [9]. Folley et al. have studied the effects of water depth [10] and device width [11] on the performance of OSWECs. The effects of the device's flap's width [12], length [13], orientation [14], shape, weight, and thickness [15] on the converter's performance have also been studied. It's been shown that the increase in the OSWEC's PTO has positive effects on power and flap's motion amplitude up to a certain point [16]. The wave characteristics like frequency [11] and period [17] can also influence the OSWEC's performance. Moreover, Lin et al. showed that, on average, the viscous loss of the fluid decreases the capture factor by 20% [18]. Almost all the numerical simulations in recent years have been based on Computational Fluid Dynamics (CFD). On one end of the spectrum of these methods is the Linear Potential Flow theory models [19], which are fast but not very accurate. On the other hand, some studies [20, 21] have used Reynolds Averaged Navier-Stokes Equations (RANS) CFD solvers for WEC analysis and simulation, which are computationally complex and slow but offer higher fidelity [22]. Recently, the WEC-Sim module, designed for MATLAB and LPF-based, has been extensively used for WEC simulations [23]. Different types of converters like point absorbers [24], OSWECs [25, 26, 27, 28], FOSWECs [29, 30, 31], and even novel WEC types [32] have been inspected using WEC-Sim. WEC-Sim is an open-source simulation tool designed for WEC numerical simulations [33]. Much research has been utilizing WEC-Sim to investigate OSWEC performance, which encompasses a variety of objectives, for instance, minimizing cost [25], reducing the hydrodynamic loads [34], lowering the applied loads to the support structure of the device [26], and mitigating the horizontal motion of the OSWEC's platform which in turn reduces costs [31]. In the early studies of wave energy converters, the predominant focus of numerical studies was on linear PTOs. For instance, Sheng et al. have optimized two models of linear PTOs for a Wave-Activated Bodies WEC [35] and an OWC [36]. However, researchers have since shown interest in the performance analysis of WECs with a Hydraulic PTO [37, 38, 39]. A variety of optimization studies have also been used in this field. In [40], an improved version of the differential evolution (DE) algorithm was used for a WEC array, simultaneously achieving more precise convergence and speed. Gomes et al. [41] did a hull optimization of a floating OWC using DE and COBYLA, a direct search method to achieve maximum power output. 
The Genetic Algorithm (GA) has been used widely in the field of wave energy generation; for instance, in [42], it has been used for shape optimization of a planar pressure differential WEC, and in [43], the WEC array configuration was optimized as well. In [44], multiple meta-heuristic optimization algorithms, like GA or Particle Swarm Optimization (PSO), were used for the geometry optimization of WECs. The PSO has also been used for the optimization of WEC systems [45, 46]. Furthermore, Neshat et al. devised the improved Meth-Flame Optimizer (MFO) to optimize the geometry and the PTO settings of a generic multi-mode WEC [47]. Moreover, the GWO has been utilized for optimization in the field of other sources of renewable energy as well [48, 49]. This study proposes a fast and effective hybrid optimization method for maximizing the power absorption of an OSWEC based on the hindcast wave data from nine zones in the Caspian Sea, each has 9-12 data points. The significant contributions of this work are as follows: * Proposing a novel optimizer to maximize the power absorption of an OSWEC, the Hill Climb-Explorative Gray Wolf Optimizer (HC-EGWO) methodology combines a local search method with a global optimizer to balance exploration and exploitation rates for improved solution quality. * Developing a technical feasibility landscape analysis utilizing the Wave Energy Converter Simulator (WECSim) numerical model to account for the maximum feasible angular motion of the flap, ensuring optimized OSWEC operation within safety and efficiency limits. * Insights for selecting optimal offshore sites, optimizing power output, and promoting the adoption of ocean renewable energy sources. * Achieving a significant increase in power output (up to 3.31%) compared to other methods demonstrates the effectiveness of the proposed HC-EGWO optimization approach. * Gaining valuable knowledge for deploying OSWECs in the South Caspian Sea, considering its unique environmental conditions and energy potential. This study is organized as follows. In section 2, the data collection, WEC's feasibility, and other preliminary analyses are presented. Section 3 goes over the multiple modifications of GWO and proposes a new optimization algorithm. The following section provides the benchmark functions used to evaluate the new algorithm's performance. Section 5 presents the problem formulation information. Finally, Section 6 provides the results of the study. ## 2 Case Study Landscape Analysis In this study, both the analytical model and the GWO algorithm were utilized for numerical modeling. The equations in each section were formulated and implemented in MATLAB. Subsequently, the solutions obtained from the proposed optimization approach were duly validated. ### The Caspian Sea The Caspian Sea is between Iran to the south, Russia to the north, Russia and Azerbaijan to the west, and Turkmenistan and Kazakhstan to the east. This body of water is often categorized either as the largest lake in the world or the most miniature sea on Earth. It spans approximately 1030-1200 kilometers in length and 196-435 kilometers in width. The surface of the Caspian Sea lies around 28 meters below sea level. The northern part of the sea is notably shallow, with only a negligible portion of seawater present in the northern quarter and an average depth of less than 5 meters [50]. Hence, investigating the southern shores becomes more significant for wave analysis. Figure 1 provides an overview of the Caspian Sea. 
Due to its status as one of Asia's most crucial energy sources, the Caspian Sea has always attracted considerable attention from the industry. The expansion of its southern coast also presents significant potential for harnessing wave energy [51; 52]. Various analyses have been conducted to forecast waves in the southern areas of the Caspian Sea, considering the prevailing direction of the dominant sea waves. Figure 2 displays the projected values of 50-year dominant waves in the southern regions, utilizing the Gumble distribution. Given the wave heights depicted in Figure 2, it becomes crucial to identify a point with maximum wave energy that also offers convenient beach access. Therefore, comprehensive research is needed to analyze wave data in the southern Caspian Sea, aiming to identify this point and establish a general criterion for comparing energy levels among different points using parameters such as wave height and wave period. ### Data Collection To investigate the southern coasts of the Caspian Sea, the initial step involved analyzing the data obtained from the Iranian National Institute for Oceanography and Atmospheric Science (INIO). Specifically, the applied data from the Iranian Seas Wave Modeling (ISWM) and the Iranian Wave Atlas (IWA) models were examined. These datasets covered the entire Caspian Sea over a five-year period, from January 2006 to December 2010, with 1-hour time Figure 1: Caspian Sea landscape [50] Figure 2: the 50-year design wave height in the dominant directions of the southern Caspian Sea [53] intervals. With reference to relevant literature and local assessments, nine ports were selected on the southern coasts of the Caspian Sea. A designated area of 0.2 longitude and latitude was considered around these ports, and locations with available data within this area were extracted. In total, 105 data points from the southern coast area of the Caspian Sea were identified, and their specifications are detailed in Figure 3 and Figure 4. ### Preliminary Analysis In order to understand the waves in the Caspian Sea, the data from nine selected ports were visualized. This was done by plotting wave rose diagrams (Figure 5) and wave scatter diagrams (Figure 6) to visualize the distribution of wave directions and to identify the prevailing wave patterns in the region. The variations in wave height and wave period across different locations in the study area were unveiled by analyzing the wave scatter diagrams. As seen in Figure 5, the waves have a relatively small magnitude and are mainly to the north, which is logical because these ports are in the southern part of the Caspian Sea. Moreover, from the scatter wave diagram in Figure 6, the waves comparably have low heights and low periods, and the most prevalent waves have a height of 20 cm and a period of 3 seconds. Figure 5: the wave rose diagram for the nine analyzed sea ports reveals relatively small wave magnitudes that predominantly originate from the north. This observation aligns with expectations since these ports are located in the southern region of the Caspian Sea. A power matrix was also made specifically for the Caspian Sea, shown in Figure 7. This matrix comprehensively assessed the wave energy potential in these regions. It takes into account important factors such as wave height and wave period to estimate the energy conversion capabilities of OSWECs in these areas. As shown in Figure 7, increasing either the height or the period of the wave can lead to higher absorbed energy. 
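A power matrix of this kind is typically combined with the occurrence (scatter) statistics of Figure 6: weighting each (Hs, Tp) cell of the power matrix by the fraction of time that sea state occurs gives a time-averaged absorbed power and, from it, an annual energy estimate. The sketch below illustrates that bookkeeping only; the bin values and matrix entries are made up and are not the Caspian Sea data or the OSWEC matrix of Figure 7.

```python
import numpy as np

# Illustrative (Hs, Tp) bins; the study itself uses the Caspian Sea hindcast bins.
Hs_bins = np.array([0.2, 0.6, 1.0, 1.4])   # m
Tp_bins = np.array([3.0, 5.0, 7.0, 9.0])   # s

# Occurrence (fraction of hours in each sea state); rows = Hs, cols = Tp; sums to 1.
occurrence = np.array([
    [0.30, 0.20, 0.05, 0.01],
    [0.10, 0.12, 0.06, 0.02],
    [0.02, 0.04, 0.04, 0.02],
    [0.00, 0.01, 0.005, 0.005],
])

# Device power matrix: mean absorbed power (kW) in each (Hs, Tp) sea state.
power_matrix = np.array([
    [  2,   5,   8,   6],
    [ 15,  35,  55,  45],
    [ 40,  90, 140, 120],
    [ 70, 160, 240, 200],
])

mean_power_kw = np.sum(occurrence * power_matrix)   # time-averaged absorbed power
annual_energy_mwh = mean_power_kw * 8766 / 1000     # kWh over an average year -> MWh
print(f"mean power ~ {mean_power_kw:.1f} kW, annual energy ~ {annual_energy_mwh:.0f} MWh")
```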
Figure 6: The wave scatter diagram of the nine studied ports. Collectively, the insights gained from the wave rose diagrams, wave scatter diagrams, and power matrix contribute to our understanding of the spatial distribution of wave energy in the Caspian Sea. ### WEC's Feasibility Analysis of OSWEC's flap interaction with the wave under linear water wave theory assumptions requires the flap's excursions to be adequately small. The reason is that the flap's rotation should be small enough so that the correct and non-linear form of the hydrostatic stiffness, which is (\(K_{p}\sin\!\theta\)), can be replaced by (\(K_{p}\)) [54]. Several studies [55; 17; 56; 16; 57] assumed the maximum angular motion of the flap to be 30 \(\,\,\)' [55]. In addition, this limitation helps the device to avoid damage, particularly in extreme sea states [56]. In Figure 8, the feasible area of the damping and stiffness of the PTO are presented. First, the literature for the range of viable damping and stiffness values for the PTO was reviewed, the result of which is shown by the orange color. Next, based on the OSWEC's flap oscillation limitation, the feasible values of \(K_{PTO}\) and \(C_{PTO}\) were finalized, and the red area was omitted. Finally, the remaining area, shown by the color green, represents the applicable range of these two critical parameters. Figure 7: The power matrix of the southern coasts of the Caspian Sea. ### Preliminary Sensitivity Analysis In order to investigate the impact of critical parameters on the performance of an oscillating wave energy converter, a sensitivity analysis was conducted. Wave height (H), wave period (T), PTO's damping (C), and stiffness (K) were analyzed regarding the power output of the system through six plots in Figure 9. By examining the plots, valuable insights were obtained regarding the optimal values for these parameters and their combinations for maximizing power generation. Figure 9-(a) represents the effect of H and T in optimizing the generated power of the OSWEC. The plot reveals that a combination of high wave height and wave period leads to the best power outputs. However, it is noteworthy that extreme values of T can decrease the power. Figure 9-(b) illustrates the influence of K and H on the converter's power generation. As can be seen, both high and low levels of PTO stiffness can result in passable power outputs. However, the highest power is achieved when K is moderate. Figure 9-(c) depicts the relation between C and H and the device's generated power. High wave heights and low values of PTO damping correspond to favorable power outputs and improved performance. Figure 9-(d) shows the effects of K and T variation on power. Accordingly, the wave periods within the range of approximately 6 to 9 seconds yield pleasing power outputs. Furthermore, the power improves significantly as the PTO stiffness approaches its medium value. Figure 9-(e) showcases the influence of C and T. Similar to the previous plot, wave periods ranging from approximately 6 to 9 seconds produce the most favorable power outputs-- additionally, the performance improves as the PTO damping decreases. Finally, in Figure 9-(f), the PTO parameters regarding their effect on the power output have been investigated. According to the plot, almost always lower values C result in better power, and a K value between 10 and 70 M\(\mathrm{N}\mathrm{m}\mathrm{/}\mathrm{a}\mathrm{f}\) leads to the best performance. Overall, the parameter's effects can be explained relatively simply. 
In summary, higher H and lower C lead to the best power outputs. Furthermore, moderate values of T and K lead to the best performance. Overall, it can be seen that the problem at hand is a multimodal optimization problem and has multiple optima. Figure 8: The feasible area of the PTO damping and stiffness used in this study. Next, the effects of the PTO parameters on the PTO power were further inspected in Figure 10. The black parts show the unfeasible areas calculated in previous sections. Similar to the power output, low values of C and moderate K values bring about the highest PTO forces. It is worth noting that in the OSWEC, the power output is calculated by multiplying PTO force by the flap's velocity [17]. Figure 9: Sensitivity analysis plots for the key parameters of the OSWEC (Wave height, wave period, PTO damping, and PTO stiffness) 3. Optimization Approach ### The Standard GWO The Grey Wolf Optimizer (GWO) is a bio-inspired algorithm based on a grey wolf breed's leadership hierarchy and hunting behaviour. Mirjalili et al. [58] simplified their hunting mechanism and introduced four types of wolves, the alpha (\(\alpha\)), the beta (\(\beta\)), the delta (\(\delta\)), which are, respectively, the best solutions of the algorithms (have the best knowledge about the location of the optimum) and the omegas (\(\omega\)) which comprise the rest of the pack and follow the three aforementioned wolves to get closer to the prey. We can show the encircling of the prey process mathematically using the following equations: \[\vec{D}=\left|\vec{C}\cdot\overrightarrow{X_{p}}(t)-\vec{X}(t)\right| \tag{1}\] \[\vec{X}(t+1)=\overrightarrow{X_{p}}(t)-\vec{A}\cdot\vec{D} \tag{2}\] in which \(t\) is the current iteration, \(\vec{X}\) indicates the position of a grey wolf, \(\overrightarrow{X_{p}}\) is the position of the prey, and \(\vec{A}\) and \(\vec{C}\) are coefficient vectors. The \(\vec{A}\) and \(\vec{C}\) vectors are determined as follows: \[\vec{A}=2\vec{a}\cdot\overrightarrow{r_{1}}-\vec{a} \tag{3}\] \[\vec{C}=2\cdot\overrightarrow{r_{2}} \tag{4}\] Figure 10: Sensitivity analysis result of the effects of PTO parameters on the PTO force. Unfeasible values, shown in black, are excluded from the analysis. in which \(\vec{a}\) linearly decreases from 2 to 0, and \(\overline{r_{1}}\) and \(\overline{r_{2}}\) are random numbers between 0 and 1. As stated, the \(\omega\) wolves update their positions based on the three best search agents (\(\alpha,\beta\), and \(\delta\) wolves). They follow these equations: \[\overline{D_{\alpha}} =\left|\overline{C_{1}}\cdot\overline{X_{\alpha}}-\vec{X}\right| \tag{5}\] \[\overline{D_{\beta}} =\left|\overline{C_{2}}\cdot\overline{X_{\beta}}-\vec{X}\right|\] (6) \[\overline{D_{\delta}} =\left|\overline{C_{3}}\cdot\overline{X_{\delta}}-\vec{X}\right|\] (7) \[\overline{X_{1}} =\overline{X_{\alpha}}-\overline{A_{1}}\cdot\left(\overline{D_ {\alpha}}\right)\] (8) \[\overline{X_{2}} =\overline{X_{\beta}}-\overline{A_{2}}\cdot\left(\overline{D_ {\beta}}\right)\] (9) \[\overline{X_{3}} =\overline{X_{\delta}}-\overline{A_{3}}\cdot\left(\overline{D_ {\delta}}\right)\] (10) \[\vec{X}(t+1) =\frac{\overline{X_{1}}^{*}(t)+\overline{X_{2}}(t)+\overline{X_ {3}}^{*}(t)}{3} \tag{11}\] This was a simple overview of GWO's origin and mechanism. ### Modified GWO (mGWO) Mittal et al. 
[59] believed that the linear equation of the a does not provide a good balance between exploration and exploitation, so they tried this nonlinear equation: \[\vec{a}=2\left(1-\frac{t^{2}}{r^{2}}\right) \tag{12}\] in which \(t\) represents the current iteration, and \(T\) is the total number of iterations. This equation resulted in 70% exploration and 30% exploitation of the total iterations. ### Exploration-Enhanced GWO (EEGWO) Since in GWO, all the search agents gravitate toward the three best solutions, this algorithm can be susceptible to premature convergence. Therefore, Long et al. [60] modified the position-updating equation inspired by the PSO algorithm to emphasize more on the exploration: \[\vec{X}(t+1)=b_{1}\cdot r_{3}\cdot\frac{\overline{X_{1}}^{*}(t)+\overline{X_ {2}}^{*}(t)+\overline{X_{3}}^{*}(t)}{3}+b_{2}\cdot r_{4}\cdot\left(\overline{ X^{\prime}}-\vec{X}\right) \tag{13}\] where \(\overline{X^{\prime}}\) is another randomly selected search agent from the population, \(r_{3}\) and \(r_{4}\) are random numbers in [0,1], and \(b_{1},b_{2}\in\) (0,1] indicate constant coefficients to balance the exploration/exploitation (in the mentioned study the selected values are \(b_{1}=0.1\) and \(b_{2}=0.9\)). They also proposed a new formula for the control parameter \(\vec{a}\) : \[\vec{a}=a_{\text{initial}}\,-\left(a_{\text{initial}}\,-a_{\text{final}}\, \right)\cdot\left(\frac{r-t}{\tau}\right)^{\mu} \tag{14}\] where \(\mu\) is the nonlinear modulation index ( \(\mu=1.5\) in the aforementioned study), and \(a_{\text{initial}}\) and \(a_{\text{final}}\) are 2 and 0, respectively. ### Improved GWO (IGWO) In order to address the challenges associated with the conventional Maximum Power Point Tracking (MPPT) techniques, which is the power maximization of the PV system [61], and improving their efficiency in finding the global maximum power point, Ma et al. [62] utilized the fitness value of the search agents for their position-updating mechanism as follow: \[\tilde{X}(t+1)=\begin{cases}\frac{f_{\alpha}\cdot\overline{X_{1}^{*}}}{f}+ \frac{f_{\beta}\cdot\overline{X_{2}^{*}}}{f}+\frac{f_{\beta}\cdot\overline{X_ {2}^{*}}}{f},\\ \frac{\overline{X_{1}}+\overline{X_{2}}+\overline{X_{3}}}{3},\end{cases} \tag{15}\] \[f=f_{\alpha}+f_{\beta}+f_{\delta} \tag{16}\] where \(f_{\alpha},f_{\beta}\), and \(f_{\delta}\) are fitness values of \(\alpha,\beta\), and \(\delta\), respectively. \(f_{\text{avg}}\) is the average of these 3 fitness values, and \(f_{i}\) is the fitness value of grey wolf individuals. The authors modified the \(\tilde{a}\) formula as well: \[\tilde{a}=a_{\text{min}}+(a_{\text{max}}-a_{\text{min}})\cdot\left(1-\frac{t} {\tau}\right)^{2} \tag{17}\] where \(a_{\text{min}}\) and \(a_{\text{max}}\) are 0 and 2, respectively. ### Efficient and Robust GWO (ERGWO) With the intention of tackling large-scale numerical optimization problems, Long et al. performed another study to enhance the performance of the GWO [63]. Following the footsteps of the previous studies, they changed both the position-updating equation and a equation. 
The first change can be seen below, where they used a proportional weighting method similar to [62]: \[w_{1}=\frac{\left|\overline{X_{1}}\right|}{\left|\overline{X_{1}^{*}}\right|+ \left|\overline{X_{2}}\right|+\left|\overline{X_{3}}\right|} \tag{18}\] \[w_{2}=\frac{\left|\overline{X_{2}}\right|}{\left|\overline{X_{1}^{*}}\right|+ \left|\overline{X_{2}}\right|+\left|\overline{X_{3}}\right|} \tag{19}\] \[w_{3}=\frac{\left|\overline{X_{3}}\right|}{\left|\overline{X_{1}^{*}}\right|+ \left|\overline{X_{2}}\right|+\left|\overline{X_{3}}\right|} \tag{20}\] \[\tilde{X}(t+1)=\frac{1}{w_{1}+w_{2}+w_{3}} \tag{21}\] \[\cdot\frac{w_{1}\cdot\overline{X_{1}^{*}}+w_{2}\cdot\overline{X_ {2}}+w_{3}\cdot\overline{X_{3}}}{3}\] where \(a_{\text{initial}}\) and \(a_{\text{final}}\) are 2 and 0, respectively. \(\mu\in[1.0001,1.005]\) is the nonlinear modulation index (in the mentioned study \(\mu=1.001\)). \[\tilde{a}=a_{\text{initial}}-(a_{\text{initial}}-a_{\text{final}})\cdot\mu^{-t} \tag{22}\] ### Hill-Climbing Explorative GWO (HC-EGWO) The Gray Wolf Optimizer (GWO), although remarkable in its performance for solving diverse optimization problems [58], is not without its shortcomings. Primarily, it is prone to premature convergence towards local optima in the search space, specifically during complex, high-dimensional problems [64]. This challenge arises due to the declining exploration rate (parameter \(a\)) in the original GWO algorithm, which transitions from 2 to 0 linearly. While this allows the algorithm to either explore or exploit optimal solutions when \(a\) is above 1, it leads to exploitation when \(a\) is below 1, thereby accelerating convergence towards local optima. In addressing this limitation, we propose the Explorative Gray Wolf Optimizer (EGWO), a novel enhancement to the original GWO that amplifies the exploration rate by modifying the parameter \(a\). In EGWO, \(a\) is altered as per the equations: \[R=\left(\frac{T-t}{T-1}\right) \tag{23}\] \[\tilde{a}=2\cdot\left(1-\left(e^{(t^{R}-T^{R})}\right)\right) \tag{24}\] where \(t\) is the current iteration, and \(T\) is the maximum number of iterations. This modification empowers the algorithm to delay the convergence process and explore the search space more thoroughly, reducing the chance of being trapped in local optima. Furthermore, to fortify the global search capabilities of EGWO, we propose a robust hybrid algorithm that incorporates a Random-restart hill-climbing local search, dubbed HC-EGWO. Figure 11 shows the evolution of \(a\) throughout the iterations for each of the six optimization approaches. Also, the Exploration Ratio (ER) is presented for each method; this value shows how much of the search process is allocated to potential exploration in GWO. By comparing the values, it is clear that HC-EGWO has the best ER value and the most potential to search the unexplored areas of the search space thoroughly. Figure 11: The evolution of \(a\) value during the optimization process for the six evaluated GWO methods in this study, and their Exploration Ratio (ER) In this hybrid scheme, EGWO operates on a superior level to create a global track and procure an array of suitable solutions. When the EGWO encounters stagnation or converges prematurely towards a local optimum, the hill climbing algorithm initiates a local search around the best solution found by the upper level (EGWO). It does so by creating a comprehensive neighbourhood search, thereby preparing to escape such unfavourable scenarios. 
The performance threshold is computed as follows: \[\Delta\text{ Best }_{THD}=\frac{\sum_{k=1}^{M}\left(\frac{\text{ Best }_{THD_{k}-\text{ Best }_{THD_{k-1}}}}{M}\right)}{M} \tag{25}\] where \(Best_{THD}\) is the optimal solution found per generation, and \(M\) is tied to the range of iterations to determine the average EGWO performance. When the solution offered by the local search outperforms the initial one, EGWO's global best is updated. The HC-EGWO algorithm iteratively executes the hill-climbing process whenever the EGWO performance dips, each time establishing an initial condition to facilitate escape from undesirable circumstances. The search step size is decremented linearly as follows to achieve a fine balance between exploration and exploitation: \[S_{t}=S_{t}-\left(\frac{t}{T}S_{t}\right)+1 \tag{26}\] Here, \(t\) and \(T\) denote the current and maximum iteration numbers, respectively, while \(S\)\({}_{t}\) represents the neighborhood search's step size. Algorithm 1 illustrates the detailed steps of the proposed optimization method (HC-EGWO). The initial solution encompasses wave height (\(H\)), wave period (\(T\)), PTO stiffness coefficient (\(K\)), and PTO damping coefficient (\(C\)). One of the crucial parameters in the local search algorithm is \(g\), which signifies the precision of the neighborhood search surrounding the globally optimal solution proposed by EGWO. A smaller step size for HC slows down the convergence speed. However, a larger step size bolsters the exploration capability, possibly at the expense of the exploitation capability, leading to the possibility of skipping over globally optimal or high-potential solution surfaces. For each decision variable, the neighborhood search evaluates two distinct direct searches, either incremental or decremental. After evaluating the generated solutions, the optimal candidate is chosen to iterate the search algorithm. It should be noted that the HC algorithm should not be employed during the initial iteration of the optimization process due to the pronounced tendency for converging to local optima. Moreover, for optimizing large-scale problems, HC may not be a suitable choice. 
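Algorithm 1 below is the authors' formal listing of HC-EGWO. As a companion, the following is a compact, generic Python sketch of the hybrid loop described above: the explorative schedule of Eqs. (23)-(24) for \(a\), the usual alpha/beta/delta position update, and a hill-climbing neighborhood search triggered when the best value stagnates, with a step size that shrinks over the run in the spirit of Eqs. (25)-(26). Population size, stagnation threshold, and the test objective are illustrative choices rather than the exact settings of the study.

```python
import numpy as np

def hc_egwo(f, lb, ub, n_wolves=20, max_iter=200, stall_tol=1e-8, seed=0):
    """Minimize f over the box [lb, ub]: EGWO global search + hill-climbing restarts."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = lb.size
    X = rng.uniform(lb, ub, size=(n_wolves, dim))
    fit = np.apply_along_axis(f, 1, X)
    best_hist = []
    step = (ub - lb) / 10.0                            # initial hill-climb step size

    for t in range(1, max_iter + 1):
        R = (max_iter - t) / (max_iter - 1)            # Eq. (23)
        a = 2.0 * (1.0 - np.exp(t**R - max_iter**R))   # Eq. (24): explorative schedule
        order = np.argsort(fit)
        leaders = X[order[:3]]                         # alpha, beta, delta
        for i in range(n_wolves):
            cand = np.zeros(dim)
            for L in leaders:                          # standard GWO update, Eqs. (5)-(11)
                A = 2 * a * rng.random(dim) - a
                C = 2 * rng.random(dim)
                cand += L - A * np.abs(C * L - X[i])
            X[i] = np.clip(cand / 3.0, lb, ub)
            fit[i] = f(X[i])
        best_idx = np.argmin(fit)
        best_hist.append(fit[best_idx])

        # Hill-climbing restart when the global search stagnates (cf. Eq. (25)).
        if len(best_hist) > 5 and abs(best_hist[-6] - best_hist[-1]) < stall_tol:
            step = step - (t / max_iter) * step        # shrinking neighborhood (cf. Eq. (26))
            x_best = X[best_idx].copy()
            for d in range(dim):                       # +/- moves along each coordinate
                for s in (+1.0, -1.0):
                    trial = x_best.copy()
                    trial[d] = np.clip(trial[d] + s * step[d], lb[d], ub[d])
                    if (fv := f(trial)) < fit[best_idx]:
                        X[best_idx], fit[best_idx] = trial, fv
    i = np.argmin(fit)
    return X[i], fit[i]

# Usage: 30-D Rastrigin (F2 in Table 1), whose global minimum is 0 at the origin.
rastrigin = lambda x: np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)
x_best, f_best = hc_egwo(rastrigin, lb=-5.12 * np.ones(30), ub=5.12 * np.ones(30))
print(f_best)
```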
``` 1:procedureHC-EGWO 2:\(N=30,D=4\)\(\triangleright\) Population size and dimension size 3:\(\mathbb{S}=\{\langle H_{1},T_{1},K_{1},D_{1}\rangle,...,\langle H_{N},T_{N},K _{N},D_{N}\rangle\}\)\(\triangleright\) Initialize the population of wolves 4: Check if \(lb_{1}^{h}\leq\delta\leq sub_{1}^{h}\)\(\triangleright\) Maximum number of iterations 5:\(Max_{iter}=100\)\(\triangleright\) Maximum number of iterations 6:for\(iter=1,...,Max_{iter}\)do 7:\(R=(Max_{iter}-iter)/(Max_{iter}-1)\)\(\triangleright\) Calculate exploration rate \(\overrightarrow{d}\) with the new formulation 8:\(\overrightarrow{d}=2\cdot(-e^{i\omega\cdot d_{Max_{iter}}\varphi})\)\(\triangleright\) Calculate exploration rate \(\overrightarrow{d}\) with the new formulation 9: Sort the population \(\mathbb{S}\) based on fitness and get the leading wolves \(\alpha,\beta,\) and \(\delta\) 10:for\(i=1,...,N\)do 11:for\(j=1,...,D\)do 12: Calculate \(A_{ij}\) and \(C_{ij}\) for each of the leading wolves 13: Update the position of the \(i\)th wolf in dimension \(j\) using the positions of \(\alpha,\beta,\) and \(\delta\) 14:endfor 15:endfor 16: Update the positions of \(\alpha,\beta,\) and \(\delta\) based on the updated population 17:\(Best_{iter}=Max(\mathbb{S})\)\(\triangleright\) Get the best solution in this iteration 18:\(\Delta Best=Best_{iter}-Best_{iter-1}\)\(\triangleright\) Calculate the difference between the best solutions in the current and previous iterations 19:if\(\Delta Best<Th\)then\(\triangleright\) If the difference is less than a threshold \(Th\), perform Hill Climbing 20: Initialize the constraints \(lb_{1}^{t},ub_{1}^{t}\) ## 4 Benchmark Functions In this section, we evaluate the performance of the HC-EGWO algorithm on a total of 16 benchmark functions. These are classical benchmark functions that have been widely used by researchers in the field. These functions are well-established and are commonly used to evaluate the performance of optimization algorithms. You can find a detailed list of these classical benchmark functions in Tables 1 and 2. The tables provide information such as the dimensionality (Dim) of the function, the range of the function's search space (Range), and the optimal value (fmin) of each function [58]. By benchmarking the HC-EGWO algorithm on these 16 functions, we can evaluate its performance and compare it to other optimization algorithms. All the benchmark functions employed in this study are aimed at minimizing a given objective. These functions can be classified either as multimodal or fixed-dimension multimodal. To assess the performance of the HC-EGWO, it was executed 30 times on each benchmark function. The results were then analyzed statistically, providing the average and standard deviation values. These statistical outcomes are presented in Tables 5, which allow for comparing and evaluating the algorithm's performance across the benchmark functions. Also, a statistical analysis of the comparison is presented in the results section. To validate the results, the HC-EGWO algorithm is compared against other variations of the GWO algorithm, namely the conventional Grey Wolf Optimizer [58], the modified GWO [59], the Exploration-Enhanced GWO [60], the Improved GWO [62], and the Efficient and Robust GWO [63]. 
\begin{table} \begin{tabular}{l c c c} \hline Function & Dim & Range & \(f_{min}\) \\ \hline \(F_{1}(x)=\sum_{i=1}^{n}(-x_{i}\cdot\sin(\sqrt{|x_{i}|}))\) & 30 & [-500, 500] & –418.9829\(x\)5 \\ \(F_{2}(x)=\sum_{i=1}^{n}(x_{i}^{2}-10\cdot\cos(2\pi x_{i})+10)\) & 30 & [-5.12, 5.12] & 0 \\ \(\text{F}_{3}(x)=-20\cdot\exp\left(-0.2\cdot\sqrt{\frac{\sum_{i=1}^{n}x_{i}^{2} }{n}}\right)\) & 30 & [-32, 32] & 0 \\ - \(\exp\left(\frac{1}{n}\sum_{i=1}^{n}\cos(2\pi x_{i})\right)+20+\text{e}\) & & \\ \(F_{4}(x)=\frac{1}{4000}\sum_{i=1}^{n}x_{i}^{2}-\prod_{i=1}^{n}\cos\left(\frac {x_{i}}{\sqrt{i}}\right)+1\) & 30 & [-600, 600] & 0 \\ \(F_{5}=\frac{n}{n}(10sin(\pi y_{1}))+\sum_{i=1}^{n-1}(y_{i}-1)^{2}\). & & \\ \((1+10\text{sin}^{2}(\pi y_{1}))+\) & & \\ \((y_{n}-1)^{2}+\sum_{i=1}^{n}u(x_{i},10,100,4)\) & & \\ \(y_{i}=1+\frac{x_{i}+1}{4}\) & 30 & [-100, 100] & 0 \\ \(u(x_{i},a,k,m)=\begin{cases}k(x_{i}-a)^{m}&x_{i}>a\\ 0&-a<x_{i}<a\\ k(-x_{i}-a)^{m}&x_{i}<-a\end{cases}\) & \\ \(F_{6}(x)=0.1\left(\sin^{2}(3\pi x_{1})\right.\) & & \\ \(+\sum_{i=1}^{n}\left((x_{i}-1)^{2}\cdot\left(1+\sin^{2}(3\pi x_{i}+1)\right) \right)+\) & & \\ \((\chi_{n}-1)^{2}\left(1+\sin^{2}(2\pi x_{n})\right)+\sum_{i=1}^{n}U(x_{i},5,100,4)\) & & \\ \hline \end{tabular} \end{table} Table 1: Multimodal Benchmark Functions [58] ## 5 Problem Formulation ### The Wave Energy Converter As stated before, the Oscillating Surge WEC was chosen for this study due to multiple reasons. The OSWEC is fixed to the ground, and it features a hinged connection between its base and flap. This hinge constrains the flap's movement, allowing it to pitch around the hinge point. The converter's physical dimensions at scale are shown in Figure 12. Moreover, the OSWEC's flap has a mass of 127 tonnes, and its other properties are listed in Table 3. \begin{table} \begin{tabular}{l c c c c} \hline \hline \multicolumn{2}{c}{Function} & \multicolumn{1}{c}{Dim} & Range & \(f_{min}\) \\ \hline \(F_{7}=\left(\frac{1}{500}+\sum_{j=1}^{25}\frac{1}{j+\sum_{j=1}^{2}(x_{i}-x_{0 })^{2}}\right)^{-1}\) & 2 & [-65, 65] & 1 \\ \(F_{8}(x)=\sum_{i=1}^{11}\left(a_{i}-\frac{x_{0}(x_{i}^{2}+x_{0})^{2}}{b_{i}^{2 }+x_{0})+x_{0}}\right)^{2}\) & 4 & [-5, 5] & 0.00030 \\ \(F_{9}(x)=4x_{1}^{2}-2.1x_{1}^{4}+\frac{1}{3}x_{1}^{6}+x_{1}x_{2}-4x_{2}^{2}+4x_ {2}^{4}\) & 2 & [-5, 5] & -1.0316 \\ \(F_{10}(x)=\left(x_{2}-\frac{5.1}{4\pi^{2}}x_{1}^{2}+\frac{5}{2}x_{1}-6\right) ^{2}\) & 2 & [-5, 5] & 0.398 \\ \(+\) 10\(\left(1-\frac{1}{8\pi}\right)\cos(x_{1})+10\) & & & \\ \(F_{11}(x)=\left(1+(x_{1}+x_{2}+1)^{2}(19-14x_{1}+3x_{1}^{2}-14x_{2}+\right.\) & & \\ \(6x_{1}x_{2}+3x_{2})^{2}\left(30+(2x_{1}-3x_{2})^{2}(18-32x_{1}+12x_{1}^{2}\) & 2 & [-2, 2] & 3 \\ \(+\) 48x_{2}-36x_{1}x_{2}+27x_{2}^{2}\) & & & \\ \(F_{12}(x)=-\sum_{i=1}^{4}\left(c_{i}\exp\left(-\sum_{j=1}^{3}a_{ij}(x_{j}-p_{ ij})^{2}\right)\right)\) & 3 & [1, 3] & -3.86 \\ \(F_{13}(x)=-\sum_{i=1}^{4}\left(c_{i}\exp\left(-\sum_{j=1}^{6}a_{ij}(x_{j}-p_{ ij})^{2}\right)\right)\) & 6 & [0, 1] & -3.32 \\ \(F_{14}(x)=-\sum_{i=1}^{5}\left(\left((X-a_{i})(X-a_{i})^{T}+c_{i}\right)^{-1}\right)\) & 4 & [0, 10] & -10.1532 \\ \(F_{15}(x)=-\sum_{i=1}^{7}\left(\left((X-a_{i})(X-a_{i})^{T}+c_{i}\right)^{-1}\right)\) & 4 & [0, 10] & -10.4028 \\ \(F_{16}(x)=-\sum_{i=1}^{10}\left(\left((X-a_{i})(X-a_{i})^{T}+c_{i}\right)^{-1}\right)\) & 4 & [0, 10] & 10.5363 \\ \hline \hline \end{tabular} \end{table} Table 2: Fixed-dimension Multimodal Benchmark Functions. 
[58] \begin{table} \begin{tabular}{|c|c|c|c c c|} \hline \hline Body & Direction & Center of Gravity (m) & \(I_{xx}\) (kg.m\({}^{2}\)) & \(I_{yy}\) (kg.m\({}^{2}\)) & \(I_{zz}\) (kg.m\({}^{2}\)) \\ \hline \multirow{3}{*}{Flap} & \(x\) & 0 & 0 & 0 & 0 \\ & \(y\) & 0 & 0 & 1,850,000 & 0 \\ \cline{1-1} & \(z\) & -3.9 & 0 & 0 & 0 \\ \hline \hline \end{tabular} \end{table} Table 3: OSWEC’s Flap Mass Properties [23] ### WEC-Sim WEC-Sim provides an open-source simulation tool for the community. In order to determine the dynamic response of the WEC system, the equation of motion for the device about its center of gravity in the time domain has to be solved [23]: \[m\ddot{X}=F_{\text{exc}}\left(t\right)+F_{\text{rad}}\left(t\right)+F_{PTO}(t)+F_ {B}(t) \tag{27}\] where \(m\) is the mass matrix of the WEC, \(\mathcal{X}\) is the acceleration vector, \(F_{exc}(t)\) is the wave excitation vector, \(F_{rad}(t)\) is the force and torque vector caused by wave radiation, \(F_{PTO}(t)\) is the PTO force and torque vector, and \(F_{B}(t)\) is the net buoyancy restoring force and torque vector. The \(F_{exc}(t)\) and \(F_{rad}(t)\) are calculated using Boundary Element Method (BEM) solvers [65]. This module is developed on MATLAB/Simulink/Simscape. Figure 13 shows the Simulink models of the proposed OSWEC investigated in this paper [23]. Moreover, irregular waves are simulated as a superposition of regular waves [66]. In WEC-Sim, the PTO unit can be characterized by a linear spring-damper system, in which the PTO force is calculated by: \[F_{PTO}=K\cdot X+C\cdot X \tag{28}\] where K is the PTO stiffness coefficient, C is the PTO damping coefficient, and \(X\) and \(X\) are the relative motion and velocity between the flap and the base of the OSWEC. Since the studied device is fixed to the bed, \(X\) and \(X\) can be considered the flap's motion and velocity. Next, the power output of the PTO can be obtained by the following [23]: \[P_{PTO}=F_{PTO}\cdot X^{\prime}=K\cdot X\cdot X^{\prime}+C\cdot X^{2} \tag{29}\] In WEC-Sim, the regular wave excitation force after the ramp time (the necessary time for the system to stabilize from the starting stage of the simulation) is obtained from the: \[F_{\text{exc}}(t)=\Re\left[\frac{\mu}{2}F_{exc}(\omega,\theta)e^{iot}\right] \tag{30}\] where \(<\) denotes the real part of the term in bracket, \(H\) is the wave height, \(F_{exc}\) is the frequency dependent complex wave-excitation amplitude vector, and \(\theta\) is the wave direction. The excitation force in irregular sea states can be calculated as follows: \[F_{\text{exc}}\left(t\right)=\Re\left[\Sigma_{j=1}^{N}F_{\text{exc}}\left( \omega_{j},\theta\right)e^{i\left(\omega t+\phi\right)}\sqrt{2S\left(\omega_{ j}\right)d\omega_{j}}\right] \tag{31}\] Figure 12: OSWEC’s full-scale detailed dimensions [23] where \(N\) is the number of frequency bands that discretizes the wave spectrum, \(\varphi\) is the randomized phase angle, and \(S(\omega)\) is the distribution of wave energy over a range of wave frequencies that are characterized by a \(H_{s}\) and \(T_{p}\). The software uses the following equation to calculate the radiation terms, namely the added mass and the radiationdamping torques. 
In this equation, the first term is the added mass torque, and the second term is the radiation-damping torque: \[F_{rad}(t)=-A_{\infty}\ddot{X}-\int_{0}^{t}K_{r}(t-\tau)\dot{X}(\tau)d\tau \tag{32}\] where \(A_{\infty}\) is the added mass matrix at an infinite frequency, and \(K_{r}\) is the radiation impulse response function, which is calculated by this equation: \[K_{r}t=\frac{2}{\pi}\int_{0}^{\infty}B(\omega)\cos{(\omega t)}d\omega \tag{33}\] Notably, the assumption is that there is no motion before t = 0. The \(A_{\infty}\) and \(B(\omega)\) coefficients are calculated by the NEMOH, the BEM solver WEC-Sim uses. _5.3. Optimization Run Details_ In order to optimize the performance of an OSWEC in the southern Caspian Sea, WEC-Sim was used to simulate the converter and HC-EGWO to optimize its power output. A simulation time of 400 s and a ramp time of 100 s were chosen, with time steps of 0.1 s. Furthermore, ten optimization runs using HC-EGWO were performed, each with 1000 iterations and 20 search agents. Figure 13: OSWEC’s Simulink model in WEC-Sim [23] ## 6 Results ### Benchmark Functions Results Based on the results of the landscape analysis, it has been revealed that the problem at hand exhibits a multimodal nature. Therefore, multimodal benchmark functions have been used to test the effectiveness of HC-EGWO to optimize a wide range of complex problems. Furthermore, assessing the performance of HC-EGWO using multimodal benchmark functions can help in clarifying the generalization ability of the optimisation method. The importance of generalization lies in its ability to prevent overfitting [67], a situation where an optimization algorithm excessively fine-tunes its parameters to match specific conditions perfectly. Overfitting can result in subpar performance when the algorithm is applied to unfamiliar problem instances. By giving priority to generalization, optimization methods concentrate on capturing fundamental patterns and principles that can be transferred to new problem instances, resulting in solutions that are more dependable and efficient. \begin{table} \begin{tabular}{p{56.9pt}|p{113.8pt}|p{113.8pt}} \hline Abbreviation & Full name & Description \\ \hline H & Height (m) & Measure of the amplitude or intensity of a wave \\ \hline T & Period (s) & Time for completing one full cycle of wave \\ \hline K & PTO Stiffness Coefficient (MNm/rad) & Relationship between the deformation of a PTO system and the force it generates \\ \hline C & PTO Damping Coefficient (MNsm/rad) & Relationship between the PTO system’s velocity and the force it generates \\ \hline FlapEP & maxFlapExcitationPitch & Maximum Flap’s Excitation Force (kN) \\ \hline FlapRDP & maxFlapRDPitch & Maximum Flap’s Radiation Damping Force (kN) \\ \hline FlapAMP & maxFlapAMPAnPitch & Maximum Flap’s Added Math Force (kN) \\ \hline FlapRP & maxFlapRestoringPitch & Maximum Flap’s Restoring Force (kN) \\ \hline ForceTPTOP & maxForceTotalPTOpitch & Maximum PTO’s Force (kN) \\ \hline meanFlapAVD & meanFlapAngularVelocityD & Average Flap’s Velocity (degree/s) \\ \hline maxFlapAVD & maxFlapAngularVelocityD & Maximum Flap’s Velocity (degree/s) \\ \hline FlapARD & maxFlapAngularRotationD & Maximum Flap’s Rotation (degree) \\ \hline Flapa & flap’s acceleration (degree/s\({}^{2}\)) & Affects the PTO system’s structural integrity \& overall performance. 
\\ \hline Flapfam & flap’s added mass force (N) & \\ \hline Flapfe & flap’s excitation force (N) & A function of the displacement, velocity and acceleration of the PTO system \\ \hline Flapfr & flap’s restoring force (N) & Result of the buoyancy and gravity forces acting on the WEC \\ \hline Flapfrd & flap’s radiation damping force (N) & Due to the interaction between a WEC and the surrounding water waves \\ \hline Flapft & flap’s total force (N) & Sum of hydrodynamic forces, gravity forces, buoyancy forces, and other forces generated by the PTO system. \\ \hline Flapv & flap’s velocity (degree/s) & Rate of change of the flap angle \\ \hline Flapx & flap’s position (degree) & \\ \hline PTOa & PTO’s acceleration (degree/s\({}^{2}\)) & Alteration rate of its velocity over time \\ \hline PTOv & PTO’s velocity (degree/s) & Alteration rate of its position over time \\ \hline PTOf & PTO’s force (N) & Is transmitted from the WEC to the PTO system due to the motion of the waves \\ \hline PTOx & PTO’s position (degree) & \\ \hline \end{tabular} \end{table} Table 4: The details of the optimisation parameters and other variables related to the applied wave power simulation. These benchmark functions have multiple global and local optima, which increase with the number of dimensions. This characteristic makes them the perfect functions to test the exploration ability of an algorithm [58]. As shown in Table 5, the HC-EGWO can provide very competitive results (especially on the fixed-dimension multimodal benchmark functions). This algorithm reaches the best solutions in 7 test functions and the second-best answer in 2 functions in this category, which is the best performance among the analyzed algorithms. It is notable that in some functions, like F11 and F12, the difference in performance is minuscule. Figure 14 shows a comparative plot of the performance of the five GWO variants and the proposed HC-EGWO over the 16 benchmarks. The average performance rank of each variant and the significant differences identified using the Friedman test confirm that HC-EGWO performed best on these 16 multimodal optimisation benchmarks. The average rank is used to calculate the Friedman statistic, which is compared to the critical value to determine whether there are significant differences in performance. ### Algorithms Performance in the Defined Problem Here, the proposed algorithm was compared to the conventional GWO and its four variants that were introduced earlier. Each algorithm was run five times with a population size of 20 and 1000 iterations to achieve maximum power output. Table 6 shows the critical parameters of the OSWEC for the average performance of each algorithm. As can be seen in Table 6, all the algorithms have competitive performances; however, the proposed algorithm (HC-EGWO) outperforms the others. HC-EGWO can improve the power output by 0.08% up to 3.31% compared to the other candidates. Moreover, the EEGWO has the worst performance by far. Next, the convergence curves for the six inspected algorithms are presented in Figure 15. Figure 14: Average performance rank of the HC-EGWO compared with the other five GWO variants over the 16 benchmarks using the Friedman test. Figure 15 shows how competitive all these algorithms’ performances were in this problem. In addition, all the methods were able to reach high amounts of average power output in under 50 iterations.
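As an aside, the ranking procedure behind Figure 14 can be sketched in a few lines of Python. The score matrix below is randomly generated purely for illustration and does not reproduce the values behind Tables 5 and 6; SciPy's `friedmanchisquare` is used here as a stand-in for whatever statistical tooling the authors employed.

```python
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

# Illustrative best-fitness values: one row per benchmark, one column per algorithm
# (5 GWO variants + HC-EGWO). Placeholder numbers, not the reported results.
rng = np.random.default_rng(0)
scores = rng.random((16, 6))

# Average rank per algorithm (rank 1 = best on a minimization benchmark).
ranks = np.vstack([rankdata(row) for row in scores])
print("average ranks:", ranks.mean(axis=0).round(2))

# Friedman test across the 16 benchmarks: a small p-value indicates
# statistically significant differences between the algorithms.
stat, p = friedmanchisquare(*scores.T)
print(f"Friedman statistic = {stat:.2f}, p = {p:.3f}")
```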
### Wave and PTO parameters optimization In this section, the results of the wave and PTO parameters optimization of the oscillating wave surge converter are presented. In order to get a better understanding of the effects of optimization on performance, the converter’s functioning and outputs in 3 scenarios are analyzed and compared. One of the cases is the scenario with the best-found solution by the HC-EGWO (Case C). Next, the case with the default WEC-Sim parameters was chosen to see how much improvement the input fine-tuning has achieved (Case A). However, since the default WEC-Sim parameters fall in the unfeasible region identified by the literature review performed at the beginning of this study, another scenario was added for evaluation. It was observed that the PTO damping with a value of 0.012 MNsm/rad was in the unfeasible area, so based on the literature review and the initial sensitivity analysis, the minimum feasible value, which was 90 MNsm/rad, was chosen for the following case, and the three other parameters stayed the same (Case B). Table 7 presents the inputs, forces, oscillation, and power of the system in the three analyzed cases in detail. First, the wave elevation during the simulation for the 3 cases is presented in Figure 16 in order to analyze the other parameters more effectively. Next, the resulting oscillation details, PTO force, and power output will be inspected. As stated before, cases A and B have the same wave conditions. Hence, one wave elevation graph is plotted to represent the sea state in both cases in Figure 16. According to this figure, in cases A and B, the wave elevation stays in roughly the same range, and the amplitude does not change drastically at any point during the simulation. On the other hand, in case C, especially after the halfway mark, wave heights are greater and reach their maximum absolute value at around the 215-second mark. As previously mentioned, when using a linear PTO for the OSWEC, WEC-Sim calculates the power output by multiplying the PTO force by the flap’s angular velocity. Both the flap’s motion and its angular velocity positively dictate the device’s power output, which is this study’s main objective. First, we compare the flap’s oscillation in cases A and B in Figure 17. Since these two have the same wave characteristics but different PTO configurations, one can assign almost all the difference in oscillation to the PTO stiffness and damping. Both parameters are virtually negligible in case A and come into effect in case B. It can be seen that the PTO C and K in isolation dampen the flap’s oscillations, both the motion and the velocity. For case C, the flap’s fluctuation during the simulation almost mimics the shape of the wave elevation, which is predictable, but the maximum flap motion occurs at roughly t = 125 s in case A. Figure 16: Wave elevation profile during the performance of the WEC for the 3 cases Next, in Figure 18 the PTO force and power output for the 3 cases are presented. Note that for this purpose, the y-axis for the 3 cases is modified for clearer visualization. According to the y-axes, case B produces roughly ten times more power than case A, and case C generates about 50 times more than case A. Figure 17: Flap’s oscillation motion and velocity for the 3 cases For case C, the PTO force peaks between the 200 s and 250 s marks due to the substantial wave magnitude (see Figure 16).
This happens multiple times to a lesser extent at different times during the simulation (100-140 s, 155-165 s, 175-180 s, 270-375 s). Similarly, it can be seen for case B that the highest power outputs occur when the PTO force is at its maximum. But overall, the extent of the produced power is about five times smaller, which can be attributed to the PTO mechanical parameters. Finally, for case A, the default WEC-Sim case, considering that the PTO stiffness coefficient is zero and the damping coefficient is very low, the generated power is roughly 500 to 1000 times smaller than the best-found sample in case C. In the context of optimizing the power output of a WEC, parallel coordinate plots can be helpful in understanding the relationships between the optimization parameters (such as damping, stiffness, wave height, and period) and the resulting power output. For example, in Figure 19, we can observe that increasing the damping and stiffness of the WEC leads to a decline in power output, while increasing the wave height leads to an upsurge in power output. Figure 19 showcases the parallel plots for two selected HC-EGWO runs that achieved the highest power output; these include the best solution found across all ten optimization runs. Furthermore, by analyzing the lines corresponding to each parameter in this Figure, it is possible to identify the range of parameter values that lead to optimal power output; for instance, the optimal ranges of K and C are [50-65] and [70-80], respectively. Another significant observation from the parallel plots is that there are sharp, non-linear relationships between the optimization parameters and the power output of the WEC. Figure 18: PTO force and power output for the 3 cases Figure 20 shows the parallel plots for the three scenarios studied in the results (see Section 6.3). These data are taken in real time during the simulation. Next, all the y-axes, except for the power output, are symmetrical, showcasing the device’s oscillating nature and, therefore, that of its parameters. But when taking a closer look at case A (Figure 20(a)), it is visible that the only parameters whose maxima coincide with the maximum power output (red lines) are the flap’s angular velocity and the PTO force; this is consistent with the equations for calculating the power output in WEC-Sim in earlier sections of the study. It also corresponds to the moderate values of the flap’s acceleration, restoring torque, excitation torque, and other hydrodynamic forces. Based on case C (Figure 20(c)), it can be seen that low absolute values of the excitation force correspond to low power. Since both these parameters positively correlate with wave height, that is consistent with the theory. Notably, not every hydrodynamic force has the same trajectory as the power; for instance, the restoring force’s maximum absolute values coincide with the lowest power outputs. Since the flap’s restoring torque depends on the flap’s displacement only, we can see that the displacement alone cannot lead to the best performance. The flap’s velocity can be considered a more critical and deciding factor. Figure 19: Two examples of the best-performing optimization method’s exploration through the decision variables (H, T, K, and C) with internal parameters of the simulator listed in Table 4, plus the average of total power output distribution visualized by a parallel coordinates plot. The dark red lines indicate the highest absorbed power output based on the configurations.
(In the figures, H, T, K, and C are the input parameters; FlapEP, FlapRD, and FlapAMP are, respectively, the excitation, radiation damping, and added mass torques; ForceTPTOP is the PTO force; meanFlapAVD and maxFlapAVD are the average and maximum angular velocity of the flap; FlapARD is the flap’s angular rotation; and Power is the average power output of the system.) Figure 20: Parallel interactions plot for the average of total power output distribution based on the three scenarios described in the Result section. The technical details of the variables are listed in Table 4. ### Sensitivity Analysis Sensitivity analysis is a crucial mechanism in post-processing optimization methods, as it identifies the most significant factors affecting the efficiency of the optimized models [68]. Table 7-Case C reports the best configuration of decision variables (H, T, K, and C) and internal hydrodynamic parameters of the simulator proposed by the HC-EGWO. Meanwhile, the sensitivity analysis results can be seen in Figure 21. The black lines show the unfeasible areas of the search space for K (Figure 21(c)) and C (Figure 21(d)). The power output of the best-found solution discovered by sensitivity analysis was 1333.1 kW. This negligible improvement confirms that the proposed optimization method (HC-EGWO) is able to explore the search space comprehensively and converge to an appropriate solution. ### Site Selection In this last segment, an operational spot is selected as the best location for the installation of the device. This is obtained by analyzing the best-found solution and finding the location, among the 105 initial data points in the Caspian Sea, that has the closest wave characteristic values, namely wave height and wave period, to the theoretical best location. The optimum values are \(\mathrm{H}=4.223\) m and \(\mathrm{T}=7.39\) s. Then, the best location was found using a Root Mean Square Error (RMSE) method (Figure 22), and the RMSE values for all the data points were evaluated. In the end, a data point belonging to the Kiashahr Port resulted in \(\mathrm{RMSE}=2.78\), which was the minimum among the analyzed spots. The latitude and longitude of the best-found location are \(37.6^{\circ}\) N, \(50.1^{\circ}\) E; this spot belongs to the Kiashahr Port. Figure 21: Sensitivity analysis of the best-found configuration using the proposed hybrid optimisation method. Figure 22: Categorization of the 105 data points based on the fitness of their significant wave height and peak period (using the RMSE method)
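The site-matching step described above can be illustrated with the short sketch below. The candidate (H, T) pairs are hypothetical placeholders rather than the 105 measured data points, and the exact RMSE formulation used in the study (for example, any weighting of H versus T) is assumed here.

```python
import numpy as np

# Hypothetical (H, T) pairs for candidate sites; the real study uses 105 data
# points extracted from Caspian Sea wave data, which are not reproduced here.
sites = {
    "site_1": (2.1, 6.0),
    "site_2": (3.8, 7.1),
    "kiashahr_port": (4.0, 7.2),  # placeholder values, not the measured ones
}

# Optimum wave characteristics found by HC-EGWO (Section 6.5).
h_opt, t_opt = 4.223, 7.39

def rmse(h, t):
    """Root mean square error between a site's (H, T) and the optimum (H, T)."""
    return np.sqrt(((h - h_opt) ** 2 + (t - t_opt) ** 2) / 2)

best = min(sites, key=lambda name: rmse(*sites[name]))
for name, (h, t) in sites.items():
    print(f"{name}: RMSE = {rmse(h, t):.3f}")
print("best-matching site:", best)
```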
## 7 Conclusion This study focused on optimizing the power take-off (PTO) parameters and site selection for an offshore oscillating surge wave energy converter (OSWEC) in the Caspian Sea. The optimization was performed using the novel Hill Climbing Explorative Gray Wolf Optimizer (HC-EGWO), which showed strong performance across 16 benchmark multimodal functions. When applied to the OSWEC case study, the HC-EGWO discovered a high-quality solution that increased the power output by approximately 3% compared to other methods. The results provide valuable insights into the complex interplay between the converter's mechanical design and the surrounding wave climate. Specifically, the sensitivity analyses revealed that wave height and PTO damping have the most substantial impact on the power output. Increasing the wave height boosts the absorbed power, while lower PTO damping is preferable. Meanwhile, moderate values of wave period and PTO stiffness lead to the highest outputs. There are also non-linear relationships and trade-offs between the parameters influencing the hydrodynamic forces acting on the device. Overall, the proposed HC-EGWO algorithm proved effective in handling this challenging multimodal optimization problem. The hybridization with local search prevented premature convergence and bolstered the exploration of the solution space. The outcomes showcase the method's capabilities for optimizing offshore renewable energy systems where complex hydrodynamic interactions are involved. They provide a valuable starting point for devising control strategies that ensure OSWECs operate safely within extreme seas while maximizing power generation. Moreover, the results offer insights into deploying OSWECs in the unique conditions of the Caspian Sea. The landscape analysis of available wave data from the region informed the creation of feasible parameter bounds, and the site selection analysis pinpointed a location with strong energy potential based on the optimal wave height and period found by the HC-EGWO. Hence, the outcomes provide a launchpad for harnessing the vast untapped wave resources of the Caspian basin. Future work can focus on incorporating more advanced hydrodynamic modeling into the WEC simulations. The effects of viscosity, turbulence, and non-linear waves could be accounted for using computational fluid dynamics. Additionally, real-world PTO systems such as hydraulic and direct-drive PTOs can be simulated to optimize their parameters. Finally, expanding the optimization to more complex problems like WEC arrays and combining it with control strategy optimization represent promising research directions.
2309.16511
Toloka Visual Question Answering Benchmark
In this paper, we present Toloka Visual Question Answering, a new crowdsourced dataset allowing comparing performance of machine learning systems against human level of expertise in the grounding visual question answering task. In this task, given an image and a textual question, one has to draw the bounding box around the object correctly responding to that question. Every image-question pair contains the response, with only one correct response per image. Our dataset contains 45,199 pairs of images and questions in English, provided with ground truth bounding boxes, split into train and two test subsets. Besides describing the dataset and releasing it under a CC BY license, we conducted a series of experiments on open source zero-shot baseline models and organized a multi-phase competition at WSDM Cup that attracted 48 participants worldwide. However, by the time of paper submission, no machine learning model outperformed the non-expert crowdsourcing baseline according to the intersection over union evaluation score.
Dmitry Ustalov, Nikita Pavlichenko, Sergey Koshelev, Daniil Likhobaba, Alisa Smirnova
2023-09-28T15:18:35Z
http://arxiv.org/abs/2309.16511v1
# Toloka Visual Question Answering Benchmark ###### Abstract In this paper, we present _Toloka Visual Question Answering_, a new crowdsourced dataset allowing comparing performance of machine learning systems against human level of expertise in the grounding visual question answering task. In this task, _given an image and a textual question, one has to draw the bounding box around the object correctly responding to that question_. Every image-question pair contains the response, with only one correct response per image. Our dataset contains 45,199 pairs of images and questions in English, provided with ground truth bounding boxes, split into train and two test subsets. Besides describing the dataset and releasing it under a CC BY license, we conducted a series of experiments on open source zero-shot baseline models and organized a multi-phase competition at WSDM Cup that attracted 48 participants worldwide. However, by the time of paper submission, no machine learning model outperformed the non-expert crowdsourcing baseline according to the intersection over union evaluation score. **(a)** What do we use to support the immune system and get vitamin C? **Fig. 1: Given an image and a textual question, draw a bounding box containing the correct answer to the question.** Above is a sample of three image-question pairs from the training subset of our dataset. Every image contains the response, with only one correct response per image. Bounding boxes are drawn for illustrative purposes only; they are not parts of images in our dataset but are available as ground truth for all images. All images are from the MS COCO dataset [17] under the same license. ## 1 Introduction Recently, prominent multi-modal deep learning models such as CLIP [20] and DALL-E [21] have demonstrated remarkable performance in demanding tasks such as text-image similarity measurement and text-to-image generation, respectively. Concurrently, modern machine learning methods have achieved superhuman results on challenging multi-task benchmarks like SuperGLUE [22] and VLUE [26]. However, most of these benchmarks incorporate a combination of well-known tasks with limited modality. In this study, we enhance the level of difficulty for machine learning methods by introducing the _Toloka Visual Question Answering_, an open-source multi-modal dataset designed to evaluate artificial intelligence systems. We provide a comprehensive description of the benchmark, outline our crowdsourcing pipeline for data collection, and present the performance of current pre-trained and fine-tuned models in tackling the challenging problem of grounding visual question answering. Our task is formulated as follows. Given an image and an English textual question, the objective is to draw a bounding box around the object that provides the correct response to the question (Figure 1). For instance, in a photograph of a bathroom, a question like "Where do I wash my hands?" would require selecting the sink as the answer. Successfully solving this task necessitates the non-trivial integration of visual, textual, and commonsense information. We assert that our approach, which employs free-form, open-ended textual questions paired with bounding boxes as answers, presents a fair challenge for contemporary multi-modal models.
The remainder of this paper is organized as follows: Section 2 provides an overview of related work, Section 3 introduces our grounding visual question answering dataset, Section 4 describes the annotation pipeline employed to create the dataset, Section 5 defines the evaluation metrics and baselines used for assessment, Section 6 presents the evaluation results for publicly-available models and submissions in our competition, Section 7 conducts an error analysis of both human and machine performance on our dataset, and finally, Section 8 outlines the limitations of our work and concludes with final remarks. ## 2 Related Work In recent years, the scientific community has made significant progress in the development of diverse datasets containing multi-modal data, enabling numerous applications at the intersection of natural language processing and computer vision. One prominent application in this domain is visual question answering (VQA) [1], where models are tasked with providing textual responses based on image-question pairs, often involving commonsense knowledge. Several datasets have been created to facilitate research in VQA, such as GQA [8], CLEVR [9], and VQA v2 [7], which leverage MS COCO1 images [17] (which is also the case for our dataset). Footnote 1: [https://cocodataset.org/](https://cocodataset.org/) However, the conventional VQA paradigm assumes that the output should be in textual form. In contrast, the visual grounding task requires finding a region of an image that corresponds to a textual description. The RefClef, RefCOCO, RefCOCO+ [11], and GrIT [18] datasets are examples that highlight the challenges of this visual grounding task. Our work lies at the intersection of these tasks, requiring both natural language understanding to process the question and the ability to comprehend the visual scene to detect relevant objects. We frame the problem as a _grounding visual question answering task_, where the model must output an object identified by a bounding box as the answer to the question. It is important to note that our problem cannot be easily reduced to standard text-only question answering or to detection based solely on textual prompts, as the answer to the question depends on the content of the image. In the grounding VQA task [28, 19, 4], one predicts the region in the image used to arrive at the answer but not necessarily the answer itself. We guarantee that the answer is always present, yet we firmly believe that our proposed setup presents a formidable challenge for modern multi-modal models. ## 3 Dataset Description Our dataset is composed of images associated with textual questions (Figure 1). One entry (instance) in our dataset is a question-image pair labeled with the ground truth coordinates of a bounding box containing the object answering the given question. We guarantee that in most cases each image contains one and only one correct response to the given question. The images were obtained from a subset of the Microsoft Common Objects in Context, MS COCO, dataset [17] that was licensed under the Creative Commons Attribution (CC BY) license. Our dataset consists of 45,199 instances, which are divided into three subsets as shown in Table 1: _train_ (38,990 instances), _public test_ (1,705 instances), and _private test_ (4,504 instances). The names of these subsets correspond to the different phases of the competition that we organized (refer to Section 6 for further information).
Since the public release of the entire dataset, the _train_ subset can be used as the training set, the _public test_ subset as the validation set, and the _private test_ subset as the test set. The dataset is provided in the form of textual files in comma-separated value (CSV) format, containing the following information: (a) URL of an image on a public content delivery network, (b) question in English, (c) image width and height, and (d) bounding box coordinates (left, top, right, bottom). We made our complete dataset, along with the baselines and ground truth bounding boxes, publicly available on various platforms to encourage research and development of multi-modal question answering models. The dataset can be accessed on Zenodo,2 Hugging Face Hub,3 Kaggle,4 and GitHub.5 It was released under the same CC BY license as the MS COCO subset we used. To ensure the integrity of the dataset and address potential concerns regarding dataset split-view poisoning [3], we computed and uploaded SHA-256 hashes for the images and electronically signed the repository commits that contain our data files. Additionally, we have uploaded all the images to Zenodo and Kaggle to mitigate any potential unavailability issues with the Azure content delivery network that we utilized to store the images. Footnote 2: [https://doi.org/10.5281/zenodo.7057740](https://doi.org/10.5281/zenodo.7057740) Footnote 3: [https://huggingface.co/datasets/toloka/WSDKup2023](https://huggingface.co/datasets/toloka/WSDKup2023) Footnote 4: [https://www.kaggle.com/datasets/dustlov/toloka-wsdm-cup-2023-vqa](https://www.kaggle.com/datasets/dustlov/toloka-wsdm-cup-2023-vqa) Footnote 5: [https://github.com/Toloka/WSDKup2023](https://github.com/Toloka/WSDKup2023) Our dataset has an equal number of images and questions (one question per image), allowing it to capture different parts of different images and making it useful for training both the visual and textual aspects of a model. Descriptive statistics of our dataset are presented in Table 1. It is evident that the subsets share a similar structure, with the majority of bounding boxes located near the centers of the images (Figure 2(a)). Furthermore, we extracted portions of images from the dataset, confined within the bounding boxes, and captioned them using BLIP-2 [15]. This process resulted in textual descriptions of the selected objects. By clustering these descriptions in the _private test_ subset, we found that 72% of them formed 65 distinct clusters, while Figure 2: Visual analysis of the ground truth bounding boxes in our dataset. 28% of objects did not belong to any cluster, demonstrating the diversity and non-triviality of our dataset. The 30 most common types of objects enclosed in bounding boxes are illustrated in Figure 2b. To further analyze the questions in our dataset, we sampled 100 random questions from the private test subset. Then, three authors of the paper manually annotated whether it is possible to answer the given question without seeing an image. We aggregated the annotations by majority vote; the inter-annotator agreement was high, as indicated by Krippendorff's \(\alpha=0.87\)[14]. Our evaluation showed that only 56% of questions were answerable without seeing an image. That is, there was a specific answer to these questions (e.g., "What does the baseball player use to hit the ball?"), while the remaining 44% of questions have multiple answers (e.g., "What can be used to eat food with?") or the question is specifically about the image (e.g., "What is the person riding?"). Thus, our analysis shows that almost half of the questions cannot be answered without seeing an image.
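As an illustration of the released format, the sketch below loads one CSV split with pandas and computes a simple statistic. The column names follow the field description above (image URL, English question, image width and height, and left/top/right/bottom box coordinates); the actual headers in the published files may differ, so they should be checked against the data before use.

```python
import pandas as pd

# Hypothetical local copy of the train split; adjust the path and column names
# after inspecting the released CSV files.
df = pd.read_csv("train.csv")

# Relative bounding-box area as a sanity check on annotation sizes.
box_area = (df["right"] - df["left"]) * (df["bottom"] - df["top"])
rel_area = box_area / (df["width"] * df["height"])

print("instances:", len(df))
print("median relative box area:", round(rel_area.median(), 3))
print(df[["question"]].head())
```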
## 4 Annotation Methodology We performed all annotations, including the creation of bounding boxes and questions, using open-call tasks on the Toloka crowdsourcing platform. The annotations were generated from scratch, utilizing exclusively the CC BY-licensed images from MS COCO. The annotators were asked to select the images containing the objects they found subjectively interesting and then compose questions about these objects. Then, for each question-image pair, we asked the annotators to select the answer on the image using a bounding box, allowing us to exclude unanswerable questions. Although it was possible to facilitate the question composition using such models as DH-GAN [10], we decided to stick to the pure crowdsourcing approach. This also allowed us to avoid synthetic data in our task and acquire more natural formulations made by real humans. We adopted the well-known methodology called Find-Fix-Verify [2] for crowdsourcing creative datasets. It separates content production and verification tasks, and enables solving both tasks using crowdsourcing at scale. Our experience shows that the key issue in creative tasks is to ensure that _all the annotators understand the task the same way as we do_. Thus, we had to run multiple iterations of task design, which finally resulted in the seven-stage annotation pipeline in Figure 3. We will describe these stages in four coherent parts in the subsequent subsections. In the verification tasks, the annotator who submitted the bounding box or question and the annotator who verified it were two different people. Figure 3: A diagram of our annotation pipeline. First, we perform an _annotator selection_ (Section 4.1). Then, we ask the annotators to draw _bounding boxes_ around interesting objects (Section 4.2) and to _compose questions_ (Section 4.3). Finally, during _post-processing_, we ask the annotators to answer the composed image-question pairs (Section 4.4). ### Annotator Selection As composing good questions and drawing good bounding boxes implies a fair amount of creativity, we had to select the annotators who understand and feel the task the same way as we do. Besides this requirement, we needed the annotators to be able to actually solve it -- by being able to formulate grammatically correct questions in English. As a result, we designed a two-step admission procedure for annotator selection that included a language test and a question verification task. Language Test. We had to make sure the annotators have good reading and writing skills in English. Thus, we designed a single-choice test of five questions that was similar to the reading comprehension part of English exams. Each question contained a paragraph of approximately ten complex English sentences and was provided with four possible interpretations. Only one interpretation was correct. Since during prototyping we found that failing this task led to further problems with the question composition, we required the annotators to solve the test without any mistakes. Question Verification. After the annotators passed the language test, we wanted them to get a good understanding of what we expect them to produce.
During prototyping we found that the question composition part required additional attention, so the annotators had to solve the same verification task as described in Section 4.3: given an image, a bounding box, and a question, confirm whether the question is well-formulated according to our strict requirements. We manually annotated the qualification dataset for this task. Those who passed this task were admitted to the real annotation. ### Bounding Boxes After sampling images from MS COCO and selecting the right annotators for our task, we performed _bounding box annotation_ in two steps. First, we asked the annotators to pick one large unique object in the given image and draw a tight bounding box around it. Then, for each pair of image and bounding box, we asked the same annotators to check the submitted bounding boxes against the same instruction that we showed in the previous step. ### Question Composition After obtaining the bounding boxes, we performed similar steps to produce questions about the selected objects. We found this part to be the most challenging in our entire annotation task and spent most of our pipeline development time on it. First, given an image and a bounding box, we asked the annotators to compose a simple question in plain English that will allow one to find the object selected in the bounding box. Then, we asked the annotators to check the submitted questions against the same instruction that we showed them before. Since we have only one bounding box per image, we composed only one question per image. ### Post-Processing We performed three additional steps to ensure the high quality of our dataset by excluding poorly-formulated, leaking, and potentially offensive instances. Unanswerable Questions. After receiving the entire dataset, we decided to ask the annotators to perform the same task as the algorithms should: given an image and a question, draw a bounding box around the answer. We were able to establish the crowdsourcing baseline for further use (see Section 5 for more details). Also, we managed to exclude from our dataset the instances for which the ground truth bounding boxes significantly diverged from the newly-annotated bounding boxes. Intersection Avoidance. Since we used images from MS COCO, we explicitly checked the overlap between bounding boxes in our dataset and in the original dataset. About 20% of them had a non-empty overlap, so we put all such instances into the train subset. Otherwise, the dataset splits were random. Offensive Content. During the dataset inspection, we found certain unacceptable questions suggesting offensive content. There were two prominent examples. First, there was a question "What can hit the animals?" for an image showing two zebras in a clearly non-hostile environment with a rock on sand.6 We reformulated the question. Second, there was a photo of a zebra shot by a group of hunters with the corresponding question. We fixed this by keyword filtering. Footnote 6: [https://toloka-cdn.azureedge.net/wsdmcup2023/000000535978-jpg](https://toloka-cdn.azureedge.net/wsdmcup2023/000000535978-jpg) ## 5 Metrics and Baselines In our task, the answers correspond to bounding box coordinates, with only one bounding box per image. Therefore, we employ the _intersection over union_ (IoU), also known as the _Jaccard index_, as our evaluation criterion.
For the \(i\)-th image, we define it as follows: \[\text{IoU}_{i}=\frac{I_{i}}{U_{i}},\] where \(I_{i}\) represents the intersection area between the ground truth bounding box and the predicted bounding box, and \(U_{i}\) is the union of these boxes. Consequently, for the entire dataset of \(N\) images, the evaluation criterion is the _average intersection over union_, denoted as \(\text{AIoU}\): \[\text{AIoU}=\frac{1}{N}\sum_{i=1}^{N}\text{IoU}_{i}\,.\] For convenience, we multiply the IoU values by 100. We used the following baselines to estimate the human and machine performance on our task. We additionally report the prediction accuracy values obtained by choosing a threshold value for IoU and treating the instances that passed the threshold as correct. We used two common thresholds, \(\text{IoU}=50\) and \(\text{IoU}=70\), and denote the accuracy values as \(\text{IoU}>50\) and \(\text{IoU}>70\), respectively. **Crowdsourcing.** We evaluated how well non-expert human annotators can solve our task by running a dedicated round of crowdsourcing annotations on Toloka. We found them to tackle this task successfully without knowing the ground truth. On all three subsets of our data, the average IoU value is \(87.124\pm 0.746\), which we consider a _strong human baseline_ for our task. The Krippendorff's \(\alpha\) coefficients for the public test and private test subsets are \(0.68\) and \(0.66\), respectively, showing decent agreement between the responses; we used \(1-\text{IoU}\) as the distance metric when calculating the \(\alpha\) value. **OFA + SAM.** The first baseline is zero-shot and is primarily based on OFA [24], combined with bounding box correction using SAM. To solve the task, we followed a two-step zero-shot setup. First, we addressed visual question answering, where the model was given a prompt "{question} Name an object in the picture" along with an image. The model provided the name of a clue object to the question. In the second step, an object corresponding to the answer from the previous step was annotated using the prompt "which region does the text "{answer}" describe?", resulting in \(\text{IoU}=42.462\). Subsequently, with the obtained bounding boxes, SAM generated the corresponding masks for the annotated object, which were then transformed into bounding boxes. This enabled us to achieve \(\text{IoU}=44.851\) with this baseline. **OFA + SAM (VQA without Image).** We added an ablation study that answers questions without images as a new baseline model called OFA + SAM (VQA without Image). In particular, this method performed visual question answering by asking the question to the OFA model with a blank white image and then drawing a bounding box corresponding to the obtained textual answer on the original image. This ablation shows that the image is important for answering questions. The results were worse than those of the original OFA + SAM baseline, demonstrating \(\text{IoU}=39.075\) vs. \(\text{IoU}=44.851\) on the private test subset of our dataset. **OVSeg + SAM.** Another zero-shot baseline, called OVSeg [16], utilizes SAM [13] as a proposal generator instead of MaskFormer in the original setup. This approach achieved \(\text{IoU}=35.073\) on the private test subset. **Kosmos-2.** We also evaluated a grounding multi-modal large language model, Kosmos-2 [18], in a zero-shot setup. We addressed the visual grounding task with "Find an object which answers the question. Question: "{question}". Answer:". This baseline demonstrated \(\mathrm{IoU}=22.571\) on the private test subset.
**YOLOR + CLIP.** Our last baseline used a detection model, YOLOR [23], to generate candidate rectangles. Then, we applied CLIP [20] to measure the similarity between the question and a part of the image bounded by each candidate rectangle. To make a prediction, it used the candidate with the highest similarity. This baseline method achieved \(\mathrm{IoU}=21.292\) on the private test subset. ## 6 Evaluation To assess our dataset beyond zero-shot baselines and crowd annotators, we conducted a large-scale open-call competition at WSDM Cup, which took place alongside the WSDM '23 conference.7 To ensure fair participation, we hosted the competition on CodaLab. Participants were given access to the complete _train_ subset for training their models, as well as a masked _public test_ subset for the leaderboard competition.8 The final rankings were determined based on performance on a concealed _private test_ subset, utilizing Docker images provided by the participants and executed on our servers. The inference code was required to complete within one hour on an Azure virtual machine equipped with 16 CPU cores, 200 GB of RAM, and one NVIDIA A100 80 GB GPU. Our competition attracted a total of 48 participants, of whom 9 submitted their code for the final stage. For the sake of brevity, Table 2 reports only the top-3 performance along with the above-described baselines. We put a more complete table with the competition results in the supplementary materials. In the following paragraphs, we provide a brief overview of the methodologies employed by the three winning teams during the reproduction phase on the private test subset of our dataset. Footnote 7: [http://www.wsdm-conference.org/2023/program/wsdm-cup](http://www.wsdm-conference.org/2023/program/wsdm-cup) Footnote 8: [https://codalab.lisn.upsaclay.fr/competitions/7434](https://codalab.lisn.upsaclay.fr/competitions/7434) **3rd Place.** The only single-person winning team, komleva.ep, fine-tuned the pre-trained multi-modal OFA model [24] on the competition dataset. In order to increase the prediction quality, this team additionally used data from the pre-processed GQA dataset [8]. **2nd Place.** The team jinx, Zhouyang_Chi devised a three-step pipeline solution. First, at the _coarse tuning_ step, they generated textual pseudo answers for the questions and tuned the OFA model to produce textual answers. Then, at the _fine tuning_ step, they used prompt engineering of the coarse-tuned OFA model to draw the bounding boxes. Finally, at the _post-processing_ step, they ran an ensemble of these coarse- and fine-tuned models to propose and select the best bounding box candidate. **1st Place.** The wztxy89 team created a variant detector using Uni-Perceiver as the multi-modal backbone network [27], with ViT-Adapter for cross-modal localization [5], and DINO as the prediction head [25]. They also included an auxiliary loss [12] and a test-time augmentation module for improved performance, which helped them win the challenge [6]. Even though the winning systems significantly outperformed our machine learning baselines, no system approached the non-expert level of human performance in our task.
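For reference, the sketch below shows one way to compute the AIoU score defined in Section 5 from ground-truth and predicted boxes given in (left, top, right, bottom) pixel coordinates. It is an illustration rather than the official evaluation code used in the competition.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (left, top, right, bottom)."""
    left = max(box_a[0], box_b[0])
    top = max(box_a[1], box_b[1])
    right = min(box_a[2], box_b[2])
    bottom = min(box_a[3], box_b[3])
    inter = max(0.0, right - left) * max(0.0, bottom - top)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def aiou(ground_truth, predictions):
    """Average IoU over a dataset, scaled by 100 as in the paper's tables."""
    scores = [iou(gt, pred) for gt, pred in zip(ground_truth, predictions)]
    return 100.0 * sum(scores) / len(scores)

# Toy example with two instances.
gt = [(10, 10, 110, 110), (0, 0, 50, 80)]
pred = [(20, 20, 120, 120), (0, 0, 50, 40)]
print(f"AIoU = {aiou(gt, pred):.2f}")
```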
## 7 Error Analysis Before performing error analysis, we evaluated whether the errors made by the three top-performing systems were similar or not. We took their predictions on the _private test_ subset and then computed the Krippendorff's \(\alpha\) coefficient similarly to Section 5. The coefficient value of 0.77 demonstrated a moderate agreement between the responses of all three systems. As this indicates that all three systems tend to produce similar correct and incorrect responses, we further analyzed the errors made by these systems and by our crowdsourcing baseline as reported in Section 6. First, we sampled the data instances from the private test set where the IoU of humans and models was less than 80, which resulted in 355 out of 4,504 instances (approximately 8%). **This sample is heavily biased towards the most challenging instances** as all the models showed poor results on them; it is _not representative of the entire dataset_. Out of these 355, we sampled 100 instances and manually evaluated the quality of the obtained bounding boxes. This provided us with 100 judgements per method. We identified the following eight error classes as summarized in Table 3 and Figure 4: * **Small Object.** The target object is very small, thus it was difficult to draw a bounding box precisely because of the low resolution. * **Insignificant Error.** The difference between prediction and ground truth is marginal (roughly, it means that IoU was greater than 70). * **Wrong Object Predicted.** The prediction is entirely incorrect, so IoU was zero. * **Inaccurate Ground Truth.** The prediction is more accurate than the ground truth. * **Inaccurate Prediction.** The prediction is significantly less accurate than the ground truth. * **Wrong Question, Correct Prediction.** The ground truth does not answer the question but the corresponding prediction does. * **Wrong Question, Incorrect Prediction.** Neither the ground truth nor the prediction answers the question. * **Question Ambiguity.** The question might not have a single correct answer (for example, when there are several objects answering the same question). Also, we noticed a few cases where the question is unclear or ambiguous (e.g., "What is he doing?"). Having compared the predictions of the crowd annotators and the top-performing machine learning systems, we noticed three facts. First, _it was especially difficult for the crowd to draw a bounding box around small objects_ (the number of such errors is much higher than for models). Second, _the models make a completely wrong prediction more often than the crowd._ However, even when the prediction is incorrect, the models still predict some object on the image and not just a random bounding box. Third, _a significant amount of crowd errors is caused by wrong or ambiguous questions_. In most such cases, the crowd gives a correct answer. This is inevitable in crowdsourced datasets even with a rigorous quality control approach like the one we used. It is worth noting that this observation is only applicable to the cases where all four approaches fail to make a good prediction (only 8% of the data). ## 8 Conclusion In our grounding visual question answering task, the inputs consisted of an image and a question, with the output being the corresponding bounding box. While the top-performing systems showed remarkable improvement compared to the baselines, none of them surpassed non-expert annotators by a significant margin. We consider this fact important, as it indicates that our benchmark remains relevant until larger multi-modal models become accessible. The entire dataset, except for the images themselves, was generated through crowdsourcing on the Toloka platform, rendering it a valuable resource for the creation of demanding benchmarks. As future work, we consider increasing the dataset size and adjusting our annotation pipeline to produce questions on different regions in the same image. We foresee the following potential **downstream applications** of our dataset and derivative models besides the evaluation of machine learning models: * **Visual Search.** With accurate bounding boxes, grounding VQA enables better understanding and recognition of objects in images, allowing users of e-commerce platforms to query products based on their appearance rather than relying only on text-based search.
As a future work, we consider increasing the dataset size and adjusting our annotation pipeline to produce questions on different regions in the same image. We foresee the following potential **downstream applications** of our dataset and derivative models beside the evaluation of machine learning models: * **Visual Search.** With accurate bounding boxes, grounding VQA enables better understanding and recognition of objects in images, allowing users of e-commerce platforms to query products based on their appearance rather than relying only on text-based search. * **Augmented Reality (AR).** Accurate bounding box annotation can help in integrating virtual objects into real-world scenes during AR applications. Grounding VQA aids in object recognition and aligning virtual content with real-world objects, and can assist in image annotation, making it easier to search images based on specific object queries. * **Robotics.** For robots to interact with objects in their environment effectively, accurate object localization is crucial. grounding VQA can be utilized to identify and track objects, enabling robots to navigate, grasp, and manipulate objects in a more intelligent and precise manner. Figure 4: Typical examples of error classes we observed during the analysis. Image captions include the associated questions followed by the error class labels and are provided. In the images, predictions are red-colored and ground truth is green-colored. However, our dataset has the following **limitations**: * **Dataset Bias.** We only consider images from MS COCO, which itself may contain biases related to gender and race. Additionally, the questions and selected objects chosen by annotators may introduce bias or have limited variability, potentially limiting the generalizability of models trained on this dataset. * **English-Only Questions.** This dataset focuses solely on English questions. However, this narrow focus may restrict the ability of models to handle other languages and cultures. * **Real-World Applications.** Since there are no production-level applications of the proposed grounding VQA setup yet, future real-world applications may involve more complex questions that require deeper understanding, reasoning, and context awareness. We wish to highlight certain **potential negative social impacts** of models trained on our dataset: * **Reinforcing Bias.** Due to potential biases present in the dataset, models trained on it may inadvertently perpetuate societal inequalities and discrimination when deployed in real-world applications. * **Ethical Use of Models.** As grounding VQA models become more advanced, they can be exploited for malicious purposes, such as privacy invasion. It is crucial to establish proper safeguards and guidelines to prevent misuse and protect individuals' rights and well-being. Acknowledgements.This work would not have been possible without the collaborative efforts of people from different teams in Toloka. Individuals listed in each contribution role are arranged alphabetically based on their surnames. We thank Ekaterina Fedorenko, Natalia Fedorova, Ujwal Gadiraju, Valentina Mikhno, and Evgeniya Sukhodolskaya for helping in organizing our competition at WSDM Cup. We acknowledge the invaluable efforts of Oleg Pavlov, Mikhail Potalitsyn, and Rosmiyama Shekhovtsova, whose expertise was crucial for the annotation pipeline design. 
We express gratitude to Anastasia Egupova, Aleksei Gerasimenko, Timur Pevzner, Kuen Pham, Ekaterina Saenko, and Anna Stepanova for their help in raising awareness of the competition among the community, allowing us to attract 48 participants from across the world. We are grateful to Egor Babkin, Dmitry Lekontsev, and Andrei Voitovich for building the Web presence of our competition. We acknowledge the invaluable contribution of Tatiana Ignatova, Daria Kalakina, and Victoriya Vidma in navigating complex legal and financial aspects. Last but not least, we would like to thank the CodaLab and WSDM Cup teams, especially Hady W. Lauw, and the competition participants, for making it a big success.
2309.04115
Two-sorted Modal Logic for Formal and Rough Concepts
In this paper, we propose two-sorted modal logics for the representation and reasoning of concepts arising from rough set theory (RST) and formal concept analysis (FCA). These logics are interpreted in two-sorted bidirectional frames, which are essentially formal contexts with converse relations. On one hand, the logic $\textbf{KB}$ contains ordinary necessity and possibility modalities and can represent rough set-based concepts. On the other hand, the logic $\textbf{KF}$ has window modality that can represent formal concepts. We study the relationship between \textbf{KB} and \textbf{KF} by proving a correspondence theorem. It is then shown that, using the formulae with modal operators in \textbf{KB} and \textbf{KF}, we can capture formal concepts based on RST and FCA and their lattice structures.
Prosenjit Howlader, Churn-Jung Liau
2023-09-08T04:28:10Z
http://arxiv.org/abs/2309.04115v1
# Two-sorted Modal Logic for Formal and Rough Concepts ###### Abstract In this paper, we propose two-sorted modal logics for the representation and reasoning of concepts arising from rough set theory (RST) and formal concept analysis (FCA). These logics are interpreted in two-sorted bidirectional frames, which are essentially formal contexts with converse relations. On one hand, the logic **KB** contains ordinary necessity and possibility modalities and can represent rough set-based concepts. On the other hand, the logic **KF** has window modality that can represent formal concepts. We study the relationship between **KB** and **KF** by proving a correspondence theorem. It is then shown that, using the formulae with modal operators in **KB** and **KF**, we can capture formal concepts based on RST and FCA and their lattice structures. Keywords: Modal logic, Formal concept analysis, Rough set theory. ## 1 Introduction Rough set theory (RST) [13] and formal concept analysis (FCA) [16] are both well-established areas of study with a variety of applications in fields like knowledge representation and data analysis. There has been a great deal of research on the intersections of RST and FCA over the years, including work by Kent [10], Saquer et al [14], Hu et al [9], Duntsch and Gediga [3], Yao [19], Yao et al [20], Meschke [12], and Ganter et al [4]. Central notions in FCA are formal contexts and their associated concept lattices. A formal context (or simply context) is a triple \(\mathbb{K}:=(G,M,I)\) where \(I\subseteq G\times M\). A given context induces two maps \(+:(\mathcal{P}(G),\subseteq)\rightarrow(\mathcal{P}(M),\supseteq)\) and \(-:(\mathcal{P}(M),\supseteq)\rightarrow(\mathcal{P}(G),\subseteq)\), where for all \(A\in\mathcal{P}(G)\) and \(B\in\mathcal{P}(M)\): \[A^{+}=\{m\in M\mid\text{ for all }g\in A\ \ gIm\},\] \[B^{-}=\{g\in G\mid\text{ for all }m\in B\ \ gIm\}.\] A pair of sets \((A,B)\) is called a _formal concept_ (or simply concept) if \(A^{+}=B\) and \(A=B^{-}\). The set \(\mathcal{FC}\) of all concepts forms a complete lattice and is called a _concept lattice_. On the other hand, the basic construct of the original RST is the _Pawlakian approximation space_ \((W,E)\), where \(W\) is the universe and \(E\) is an equivalence relation on \(W\). Then, by applying notions of modal logic to RST, Yao et al [21] proposed the generalised approximation space \((W,E)\) with \(E\) being any binary relation on \(W\). In addition, they also suggested using a binary relation between two universes of discourse, containing objects and properties respectively, as another generalised formulation of approximation spaces. The rough set model over two universes is thus a formal context in FCA. Duntsch et al. [3] defined sufficiency, dual sufficiency, possibility and necessity operators based on a rough set model over two universes, where the necessity and possibility operators are, in fact, rough set approximation operators. Based on these operators, Duntsch et al. [3] and Yao [19] introduced property oriented concepts and object oriented concepts, respectively. For a context \(\mathbb{K}:=(G,M,I)\), \(I(x):=\{y\in M:xIy\}\) and \(I^{-1}(y):=\{x\in G:xIy\}\) are the \(I\)-neighbourhood and \(I^{-1}\)-neighbourhood of \(x\) and \(y\), respectively. For \(A\subseteq G\) and \(B\subseteq M\), the pairs of dual approximation operators are defined as: \(B_{I}^{\lozenge^{-1}}:=\{x\in G:I(x)\cap B\neq\emptyset\}\), \(B_{I}^{\square^{-1}}:=\{x\in G:I(x)\subseteq B\}\).
\(A_{I^{-1}}^{\lozenge}:=\{y\in M:I^{-1}(y)\cap A\neq\emptyset\}\), \(A_{I^{-1}}^{\square}:=\{y\in M:I^{-1}(y)\subseteq A\}\). If there is no confusion about the relation involved, we shall omit the subscript and denote \(B_{I}^{\lozenge^{-1}}\) by \(B^{\lozenge^{-1}}\), \(B_{I}^{\square^{-1}}\) by \(B^{\square^{-1}}\), and similarly for the case of \(A\). A pair \((A,B)\) is a _property oriented concept_ of \(\mathbb{K}\) iff \(A^{\lozenge}=B\) and \(B^{\square^{-1}}=A\); and it is an _object oriented concept_ of \(\mathbb{K}\) iff \(A^{\square}=B\) and \(B^{\lozenge^{-1}}=A\). As in the case of FCA, the set \(\mathcal{OC}\) of all object oriented concepts and the set \(\mathcal{PC}\) of all property oriented concepts form complete lattices, which are called the _object oriented concept lattice_ and the _property oriented concept lattice_, respectively. For any concept \((A,B)\), the set \(A\) is called its _extent_ and \(B\) is called its _intent_. For concept lattices \(\mathcal{X}=\mathcal{FC},\mathcal{PC},\mathcal{OC}\), the sets of all extents and intents of \(\mathcal{X}\) are denoted by \(\mathcal{X}_{ext}\) and \(\mathcal{X}_{int}\), respectively. Proposition 1: For a context \(\mathbb{K}:=(G,M,I)\), the following holds. 1. \(\mathcal{FC}_{ext}=\{A\subseteq G\mid A^{+-}=A\}\) and \(\mathcal{FC}_{int}=\{B\subseteq M\mid B^{-+}=B\}\). 2. \(\mathcal{PC}_{ext}=\{A\subseteq G\mid A^{\lozenge\square^{-1}}=A\}\) and \(\mathcal{PC}_{int}=\{B\subseteq M\mid B^{\square^{-1}\lozenge}=B\}\). 3. \(\mathcal{OC}_{ext}=\{A\subseteq G\mid A^{\square\lozenge^{-1}}=A\}\) and \(\mathcal{OC}_{int}=\{B\subseteq M\mid B^{\lozenge^{-1}\square}=B\}\). It can be shown that the sets \(\mathcal{FC}_{ext},\mathcal{PC}_{ext}\) and \(\mathcal{OC}_{ext}\) form complete lattices and are isomorphic to the corresponding concept lattices. Analogously, the sets \(\mathcal{FC}_{int},\mathcal{PC}_{int}\) and \(\mathcal{OC}_{int}\) form complete lattices and are dually isomorphic to the corresponding concept lattices. Therefore, a concept can be identified with its extent or intent. The relationship between these two kinds of rough concept lattices and the concept lattices of FCA is investigated in [18]. In particular, the following theorem is proved. Theorem 1.1: [18] For a context \(\mathbb{K}=(G,M,I)\) and the complemented context \(\mathbb{K}^{c}=(G,M,I^{c})\), the following holds. 1. The concept lattice of \(\mathbb{K}\) is isomorphic to the property oriented concept lattice of \(\mathbb{K}^{c}\). 2. The property oriented concept lattice of \(\mathbb{K}\) is dually isomorphic to the object oriented concept lattice of \(\mathbb{K}\). 3. The concept lattice of \(\mathbb{K}\) is dually isomorphic to the object oriented concept lattice of \(\mathbb{K}^{c}\). In addition, to deal with the negation of concepts, the notions of _semiconcepts_ and _protoconcepts_ are introduced in [17]. Algebraic studies of these notions led to the definition of double Boolean algebras and pure double Boolean algebras [17]. These structures have been investigated by many authors [17, 15, 1, 8]. There are also studies of logics corresponding to these algebraic structures [6, 7]. The operators used in formal and rough concepts correspond to modalities used in modal logic [5, 2]. In particular, the operator used in FCA is the window modality (sufficiency operator) [5] and those used in RST are box (necessity operator) and diamond (possibility operator) [2].
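The derivation and approximation operators defined above are easy to experiment with computationally. The following Python sketch (ours, not from the paper) implements them for a toy context and checks a formal concept and a property oriented concept; the context and the function names are purely illustrative.

```python
# A toy formal context K = (G, M, I): objects, properties, and an incidence relation.
G = {"g1", "g2", "g3"}
M = {"m1", "m2", "m3"}
I = {("g1", "m1"), ("g1", "m2"), ("g2", "m2"), ("g3", "m3")}

def up(A):       # A^+ : properties shared by all objects in A
    return {m for m in M if all((g, m) in I for g in A)}

def down(B):     # B^- : objects having all properties in B
    return {g for g in G if all((g, m) in I for m in B)}

def diamond(A):  # A^diamond : properties possessed by at least one object in A
    return {m for m in M if any((g, m) in I for g in A)}

def box_inv(B):  # B^(box^-1) : objects whose properties all lie in B
    return {g for g in G if {m for m in M if (g, m) in I} <= B}

# A formal concept: (A, B) with A^+ = B and B^- = A.
A = {"g1", "g2"}
print(up(A) == {"m2"}, down(up(A)) == A)   # ({g1, g2}, {m2}) is a formal concept

# A property oriented concept: (A, B) with A^diamond = B and B^(box^-1) = A.
B = diamond(A)
print(B, box_inv(B) == A)                  # ({g1, g2}, {m1, m2}) is property oriented
```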
Furthermore, a context is a two-sorted structure consisting of a set of objects and a set of properties. Considering these facts, our goal in this work is to formulate two-sorted modal logics that are sound and complete with respect to the class of all contexts and can represent all three kinds of concepts and their lattices. To achieve the goal, we first introduce the notion of a _two-sorted bidirectional frame_, which is simply a formal context extended with the converse of the binary relation. Then, we propose two-sorted modal logics **KB** and **KF** as representation formalisms for rough and formal concepts, respectively, and two-sorted bidirectional frames serve as semantic models of the logics. We also prove the soundness and completeness of the proposed logics with respect to the semantic models. Next, we will review basic definitions and main results of general many-sorted polyadic modal logic. Then, in Section 2.1, we define the logic **KB** and characterize the pairs of formulas that represent property and object oriented concepts of a context. The logic **KF** and formal concepts are discussed in Section 2.2. We revisit the three concept lattices and their relations in terms of logic in Section 3. Finally, we summarize the paper and indicate directions of future work in Section 4. ### Many-sorted polyadic modal logic The many-sorted polyadic modal logic is introduced in [11]. The alphabet of the logic consists of a many-sorted signature \((S,\Sigma)\), where \(S\) is the collection of sorts and \(\Sigma\) is the set of modalities, and an \(S\)_-indexed_ family \(P:=\{P_{s}\}_{s\in S}\) of propositional variables, where \(P_{s}\neq\emptyset\) and \(P_{s}\cap P_{t}=\emptyset\) for distinct \(s,t\in S\). Each modality \(\sigma\in\Sigma\) is associated with an arity \(s_{1}s_{2}\ldots s_{n}\to s\). For any \(n\in\mathbb{N}\), we denote \(\Sigma_{s_{1}s_{2}\ldots s_{n}s}=\{\sigma\in\Sigma\mid\sigma:s_{1}s_{2}\ldots s_{n}\to s\}\). For an \((S,\Sigma)\)-modal language \(\mathcal{ML}_{S}\), the set of formulas is an \(S\)-indexed family \(Fm_{S}:=\{Fm_{s}\mid s\in S\}\), defined inductively for each \(s\in S\) by \[\phi_{s}::=p_{s}\ \mid\ \neg\phi_{s}\ \mid\ \phi_{s}\wedge\phi_{s}\ \mid\ \sigma(\phi_{s_{1}}\ldots\phi_{s_{n}})\ \mid\ \sigma^{\square}(\phi_{s_{1}}\ldots\phi_{s_{n}}),\] where \(p_{s}\in P_{s}\) and \(\sigma\in\Sigma_{s_{1}s_{2}\ldots s_{n}s}\). A _many-sorted relational frame_ is a pair \(\mathfrak{F}:=(\{W_{s}\}_{s\in S},\{R_{\sigma}\}_{\sigma\in\Sigma})\) where \(W_{s}\neq\emptyset\) for each \(s\in S\), \(W_{s_{i}}\cap W_{s_{j}}=\emptyset\) for \(s_{i}\neq s_{j}\in S\), and \(R_{\sigma}\subseteq W_{s}\times W_{s_{1}}\times\ldots\times W_{s_{n}}\) if \(\sigma\in\Sigma_{s_{1}s_{2}\ldots s_{n}s}\). The class of all many-sorted relational frames is denoted as \(\mathbb{S}\mathbb{R}\mathbb{F}\). A _valuation_ \(v\) is an \(S\)-indexed family of maps \(\{v_{s}\}_{s\in S}\), where \(v_{s}:P_{s}\rightarrow\mathcal{P}(W_{s})\). A many-sorted model \(\mathfrak{M}:=(\mathfrak{F},v)\) consists of a many-sorted frame \(\mathfrak{F}\) and a valuation \(v\). The satisfaction of a formula in a model \(\mathfrak{M}\) is defined inductively as follows. Definition 1: Let \(\mathfrak{M}:=(\{W_{s}\}_{s\in S},\{R_{\sigma}\}_{\sigma\in\Sigma},v)\) be a many-sorted model, \(w\in W_{s}\) and \(\phi\in Fm_{s}\) for \(s\in S\). We define \(\mathfrak{M},w\models_{s}\phi\) by induction over \(\phi\) as follows: 1. \(\mathfrak{M},w\models_{s}p\) iff \(w\in v_{s}(p)\) 2. \(\mathfrak{M},w\models_{s}\neg\phi\) iff \(\mathfrak{M},w\not\models_{s}\phi\)
\(\mathfrak{M},w\models_{s}\phi_{1}\wedge\phi_{2}\) iff \(\mathfrak{M},w\models_{s}\phi_{1}\) and \(\mathfrak{M},w\models_{s}\phi_{2}\) 4. If \(\sigma\in\Sigma_{s_{1}s_{2}\ldots s}\), then \(\mathfrak{M},w\models_{s}\sigma(\phi_{1},\phi_{2}\ldots\phi_{n})\) iff there is \((w_{1},w_{2}\ldots w_{n})\in W_{s_{1}}\times W_{s_{2}}\ldots W_{s_{n}}\) such that \((w,w_{1},w_{2}\ldots w_{n})\in R_{\sigma}\) and \(\mathfrak{M},w_{i}\models_{s_{i}}\phi_{i}\) for \(i\in\{1,2\ldots n\}\) Definition 2: [11] Let \(\mathfrak{M}\) be an \((S,\Sigma)\)-model. Then, for a set \(\Phi_{s}\) of formula, \(\mathfrak{M},w\models_{s}\Phi_{s}\) if \(\mathfrak{M},w\models_{s}\phi\) for all \(\phi\in\Phi_{s}\). Let \(\mathcal{C}\) be a class of models. Then, for a set \(\Phi_{s}\cup\{\phi\}\subseteq Fm_{s}\), \(\phi\) is a local semantic consequence of \(\Phi_{s}\) over \(\mathcal{C}\) and denoted as \(\Phi_{s}\models_{s}^{\mathcal{C}}\phi\) if \(\mathfrak{M},w\models_{s}\Phi_{s}\) implies \(\mathfrak{M},w\models_{s}\phi\) for all models \(\mathfrak{M}\in\mathcal{C}\). If \(\mathcal{C}\) is the class of all models, we omit the superscript and denote it as \(\Phi_{s}\models_{s}\phi\). If \(\Phi_{s}\) is empty, we say \(\phi\) is valid in \(\mathcal{C}\) and denoted it as \(\mathcal{C}\models_{s}\phi\). When \(\mathcal{C}\) is the class of all models based on a given frame \(\mathfrak{F}\), we also denote it by \(\mathfrak{F}\models_{s}\phi\). To characterize the local semantic consequence, the modal system \(\mathbf{K}_{(S,\Sigma)}:=\{\mathbf{K}_{s}\}_{s\in S}\) is proposed in [11], where \(\mathbf{K}_{s}\) is the axiomatic system in Figure 1 in which \(\sigma\in\Sigma_{s_{1}\ldots s_{n},s}\): When the signature is clear from the context, the subscripts may be omitted and we simply write the system as \(\mathbf{K}\). Definition 3: [11] Let \(\Lambda\subseteq Fm_{S}\) be an \(S\)-sorted set of formulas. The normal modal logic defined by \(\Lambda\) is \(\mathbf{K}\Lambda:=\{\mathbf{K}\Lambda_{s}\}_{s\in S}\) where \(\mathbf{K}\Lambda_{s}:=\mathbf{K}_{s}\cup\{\lambda^{\prime}\in Fm_{s}\mid \lambda^{\prime}\) is obtained by uniform substitution applied to a formula \(\lambda\in\Lambda_{s}\}\). Definition 4: [11] A sequence of formulas \(\phi_{1},\phi_{2},\ldots\phi_{n}\) is called a \(\mathbf{K}\Lambda\)-proof for the formula \(\phi\) if \(\phi_{n}=\phi\) and \(\phi_{i}\) is in \(\mathbf{K}\Lambda_{s_{i}}\) or inferred from \(\phi_{1},\ldots,\phi_{i-1}\) using modus pones and universal generalization. If \(\phi\) has a proof in \(\mathbf{K}\Lambda\), we say that \(\phi\) is a theorem and write \(\vdash_{s}^{\mathbf{K}\Lambda}\phi\). Let \(\Phi\cup\{\phi\}\subseteq Fm_{s}\) be a set of formulas. Then, we say that \(\phi\) is provable form \(\Phi\), denoted by \(\Phi\vdash_{s}^{\mathbf{K}\Lambda}\phi\), if there exist \(\phi_{1},\ldots,\phi_{n}\in\Phi\) such that \(\vdash_{s}^{\mathbf{K}\Lambda}(\phi_{1}\wedge\ldots\wedge\phi_{n})\rightarrow\phi\). In addition, the set \(\Phi\) is \(\mathbf{K}\Lambda\)-inconsistent if \(\bot\) is provable from it, otherwise it is \(\mathbf{K}\Lambda\)-consistent. Proposition 2: [11]\(\mathbf{K}\Lambda\) is strongly complete with respect to a class of models \(\mathcal{C}\) if and only if any consistent set \(\Gamma\) of formulas is satisfied in some model from \(\mathcal{C}\). Definition 5: [11] The canonical model is \[\mathfrak{M}^{\mathbf{K}\Lambda}:=(\{W_{s}^{\mathbf{K}\Lambda}\}_{s\in S},\{R_{ \sigma}^{\mathbf{K}\Lambda}\}_{\sigma\in\Sigma},V^{\mathbf{K}\Lambda})\] where 1. 
for any \(s\in S\), \(W_{s}^{\mathbf{K}\Lambda}=\{\Phi\subseteq Fm_{s}\ \mid\ \Phi\text{ is maximally $\mathbf{K}\Lambda$-consistent}\}\), 2. for any \(\sigma\in\Sigma_{s_{1}\ldots s_{n},s}\), \(w\in W_{s}^{\mathbf{K}\Lambda},u_{1}\in W_{s_{1}}^{\mathbf{K}\Lambda},\ldots u _{n}\in W_{s_{n}}^{\mathbf{K}\Lambda}\), \(R_{\sigma}^{\mathbf{K}\Lambda}wu_{1}\ldots u_{n}\) if and only if \((\psi_{1},\ldots,\psi_{n})\in u_{1}\times u_{2}\times\ldots\times u_{n}\) implies that \(\sigma(\psi_{1},\ldots,\psi_{n})\in w\). 3. \(V^{\mathbf{K}\Lambda}=\{V_{s}^{\mathbf{K}\Lambda}\}\) is the valuation defined by \(V_{s}^{\mathbf{K}\Lambda}(p)=\{w\in W_{s}^{\mathbf{K}\Lambda}\ \mid\ p\in w\}\) for any \(s\in S\) and \(p\in P_{s}\). Lemma 1: [11] If \(s\in S\), \(\phi\in Fm_{s}\), \(\sigma\in\Sigma_{s_{1}\ldots s_{n},s}\) and \(w\in W_{s}^{\mathbf{K}\Lambda}\) then the following hold: 1. \(R_{\sigma}^{\mathbf{K}\Lambda}wu_{1}\ldots u_{n}\) if and only if for any formulas \(\psi_{1},\ldots,\psi_{n}\), \(\sigma^{\square}(\psi_{1},\ldots,\psi_{n})\in w\) implies \(\psi_{i}\in u_{i}\) for some \(i\in\{1,2,\ldots,n\}\). 2. If \(\sigma(\psi_{1},\ldots,\psi_{n})\in w\) then for any \(i\in\{1,2\ldots,n\}\) there is \(u_{i}\in W_{s_{i}}^{\mathbf{K}\Lambda}\) such that \(\psi_{1}\in u_{1},\ldots,\psi_{n}\in u_{n}\) and \(R_{\sigma}^{\mathbf{K}\Lambda}wu_{1}\ldots u_{n}\). 3. \(\mathfrak{M}^{\mathbf{K}\Lambda},w\models_{s}\phi\) if and only if \(\phi\in w\). Proposition 3: [11] If \(\Phi_{s}\) is a \(\mathbf{K}\Lambda\)-consistent set of formulas then it is satisfied in the canonical model. These results implies the soundness and completeness of \(\mathbf{K}\) directly. Theorem 3.1: \(\mathbf{K}\) is sound and strongly complete with respect to the class of all \((S,\Sigma)\)-models, that is, for any \(s\in S\), \(\phi\in Fm_{s}\) and \(\Phi_{s}\subseteq Fm_{s}\), \(\Phi_{s}\vdash_{s}^{\mathbf{K}}\phi\) if and only if \(\Phi_{s}\models_{s}\phi\). ## 2 Two-sorted modal logic and concept lattices In this section, we present the logics **KB** and **KF** and discuss their relationship with rough and formal concepts. ### Two-sorted modal logic and concept lattices in rough set theory Let us consider a special kind of two-sorted signature \((\{s_{1},s_{2}\},\Sigma)\) where \(\Sigma=\Sigma_{1}\uplus\Sigma_{2}\) is the direct sum of two sets of unary modalities such that \(\Sigma_{1}=\Sigma_{s_{1}s_{2}}\) and \(\Sigma_{2}=\Sigma_{s_{2}s_{1}}=\Sigma_{1}^{-1}:=\{\sigma^{-1}:\sigma\in\Sigma_ {1}\}\). We say that the signature is bidirectional. Modal languages built over bidirectional signatures are interpreted in bidirectional frames. Definition 6: For the signature above, a two-sorted bidirectional frame is a quadruple : \[\mathfrak{F}_{2}:=(W_{1},W_{2},\{R_{\sigma}\}_{\sigma\in\Sigma_{1}},\{R_{ \sigma^{-1}}\}_{\sigma\in\Sigma_{1}})\] where \(W_{1},W_{2}\) are non-empty disjoint sets and \(R_{\sigma}\subseteq W_{2}\times W_{1}\), \(R_{\sigma^{-1}}\) is the converse of \(R_{\sigma}\). The class of all two-sorted bidirectional frame is denoted as \(\mathbb{BSFR}_{2}\). The logic system **KB** for two-sorted bidirectional frames is define as \(\textbf{K}\Lambda\) where \(\Lambda\) consists of the following axioms: \[\text{(B) }p\rightarrow(\sigma^{-1})^{\square}\sigma p\text{ and }q \rightarrow\sigma^{\square}\sigma^{-1}q\text{ where }p\in P_{s_{1}}\text{ and }q\in P_{s_{2}}.\] Theorem 2.3: **KB** is sound with respect to class \(\mathbb{BSFR}_{2}\) of all two-sorted bidirectional frame. Proof: The proof is straightforward. 
Here we give the proof for the axiom \(p\rightarrow(\sigma^{-1})^{\square}\sigma p\). Let \(\mathfrak{M}\) be a model based on the frame \(\mathfrak{F}_{2}\) defined above and \(\mathfrak{M},w_{1}\models_{s_{1}}p\) for some \(w_{1}\in W_{1}\). Now, for any \(w_{2}\in W_{2}\) such that \(R_{\sigma^{-1}}w_{1}w_{2}\), we have \(\mathfrak{M},w_{2}\models_{s_{2}}\sigma p\) because \(R_{\sigma}w_{2}w_{1}\) follows from the converse of relation. This leads to \(\mathfrak{M},w_{1}\models_{s_{1}}(\sigma^{-1})^{\square}\sigma p\) immediately. The completeness theorem is proved using the canonical model of **KB**, which is an instance of that constructed in Definition 5. Hence, \[\mathfrak{M}^{\textbf{KB}}:=(\{W_{s_{1}}^{\textbf{KB}},W_{s_{2}}^{\textbf{KB}} \},\{R_{\sigma}^{\textbf{KB}},R_{\sigma^{-1}}^{\textbf{KB}}\}_{\sigma\in\Sigma },V^{\textbf{KB}})\] It is easy to see that the model satisfies the following properties for \(x\in W_{s_{1}}^{\textbf{KB}}\) and \(y\in W_{s_{2}}^{\textbf{KB}}\): 1. \(R_{\sigma}^{\textbf{KB}}yx\) iff \(\phi\in x\) implies that \(\sigma\phi\in y\) for any \(\phi\in Fm_{s_{1}}\). 2. \(R_{\sigma^{-1}}^{\textbf{KB}}xy\) iff \(\phi\in y\) implies that \(\sigma^{-1}\phi\in x\) for any \(\phi\in Fm_{s_{2}}\). Theorem 2.4: **KB** is strongly complete with respect to class of all two-sorted bidirectional models, that is for any \(s\in\{s_{1},s_{2}\}\), \(\phi\in Fm_{s}\) and \(\Phi_{s}\subseteq Fm_{s}\), \(\Phi_{s}\models_{s}^{\mathbb{BSFR}_{2}}\phi\) implies that \(\Phi_{s}\models_{s}^{\textbf{KB}}\phi\). Proof: It is sufficient to show that the canonical model is a bidirectional frame. Then, the result follows from Propositions 2 and 3. Let \(x\in W_{s_{1}}^{\mathbf{KB}}\) and \(y\in W_{s_{2}}^{\mathbf{KB}}\) and assume \((y,x)\in R_{\sigma}^{\mathbf{KB}}\). Then, for any \(\phi\in y\), we have \(\sigma^{\square}\sigma^{-1}\phi\in y\) by axiom (B), which in turns implies \(\sigma^{-1}\phi\in x\) by Lemma 1. Hence, \((x,y)\in R_{\sigma^{-1}}^{\mathbf{KB}}\) by property (b) of the canonical model. Analogously, we can show that \((x,y)\in R_{\sigma^{-1}}^{\mathbf{KB}}\) implies \((y,x)\in R_{\sigma}^{\mathbf{KB}}\). That is, \(R_{\sigma^{-1}}^{\mathbf{KB}}\) is indeed the converse of \(R_{\sigma}^{\mathbf{KB}}\). To represent rough concepts, we consider a particular bidirectional signature \((\{s_{1},s_{2}\},\{\Diamond,\Diamond^{-1}\})\) (i.e. the signature that \(\Sigma_{1}\) is a singleton containing the modality \(\Diamond\)). As usual, we denote the dual modalities of \(\Diamond\) and \(\Diamond^{-1}\) by \(\square\) and \(\square^{-1}\) respectively. Let \(\mathcal{SF}_{2}\) denote the class of all bidirectional frames over the signature and let \(\mathcal{K}\) be the set of all contexts. Then, there is a bijective correspondence between \(\mathcal{K}\) and \(\mathcal{SF}_{2}\) given by \((G,M,I)\mapsto(G,M,I^{-1},I)\). Note that \(I^{-1}\) and \(I\) respectively correspond to modalities \(\Diamond\) and \(\Diamond^{-1}\) under the mapping. We use \(Fm(\mathbf{RS}):=\{Fm(\mathbf{RS})_{s_{1}},Fm(\mathbf{RS})_{s_{2}}\}\) and \(\mathbf{KB}_{2}\) to denote the indexed family of formulas and its logic system over the particular signature respectively. By Theorems 3 and 4, \(\mathbf{KB}_{2}\) is sound and complete with respect to the class \(\mathcal{SF}_{2}\) and hence \(\mathcal{K}\). 
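These definitions can be exercised directly. The sketch below (a toy context and a naive formula evaluator, both invented for illustration) builds the bidirectional frame \((G,M,I^{-1},I)\) of a small context and checks by brute force that both instances of axiom (B) hold at every world under every valuation of a single variable.

```python
from itertools import chain, combinations

# Toy context (G, M, I) and its bidirectional frame (G, M, I^{-1}, I); illustration only.
G = {"g1", "g2", "g3"}
M = {"a", "b", "c"}
I = {("g1", "a"), ("g1", "b"), ("g2", "b"), ("g3", "c")}
R_dia     = {(y, x) for (x, y) in I}   # relation for ◇ (arity s1 -> s2), i.e. I^{-1}
R_dia_inv = set(I)                     # relation for ◇^{-1} (arity s2 -> s1)

def sat(w, phi, V):
    """Evaluate a nested-tuple formula at world w; V maps variables to truth sets."""
    op = phi[0]
    if op == "var":   return w in V[phi[1]]
    if op == "not":   return not sat(w, phi[1], V)
    if op == "dia":   return any(sat(x, phi[1], V) for (y, x) in R_dia if y == w)
    if op == "box":   return all(sat(x, phi[1], V) for (y, x) in R_dia if y == w)
    if op == "dia-":  return any(sat(y, phi[1], V) for (x, y) in R_dia_inv if x == w)
    if op == "box-":  return all(sat(y, phi[1], V) for (x, y) in R_dia_inv if x == w)
    raise ValueError(op)

def subsets(S):
    S = list(S)
    return chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))

# Axiom (B): p -> □^{-1}◇p must hold at every object, for every valuation of p ...
p = ("var", "p")
assert all((not sat(g, p, {"p": set(A)})) or sat(g, ("box-", ("dia", p)), {"p": set(A)})
           for A in subsets(G) for g in G)
# ... and dually q -> □◇^{-1}q at every property, for every valuation of q.
q = ("var", "q")
assert all((not sat(m, q, {"q": set(B)})) or sat(m, ("box", ("dia-", q)), {"q": set(B)})
           for B in subsets(M) for m in M)
print("both (B) axioms hold on this frame")
```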
Let us denote the truth set of a formula \(\phi\in Fm(\mathbf{RS})_{s_{i}}(i=1,2)\) in a model \(\mathfrak{M}\) by \([[\phi]]_{\mathfrak{M}}:=\{w\in W_{i}\mid\mathfrak{M},w\models_{s_{i}}\phi\}\). We usually omit the subscript and simply write \([[\phi]]\). Proposition 4: Let \(\mathbb{K}:=(G,M,I)\) be a context and \(\mathfrak{M}:=(G,M,I^{-1},I,v)\) be a model based on its corresponding frame. Then, the relationship between approximation operators and modal formulas is as follows: 1. \([[\phi]]^{\Diamond}=[[\Diamond\phi]]\) and \([[\phi]]^{\square}=[[\square\phi]]\) for \(\phi\in Fm(\mathbf{RS})_{s_{1}}\). 2. \([[\phi]]^{\Diamond^{-1}}=[[\Diamond^{-1}\phi]]\) and \([[\phi]]^{\square^{-1}}=[[\square^{-1}\phi]]\) for \(\phi\in Fm(\mathbf{RS})_{s_{2}}\). Definition 7: Let \(\mathfrak{C}:=\{(G,M,I^{-1},I)\}\) be a frame based on the context \(\mathbb{K}=(G,M,I)\). Then, we define 1. \(Fm_{PC_{ext}}:=\{\phi\in Fm(\mathbf{RS})_{s_{1}}\mid\models_{s_{1}}^{ \mathfrak{C}}\square^{-1}\Diamond\phi\leftrightarrow\phi\}\) and \(Fm_{PC_{int}}:=\{\phi\in Fm(\mathbf{RS})_{s_{2}}\mid\models_{s_{2}}^{ \mathfrak{C}}\Di\square^{-1}\phi\leftrightarrow\phi\}\) 2. \(Fm_{OC_{ext}}:=\{\phi\in Fm(\mathbf{RS})_{s_{1}}\mid\models_{s_{1}}^{ \mathfrak{C}}\Diamond^{-1}\Box\phi\leftrightarrow\phi\}\) and \(Fm_{OC_{int}}:=\{\phi\in Fm(\mathbf{RS})_{s_{2}}\mid\models_{s_{2}}^{ \mathfrak{C}}\Box\Diamond^{-1}\phi\leftrightarrow\phi\}\) 3. \(Fm_{PC}:=\{(\phi,\psi)\mid\phi\in Fm_{PC_{ext}},\psi\in Fm_{PC_{int}}, \models_{s_{1}}^{\mathfrak{C}}\phi\leftrightarrow\square^{-1}\psi,\models_{s_{2 }}^{\mathfrak{C}}\Diamond\phi\leftrightarrow\psi\}\) 4. \(Fm_{OC}:=\{(\phi,\psi)\mid\phi\in Fm_{OC_{ext}},\psi\in Fm_{OC_{int}}, \models_{s_{1}}^{\mathfrak{C}}\phi\leftrightarrow\Diamond^{-1}\psi,\models_{s_{2 }}^{\mathfrak{C}}\Box\phi\leftrightarrow\psi\}\) Obviously, when \((\phi,\psi)\in Fm_{PC}\), \(([[\phi]],[[\psi]])\in\mathcal{PC}\) for any models based on \(\mathfrak{C}\). Hence, \(Fm_{PC}\) consists of pairs of formulas representing property oriented concepts. Analogously, \(Fm_{OC}\) provides the representation of object oriented concepts. Note that these sets are implicitly parameterized by the underlying context and should be indexed with \(\mathbb{K}\). However, for simplicity, we usually omit the index. ### Two sorted modal logic and concept lattice in formal concept analysis To represent formal concepts, we consider another two-sorted bidirectional signature \((\{s_{1},s_{2}\},\{\boxminus,\boxminus^{-1}\})\), where \(\Sigma_{s_{1}s_{2}}=\{\boxminus\}\) and \(\Sigma_{s_{2}s_{1}}=\{\boxminus^{-1}\}\), and the logic \(\mathbf{KF}\) based on it. Syntactically, the signature is the same as that for \(\mathbf{KB}_{2}\) except we use different symbols to denote the modalities. Hence, formation rules of formulas remain unchanged and we denote the indexed family of formulas by \(Fm(\mathbf{KF})=\{Fm(\mathbf{KF})_{s_{1}},Fm(\mathbf{KF})_{s_{2}}\}\). In addition, while both \(\mathbf{KF}\) and \(\mathbf{KB}_{2}\) are interpreted in bidirectional models, the main difference between them is on the way of their modalities being interpreted. Definition 8: Let \(\mathfrak{M}:=(W_{1},W_{2},R,R^{-1},v)\). Then, 1. For \(\phi\in Fm(\mathbf{KF})_{s_{1}}\) and \(w\in W_{2}\), \(\mathfrak{M},w\models_{s_{2}}\boxminus\phi\) iff for any \(w^{\prime}\in W_{1}\), \(\mathfrak{M},w^{\prime}\models_{s_{1}}\phi\) implies \(R(w,w^{\prime})\) 2. 
For \(\phi\in Fm(\mathbf{KF})_{s_{2}}\) and \(w\in W_{1}\), \(\mathfrak{M},w\models_{s_{1}}\boxminus^{-1}\phi\) iff for any \(w^{\prime}\in W_{2}\), \(\mathfrak{M},w^{\prime}\models_{s_{2}}\phi\) implies \(R^{-1}(w,w^{\prime})\). The logic system \(\mathbf{KF}:=\{\mathbf{KF}_{s_{1}},\mathbf{KF}_{s_{2}}\}\) is shown in Figure 2. Figure 2: The axiomatic system \(\mathbf{KF}\) We define a translation \(\rho:Fm(\mathbf{KF})\to Fm(\mathbf{RS})\) where \(\rho=\{\rho_{1},\rho_{2}\}\) is defined as follows: 1. \(\rho_{i}(p):=p\) for all \(p\in P_{s_{i}}\) for \(i=1,2\). 2. \(\rho_{i}(\phi\wedge\psi):=\rho_{i}(\phi)\wedge\rho_{i}(\psi)\) for \(\phi,\psi\in Fm(\mathbf{KF})_{s_{i}}\), \(i=1,2\). 3. \(\rho_{i}(\neg\phi):=\neg\rho_{i}(\phi)\) for \(\phi\in Fm(\mathbf{KF})_{s_{i}}\), \(i=1,2\). 4. \(\rho_{1}(\boxdot\phi):=\square\neg\rho_{1}(\phi)\) for \(\phi\in Fm_{\mathbf{KF}_{s_{1}}}\). 5. \(\rho_{2}(\boxdot^{-1}\phi):=\square^{-1}\neg\rho_{2}(\phi)\) for \(\phi\in Fm_{\mathbf{KF}_{s_{2}}}\). Theorem 5.1: For any formula \(\phi\in Fm_{\mathbf{KF}_{si}}(i=1,2)\) the following hold. 1. \(\Phi\vdash^{\mathbf{KF}}\phi\) if and only if \(\rho(\Phi)\vdash^{\mathbf{KB}_{2}}\rho(\phi)\) for any \(\Phi\subseteq Fm_{\mathbf{KF}_{si}}\). 2. Let \(\mathfrak{M}:=(W_{1},W_{2},I,I^{-1},v)\) be a model and \(\mathfrak{M}^{c}:=(W_{1},W_{2},I^{c},(I^{-1})^{c},v)\) be the corresponding complemented model. Then, for \(w\in W_{i}\), \(\mathfrak{M},w\models_{s_{i}}\phi\) if and only if \(\mathfrak{M}^{c},w\models_{s_{i}}\rho(\phi)\) for all \(i=1,2\). 3. \(\phi\) is valid in the class \(\mathcal{SF}_{2}\) if and only if \(\rho(\phi)\) is valid in \(\mathcal{SF}_{2}\). Proof: 1. We can prove it by showing that \(\phi\) is an axiom in \(\mathbf{KF}\) if and only if \(\rho(\phi)\) is an axiom in \(\mathbf{KB}_{2}\), and for each rule in \(\mathbf{KF}\), there is a translation of it in \(\mathbf{KB}_{2}\) and vice versa. 2. By induction on the complexity of formulas; as usual, the proofs of the basis and Boolean cases are straightforward. For \(\phi=\boxdot\psi\), let us assume any \(w\in W_{2}\). Then, by Definition 8, \(\mathfrak{M},w\models_{s_{2}}\phi\) iff for all \(w^{\prime}\in W_{1}\), \(I^{c}ww^{\prime}\) implies that \(\mathfrak{M},w^{\prime}\models_{s_{1}}\neg\psi\). By induction hypothesis, this means that for all \(w^{\prime}\in W_{1}\), \(I^{c}ww^{\prime}\) implies that \(\mathfrak{M}^{c},w^{\prime}\models_{s_{1}}\neg\rho(\psi)\). That is, \(\mathfrak{M}^{c},w\models_{s_{2}}\square\neg\rho(\psi)\). By definition of \(\rho\), this is exactly \(\mathfrak{M}^{c},w\models_{s_{2}}\rho(\phi)\). The case of \(\phi=\boxdot^{-1}\psi\) is proved analogously. 3. This follows immediately from (b). Proposition 5.2: (a) For \(\phi_{1},\phi_{2}\in Fm(\mathbf{KF})_{s_{1}}\), if \(\vdash^{\mathbf{KF}}\phi_{1}\rightarrow\phi_{2}\) then \(\vdash^{\mathbf{KF}}\boxdot\phi_{2}\rightarrow\boxdot\phi_{1}\). (b) For \(\phi_{1},\phi_{2}\in Fm(\mathbf{KF})_{s_{2}}\), if \(\vdash^{\mathbf{KF}}\phi_{1}\rightarrow\phi_{2}\) then \(\vdash^{\mathbf{KF}}\boxdot^{-1}\phi_{2}\rightarrow\boxdot^{-1}\phi_{1}\). Proof: We only prove (a) and the proof of (b) is similar. 
\[\vdash^{\mathbf{KF}}\phi_{1}\rightarrow\phi_{2}\] \[\vdash^{\mathbf{KB}_{2}}\rho(\phi_{1})\rightarrow\rho(\phi_{2})\quad(\text{Theorem 5.1 (a)})\] \[\vdash^{\mathbf{KB}_{2}}(\rho(\phi_{1})\rightarrow\rho(\phi_{2}))\rightarrow(\neg\rho(\phi_{2})\rightarrow\neg\rho(\phi_{1}))\quad(\text{PL})\] \[\vdash^{\mathbf{KB}_{2}}\neg\rho(\phi_{2})\rightarrow\neg\rho(\phi_{1})\quad(\text{MP})\] \[\vdash^{\mathbf{KB}_{2}}\square(\neg\rho(\phi_{2})\rightarrow\neg\rho(\phi_{1}))\quad(\text{UG})\] \[\vdash^{\mathbf{KB}_{2}}\square(\neg\rho(\phi_{2})\rightarrow\neg\rho(\phi_{1}))\rightarrow(\square\neg\rho(\phi_{2})\rightarrow\square\neg\rho(\phi_{1}))\quad(\text{K})\] \[\vdash^{\mathbf{KB}_{2}}\square\neg\rho(\phi_{2})\rightarrow\square\neg\rho(\phi_{1})\quad(\text{MP})\] \[\vdash^{\mathbf{KF}}\boxdot\phi_{2}\rightarrow\boxdot\phi_{1}\quad(\text{Theorem 5.1 (a)})\] Theorem 5.3: **KF** is sound and strongly complete with respect to the class \(\mathcal{SF}_{2}\). Proof: This follows from Theorem 5.1 and the fact that \(\mathbf{KB}_{2}\) is sound and strongly complete with respect to \(\mathcal{SF}_{2}\). Proposition 6: Recalling the definition of truth set, we have 1. \([[\boxminus\phi]]=[[\phi]]^{+}\) for \(\phi\in Fm(\mathbf{KF})_{s_{1}}\) 2. \([[\boxminus^{-1}\phi]]=[[\phi]]^{-}\) for \(\phi\in Fm(\mathbf{KF})_{s_{2}}\). Definition 9: Let \(\mathfrak{C}:=\{(G,M,I^{-1},I)\}\) be a frame based on the context \((G,M,I)\). Then, we define 1. \(Fm_{FC_{ext}}:=\{\phi\in Fm(\mathbf{KF})_{s_{1}}\mid\models^{\mathfrak{C}}_{s_{1}}\boxminus^{-1}\boxminus\phi\leftrightarrow\phi\}\) and \(Fm_{FC_{int}}:=\{\phi\in Fm(\mathbf{KF})_{s_{2}}\mid\models^{\mathfrak{C}}_{s_{2}}\boxminus\boxminus^{-1}\phi\leftrightarrow\phi\}\) 2. \(Fm_{FC}:=\{(\phi,\psi)\mid\phi\in Fm_{FC_{ext}},\psi\in Fm_{FC_{int}},\models^{\mathfrak{C}}_{s_{1}}\phi\leftrightarrow\boxminus^{-1}\psi,\models^{\mathfrak{C}}_{s_{2}}\boxminus\phi\leftrightarrow\psi\}\) In other words, the set \(Fm_{FC}\) represents formal concepts induced from the context \((G,M,I)\). ## 3 Logical representation of three concept lattices We have seen that certain pairs of formulas in the logics \(\mathbf{KF}\) and \(\mathbf{KB}_{2}\) can represent concepts in FCA and RST respectively. The observation suggests the definition below. Definition 10: Let \(\phi\in Fm(\mathbf{RS})_{s_{1}}\), \(\psi\in Fm(\mathbf{RS})_{s_{2}}\), \(\eta\in Fm(\mathbf{KF})_{s_{1}}\), and \(\gamma\in Fm(\mathbf{KF})_{s_{2}}\). Then, for a context \(\mathbb{K}=(G,M,I)\), we say that 1. \((\phi,\psi)\) is a _(logical) property oriented concept_ of \(\mathbb{K}\) if \((\phi,\psi)\in Fm_{PC}\). 2. \((\phi,\psi)\) is a _(logical) object oriented concept_ of \(\mathbb{K}\) if \((\phi,\psi)\in Fm_{OC}\). 3. \((\eta,\gamma)\) is a _(logical) formal concept_ of \(\mathbb{K}\) if \((\eta,\gamma)\in Fm_{FC}\). We now explore the relationships between the three notions and their properties. In what follows, for a context \(\mathbb{K}=(G,M,I)\), we usually use \(\mathfrak{C}_{0}:=\{(G,M,I^{-1},I)\}\) and \(\mathfrak{C}_{1}:=\{(G,M,(I^{c})^{-1},I^{c})\}\) to denote frames corresponding to \(\mathbb{K}\) and \(\mathbb{K}^{c}\) respectively. Proposition 7: Let \(\mathbb{K}:=(G,M,I)\) be a context. Then, 1. \((\phi,\psi)\) is a property oriented concept of \(\mathbb{K}\) iff \((\neg\phi,\neg\psi)\) is an object oriented concept of \(\mathbb{K}\) for \(\phi\in Fm(\mathbf{RS})_{s_{1}}\) and \(\psi\in Fm(\mathbf{RS})_{s_{2}}\). 2. 
\((\phi,\psi)\) is a formal concept of \(\mathbb{K}\) iff \((\rho(\phi),\neg\rho(\psi))\) is a property oriented concept of \(\mathbb{K}^{c}\) for \(\phi\in Fm(\mathbf{KF})_{s_{1}}\) and \(\psi\in Fm(\mathbf{KF})_{s_{2}}\). 3. \((\phi,\psi)\) is a formal concept of \(\mathbb{K}\) iff \((\neg\rho(\phi),\rho(\psi))\) is an object oriented concept of \(\mathbb{K}^{c}\) for \(\phi\in Fm(\mathbf{KF})_{s_{1}}\) and \(\psi\in Fm(\mathbf{KF})_{s_{2}}\).. Proof: (a) Suppose that \((\phi,\psi)\) is a property oriented concept of \(\mathbb{K}\), then by definition, \(\models^{\mathfrak{C}_{0}}_{s_{1}}\boxed{\Sigma}^{-1}\lozenge\phi\leftrightarrow\phi\), \(\models^{\mathfrak{C}_{0}}_{s_{2}}\lozenge\Box^{-1}\psi\leftrightarrow\psi\), \(\models^{\mathfrak{C}_{0}}_{s_{1}}\phi\leftrightarrow\Box^{-1}\psi\), and \(\models^{\mathfrak{C}_{0}}_{s_{2}}\lozenge\phi\leftrightarrow\psi\). Hence, we have the following derivation, \[\models^{\mathfrak{C}_{0}}_{s_{1}}\boxed{\Sigma}^{-1}\lozenge\phi\leftrightarrow\phi\] \[\models^{\mathfrak{C}_{0}}_{s_{1}}(\Box^{-1}\lozenge\phi \leftrightarrow\phi)\leftrightarrow(\neg\phi\leftrightarrow\neg\Box^{-1} \lozenge\phi)\] \[\models^{\mathfrak{C}_{0}}_{s_{1}}\neg\phi\leftrightarrow\neg\Box^{-1} \lozenge\phi\] \[\models^{\mathfrak{C}_{0}}_{s_{1}}\neg\phi\leftrightarrow\lozenge^{-1} \Box\neg\phi\] Therefore, \(\neg\phi\in Fm_{PC_{ext}}\). Similarly, by \(\models^{\mathfrak{C}_{0}}_{s_{2}}\Diamond\Box^{-1}\psi\leftrightarrow\psi\), contraposition and modus ponens, we can show that \(\neg\psi\in Fm_{PC_{int}}\). Using \(\models^{\mathfrak{C}_{0}}_{s_{1}}\phi\leftrightarrow\Box^{-1}\psi\), \(\models^{\mathfrak{C}_{0}}_{s_{2}}\Diamond\phi\leftrightarrow\psi\), contraposition and modus ponens, we can show that \((\neg\phi,\neg\psi)\in Fm_{OC}\). We can also prove the converse direction by replacing \(\phi\), \(\psi\), \(\Diamond\), \(\Box\) with \(\neg\phi\), \(\neg\psi\), \(\Box\), \(\Diamond\) respectively. 2. Because \((\phi,\psi)\) is a formal concept, we have \(\models^{\mathfrak{C}_{0}}_{s_{1}}\Box^{-1}\Box\phi\leftrightarrow\phi\), \(\models^{\mathfrak{C}_{0}}_{s_{2}}\Box\Box^{-1}\psi\leftrightarrow\psi\), \(\models^{\mathfrak{C}_{0}}_{s_{1}}\phi\leftrightarrow\Box^{-1}\psi\), and \(\models^{\mathfrak{C}_{0}}_{s_{2}}\Box\phi\leftrightarrow\psi\). By \(\models^{\mathfrak{C}_{0}}_{s_{1}}\Box^{-1}\Box\phi\leftrightarrow\phi\) and Theorem 5 (a), we have \(\models^{\mathfrak{C}_{1}}_{s_{1}}\rho(\Box^{-1}\Box\phi)\leftrightarrow\rho(\phi)\) which implies that \(\models^{\mathfrak{C}_{1}}_{s_{1}}\Box^{-1}\Diamond\rho(\phi)\leftrightarrow \rho(\phi)\). By \(\models^{\mathfrak{C}_{0}}_{s_{2}}\Box\Box^{-1}\psi\leftrightarrow\psi\) and Theorem 5 (a), we have \(\models^{\mathfrak{C}_{1}}_{s_{2}}\Box\Box^{-1}\neg\rho(\psi)\leftrightarrow\rho(\psi)\), which implies that \(\models^{\mathfrak{C}_{1}}_{s_{2}}\Diamond\Box^{-1}\neg\rho(\psi)\leftrightarrow \neg\rho(\psi)\). Similarly, we can show that \(\models^{\mathfrak{C}_{1}}_{s_{1}}\rho(\phi)\leftrightarrow\Box^{-1}\neg\rho( \psi)\), \(\models^{\mathfrak{C}_{1}}_{s_{2}}\Diamond\rho(\phi)\leftrightarrow\neg\rho(\psi)\). Therefore, \((\rho(\phi),\rho(\psi))\) is a property oriented concept for \((G,M,I^{c})\). The proof for the converse direction is similar. 3. It follows from (a) and (b) immediately. 
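At the level of extents and intents (Proposition 1), these correspondences can be checked by brute force on a small example. The sketch below uses an invented toy context and verifies the set-theoretic counterparts of Proposition 7: complementation exchanges property oriented and object oriented concepts of the same context, and formal concepts of \(\mathbb{K}\) correspond to property (respectively object) oriented concepts of \(\mathbb{K}^{c}\).

```python
from itertools import chain, combinations

# Toy context; illustration only. Concepts are identified with (extent, intent) pairs.
G = frozenset({"g1", "g2", "g3"})
M = frozenset({"a", "b", "c"})
I = {("g1", "a"), ("g1", "b"), ("g2", "b"), ("g3", "c")}
Ic = {(g, m) for g in G for m in M} - I          # complemented context K^c

def concepts(G, M, R, kind):
    """Enumerate (extent, intent) pairs of the given kind for the context (G, M, R)."""
    out = set()
    for A in map(frozenset, chain.from_iterable(
            combinations(sorted(G), r) for r in range(len(G) + 1))):
        if kind == "FC":
            B = frozenset(m for m in M if all((g, m) in R for g in A))        # A^+
            ok = A == frozenset(g for g in G if all((g, m) in R for m in B))  # B^- == A
        elif kind == "PC":
            B = frozenset(m for m in M if any((g, m) in R for g in A))        # A^◇
            ok = A == frozenset(g for g in G if {m for m in M if (g, m) in R} <= B)
        else:  # "OC"
            B = frozenset(m for m in M if {g for g in G if (g, m) in R} <= A) # A^□
            ok = A == frozenset(g for g in G if any((g, m) in R for m in B))
        if ok:
            out.add((A, B))
    return out

FC_K, PC_K, OC_K = concepts(G, M, I, "FC"), concepts(G, M, I, "PC"), concepts(G, M, I, "OC")
PC_Kc, OC_Kc = concepts(G, M, Ic, "PC"), concepts(G, M, Ic, "OC")
assert OC_K == {(G - A, M - B) for (A, B) in PC_K}   # PC(K) <-> OC(K) via complementation
assert PC_Kc == {(A, M - B) for (A, B) in FC_K}      # FC(K) <-> PC(K^c)
assert OC_Kc == {(G - A, B) for (A, B) in FC_K}      # FC(K) <-> OC(K^c)
print(len(FC_K), len(PC_K), len(OC_K))
```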
Now, we can define a relation \(\equiv_{1}\) on the set \(Fm_{PC}\) as follows: For \((\phi,\psi),(\phi^{\prime},\psi^{\prime})\in Fm_{PC}\), \((\phi,\psi)\equiv_{1}(\phi^{\prime},\psi^{\prime})\) if and only if \(\models^{\mathfrak{C}_{0}}\phi\leftrightarrow\phi^{\prime}\). Analogously, we can define \(\equiv_{2}\) and \(\equiv_{3}\) on the set \(Fm_{OC}\) and \(Fm_{FC}\), respectively. Obviously, \(\equiv_{1},\equiv_{2}\) and \(\equiv_{3}\) are all equivalence relations. Let \(Fm_{PC}/\equiv_{1},Fm_{OC}/\equiv_{2}\), and \(Fm_{FC}/\equiv_{3}\) be the sets of equivalence classes. Proposition 8: For \((\phi,\psi),(\phi^{\prime},\psi^{\prime})\in Fm_{X}\), \((\phi,\psi)\equiv_{i}(\phi^{\prime},\psi^{\prime})\) iff \(\models^{\mathfrak{C}_{0}}\psi\leftrightarrow\psi^{\prime}\), where \(i\in\{1,2,3\}\) for \(X\in\{PC,OC,FC\}\) respectively. Proof: Let us prove the case of \(FC\) as an example. Suppose \((\phi,\psi),(\phi^{\prime},\psi^{\prime})\in Fm_{FC}\) and \((\phi,\psi)\equiv_{3}(\phi^{\prime},\psi^{\prime})\). Then, \(\models^{\mathfrak{C}_{0}}_{s_{1}}\phi\leftrightarrow\phi^{\prime}\), which implies \(\models^{\mathfrak{C}_{0}}_{s_{2}}\Box\phi\leftrightarrow\Box\phi^{\prime}\) according to the semantics of \(\mathbf{KF}\). In addition, by definition of \(Fm_{FC}\), \(\models^{\mathfrak{C}_{0}}_{s_{2}}\Box\phi\leftrightarrow\psi\), and \(\models^{\mathfrak{C}_{0}}_{s_{2}}\Box\phi^{\prime}\leftrightarrow\psi^{\prime}\). Hence, \(\models^{\mathfrak{C}_{0}}_{s_{2}}\psi\leftrightarrow\psi^{\prime}\). Proof: (a). We prove the case of \(Fm_{FC_{ext}}\) as an example and other cases can be proved in a similar way. Let \(\phi,\phi^{\prime}\in Fm_{FC_{ext}}\). Then, \(\models^{\mathfrak{C}_{0}}_{s_{1}}\Box^{-1}\Box\phi\leftrightarrow\phi\) and \(\models^{\mathfrak{C}_{0}}_{s_{1}}\Box^{-1}\Box\phi^{\prime}\leftrightarrow\phi^{\prime}\). By using the translation \(\rho\) and Theorem 5, we have both \(\models^{\mathfrak{C}_{0}}_{s_{1}}\Box^{-1}\Box(\phi\wedge\phi^{\prime}) \rightarrow\Box^{-1}\Box\phi\) and \(\models^{\mathfrak{C}_{0}}_{s_{1}}\Box^{-1}\Box(\phi\wedge\phi^{\prime}) \rightarrow\Box^{-1}\Box\phi^{\prime}\). Hence, we can derive \(\models^{\mathfrak{C}_{0}}_{s_{1}}\Box^{-1}\Box(\phi\wedge\phi^{\prime}) \rightarrow(\phi\wedge\phi^{\prime})\). Also, with the translation, we have \(\models^{\mathfrak{C}_{0}}_{s_{1}}(\phi\wedge\phi^{\prime})\rightarrow\Box^{-1} \Box(\phi\wedge\phi^{\prime})\) because the formula is mapped to an instance of axiom (B). Hence, \(\phi\wedge\phi^{\prime}\in Fm_{FC_{ext}}\). 2. Let us prove the case of \(\phi\in Fm_{FC_{ext}}\) as an example. Assume that \(\phi\in Fm_{FC_{ext}}\) and \(\circ=\Box\). Then, according to the semantics of \(\box\), \(\models^{\mathfrak{C}_{0}}_{s_{1}}\Box^{-1}\boxplus\phi\leftrightarrow\phi\) implies \(\models^{\mathfrak{C}_{0}}_{s_{1}}\Box\Box^{-1}\boxplus\phi\leftrightarrow\Box\phi\). Hence \(\Box\phi\in Fm_{FC_{int}}\). Similarly, if \(\psi\in Fm_{FC_{int}}\), then \(\Box^{-1}\psi\in Fm_{FC_{ext}}\). From the proposition, we can derive the following corollary immediately. Corollary 1: 1. \((\phi_{1},\psi_{1}),(\phi_{2},\psi_{2})\in Fm_{PC}\) implies that \((\phi_{1}\wedge\phi_{2},\Diamond(\phi_{1}\wedge\phi_{2}))\) and \((\Box^{-1}(\psi_{1}\wedge\psi_{2}),\psi_{1}\wedge\psi_{2})\in Fm_{PC}\). 2. \((\phi_{1},\psi_{1}),(\phi_{2},\psi_{2})\in Fm_{OC}\) implies that \((\phi_{1}\wedge\phi_{2},\Box(\phi_{1}\wedge\phi_{2}))\) and \((\Diamond^{-1}(\psi_{1}\wedge\psi_{2}),\psi_{1}\wedge\psi_{2})\in Fm_{OC}\). 3. 
\((\phi_{1},\psi_{1}),(\phi_{2},\psi_{2})\in Fm_{FC}\) implies that \((\phi_{1}\wedge\phi_{2},\Box(\phi_{1}\wedge\phi_{2}))\) and \((\Box^{-1}(\psi_{1}\wedge\psi_{2}),\psi_{1}\wedge\psi_{2})\in Fm_{FC}\). Now we can define the following structures: \((Fm_{PC}/\equiv_{1},\vee_{1},\wedge_{1})\), where \([(\phi,\psi)],[(\phi^{\prime},\psi^{\prime})]\in Fm_{PC}/\equiv_{1}\), \[[(\phi,\psi)]\wedge_{1}[(\phi^{\prime},\psi^{\prime})]:=[(\phi\wedge\phi^{ \prime},\Diamond(\phi\wedge\phi^{\prime}))]\] \[[(\phi,\psi)]\vee_{1}[(\phi^{\prime},\psi^{\prime})]:=[(\Box^{-1}(\psi\wedge \psi^{\prime}),(\psi\wedge\psi^{\prime}))]\] \((Fm_{OC}/\equiv_{2},\vee_{2},\wedge_{2})\), where \([(\phi,\psi)],[(\phi^{\prime},\psi^{\prime})]\in Fm_{OC}/\equiv_{2}\), \[[(\phi,\psi)]\wedge_{2}[(\phi^{\prime},\psi^{\prime})]:=[(\phi\wedge\phi^{ \prime},\Box(\phi\wedge\phi^{\prime}))]\] \([(\phi,\psi)]\vee_{2}[(\phi^{\prime},\psi^{\prime})]:=[(\Diamond^{-1}(\psi \wedge\psi^{\prime}),(\psi\wedge\psi^{\prime}))]\) \((Fm_{FC}/\equiv_{3},\vee_{3},\wedge_{3})\), where \([(\phi,\psi)],[(\phi^{\prime},\psi^{\prime})]\in Fm_{FC}/\equiv_{3}\), \[[(\phi,\psi)]\wedge_{3}[(\phi^{\prime},\psi^{\prime})]:=[(\phi\wedge\phi^{ \prime},\Box(\phi\wedge\phi^{\prime}))]\] \[[(\phi,\psi)]\vee_{3}[(\phi^{\prime},\psi^{\prime})]:=[(\Box^{-1}(\psi\wedge \psi^{\prime}),(\psi\wedge\psi^{\prime}))]\] Theorem 4.1: For a context \(\mathbb{K}\), \((Fm_{PC}/\equiv_{1},\vee_{1},\wedge_{1})\), \((Fm_{OC}/\equiv_{2},\vee_{2},\wedge_{2})\) and \((Fm_{FC}/\equiv_{3},\vee_{3},\wedge_{3})\), are lattices. Proof: We give proof for the structure \((Fm_{FC}/\equiv_{3},\vee_{3},\wedge_{3})\) and the proofs of other cases are similar. Let \((\phi,\psi),(\phi_{1},\psi_{1}),(\phi^{\prime},\psi^{\prime}),(\phi^{\prime}_{ 1},\psi^{\prime}_{1})\in Fm_{FC}\) such that \((\phi,\psi)\equiv_{3}(\phi_{1},\psi_{1})\) and \((\phi^{\prime},\psi^{\prime})\equiv_{3}(\phi^{\prime}_{1},\psi^{\prime}_{1})\). By Corollary 1, \((\phi\wedge\phi^{\prime},\Box(\phi\wedge\phi^{\prime})),(\Box^{-1}(\psi\wedge \psi^{\prime}),\psi\wedge\psi^{\prime}),(\phi_{1}\wedge\phi^{\prime}_{1},\Box( \phi_{1}\wedge\phi^{\prime}_{1}))\) and \((\Box^{-1}(\psi_{1}\wedge\psi^{\prime}_{1}),\psi_{1}\wedge\psi^{\prime}_{1})\in Fm _{FC}\). Now \((\phi,\psi)\equiv_{3}(\phi_{1},\psi_{1})\) and \((\phi^{\prime},\psi^{\prime})\equiv_{3}(\phi^{\prime}_{1},\psi^{\prime}_{1})\) implies that \(\models^{\mathfrak{C}_{0}}\phi\leftrightarrow\phi_{1}\) and \(\models^{\mathfrak{C}_{0}}\phi^{\prime}\leftrightarrow\phi^{\prime}_{1}\). By Proposition 8, \(\models^{\mathfrak{C}_{0}}\psi\leftrightarrow\psi_{1}\) and \(\models^{\mathfrak{C}_{0}}\psi^{\prime}\leftrightarrow\psi^{\prime}_{1}\). \(\models^{\mathfrak{C}_{0}}\phi\wedge\phi^{\prime}\leftrightarrow\phi_{1}\wedge \phi^{\prime}_{1}\) and \(\models^{\mathfrak{C}_{0}}\psi\wedge\psi^{\prime}\leftrightarrow\psi_{1}\wedge \psi^{\prime}_{1}\) which implies that \((\phi\wedge\phi^{\prime},\Box(\phi\wedge\phi^{\prime}))\equiv_{3}(\phi_{1} \wedge\phi^{\prime}_{1},\Box(\phi_{1}\wedge\phi^{\prime}_{1}))\) and \((\Box^{-1}(\psi\wedge\psi^{\prime}),\psi\wedge\psi^{\prime})\equiv_{3}(\Box^{-1 }(\psi_{1}\wedge\psi^{\prime}_{1}),\psi_{1}\wedge\psi^{\prime}_{1})\). Hence, \(\wedge_{3}\) and \(\vee_{3}\) are well-defined operations. Their commutativity and associativity follow from the fact that \(\vdash^{\mathbf{KF}}\phi\wedge\psi\leftrightarrow\psi\wedge\phi\) and \(\vdash^{\mathbf{KF}}(\phi\wedge\psi)\wedge\gamma\leftrightarrow\phi\wedge(\psi \wedge\gamma)\). 
Now we will show that for all \([(\phi_{1},\psi_{1})],[(\phi_{2},\psi_{2})]\in Fm_{FC}/\equiv_{3}\), \([(\phi_{1},\psi_{1})]\wedge([\phi_{1},\psi_{1}]\vee[(\phi_{2},\psi_{2})])=[(\phi_{ 1},\psi_{1})]\) which is equivalent to \([(\phi_{1}\wedge\Box^{-1}(\psi_{1}\wedge\psi_{2}),\Box(\phi_{1}\wedge\Box^{-1}( \psi_{1}\wedge\psi_{1})))]=[(\phi_{1},\psi_{1})]\). We know that \(\models^{\mathfrak{C}_{0}}\phi_{1}\wedge\Box^{-1}(\psi_{1}\wedge\psi_{2}) \rightarrow\phi_{1}\). In addition, \[\models^{\mathfrak{C}_{0}}_{s_{1}}\phi_{1}\leftrightarrow\Box^{-1} \psi_{1}\text{ as }(\phi_{1},\psi_{1})\in Fm_{FC}\] \[\models^{\mathfrak{C}_{0}}_{s_{2}}\psi_{1}\wedge\psi_{2} \rightarrow\psi_{1}\] \[\models^{\mathfrak{C}_{0}}_{s_{1}}\Box^{-1}\psi_{1}\rightarrow\Box^{-1 }(\psi_{1}\wedge\psi_{2})\text{ by Proposition }5\] \[\models^{\mathfrak{C}_{0}}_{s_{1}}\phi_{1}\rightarrow\Box^{-1}( \psi_{1}\wedge\psi_{2})\] \[\models^{\mathfrak{C}_{0}}_{s_{1}}\phi_{1}\rightarrow\phi_{1} \wedge\Box^{-1}(\psi_{1}\wedge\psi_{2})\] So \(\models^{\mathfrak{C}_{0}}_{s_{1}}\phi_{1}\leftrightarrow\Box^{-1}(\psi_{1}\wedge \psi_{2})\) which implies that \([(\phi_{1},\psi_{1})]\wedge([\phi_{1},\psi_{1}]\vee[(\phi_{2},\psi_{2})])=[(\phi_ {1},\psi_{1})]\). Analogously, we can show that \([(\phi_{1},\psi_{1})]\vee([\phi_{1},\psi_{1}]\wedge[(\phi_{2},\psi_{2})])=[( \phi_{1},\psi_{1})]\). Hence \((Fm_{FC}/\equiv_{3},\vee_{3},\wedge_{3})\) is a lattice. Theorem 3.1: Let \(\mathbb{K}\) be a context and let \(\mathbb{K}^{c}\) be its corresponding complemented context. Let \(Fm_{FC}\) be the set of logical formal concepts of \(\mathbb{K}\) and let \(Fm_{PC}\) and \(Fm_{OC}\) be the sets of logical property oriented concepts and logical object oriented concepts of \(\mathbb{K}^{c}\) respectively. Then, 1. \((Fm_{FC}/\equiv_{3},\vee_{3},\wedge_{3})\) and \((Fm_{PC}/\equiv_{1},\vee_{1},\wedge_{1})\) are isomorphic. 2. \((Fm_{PC}/\equiv_{1},\vee_{1},\wedge_{1})\) and \((Fm_{OC}/\equiv_{2},\vee_{2},\wedge_{2})\) are dually isomorphic. 3. \((Fm_{FC}/\equiv_{3},\vee_{3},\wedge_{3})\) and \((Fm_{OC}/\equiv_{2},\vee_{2},\wedge_{2})\) are dually isomorphic. Proof: (a) By Proposition 7, the mapping \(h:Fm_{FC}/\equiv_{3}\to Fm_{PC}/\equiv_{1}\) defined by \(h([(\phi,\psi)]):=[(\rho(\phi),\rho(\neg\psi))]\) is well-defined and surjective. Now \(h([(\phi_{1},\psi_{1})])=h([(\phi_{2},\psi_{2})])\) implies \([\rho((\phi_{1}),\rho(\neg\psi_{1}))]=[(\rho(\phi_{2}),\rho(\neg\psi_{2}))]\), which in turn implies \(\models^{\mathfrak{C}_{1}}\rho(\phi_{1})\leftrightarrow\rho(\phi_{2})\), and by Theorem 3.1, \(\models^{\mathfrak{C}_{0}}\phi_{1}\leftrightarrow\phi_{2}\). This means that \([(\phi_{1},\psi_{1})]=[(\phi_{2},\psi_{2})]\). Thus, \(h\) is injective, and as a result, \(h\) is a bijection. In addition, \[h([(\phi_{1},\psi_{1})]\wedge_{3}[(\phi_{2},\psi_{2})]) =h([(\phi_{1}\wedge\phi_{2},\Box(\phi_{1}\wedge\phi_{2}))])\] \[=([\rho(\phi_{1}\wedge\phi_{2}),\rho(\neg\box(\phi_{1}\wedge\phi_ {2}))])\] \[=([\rho(\phi_{1}\wedge\phi_{2}),\Diamond\rho(\phi_{1}\wedge\phi_ {2})])\] \[=h([(\phi_{1},\psi_{1})])\wedge_{1}h([(\phi_{2},\psi_{2})])\] Therefore, \(h\) is an isomorphism. (b) Analogously, we can show that \(f:Fm_{PC}/\equiv_{1}\to Fm_{OC}/\equiv_{1}\) such that \(f([(\phi,\psi)]):=[(\neg\phi,\neg\psi)]\) is a dual isomorphism. (c) It follows from (a) and (b) immediately. ## 4 Conclusion and future direction In this paper, we show that concepts based on RST and FCA can be represented in two dual instances of two-sorted modal logics **KB** and **KF**. 
An interesting question is how to deal with both kinds of concepts in a single framework. To address the question, we apparently need a signature including all modalities in **KB** and **KF** together. For that, the Boolean modal logic proposed in [5] may be helpful. Hence, to investigate many-sorted Boolean modal logic and its representational power for concepts based on both RST and FCA will be an important direction in our future work. As a formal context consists of objects, properties, and a relation between them, the relationship between objects and properties can change over time. Hence, to model and analyze the dynamics of contexts is also desirable. Using two-sorted bidirectional relational frames, we can model contexts at some time. Therefore, integrating temporal logic with many-sorted modal logic will provide an approach to model dynamics of contexts. This is another possible direction for further research.
2309.05784
Grey-box Bayesian Optimization for Sensor Placement in Assisted Living Environments
Optimizing the configuration and placement of sensors is crucial for reliable fall detection, indoor localization, and activity recognition in assisted living spaces. We propose a novel, sample-efficient approach to find a high-quality sensor placement in an arbitrary indoor space based on grey-box Bayesian optimization and simulation-based evaluation. Our key technical contribution lies in capturing domain-specific knowledge about the spatial distribution of activities and incorporating it into the iterative selection of query points in Bayesian optimization. Considering two simulated indoor environments and a real-world dataset containing human activities and sensor triggers, we show that our proposed method performs better compared to state-of-the-art black-box optimization techniques in identifying high-quality sensor placements, leading to accurate activity recognition in terms of F1-score, while also requiring a significantly lower (51.3% on average) number of expensive function queries.
Shadan Golestan, Omid Ardakanian, Pierre Boulanger
2023-09-11T19:31:14Z
http://arxiv.org/abs/2309.05784v1
# Grey-box Bayesian Optimization for Sensor Placement in Assisted Living Environments ###### Abstract Optimizing the configuration and placement of sensors is crucial for reliable fall detection, indoor localization, and activity recognition in assisted living spaces. We propose a novel, sample-efficient approach to find a high-quality sensor placement in an arbitrary indoor space based on grey-box Bayesian optimization and simulation-based evaluation. Our key technical contribution lies in capturing domain-specific knowledge about the spatial distribution of activities and incorporating it into the iterative selection of query points in Bayesian optimization. Considering two simulated indoor environments and a real-world dataset containing human activities and sensor triggers, we show that our proposed method performs better compared to state-of-the-art black-box optimization techniques in identifying high-quality sensor placements, leading to accurate activity recognition in terms of F1-score, while also requiring a significantly lower (51.3% on average) number of expensive function queries. Bayesian optimization Grey-box optimization Sensor placement Intelligent indoor environments ## 1 Introduction Smart indoor spaces are commonly equipped with various networked sensors, such as motion sensors and cameras, to monitor the activities and physical and mental health of the occupants [1]. Previous work shows that the placement of these sensors in the environment is one of the main factors determining the performance of machine learning models that utilize the data generated by these sensors [2, 3]. In particular, an optimized sensor placement strategy makes possible accurate activity recognition and localization using the smallest number of sensors. However, due to the exponential size of the search space and the high cost of evaluating the model performance for each sensor placement under normal occupancy conditions, finding the best sensor placement through exhaustive search is prohibitive in practice. To date, many efforts have been made to design sample-efficient techniques to optimize the location of sensors, given a lower bound on the performance of downstream applications that consume the sensor data. Greedy and evolutionary algorithms, such as the genetic algorithm (GA), are the most notable methods used to place and configure sensors in an indoor environment [4, 5, 6]. But these methods do not perform well in general, because they rely only on local information provided by the samples of \(f\), the function being optimized [7], with \(x\) being a sensor placement and \(f(x)\) representing some performance measure of the machine learning model. On the contrary, estimation of distribution algorithms, such as Bayesian Optimization (BO) [8], use local information to acquire global information about \(f(x)\), i.e., building a probabilistic surrogate model of this function. When this model represents \(f(x)\) accurately, the optimizer can perform more effective exploration/exploitation. A wide range of problems have been recently solved using BO, from optimization over permutation spaces [9] and combinatorial spaces [10; 11] to setting up sensor networks for air quality monitoring [12]. However, the application of BO to optimize the sensor placement for indoor activity recognition has not been explored in previous work. 
The main shortcoming of BO is that it treats \(f(x)\) as a black-box function, meaning that it analyzes only its input-output behavior, disregarding any inherent, domain-specific knowledge that might exist about this function. Grey-box optimization [13], however, incorporates this knowledge in the optimization process. Although it requires more careful design, it can be faster and drastically improve the quality of the solution. Our hypothesis is that in the problem of optimizing sensor placement with respect to the performance of an activity recognition model, inherent knowledge about the distribution of activities in different parts of the space could help BO quickly identify important regions in the search space. This paper presents Distribution-Guided Bayesian Optimization (DGBO), a novel grey-box BO algorithm that learns the spatial distribution of activities and takes advantage of this knowledge to speed up the search for the best sensor placement. This algorithm, depicted in Figure 1, combined with high-fidelity building simulation for generating realistic movement and activity traces and corresponding sensor triggers, establishes the foundation of a framework for sample-efficient identification of a motion sensor placement that supports accurate activity recognition in an aged-care facility. The proposed approach does not cause any discomfort for the occupants, nor does it infringe their privacy or raise major ethical concerns. We show empirically that DGBO outperforms BO, genetic, and greedy algorithms in two simulated test buildings, and corroborate this finding using real sensor data collected in an indoor environment with multiple installed sensors. Our evaluation result indicates that the surrogate model of the objective function contains useful information in the context of assisted living. By leveraging this information and the inherent knowledge of the indoor space, DGBO achieves superb performance with respect to the activity recognition accuracy and better sample efficiency than BO across all test environments. Our contribution is threefold: 1) We propose DGBO: a novel grey-box BO algorithm for optimizing sensor placement in an indoor environment; 2) We develop a simulation-based assessment and optimization framework allowing the use of synthetic activities and movement trajectories to evaluate the black-box function, instead of real-world traces that are difficult and expensive to collect; 3) Through extensive experiments, we show the efficacy of DGBO in finding high-quality sensor placements in three different indoor environments. Our dataset and code are available on GitHub, and we will add the link in the final version of the paper. Figure 1: An overview of our proposed grey-box optimization algorithm (DGBO). ## 2 Related Work **Maximizing area coverage of sensors**. Most related work on sensor placement seeks to maximize the area covered by the sensors, without considering the accuracy of downstream applications that consume the sensor data (see for example [14, 15]). The fundamental limitation of this approach comes from the assumption that all areas of the building are equally important and should be monitored for an application such as activity recognition. In the real world, some building spaces are rarely occupied, which implies that covering the _active_ regions is more important. To address this shortcoming, several studies utilize contextual information about the building (e.g. 
its floor plan) to obtain a set of points that represent the potential location of occupants, and place sensors such that these points are maximally covered [16, 17, 5]. For instance, Vlasenko _et al._[17] designed a near-optimal greedy algorithm that gets synthetic occupant trajectories and finds a motion sensor placement for accurate indoor localization. However, assuming all points are equally important may lead to misidentification of critical, yet rare activities, e.g., bathing, in an aged-care monitoring application. We use a greedy algorithm as a baseline. **Maximizing the activity recognition accuracy**. There are only a few known studies that identify the best sensor placement by maximizing the accuracy of a downstream application. Thomas _et al._[4] proposed a framework to find the optimal placement of omnidirectional, ceiling-mounted motion sensors by maximizing the accuracy of an activity classifier while minimizing the number of deployed sensors, given the occupants' activities and movement trajectories. They used a real-world dataset to generate synthetic occupant trajectories and employed GA to effectively find high-quality sensor placements. However, generating a synthetic dataset from real-world sensor readings is costly and challenging. GA serves as the baseline for comparison. **Sample-efficient optimization of black-box functions**. In recent work, Deshwal _et al._[11] proposed a surrogate modeling approach for high-dimensional combinatorial spaces. They used a dictionary-based embedding to map discrete structures from the input space into an ordinal feature space, allowing the use of continuous surrogate models, such as the Gaussian process. However, the size of the search space grows exponentially with the number of possible sensor locations (ranging from nearly \(200\) to \(1000\) locations in a small, e.g., 700-square-foot, indoor environment). Due to its inability to directly control the sensor numbers, our assessments show that applying this method to the sensor placement problem often leads to installing \(>\)\(500\) sensors in our environments, which is overly costly, rendering this approach of limited practical value in this domain. **Grey-box Bayesian optimization**. Some studies have explored using the domain-specific knowledge available about \(f(x)\), thereby treating \(f(x)\) as a grey-box function [13]. They have sought a method to capture this knowledge and incorporate it into the definition of an _acquisition function_, which decides on the next query point in the optimization process. For example, Weissteiner _et al._[18] proposed a method for estimating an upper uncertainty bound, to design a new acquisition function for BO-based combinatorial auctions optimization. **Novelty of this work**. We cast sensor placement in an indoor environment as a Bayesian optimization problem. We then introduce a novel grey-box BO, called DGBO, and compare the resulting motion sensor placement with those found by BO, GA, and greedy. We show that the available domain-specific knowledge in our problem guides DGBO towards better sensor placements, and increases its sample efficiency. ## 3 Methodology We present our simulation-driven motion sensor placement framework in Figure 2. At the core of this framework lies an optimization algorithm that finds a sensor placement maximizing the performance of a model that classifies _activities of daily living_ (ADL), given data generated by the sensors. 
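The structure of this loop can be sketched as follows. Everything here is a hypothetical stand-in (random placements and random scores replace SIM\({}_{\text{sis}}\), the classifier, and the actual search strategies described below); only the propose–simulate–score–feedback structure mirrors the framework.

```python
import random

# Minimal, self-contained skeleton of the evaluation loop in Figure 2 (hypothetical names).
CANDIDATE_LOCATIONS = [(x, y) for x in range(8) for y in range(8)]   # grid of possible sensor spots

def evaluate_placement(placement):
    """Stand-in for: run SIM_sis with this placement, train the classifier, return macro-F1."""
    random.seed(hash(tuple(sorted(placement))) % (2**32))
    return random.uniform(0.4, 0.8)        # placeholder for the expensive, noisy f(x)

def search(n_sensors=9, budget=50):
    best_x, best_f = None, float("-inf")
    for _ in range(budget):                # each iteration costs one expensive simulation
        x = random.sample(CANDIDATE_LOCATIONS, n_sensors)   # a proposal strategy (BO, DGBO, GA, greedy) goes here
        f_x = evaluate_placement(x)
        if f_x > best_f:
            best_x, best_f = x, f_x
    return best_x, best_f

print(search())
```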
To efficiently obtain motion sensor data for each candidate sensor placement, we use a simulator of smart indoor spaces (SIM\({}_{\text{sis}}\)) developed by Golestan _et al._[19] (such simulators are thoroughly studied [20; 21]). the SIM\({}_{\text{sis}}\) gets ADL plans of \(N\) occupants and the location of motion sensors, and simulates motion sensor triggers. The sensor triggers and corresponding activities are then passed to the activity classifier module to obtain the result. This module trains an ADL classifier using \(80\%\) of the dataset, and evaluates its performance on the remaining \(20\%\) of the dataset (test set). Putting it all together, we treat the simulation of the indoor environment as a stochastic function, \(f(x)\), that gets as input a candidate sensor placement \(x\) and outputs a noisy observation, which is a measure of performance of an activity recognition model. ### Simulator of Smart Indoor Spaces The SIM\({}_{\text{sis}}\) is an open-source1 indoor environment simulator capable of realistically modeling occupants' behavior and simulating motion sensor readings. It receives a space model, i.e., the specification of an indoor environment, daily plans of \(N\) occupants i.e., regular ADLs, such as sleeping and cooking, and a candidate sensor placement which describes the number and locations of motion sensors. Each sample from the model generates a slightly different daily plan in terms of ADL order and duration. The SIM\({}_{\text{sis}}\) generates a synthetic dataset with sensor readings and their corresponding activity for each of the \(N\) occupants: Footnote 1: [https://github.com/shadangolestan/SIM_SIS-Simulator](https://github.com/shadangolestan/SIM_SIS-Simulator) \[\mathcal{D}=\{\{Y_{i}^{t},A_{i}^{t}\}_{t=1}^{T}\}_{i=1}^{N} \tag{1}\] where \(Y_{i}^{t}\) and \(A_{i}^{t}\) denote respectively the sensor triggers (a binary vector of size \(D\) where \(D\) is the number of sensors installed in the environment) and the activity of the \(i\)-th occupant at time \(t\). An ideal sensor placement2 should result in a dataset where each activity has a distinct enough pattern in the space of sensor readings so that an activity recognition model can recognize it [22]. Footnote 2: We assume the motion sensors are omnidirectional and attached to the ceiling, so the sensor placement problem reduces to determining the location of each sensor in a 2D space. Figure 2: An overview of the proposed optimization framework for sensor placement. ### Activity Classifier The activity classifier gets the dataset \(\mathcal{D}\) generated by a fixed set of sensors installed in the environment and applies the leave-one-out cross-validation method. Specifically, it considers the data that pertains to one occupant as the test set and the data of the remaining occupants as the training set. This process is repeated for each occupant to obtain different test sets. Following Cook _et al._[1], we use a random forest as the activity recognition model. For each test set, the activity classifier calculates the macro-averaged F1-score of the ADL and then outputs the arithmetic mean of the F1-scores (given below) as the model performance: \[F^{1}=\frac{1}{N}\Sigma_{i=1}^{N}\frac{1}{M}\Sigma_{j=1}^{M}\frac{tp_{j}}{tp_{j }+\frac{1}{2}(fp_{j}+fn_{j})} \tag{2}\] Here \(N\) is the number of occupants, \(M\) is the number of activities, and \(tp_{j}\), \(fp_{j}\), and \(fn_{j}\) are true positive, false positive and false negative of the \(j\)-th activity, respectively. 
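A compact sketch of this evaluation is given below; the random arrays are placeholders for the SIM\({}_{\text{sis}}\) sensor vectors and ADL labels, while the random-forest classifier and the leave-one-occupant-out macro-F1 of Equation 2 follow the text.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
N, T, D, M = 5, 200, 12, 6                       # occupants, time steps, sensors, activities
X = rng.integers(0, 2, size=(N, T, D))           # binary sensor-trigger vectors Y_i^t (synthetic)
y = rng.integers(0, M, size=(N, T))              # activity labels A_i^t (synthetic)

scores = []
for i in range(N):                               # hold out occupant i
    train = [j for j in range(N) if j != i]
    clf = RandomForestClassifier(random_state=0).fit(
        X[train].reshape(-1, D), y[train].reshape(-1))
    pred = clf.predict(X[i])
    scores.append(f1_score(y[i], pred, average="macro"))  # macro-average over the M activities

print(np.mean(scores))                           # arithmetic mean over occupants = F^1
```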
Notice that \(F^{1}\) is sensitive to both false negative and false positive. We argue that it is a good performance measure, because both of these factors are important in the activity recognition task, and in particular for elderly monitoring systems where failing to alert the caregiver or issuing a false alarm could have dire consequences. Equation 2 calculates the macro-averaged F1-score over the \(M\) activities, since there are rare and short, but precarious activities, e.g., bathing, that should be deemed as important as other activities in our problem. ### Optimization Algorithm The optimization algorithm module receives an observation, which is the performance of the activity recognition model expressed in Equation 2 (\(f(x){=}F^{1}\)), and proposes a candidate sensor placement in each iteration to maximize this performance measure. This optimization can be performed using vanilla BO, DGBO (our proposed method), greedy and genetic algorithms, which are two popular heuristic search algorithms adopted in the literature. #### 3.3.1 Problem Formulation Let us denote the possible sensor locations in the indoor environment as a set \(\mathcal{L}\) define as: \[\mathcal{L}=\{l_{i}\}_{i=1}^{L} \tag{3}\] We seek to find the subset of \(\mathcal{L}\) such that if motion sensors are installed at these locations then \(f(x)\) will be maximized. #### 3.3.2 Bayesian Optimization BO is a sequential search strategy for optimizing a black-box function. In our problem, this function is \(f\) which returns the performance of the activity recognition model. Thus, BO solves the following problem: \[\max_{x\in S}f(x) \tag{4}\] where \(S\) is the search space containing all possible placements of \(D\) sensors (\(1{\leq}D{\leq}L\)) and \(x\) represents a feasible sensor placement. Note that the size of \(S\) grows exponentially with the number of sensors: \(|S|{=}\binom{L}{D}\). The BO assumes that observations of \(f(x)\) are the results of a stochastic process and are independent. There are two sources of noise in \(f\): 1) SIM\({}_{\text{sis}}\): it does not necessarily capture all dynamics and uncertainties that exist in the real-world; 2) the activity classifier: the random forest fits several decision tree classifiers on different randomly generated sub-samples of the dataset. Thus, each execution might yield slightly different results. BO approximates \(f\) using a surrogate model, denoted \(\hat{f}\), given a set of observations \(\mathcal{Z}\): \[\hat{f}(x)=p(f(x)|\mathcal{Z}). \tag{5}\] According to this definition, the surrogate model \(\hat{f}\) estimates \(F^{1}\) (Equation 2) given a sensor placement \(x\). We use the Probabilistic Random Forest (PRF) as the BO's surrogate model [23]. We use the observations to compute the mean and standard deviation of \(\hat{f}(x)\) at a sensor placement \(x\) and denote them as \(\mu_{x}\) and \(\sigma_{x}\). Hence, we can write \(\hat{f}(x){=}\mu_{x}+\sigma_{x}u\) where \(u{\sim}\mathcal{N}(0,1)\). In each iteration \(n\), the surrogate model is indeed a prior over \(f\) given observations received from the first iteration to the \(n\)-th iteration: \(\mathcal{Z}_{1:n}{=}\{(x_{i},f(x_{i})\}_{i=1}^{n}\). The surrogate model \(\hat{f}\) is used to form an acquisition function \(\alpha(x;\hat{f})\), which determines the next candidate sensor placement (\(x_{n+1}\)) to evaluate in iteration \(n{+}1\). 
We use the expected improvement (EI) function [24]: \[\alpha_{\text{EI}}(x;\hat{f})=\int\limits_{-\infty}^{\infty}\max \big{(}\hat{f}(x)-f(x^{*}),0\big{)}\phi(u)\,du= \tag{6}\] \[(\mu_{x}{-}f(x^{*}))\Phi(\frac{\mu_{x}{-}f(x^{*})}{\sigma_{x}})+ \sigma_{x}\phi(\frac{\mu_{x}{-}f(x^{*})}{\sigma_{x}})\] where \(f(x^{*}){=}\max\{f(x_{1}),...,f(x_{n})\}\) is the observation that corresponds to the best sensor placement found so far \(x^{*}\) (i.e. the incumbent); \(\Phi(.)\) and \(\phi(.)\) denote the cumulative density function (CDF) and probability density function (PDF) of the standard normal distribution, respectively. The acquisition function is used to find potentially better sensor placements. The next query point is given by: \[x_{n+1}=\operatorname*{arg\,max}_{x}\alpha_{\text{EI}}(x;\hat{f}), \tag{7}\] which is the point with the largest expected improvement. The choice of EI is due to its widespread use and because it strikes a good balance between exploration and exploitation. After evaluating \(x_{n+1}\), the corresponding observation, i.e., \(z_{n+1}{=}(x_{n+1},f(x_{n+1}))\), is appended to the sequence of past observations, \(\mathcal{Z}_{1:n}\), to obtain \(Z_{1:n+1}{=}\{Z_{1:n},z_{n+1}\}\). Next, \(\mathcal{Z}_{1:n+1}\) is used to update the surrogate model \(\hat{f}\), resulting in the posterior surrogate model that better approximates \(f\). The posterior surrogate model is then used as a prior for obtaining \(x_{n+2}\). The process continues for \(1000\) iterations and the best sensor placement found is reported. We randomly choose the initial placement (\(x_{1}\)) and implement BO using the package from [25]. #### 3.3.3 Distribution-Guided Bayesian Optimization (DGBO) We now describe the proposed grey-box BO. Similar to BO, DGBO utilizes a surrogate model and an acquisition function. It also takes advantage of the distribution of indoor activities, which can be captured from the evaluation of \(f(x)\). For each possible sensor location \(l_{i}{\in}\mathcal{L}\), we define an activation region \(R_{i}\) which contains every point that would be within the range of a motion sensor installed at \(l_{i}\). Note that the activation regions of two sensors may overlap. The objective is to estimate an unknown function \(I(R_{i})\) that outputs the amount of information \(R_{i}\) provides. This function provides insight into how useful a sensor is when monitoring a region. Intuitively, this should depend on the spatial distribution of activities and the location of other sensors in the environment. To obtain prior information about each activation region \(R_{i}\), we query \(f(x)\) at \(l_{i}\); this is equivalent to installing exactly one sensor in the middle of \(R_{i}\) and using only its data for activity recognition. DGBO uses the prior information about each activation region \(R_{i}\) to measure the amount of information that this region could provide at iteration \(n\), and inserts it into a tabular data structure, denoted \(\mathcal{I}_{0:n}\{R_{i}\}\) and marked _Information Profiles_ in Figure 1. 
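Collecting this prior amounts to one single-sensor query per candidate location. A hypothetical sketch is shown below (with a random stand-in for the expensive \(f\)); the rule for crediting later, multi-sensor observations to regions is Equation 8 below.

```python
import random

# Building the "Information Profiles" table: the prior entry I_{0:0}{R_i} for each activation
# region R_i is f evaluated at the single-sensor placement {l_i}. Names and the random
# stand-in for f are hypothetical.
CANDIDATE_LOCATIONS = [(x, y) for x in range(8) for y in range(8)]

def evaluate_placement(placement):          # placeholder for simulator + classifier
    random.seed(hash(tuple(sorted(placement))) % (2**32))
    return random.uniform(0.1, 0.5)

# info_profiles[l_i] is the list I_{0:n}{R_i}; it starts with the prior I_{0:0}{R_i}.
info_profiles = {l: [evaluate_placement([l])] for l in CANDIDATE_LOCATIONS}
priors = {l: profile[0] for l, profile in info_profiles.items()}
print(max(priors, key=priors.get))          # most informative single-sensor location so far
```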
Thus, after receiving the \(n\)-th observation (\(z_{n}{=}(x_{n},f(x_{n}))\)), we update \(\mathcal{I}_{0:n}\{R_{i}\}\) for every \(R_{i}\) that the respective \(l_{i}\) represents the location of a sensor in \(x_{n}\) as follows: \[\mathcal{I}_{0:n}\{R_{i}\}=\bigg{\{}\mathcal{I}_{0:n-1}\{R_{i}\},\frac{ \mathcal{I}_{0:0}\{R_{i}\}}{\Sigma_{l_{j}\in x_{n}}\mathcal{I}_{0:0}\{R_{j}\}}f (x_{n})\bigg{\}} \tag{8}\] where \(\mathcal{I}_{0:0}\{R_{i}\}\) is the prior information of region \(R_{i}\). Equation 8 contributes a portion of \(f(x_{n})\) to \(R_{i}\) based on its prior information relative to other regions' prior information. This is similar to the temporal credit assignment problem [26] in Reinforcement Learning, wherein a sequence of actions taken by an agent receive credit based on its outcome. With this analogy, our approach, which is a form of spatial credit assignment, identifies the contribution of a number of regions to the particular level of performance achieved as a result of installing sensors in the middle of these regions. We use Equation 8 to build a prior over \(I(R_{i})\): \[\hat{I}(R_{i})=p(I(R_{i})\mid\mathcal{I}_{0:n}\{R_{i}\}). \tag{9}\] Assuming \(\hat{I}(R_{i})\) follows a Gaussian distribution (\(\mathcal{N}(\mu_{R_{i}},\sigma_{R_{i}})\)), we can use the mean and standard deviation of \(\mathcal{I}_{0:n}\{R_{i}\}\) to calculate its parameters. We define the relative _information gain_ of a region compared to the information gain of the incumbent: \[I_{n}^{+}(R_{i}){=}\max(\hat{I}(R_{i})-I^{*},0) \tag{10}\] where \(I^{*}{=}\frac{1}{D}\sum_{l_{j}\in x^{*}}I_{n-1}^{+}(R_{j})\) is the average information gain of regions in incumbent sensor placement \(x^{*}\), and \(I_{0}^{+}(R_{i}){=}\mathcal{I}_{0:0}\{R_{i}\}\). The expected information gain of each region is then calculated by taking the expected value of Equation 10 (see module 2 in Figure 1). For each placement \(x\), our acquisition function, \(\alpha_{\text{DG}}(x)\), calculates the average expected information gain of the regions \(R_{i}\) where a sensor is placed in \(x\): \[\alpha_{\text{DG}}(x;\hat{f})=\tfrac{1}{D}\sum_{l_{i}\in x}\mathbb{E}[I_{n}^{ +}(R_{i})] \tag{11}\] With a reparameterization, we can write \(\hat{I}(R_{i}){=}\mu_{R_{i}}{+}\sigma_{R_{i}}u\) where \(u{\sim}\mathcal{N}(0,1)\): \[\alpha_{\text{DG}}(x;\hat{f}){=}\tfrac{1}{D}\sum_{l_{i}\in x} \int\limits_{u_{0}}^{+\infty}(\mu_{R_{i}}{+}\sigma_{R_{i}}u{-}I^{*})\phi(u)\,du\] \[{=}\tfrac{1}{D}\sum_{l_{i}\in x}\left(\,\int\limits_{u_{0}}^{+ \infty}(\mu_{R_{i}}{-}I^{*})\phi(u)\,du+\int\limits_{u_{0}}^{+\infty}\sigma_{ R_{i}}u\phi(u)\,du\right)\] \[{=}\tfrac{1}{D}\sum_{l_{i}\in x}\left((\mu_{R_{i}}{-}I^{*})\int \limits_{u_{0}}^{+\infty}\phi(u)\,du+\frac{\sigma_{R_{i}}}{\sqrt{2\pi}}\int \limits_{u_{0}}^{+\infty}ue^{\frac{-n^{2}}{2}}\,du\right)\] \[{=}\tfrac{1}{D}\sum_{l_{i}\in x}\left((\mu_{R_{i}}{-}I^{*})(1{-} \Phi(u_{0}))-\frac{\sigma_{R_{i}}}{\sqrt{2\pi}}\left[e^{\frac{-n^{2}}{2}} \right]_{u_{0}}^{+\infty}\right)\] \[{=}\tfrac{1}{D}\sum_{l_{i}\in x}(\mu_{R_{i}}-I^{*})\Phi(\frac{ \mu_{R_{i}}{-}I^{*}}{\sigma_{R_{i}}})+\sigma_{R_{i}}^{2}\phi(\frac{\mu_{R_{i} }{-}I^{*}}{\sigma_{R_{i}}}) \tag{12}\] where \(u_{0}=\frac{I^{*}{-}\mu_{R_{i}}}{\sigma_{R_{i}}}\) is the value of \(u\) at which the information gain becomes zero. 
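For concreteness, a small sketch of this bookkeeping follows; the numbers and names are made up, `priors` and `profiles` stand for \(\mathcal{I}_{0:0}\{R_{i}\}\) and the Information Profiles table, and the \(\sigma^{2}\) factor in the last term is kept exactly as Equation 12 is printed.

```python
import numpy as np
from scipy.stats import norm

def update_profiles(profiles, priors, placement, f_x):
    """Equation 8: split f(x_n) over the regions of x_n in proportion to their priors."""
    total_prior = sum(priors[l] for l in placement)
    for l in placement:
        profiles[l].append(priors[l] / total_prior * f_x)

def alpha_dg(placement, profiles, i_star):
    """Equation 12: average expected information gain of the regions in a placement."""
    vals = []
    for l in placement:
        mu, sigma = np.mean(profiles[l]), np.std(profiles[l]) + 1e-9   # parameters of I^(R_i)
        z = (mu - i_star) / sigma
        vals.append((mu - i_star) * norm.cdf(z) + sigma**2 * norm.pdf(z))  # sigma**2 as printed
    return float(np.mean(vals))

# Tiny made-up example: three candidate regions, one observed two-sensor placement.
priors = {"l1": 0.30, "l2": 0.45, "l3": 0.25}
profiles = {l: [p] for l, p in priors.items()}
update_profiles(profiles, priors, placement=["l1", "l2"], f_x=0.66)
print(alpha_dg(["l1", "l2"], profiles, i_star=0.35))
```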
To exploit the exploitation and exploration tendency of \(\alpha_{\text{DG}}\) and \(\alpha_{\text{EI}}\), respectively, we use scalarization and maximize \(\alpha_{\text{DG}}(x;\hat{f}){+}\alpha_{\text{EI}}(x;\hat{f})\) to choose the next query point (see module 3 in Figure 1). Intuitively, this point maximizes the sum of expected improvement and expected information gain, rather than any of them. ## 4 Experiments ### Simulation Case Study We consider two testbeds for our simulation case study: an \(8\times 8\)\((m^{2})\) fully-furnished, one-bedroom suite (marked T1) and a \(5.2\times 8\)\((m^{2})\) fully-furnished, studio suite (marked T2) from the Lifestyle Options Retirement Communities [27]. Figure 3 shows the floor plan of these suites. The suites are designated for older adults needing independent living, assisted living, or memory care. We choose these suites as our testbeds due to their differences, such as layout and size, which could lead to a diverse set of traces and unique challenges. Specifically, T2 exhibits distinct movement trajectories in some areas, e.g., the kitchen, since the entryway and bathroom are accessible only from the kitchen, and T1 exhibits slightly scattered movement trajectories and has no trajectories in the balcony. Similar to most related work, such as [4], we postulate that the set of possible sensor locations forms a 2D grid on the given floor plan. Thus, \(L{=}H{\times}W\) (see Equation 3) with \(H{=}\lceil h/\epsilon\rceil{-}1\) and \(W{=}\lceil w/\epsilon\rceil{-}1\) where \(h\) and \(w\) are the height and width of the indoor environment, respectively, and \(\epsilon\) is the spacing between consecutive rows and columns. Higher \(\epsilon\) values indicate lower granularity and lower computation overhead. The case study includes five occupants (\(N{=}5\)) performing various ADLs independently in each testbed. These activities and corresponding sensor triggers are simulated by SIM\({}_{\rm{sis}}\). Table 1 shows an ordered list of \(23\) detailed activities the occupants perform in \(196\) minutes in total (in Equation 1, \(A\) is the list of detailed activities and \(T{=}\frac{196\times 60}{3}\) since data is collected every \(3\) seconds). The order of performing ADLs and some detailed activities in each ADL (denoted using the same superscript in Table 1) can be interchanged, producing slightly different activity plans. \begin{table} \begin{tabular}{|l|l|} \hline **ADL** & **Seq. of detailed activities (duration in minutes)** \\ \hline Bathing & Undress (5), Take a shower (15), Dress (6) \\ \hline Hygiene & Use toilet (3), Wash hands (3) \\ \hline Dining routine & Make tea (10), Grab ingredients (2), Fry eggs (10), Toast breads (5), Grab utensils (1), Eat (10), Take medicine (2), Wipe dining table\({}^{a}\) (5), Wash dishes\({}^{a}\) (3), Clean kitchen\({}^{a}\) (5) \\ \hline Brooming & Grab the broom from storage (2), Broom (7), Return the broom (2) \\ \hline Others & Sit and work with tablet\({}^{b}\) (30), Exercise\({}^{b}\) (30), Watch TV\({}^{b}\) (15), Iron\({}^{b}\) (5), Sleep\({}^{b}\) (20) \\ \hline \end{tabular} \end{table} Table 1: ADLs in our simulated case study. ADLs and detailed activities with the same superscript can be shuffled. \begin{table} \begin{tabular}{|l|l|l|l|l|l|} \hline \multicolumn{2}{|c|}{\multirow{2}{*}{**Testbed**}} & \multicolumn{4}{c|}{**Avg. \(F^{1}\pm\) one standard deviation (no. 
sensors used)**} \\ \cline{3-6} & **GA** & **Greedy** & **BO** & **DGBO** \\ \hline \multirow{4}{*}{**T1**} & \(\epsilon{=}\mathbf{0.25(m)}\) & \(56.7{\pm}1.0\) (9) & — & \(76.9{\pm}0.1\) (9) & \(77.6{\pm}1.1\) (9) \\ & \(\epsilon{=}\mathbf{0.5(m)}\) & \(59.7{\pm}0.4\) (11) & — & \(75.6{\pm}2.0\) (7) & \(77.5{\pm}0.1\) (15) \\ & \(\epsilon{=}\mathbf{1.0(m)}\) & \(54.5{\pm}1.0\) (9) & \(69.7{\pm}1.1\) (13) & \(75.0{\pm}1.1\) (9) & \(77.6{\pm}0.2\) (9) \\ \cline{2-6} & \(\epsilon{=}\mathbf{0.25(m)}\) & \(42.9{\pm}1.5\) (10) & — & \(72.3{\pm}1.6\) (11) & \(72.3{\pm}1.8\) (11) \\ **T2** & \(\epsilon{=}\mathbf{0.5(m)}\) & \(42.7{\pm}0.4\) (6) & \(59.8{\pm}2.7\) (7) & \(67.7{\pm}0.3\) (11) & \(68.7{\pm}2.6\) (15) \\ & \(\epsilon{=}\mathbf{1.0(m)}\) & \(40.7{\pm}1.3\) (5) & \(66.8{\pm}1.3\) (15) & \(66.9{\pm}1.1\) (13) & \(69.6{\pm}0.4\) (11) \\ \hline **Aruba** & \(60.2{\pm}1.2\) (7) & \(64.0{\pm}1.3\) (15) & \(75.7{\pm}0.2\) (9) & \(76.3{\pm}0.2\) (9) \\ \hline \end{tabular} \end{table} Table 2: The performance of GA, Greedy, BO and DGBO in terms of the macro-averaged \(F^{1}\) (Equation 2) in T1, T2 and Aruba. A dash indicates that no sensor placement was found after \(1000\) queries. We report the median number of sensors for GA. ### Real-World Case Study We use a publicly available, real-world dataset called Aruba [1]. This dataset was collected from the home of an adult who lives alone but has regular visitors. The dataset contains \(219\) days worth of data generated by \(31\) motion sensors installed in that home. The following \(11\) ADLs were manually labeled: 1) meal preparation, 2) relax, 3) eating, 4) work, 5) sleeping, 6) wash dishes, 7) bed to toilet, 8) enter home, 9) leave home, 10) housekeeping, and 11) respirate. In this case, SIM\({}_{\text{sis}}\) gets as input the real-world dataset and instead of simulating human activities and corresponding sensor triggers, it simply filters a subset of the sensor data based on the sensors present in the candidate sensor placement. This portion of the dataset is then used for activity recognition, with the first \(70\%\) of the days comprising the training set and the rest comprising the test data. ### Results We compare motion sensor placements found by DGBO (the proposed method), BO, GA, and greedy algorithm in our case studies. The implementation of our baselines, i.e. GA and greedy algorithm, is described in the appendix. For each value of \(\epsilon\)\(\in\)\(\{0.25m,0.5m,1.0m\}\) in the simulation case study, we run each algorithm \(5\) times, using different seeds. Given the range of motion sensors in the simulator (a circle with radius of \(1\) meter), these \(\epsilon\) values are reasonable because they allow some overlap between the areas covered by multiple sensors and maintaining a clear line of sight despite the obstacles that exist in the environment. For greedy, BO, and DGBO, we repeat the process after setting the total number of sensors to \(5\), \(7\), \(9\), \(11\), \(13\), and \(15\). GA decides the number of sensors automatically. Each algorithm can query the black-box function 1,000 times. Table 2 shows the performance (Equation 2) of DGBO, BO, greedy and GA for the number of sensors that led to the best performance in each case (mentioned in brackets). In both T1 and T2, the greedy algorithm mostly exhausts the 1,000 black-box function queries for \(\epsilon\)\(=\)\(0.25\) and \(0.5\), failing to find a solution. 
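The query budget is exhausted quickly because a greedy search must evaluate \(f(x)\) for every remaining candidate location before fixing each sensor. The short sketch below counts these evaluations using the grid construction \(L{=}H{\times}W\) from Section 4.1; the T1 dimensions (\(8\times 8\,m^{2}\)) are taken from the testbed description, while the exact greedy variant is only described in the appendix, so this count is an assumption that happens to match the figure quoted in the next sentence.

```
# Sketch: number of f(x) queries an exhaustive greedy placement needs on the
# candidate grid of Section 4.1 (assumed greedy variant: re-evaluate every
# remaining candidate location before fixing each sensor).
import math

def num_candidate_locations(h, w, eps):
    """L = H * W with H = ceil(h/eps) - 1 and W = ceil(w/eps) - 1."""
    return (math.ceil(h / eps) - 1) * (math.ceil(w / eps) - 1)

def greedy_query_count(num_locations, num_sensors):
    return sum(num_locations - k for k in range(num_sensors))

L = num_candidate_locations(8.0, 8.0, 0.5)   # T1 with eps = 0.5 m -> 15 x 15 = 225 locations
print(greedy_query_count(L, 5))              # 225 + 224 + 223 + 222 + 221 = 1115 > 1000 budget
```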
For example, when \(\epsilon\)\(=\)\(0.5\) in T1, \(1115\) function queries would be needed to place \(5\) sensors (see appendix for details). We make the following observations: 1) Both DGBO and BO significantly outperform GA and greedy for all \(\epsilon\) values. This confirms our hypothesis that the surrogate model of \(f(x)\) contains useful information in this context. 2) DGBO consistently attains better performance than BO in all testbeds for any \(\epsilon\) value. 3) The greedy algorithm performs well in T2 with \(\epsilon\)\(=\)\(1\). We attribute this to the rather small search space for this value of \(\epsilon\), which increases the chance of reaching the global optimum. 4) DGBO performs robustly across all testbeds, as evidenced by its convergence to a similar level of performance compared to the other methods. We believe this is because the information profiles guide DGBO towards more interesting parts of the indoor environment (see Figure 1).

Figure 3: The floor plan/space model of the two apartments: (a) an \(8\times 8\,(m^{2})\) one-bedroom suite; (b) a \(5.2\times 8\,(m^{2})\) studio.

We investigate the fourth observation in detail. First, Figure 4 depicts the average performance of DGBO, BO and greedy across the 5 runs after each iteration. We specifically focus on the best-performing number of sensors for each algorithm. We observe that DGBO quickly finds a high-quality sensor placement in all testbeds. To compare the sample efficiency of DGBO and BO, we find the first iteration at which DGBO and BO reach the \(95\%\) confidence interval (CI) of the best performance of DGBO after 1,000 iterations; these iterations are marked with green and red dots in Figure 4, respectively. Table 3 shows how many fewer queries are required by DGBO to attain the same performance as BO, as a percentage of the number of queries executed by BO. On average, DGBO requires \(55.4\%\), \(58.9\%\), and \(39\%\) fewer queries than BO in T1, T2, and Aruba, respectively. This is a significant improvement, especially because these queries are typically expensive.

Figure 4: The average performance of DGBO, BO, and greedy (when available) for the best number of sensors found by each method versus the number of \(f(x)\) queries.

Second, Figure 5 shows the best sensor locations found after 1,000 iterations in all runs of DGBO and BO (considering different \(\epsilon\) values, random seeds, and target number of sensors) in T1 and T2. It also shows the spatial distribution of activities using a heatmap overlaid on the floor plan of each building, with dark/light blue showing more/less activities in the space. It can be readily seen that both methods place sensors in highly occupied areas, such as the kitchen and dining room. Yet, DGBO's sensor placements are more promising. Specifically, in T1, BO places sensors in areas where no activities were performed, e.g., the balcony, entryway, left side of the bedroom, and right side of the living room. However, DGBO is less inclined to place sensors in these areas. The same argument can be made for T2. \begin{table} \begin{tabular}{|c|l|c|c|} \hline \multicolumn{2}{|c|}{\multirow{2}{*}{**Testbed**}} & \(100\times\frac{\text{green dot}-\text{red dot}}{\text{red dot}}\) & avg.
\\ \hline \multirow{2}{*}{**T1**} & \(\epsilon{=}{\bf 0.25}(\mathbf{m})\) & \(-17.9\%\) & \multirow{2}{*}{\(-55.4\%\)} \\ & \(\epsilon{=}{\bf 0.5}(\mathbf{m})\) & \(-61.8\%\) & \\ & \(\epsilon{=}{\bf 1}(\mathbf{m})\) & \(-86.6\%\) & \\ \hline \multirow{2}{*}{**T2**} & \(\epsilon{=}{\bf 0.25}(\mathbf{m})\) & \(-41.0\%\) & \multirow{2}{*}{\(-58.9\%\)} \\ & \(\epsilon{=}{\bf 0.5}(\mathbf{m})\) & \(-71.7\%\) & \\ \(\epsilon{=}{\bf 1}(\mathbf{m})\) & \(-64.1\%\) & \\ \hline \multicolumn{2}{|c|}{**Aruba**} & \(-39.6\%\) & \(-39.6\%\) \\ \hline \end{tabular} \end{table} Table 3: The convergence analysis of DGBO compared to BO across our case studies. The avg. shows the average value of red/green dots of each testbed for all \(\epsilon\) values. Figure 5: Illustration of the spatial distribution of activities (a), and the best sensor locations found in all runs of DGBO (b) and in all runs of BO (c). The first row shows T1 and the second row shows T2. The intensity of the color shows the likelihood of installing a sensor at that location. Figure 6: Expected information gain: T1 (left), T2 (right). ## 5 Discussion We have found that DGBO and BO outperform the conventional methods for finding motion sensor placements, i.e., genetic and greedy algorithms, resulting in significantly higher activity recognition accuracy. The DGBO learns the spatial distribution of activities during the search process and utilizes this distribution to consistently find high-quality sensor placements using noticeably fewer queries than BO. We wish to emphasize that in the BO literature, \(f(x)\) queries are typically costly [24]. In our problem, resolving these queries even through simulation is time-consuming, taking roughly 2 minutes on a computer with an Intel i7 4.00Ghz CPU and 16GB memory. Thus, coming up with sample efficient algorithms is key. We argue that our spatial credit assignment strategy (Equation 8) leads to an accurate estimation of the expected information gain of each region after a small number of iterations. To show this, we plot the expected information gain at iteration \(n{=}50\) in Figure 6. Comparing this to Figure 5 (b), we conclude that our estimation of expected information gain becomes accurate early in the optimization process. We wish to clarify that our framework in Figure 2 is not designed for a particular choice of the activity classifier. To show this, we have repeated the process in T1 with \(\epsilon{=}1\) using two different classifiers: gradient boosting and K-nearest neighbor (KNN) with \(k{=}5\). In both cases, the superior performance of BO and DGBO remains statistically significant, and DGBO remains significantly more sample-efficient. Further studies should be conducted using DGBO. A future work direction is to apply transfer learning to the expected information gain and use this information in other environments. Another future direction is to assess the effectiveness of DGBO in other emerging applications such as air pollution monitoring [12], wildfire monitoring [28], and effective emergency response [29]. Finally, we have explored the optimal placement of one type of sensor that is not deemed intrusive and is commonly used for activity detection in aging-in-place settings. Our method can be extended to other sensor types, e.g., mmWave radars. ## 6 Conclusion This paper casts optimal sensor placement in indoor environments as a black-box optimization problem addressed using Bayesian Optimization (BO), a method that has not been applied to this problem before. 
It then introduces Distribution-Guided Bayesian Optimization (DGBO), which incorporates the learned spatial distribution of activities into the acquisition function of Bayesian optimization. Our approach entails using (a) a high-fidelity simulator for modeling the indoor environment, its occupants, and sensors; and (b) an optimization algorithm to find a sensor placement that maximizes the detection accuracy of ADLs. We hypothesized that DGBO could explore the search space more effectively. To test this hypothesis, we evaluated DGBO in two simulated suites of an assisted living facility and on one real-world dataset where subjects performed ADLs. We compared the performance of DGBO with BO and two widely used baselines, i.e., genetic and greedy algorithms. Our results confirm that DGBO finds high-quality sensor placements at a significantly lower cost.
2309.07193
A Robust SINDy Approach by Combining Neural Networks and an Integral Form
The discovery of governing equations from data has been an active field of research for decades. One widely used methodology for this purpose is sparse regression for nonlinear dynamics, known as SINDy. Despite several attempts, noisy and scarce data still pose a severe challenge to the success of the SINDy approach. In this work, we discuss a robust method to discover nonlinear governing equations from noisy and scarce data. To do this, we make use of neural networks to learn an implicit representation based on measurement data so that not only it produces the output in the vicinity of the measurements but also the time-evolution of output can be described by a dynamical system. Additionally, we learn such a dynamic system in the spirit of the SINDy framework. Leveraging the implicit representation using neural networks, we obtain the derivative information -- required for SINDy -- using an automatic differentiation tool. To enhance the robustness of our methodology, we further incorporate an integral condition on the output of the implicit networks. Furthermore, we extend our methodology to handle data collected from multiple initial conditions. We demonstrate the efficiency of the proposed methodology to discover governing equations under noisy and scarce data regimes by means of several examples and compare its performance with existing methods.
Ali Forootani, Pawan Goyal, Peter Benner
2023-09-13T10:50:04Z
http://arxiv.org/abs/2309.07193v1
# A Robust SINDy Approach by Combining Neural Networks and an Integral Form ###### Abstract The discovery of governing equations from data has been an active field of research for decades. One widely used methodology for this purpose is sparse regression for nonlinear dynamics, known as SINDy. Despite several attempts, noisy and scarce data still pose a severe challenge to the success of the SINDy approach. In this work, we discuss a robust method to discover nonlinear governing equations from noisy and scarce data. To do this, we make use of neural networks to learn an implicit representation based on measurement data so that not only it produces the output in the vicinity of the measurements but also the time-evolution of output can be described by a dynamical system. Additionally, we learn such a dynamic system in the spirit of the SINDy framework. Leveraging the implicit representation using neural networks, we obtain the derivative information--required for SINDy--using an automatic differentiation tool. To enhance the robustness of our methodology, we further incorporate an integral condition on the output of the implicit networks. Furthermore, we extend our methodology to handle data collected from multiple initial conditions. We demonstrate the efficiency of the proposed methodology to discover governing equations under noisy and scarce data regimes by means of several examples and compare its performance with existing methods. Keywords:Spare regression, discovering governing equations, neural networks, nonlinear system identification, Runge-Kutta scheme. + Footnote †: journal: Journal of Computational and Graphical Statistics ## 1 Introduction System identification is a crucial aspect of understanding and modeling the dynamics of various physical, chemical, and biological systems. Over the years, various powerful and efficient system identification techniques have been developed, and these methods have been applied in a wide range of applications, see, e.g., [1, 2, 3]. Traditionally, system identification techniques rely on prior model hypotheses. With a linear model hypothesis, several methodologies have been proposed; see, e.g., [1, 2]. However, for nonlinear system identification, defining a prior is challenging, and it is often done with the help of practitioners. Despite several earlier works [4, 5, 6], nonlinear system identification is still an active and exciting research field. Towards automatic nonlinear system identification, generic algorithms and symbolic regression have shown their effectiveness and promises in discovering governing nonlinear equations using measurement [7, 8]. However, their computational expenses remain undesirable. Instead of building suitable functions in the spirit of symbolic regression, there has been a focus on sparsity-promoting approaches for nonlinear system identification [9, 10, 11]. They rely on the assumption that nonlinear dynamics can be defined by a few nonlinear basis functions from a dictionary with a large collection of nonlinear basis functions. Such a technique enables the discovery of interpretable, parsimonious, and generalizable models that balance precision and performance. It is nowadays widely referred to as SINDy[11]. SINDy has been employed for a handful number of challenging model discovery problems such as fluid dynamics [12], plasma dynamics [13], turbulence closures [14], mesoscale ocean closures [15], nonlinear optics [16], computational chemistry [17], and numerical integration [18]. 
Moreover, the results of SINDy have been extended widely to many applications, such as nonlinear model predictive control [19], rational functions [20, 21], enforcing known conservation laws and symmetries [12], promoting stability [22], generalizations for stochastic dynamics [23], and a Bayesian perspective [24]. Often, SINDy approaches require a reliable estimate of the derivative information, making them very challenging for noisy and scarce data regimes. Blending numerical methods [21, 25, 26, 27, 28] and weak formulations of differential equations [28] avoids these requirements, but their performance still deteriorates for low signal-to-noise measurements. In addition, the method in [28] relies on the choice of basis functions that allow writing differential equations in a weak formulation. The work in [29] utilizes the concept of ensembling to improve the predictions, but it still requires reliable estimates of derivatives to some extent. To discover governing equations from noisy data, the authors in [30] proposed a scheme that aims to decompose the noisy signals into clean signals and noise using a Runge-Kutta-based integration method. However, the scheme explicitly estimates the noise, making it harder to scale, and requires all the dependent variables to be available on the same time grid. Recently, applications of deep neural networks (DNNs) have received attention in sparse regression model discovery methods. For instance, in [31], a deep learning-based discovery algorithm has been employed to identify underlying (partial) differential equations. However, therein, only a single trajectory is considered to recover governing equations, whereas many complex processes require data for different parameters and initial conditions to explore rich dynamics and, thus, to allow a reliable discovery of governing equations. Furthermore, the work [31] discovers governing equations based on estimating derivative information using automatic differentiation tools. However, we know that differential equations can also be written in an integral form, whereby numerical integration approaches can be employed as well, see, e.g., [21, 32]. In this paper, we discuss an approach, namely iNeural-SINDy, for the data-driven discovery of nonlinear dynamical systems using noisy data through the lens of SINDy. For this, we make use of a DNN to learn an implicit representation based on the given data set so that the network outputs denoised data, which is later utilized for the sparse regression to discover governing equations. To solve the sparse regression problem, we make use of not only automatic differentiation tools but also integral forms of differential equations. As a result, we observe a robust discovery of governing equations. We note that such a concept has recently been used in the context of neural ODEs in [33] to learn black-box dynamics using noisy and scarce data. We further discuss how to incorporate data coming from multiple initial conditions. The rest of this paper is organized as follows. Section 2 briefly recalls the SINDy approach [11]. In Section 3, we propose a novel methodology for sparse regression to learn underlying governing equations by making use of a DNN, automatic differentiation tools, and numerical integration methods. Furthermore, in Section 4, we discuss its extension to multiple initial conditions and different parameters. In Section 5, we demonstrate the proposed framework by means of various synthetic noisy measurements and present a comparison with the current state-of-the-art approaches.
Finally, Section 6 concludes the paper with a brief summary and future research avenues. ## 2 A Brief Recap of SINDy The SINDy algorithm is a nonlinear system identification approach which is based on the hypothesis that governing equations of a nonlinear system can be given by selecting a few suitable basis functions, see, e.g., [11]. Precisely, it aims at identifying a few basis functions from a dictionary, containing a large number of candidate basis functions. In this regard, sparsity-promoting approaches can be employed to discover parsimonious nonlinear dynamical systems to have a good trade-off for model complexity and accuracy [34, 35]. Consider the problem of discovering nonlinear systems of the form: \[\dot{\mathbf{x}}(t)=\mathbf{f}(\mathbf{x}(t)), \tag{1}\] where \(\mathbf{x}(t)=\left[\mathbf{x}_{1}(t),\mathbf{x}_{2}(t),\dots,\mathbf{x}_{n}( t)\right]^{\top}\in\mathbb{R}^{n}\) denotes the state at time \(t\), and \(\mathbf{f}(\mathbf{x}):\mathbb{R}^{n}\rightarrow\mathbb{R}^{n}\) is a nonlinear function of the state \(\mathbf{x}(t)\). Towards discovering the function \(\mathbf{f}\) in (1), which defines the vector field or dynamics of the underlying system, we start by collecting time-series data of state \(\mathbf{x}(t)\). Let us further assume to have time derivative information of the state. If it is not readily available, we can approximate it using numerical methods, e.g., a finite difference scheme. Thus, consider that the data \(\{\mathbf{x}(t_{0}),\dots,\mathbf{x}(t_{\mathcal{N}})\}\) and its derivative \(\{\dot{\mathbf{x}}(t_{0}),\dots,\dot{\mathbf{x}}(t_{\mathcal{N}})\}\) are given. In the next step, we assemble the data in matrices as follows: \[\mathbf{X}=\begin{bmatrix}\mathbf{x}(t_{1})^{\top}\\ \mathbf{x}(t_{2})^{\top}\\ \vdots\\ \mathbf{x}(t_{\mathcal{N}})^{\top}\end{bmatrix}=\begin{bmatrix}\mathbf{x}_{1} (t_{1})&\mathbf{x}_{2}(t_{1})&\cdots&\mathbf{x}_{n}(t_{1})\\ \mathbf{x}_{1}(t_{2})&\mathbf{x}_{2}(t_{2})&\cdots&\mathbf{x}_{n}(t_{2})\\ \vdots&\vdots&\ddots&\vdots\\ \mathbf{x}_{1}(t_{\mathcal{N}})&\mathbf{x}_{2}(t_{\mathcal{N}})&\cdots& \mathbf{x}_{n}(t_{\mathcal{N}})\end{bmatrix}, \tag{2}\] where each row represents a snapshot of the state. Similarly, we can write the time derivative as follows: \[\dot{\mathbf{X}}=\begin{bmatrix}\dot{\mathbf{x}}(t_{1})^{\top}\\ \dot{\mathbf{x}}(t_{2})^{\top}\\ \vdots\\ \dot{\mathbf{x}}(t_{\mathcal{N}})^{\top}\end{bmatrix}=\begin{bmatrix}\dot{ \mathbf{x}}_{1}(t_{1})&\dot{\mathbf{x}}_{2}(t_{1})&\cdots&\dot{\mathbf{x}}_{n }(t_{1})\\ \dot{\mathbf{x}}_{1}(t_{2})&\dot{\mathbf{x}}_{2}(t_{2})&\cdots&\dot{\mathbf{x}} _{n}(t_{2})\\ \vdots&\vdots&\ddots&\vdots\\ \dot{\mathbf{x}}_{1}(t_{\mathcal{N}})&\dot{\mathbf{x}}_{2}(t_{\mathcal{N}})& \cdots&\dot{\mathbf{x}}_{n}(t_{\mathcal{N}})\end{bmatrix}. \tag{3}\] The next key building block in the SINDy algorithm is the construction of a dictionary \(\Theta(\mathbf{y})\), containing candidate basis functions (e.g., constant, polynomial or trigonometric functions). 
For instance, our dictionary matrix can be given as follows: \[\Theta(\mathbf{X})=\begin{bmatrix}\mathbf{1}&\mathbf{X}&\mathbf{X}^{\mathbb{P}_{2}}&\mathbf{X}^{\mathbb{P}_{3}}&\cdots\end{bmatrix}, \tag{4}\] where each entry denotes a block of columns, and we assume \(\Theta(\mathbf{X})\in\mathbb{R}^{m\times D}\). In the above formulation, polynomial terms are denoted by \(\mathbf{X}^{\mathbb{P}_{2}}\) or \(\mathbf{X}^{\mathbb{P}_{3}}\); to be more descriptive, \(\mathbf{X}^{\mathbb{P}_{2}}\) denotes the quadratic nonlinearities of the state \(\mathbf{X}\) as follows: \[\mathbf{X}^{\mathbb{P}_{2}}=\begin{bmatrix}\mathbf{x}_{1}^{2}(t_{1})&\mathbf{x}_{1}(t_{1})\mathbf{x}_{2}(t_{1})&\cdots&\mathbf{x}_{2}^{2}(t_{1})&\mathbf{x}_{2}(t_{1})\mathbf{x}_{3}(t_{1})&\cdots&\mathbf{x}_{n}^{2}(t_{1})\\ \mathbf{x}_{1}^{2}(t_{2})&\mathbf{x}_{1}(t_{2})\mathbf{x}_{2}(t_{2})&\cdots&\mathbf{x}_{2}^{2}(t_{2})&\mathbf{x}_{2}(t_{2})\mathbf{x}_{3}(t_{2})&\cdots&\mathbf{x}_{n}^{2}(t_{2})\\ \vdots&\vdots&\ddots&\vdots&\vdots&\ddots&\vdots\\ \mathbf{x}_{1}^{2}(t_{\mathcal{N}})&\mathbf{x}_{1}(t_{\mathcal{N}})\mathbf{x}_{2}(t_{\mathcal{N}})&\cdots&\mathbf{x}_{2}^{2}(t_{\mathcal{N}})&\mathbf{x}_{2}(t_{\mathcal{N}})\mathbf{x}_{3}(t_{\mathcal{N}})&\cdots&\mathbf{x}_{n}^{2}(t_{\mathcal{N}})\end{bmatrix}.\] In this setting, each column of the dictionary \(\Theta(\mathbf{X})\) denotes a candidate function for defining the function \(\mathbf{f}(\mathbf{x})\) in (1). We are interested in identifying a few candidate functions from the dictionary \(\Theta\) so that a weighted sum of these selected functions can describe the function \(\mathbf{f}\). For this, we can set up a sparse regression formulation. Precisely, we seek to identify a sparse matrix \(\Xi=[\xi_{1},\ \xi_{2},\dots,\ \xi_{n}]\), where \(\xi_{i}\in\mathbb{R}^{D}\) with \(D\) denoting the number of columns in \(\Theta\), that determines which features from the dictionary are active and their corresponding coefficients. The SINDy algorithm formulates the sparse regression problem as an optimization problem as follows. Given a set of observed data \(\mathbf{X}\) and the corresponding time derivatives \(\dot{\mathbf{X}}\), the goal is to find the sparsest matrix \(\Xi\) that fulfills the following: \[\dot{\mathbf{X}}=\Theta(\mathbf{X})\Xi.\] However, finding such a matrix is an NP-hard problem. Therefore, there is a need for a sparsity-promoting regularization, and in this category, LASSO is a widely known approach [36, 37]. Despite its success, it does not always yield the sparsest matrix, and the approaches discussed, e.g., in [38, 39] require prior information about how many non-zero elements are expected in the matrix \(\Xi\), which is not known. On the other hand, the authors in [11] discuss a sequential thresholding approach, where simple least-squares problems are solved iteratively, and at each step, coefficients below a given tolerance are pruned. An analysis of such an algorithm is discussed in [40]. We summarize the SINDy approach in Algorithm 1. Moreover, we mention that other regularization schemes and heuristics are discussed in [21, 22, 25, 41, 42], but in this work, we focus only on the sequential thresholding approach of Algorithm 1 due to its simplicity.
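To make this recap concrete, the following sketch implements a small polynomial dictionary in the spirit of (4) and the sequentially thresholded least-squares loop summarized in Algorithm 1 (reproduced below). It is a minimal NumPy illustration with invented function names, not the authors' code, and the dictionary is truncated at quadratic terms.

```
# Minimal NumPy sketch of a polynomial dictionary (cf. Eq. (4)) and the
# sequentially thresholded least-squares loop of Algorithm 1 (illustrative only).
import numpy as np

def build_dictionary(X):
    """Columns: constant, x_i, and all quadratic monomials x_i * x_j (i <= j)."""
    n_samples, n_states = X.shape
    cols = [np.ones((n_samples, 1)), X]
    for i in range(n_states):
        for j in range(i, n_states):
            cols.append((X[:, i] * X[:, j]).reshape(-1, 1))
    return np.hstack(cols)

def stlsq(Theta, X_dot, tol=0.05, max_iter=10):
    """Prune coefficients with |value| < tol and refit the remaining ones."""
    Xi = np.linalg.lstsq(Theta, X_dot, rcond=None)[0]     # initial least-squares guess
    for _ in range(max_iter):
        small = np.abs(Xi) < tol
        Xi[small] = 0.0
        for k in range(X_dot.shape[1]):                   # refit each state separately
            keep = ~small[:, k]
            if np.any(keep):
                Xi[keep, k] = np.linalg.lstsq(Theta[:, keep], X_dot[:, k], rcond=None)[0]
    return Xi
```

Given clean states and derivatives, this loop behaves like Algorithm 1; the remainder of the paper addresses the harder case in which \(\dot{\mathbf{X}}\) must be obtained from noisy and scarce measurements.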
``` 0: Dictionary \(\Theta\), time-series data \(\mathbf{X}\), time derivative information \(\mathbf{X}\), threshold value tol, and maximum iterations max-iter. 0: Estimated coefficients \(\Xi\) that define governing equations for nonlinear systems. 1:\(\Xi=(\Theta^{\top}\Theta)\backslash\Theta^{\top}\mathbf{\hat{X}}\)\(\triangleright\) For initial guess, solving a least-squares problem 2:\(k=1\) 3:while\(k<\texttt{max-iter}\)do 4:\(\texttt{small\_inds}=(\texttt{abs}(\Xi)<\texttt{tol})\)\(\triangleright\) identifying small coefficients 5:\(\Xi(\texttt{small\_inds})=0\)\(\triangleright\) excluding small coefficients 6: Solve \(\Xi=(\Theta^{\top}\Theta)\backslash\Theta^{\top}\mathbf{\hat{X}}\) subject to \(\Xi(\texttt{small\_inds})=0\) 7:\(k=k+1\) ``` **Algorithm 1** SINDy algorithm [29] ## 3 iNeural-SINDy: Neural Networks and Integrating Schemes Assisted SINDy Approach A challenge in the classical SINDy approach discussed in the previous section is the availability of an accurate estimate of the derivative information. If the derivative information is inaccurate, the resulting sparse model may not accurately capture the underlying system dynamics. In this section, we present an approach that combines SINDy framework with a numerical integration scheme and neural networks in a particular way so that a robust discovery of governing equations can be made amid poor signal-to-noise ratio and irregularities in data. The methodology is inspired by the work [33]. The main components of the methodology are as follows. For given noisy data, we aim to learn an implicit representation using a neural network based on the noisy data so that the network yields denoised data but still in the vicinity of the noisy collected data, and governing equations describing the dynamics of the denoised data can be obtained by employing SINDy. For SINDy, we utilize automatic differential tools to obtain the derivative information via the network and also make use of an integral form of the differential equations. In the following, we make these discussions more precise. Consider noisy data \(\mathbf{y}(t)\in\mathbb{R}^{n}\) at the time instances \(\{t_{0},\ldots,t_{\mathcal{N}}\}\), i.e., \(\{\mathbf{y}(t_{0}),\ldots,\mathbf{y}(t_{\mathcal{N}})\}\). Moreover, \(\mathbf{y}(t)=\mathbf{x}(t)+\epsilon(t)\), where \(\mathbf{x}(t)\) and \(\epsilon(t)\) denote clean data and noise, respectively. Under this setting, we aim to discover the structure of vector field \(\mathbf{f}\) by identifying the most active terms in the dictionary \(\Theta\) so that it satisfies as follows: \[\dot{\mathbf{x}}(t)=\mathbf{f}(\mathbf{x}). \tag{5}\] Note that we do not know \(\epsilon\)'s. In order to learn \(\mathbf{f}\) from \(\mathbf{y}\), we blend three ingredients together, which are discussed in the following. 1. **Sparse regression assumption:** In our setting, we utilize the principle of SINDy, which we discussed in the previous section. This means that the system dynamics (or vector field defining dynamics) can be represented by a few suitable terms from a dictionary of candidate functions. This allows us to obtain a parsimonious representation of dynamical systems and reduces the model complexity, leading to better generalization and interpretability of the models. 2. **Automatic differential to estimate derivative information:** As mentioned earlier, SINDy algorithm requires accurate derivative information for the system, which can be challenging to obtain from experiments or to estimate using numerical methods. 
To cope with this issue, we make use of neural networks with its automatic differentiation (AD) feature, which is a technique used to estimate derivative information. The use of a DNN in combination with the SINDy algorithm was earlier discussed in [31], where it has been shown that the discovery of nonlinear system dynamics without explicit need of accurate derivative information [31]. We make use of a DNN to parameterize a nonlinear mapping from time \(t\) to the dependent variable \(\mathbf{y}(t)\). To that end, let us denote a DNN by \(\mathcal{G}_{\theta}\), where \(\theta\) contains DNN parameters. The input to \(\mathcal{G}_{\theta}\) is time \(t\), and its output is \(\mathbf{y}(t)\), i.e., \(\mathbf{y}(t)=\mathcal{G}_{\theta}(t)\). However, in the case of noisy measurement \(\mathbf{y}(t)\) at time \(\{t_{0},\ldots,t_{\mathcal{N}}\}\), we expect \(\mathcal{G}_{\theta}\) to predict outputs in the proximity of \(\mathbf{y}\), i.e., \[\mathbf{y}(t)\approx\mathbf{x}(t)=\mathcal{G}_{\theta}(t),\quad t=\{t_{0}, \ldots,t_{\mathcal{N}}\}.\] With the sparse regression hypothesis, we aim to learn a dynamical model for \(\mathbf{x}\), as it can be seen as a denoised version of \(\mathbf{y}\). For this, we construct a dictionary of possible candidate functions using \(\mathbf{x}\), which we denote by \(\Theta\big{(}\mathbf{X}(t)\big{)}\). Next, we require the derivative information of \(\mathbf{x}\) with respect to time \(t\). Since we have an implicit representation of \(\mathbf{x}(t)\) using a DNN, we can employ AD to obtain the required information. Having dictionary and derivative information, we set up a sparse regression problem as follows: \[\dot{\mathbf{X}}(t)=\Theta\big{(}\mathbf{X}(t)\big{)}\Xi,\] (6) where \(\Xi\) is the sparsest possible matrix, which selects the most active terms from the dictionary to define dynamics. Finding the sparsest solution is computationally infeasible; we, thus, utilize the sequential thresholding approach as discussed in Algorithm 1 with minor modifications. Instead of solving least-squares problems in Steps 1 and 5 in Algorithm 1, we have a loss function as follows: \[\mathcal{L}:=\min_{\theta,\Xi}\sum_{i=0}^{\mathcal{N}}\lambda_{1}\|\mathbf{y}( t_{i})-\mathbf{x}(t_{i})\|+\lambda_{2}\|\dot{\mathbf{x}}(t)-\Theta\big{(} \mathbf{x}(t)\big{)}\Xi\|,\] (7) where \(\mathbf{x}(t_{i}):=\mathcal{G}_{\theta}(t_{i})\), and \(\lambda_{1}\) and \(\lambda_{2}\) are hyperparameters. 3. **Numerical integration scheme:** A dynamical system is a particle or an ensemble of particles whose state varies over time and thus obeys differential equations involving time derivatives [43]. To predict the evolution of the dynamical system, it is necessary to have an analytical solution of such equations or their integration over time through computer simulations. Therefore, we aim to incorporate the information contained in the form of integration of dynamical systems while discovering governing equations via sparse regression, which is expected to make the process of discovering equations robust to the noise and scarcity of data. When differential equations are written in an integral form, then we do not require derivative information as well; however, the resulting optimization problem involves an integral form. In this regard, one can employ the principle of Neural-ODEs [44] to solve efficiently such optimization problems. 
One can also approximate the integral form using suitable integrating schemes [45], and recently, fourth-order Runge-Kutta (RK4) scheme [21] and linear multi-step methods [46] are combined with SINDy. In this work, we make use of the RK4 scheme to approximate an integral. Following [21], our goal is to predict the state of a dynamic system \(\mathbf{x}(t_{k+1})\) at time \(t=t_{k+1}\) from the state \(\mathbf{x}(t_{k})\) at time \(t=t_{k}\), where \(k\in\{0,1,\ldots,\mathcal{N}-1\}\). By employing the RK4 scheme, \(\mathbf{x}(t_{k+1})\) can be computed as a weighted sum of four components that are the product of the time-step and gradient field information \(\mathbf{f}(\cdot)\) at the specific locations. These components are computed as follows: \[\mathbf{x}(t_{k+1})\approx\mathbf{x}(t_{k})+\frac{1}{6}h_{k}(\mathbf{a}_{1}+2 \cdot\mathbf{a}_{2}+2\cdot\mathbf{a}_{3}+\mathbf{a}_{4}),\quad h_{k}=t_{k+1}-t _{k},\] (8) where, \[\mathbf{a}_{1}=\mathbf{f}(\mathbf{x}(t_{k})),\quad\mathbf{a}_{2}=\mathbf{f} \Big{(}\mathbf{x}(t_{k})+h_{k}\frac{\mathbf{a}_{1}}{2}\Big{)},\quad a_{3}= \mathbf{f}\Big{(}\mathbf{x}(t_{k})+h_{k}\frac{\mathbf{a}_{2}}{2}\Big{)},\quad \mathbf{a}_{4}=\mathbf{f}\Big{(}\mathbf{x}(t_{k})+h_{k}\mathbf{a}_{3}\Big{)}.\] For the sake of simplicity with a slight abuse of a notation, the right-hand side of (8) is denoted by \(\mathcal{F}_{\texttt{Rk4}}\big{(}f,\mathbf{x}(t_{k}),h_{k}\big{)}\), i.e., \[\mathbf{x}(t_{k+1})=\mathbf{x}(t_{k}+h_{k})\approx\mathcal{F}_{\texttt{Rk4}} \big{(}\mathbf{f},\mathbf{x}(t_{k}),h_{k}\big{)}. \tag{9}\] Like the SINDy algorithm, we collect samples from the dynamical system at time \(t=\{t_{0},\ldots,t_{\mathcal{N}}\}\) and define the time step as \(h_{k}:=t_{k+1}-t_{k}\). With sparse regression assumption, we can write \(\mathbf{f}(\mathbf{x})=\Theta(\mathbf{x})\Xi\), where \(\Theta(\mathbf{x})\) is a dictionary and \(\Xi\) is a sparse matrix. Then, we can set up a sparse regression as follows. We seek to identify the sparsest matrix \(\Xi\) so that the following is minimized: \[\sum_{k}\left\|\mathbf{x}(t_{k+1})-\mathcal{F}_{\texttt{Rk4}}\big{(}\Theta( \mathbf{x})\Xi,\mathbf{x}(t_{k}),h_{k}\big{)}\right\|.\] When the RK4 scheme is merged with the previously discussed DNN framework, we apply a one-time ahead prediction based on RK4-SINDy to the output of our DNN, i.e., \[\mathbf{x}_{\texttt{Rk4}}(t_{k+1})\approx\mathcal{F}_{\texttt{Rk4}}\Big{(} \mathbf{f},\mathbf{x}(t_{k}),h_{k}\Big{)}.\] Having all these ingredients, we combine them to define a loss function to train our DNN structure, as well as to discover governing equations describing underlying dynamics. To that end, we have the following loss function: \[\mathcal{L}=\mu_{1}\mathcal{L}_{\texttt{MSE}}+\mu_{2}\mathcal{L}_{\texttt{ deri}}+\mu_{3}\mathcal{L}_{\texttt{Rk4}},\quad\mu_{1},\mu_{2},\mu_{3}\in[0,1], \tag{10}\] where \(\mathcal{L}_{\texttt{MSE}}\) is the mean square error (MSE) of the output of the DNN \(\mathcal{G}_{\theta}\) (denoted by \(\mathbf{\hat{x}}\)) with respect to the collected data \(\mathbf{y}\), and \(\{\mu_{1},\mu_{2},\mu_{3}\}\) are positive constants, determining the weight of different losses in the total loss function. It is given as \[\mathcal{L}_{\texttt{MSE}}=\frac{1}{\mathcal{N}}\sum_{k=1}^{\mathcal{N}} \left\|\mathbf{y}(t_{k})-\mathbf{x}(t_{k})\right\|_{2}^{2}. \tag{11}\] It forces the DNN to produce output in the vicinity of the measurements, and \(\mu_{1}\) is its weight. 
\(\mathcal{L}_{\texttt{deri}}\) is inspired by the sparse regression and aims to compute the sparse coefficient matrix \(\Xi\). It is computed as follows: \[\mathcal{L}_{\texttt{deri}}=\frac{1}{\mathcal{N}}\sum_{k=1}^{\mathcal{N}} \left\|\hat{\mathbf{x}}(t_{k})-\Theta\big{(}\mathbf{x}(t_{k})\big{)}\Xi \right\|_{2}^{2}, \tag{12}\] The term \(\mathcal{L}_{\texttt{Rk4}}\) encodes the capabilities of the vector field to predict the state at the next time step. This is the MSE of the output of the RK4 scheme and the output of DNN, given as follows: \[\mathcal{L}_{\texttt{Rk4}}=\frac{1}{\mathcal{N}-1}\sum_{k=1}^{\mathcal{N}} \left\|\frac{1}{h_{k}}\left(\mathbf{x}(t_{k+1})-\mathcal{F}_{\texttt{Rk4}} \left(\Theta\big{(}\mathbf{x}(t_{k})\big{)}\Xi,\mathbf{x}(t_{k}),h_{k}\big{)} \right)\right\|_{2}^{2}. \tag{13}\] It is worth highlighting that the coefficient matrix \(\Xi\) will be updated alongside the weights and biases of the DNN, and the dictionary terms are calculated by (4). Furthermore, after a certain number of epoch training, we employ sequential thresholding on \(\Xi\) to remove small coefficients as sketched in Algorithm 1, and update the remaining parameters thereafter. We summarize the procedure in Algorithm 2. Additional steps in Algorithm 2 are as follows. We train our network for initial iterations (denoted by init-iter) without employing sequential thresholding; this helps the DNN to learn the underlying dynamics of the dataset. Afterward, we employ sequential thresholding every \(q\) iterations. In the rest of the paper, the proposed methodology is referred to as iNeural-SINDy. ``` 0: Data set \(\{\mathbf{y}(t_{0}),\mathbf{y}(t_{1}),\ldots,\mathbf{y}(t_{\mathcal{N}})\}\), tol for sequential thresholding, a dictionary containing candidate functions \(\Theta\), a neural network \(\mathcal{G}_{\theta}\) (parameterized by \(\theta\)), initial iteration (init-iter), maximum iterations max-iter, and parameters \(\{\mu_{1},\mu_{2},\mu_{3}\}\). 0: Estimated coefficients \(\Xi\), defining governing equations. 1: Initialize the DNN module parameters, and the coefficients \(\Xi\) 2:\(k=1\) 3:while\(k<\texttt{max-iter}\)do 4: Feed time \(t_{i}\) as an input to the DNN (\(\mathcal{G}_{\theta}\)) and predict output \(\mathbf{x}\). 5: Compute the derivative information \(\hat{\mathbf{x}}\) using automatic differentiation. 6: Compute the cost function (10). 7: Update the parameters of DNN (\(\theta\)) and the coefficient \(\Xi\) using gradient descent. 8:for\(k\%q=0\) & \(k>\texttt{init-iter}\)do\(\triangleright\) Employing sequential thresholding after \(q\) iterations 9:\(\texttt{small\_inds}=(\texttt{abs}(\Xi)\)\(<\texttt{tol})\)\(\triangleright\) identifying small coefficients 10:\(\Xi(\texttt{small\_inds})=0\)\(\triangleright\) excluding small coefficients 11: Update the parameters of DNN (\(\theta\)) and the coefficient \(\Xi\) using gradient descent, while ensuring \(\Xi(\texttt{small\_inds})\) remains zero. 12:\(k=k+1\). ``` **Algorithm 2** iNeuralSINDy: SINDy combined with neural network and integral scheme for nonlinear system identification. ## 4 Extension to Multi-trajectories Data Thus far, we have presented the discovery of governing equations using a single trajectory time series data set using a single initial condition. However, for complex dynamical processes, a single trajectory is not sufficient to describe underlying dynamics completely. 
Therefore, it is necessary to collect data using multiple trajectories; hence, we need to adopt our proposed methodology to account for multiple trajectories. To achieve this goal, we augment the input time \(t\) with an initial condition so that a DNN can capture the nonlinear behavior of the system with respect to different initial conditions. To that end, let us consider \(\mathcal{M}\) different trajectories with initial conditions \(y_{0}^{[j]}\), where \(j\in\{1,\ldots,\mathcal{M}\}\). To reflect the multi-trajectories in our framework, we modify the architecture of the DNN, which now takes \(t_{k}\) and \(y_{0}^{[j]}\) as inputs, and intend to predict \(y_{k}^{[j]}\)--that is, the state at time \(t_{k}\) with respect to the initial condition \(y_{0}^{[j]}\). Then, we also adapt our loss function (10) as follows: \[\mathcal{L}=\mu_{1}\sum_{j=1}^{\mathcal{M}}\mathcal{L}_{\texttt{BSE}}^{[j]}+ \mu_{2}\sum_{j=1}^{\mathcal{M}}\mathcal{L}_{\texttt{deri}}^{[j]}+\mu_{3}\sum _{j=1}^{\mathcal{M}}\mathcal{L}_{\texttt{RRA}}^{[j]},\quad\mu_{1},\mu_{2},\mu _{3}\in[0,1], \tag{14}\] where \[\mathcal{L}_{\texttt{BSE}}^{[j]} =\frac{1}{\mathcal{M}\cdot\mathcal{N}}\sum_{j=1}^{\mathcal{M}} \sum_{k=1}^{\mathcal{N}}\left\|\mathbf{y}^{[j]}(t_{k})-\mathbf{x}^{[j]}(t_{k} )\right\|_{2}^{2},\] \[\mathcal{L}_{\texttt{deri}}^{[j]} =\frac{1}{\mathcal{M}\cdot\mathcal{N}}\sum_{j=1}^{\mathcal{M}} \sum_{k=1}^{\mathcal{N}}\left\|\Theta\big{(}\mathbf{x}^{[j]}(t_{k})\big{)} \hat{\Xi}-\hat{\mathbf{x}}(t_{k})\right\|_{2}^{2},\] \[\mathcal{L}_{\texttt{RRA}}^{[j]} =\frac{1}{h}\frac{1}{\mathcal{M}\cdot\mathcal{N}}\sum_{j=1}^{ \mathcal{M}}\sum_{k=1}^{\mathcal{N}}\left\|\mathbf{x}^{[j]}(t_{k+1})-\mathbf{x }_{\texttt{RRA}}^{[j]}(t_{k})\right\|_{2}^{2}\ \ \text{with}\ \ h=t_{k+1}-t_{k}.\] We depict a schematic diagram of our proposed approach in Figure 1 for such a case. ## 5 Numerical Experiments In this section, we demonstrate the proposed methodology, the so-called iNeural-SINDy, by means of several numerical examples and present a comparison with existing methodologies. For the comparison, we primarily consider two approaches, namely DeePyMoD[31], and RK4-SINDy[21]. DeePyMoD utilizes only automatic-differential tools to estimate derivative information by constructing an implicit representation of the noisy data, while RK4-SINDy embeds a numerical integration scheme to avoid computation of derivative information. The proposed methodology iNeural-SINDy can be viewed as a combination of DeePyMoD and RK4-SINDy. For the chaotic Lorenz example, we also present a comparison with Weak-SINDy [28]. To quantify the performance of the considered methodologies, we define the following coefficient error measure for each state variable \(\mathbf{x}_{i}\): \[\mathcal{E}(\mathbf{x}_{i})=\left\|\Xi_{\mathbf{x}_{i}}^{\texttt{truth}}-\Xi_{ \mathbf{x}_{i}}^{\texttt{set}}\right\|_{1}, \tag{15}\] where \(\Xi_{\mathbf{x}_{i}}^{\texttt{truth}}\) and \(\Xi_{\mathbf{x}_{i}}^{\texttt{set}}\) are, respectively, the true and estimated coefficients, corresponding to the state variable \(\mathbf{x}_{i}\), and \(\|\cdot\|_{1}\) denotes the \(l_{1}\)-norm. A motivation to quantity each state variable separately is that their dynamics can be of different scales; thus, their coefficients might also be in a different order. Therefore, to better understand the quality of the discovered models, we analyze them separately. 
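Before turning to the noise setup and the individual examples, the sketch below shows how the three loss terms of (10)-(13) (and, trajectory-wise, (14)) can be assembled in a PyTorch-style implementation; the implicit network g_theta, the dictionary routine theta_fn, and all other names are placeholders rather than the authors' code, and the derivative of the network output is obtained with automatic differentiation as described in Section 3.

```
# Sketch (PyTorch-style, illustrative names) of the iNeural-SINDy loss, Eqs. (10)-(13).
import torch

def rk4_step(theta_fn, Xi, x, h):
    """One RK4 step, Eq. (8), with the dictionary-based vector field f(x) = Theta(x) Xi."""
    f = lambda s: theta_fn(s) @ Xi
    a1 = f(x)
    a2 = f(x + 0.5 * h * a1)
    a3 = f(x + 0.5 * h * a2)
    a4 = f(x + h * a3)
    return x + (h / 6.0) * (a1 + 2 * a2 + 2 * a3 + a4)

def ineural_sindy_loss(g_theta, Xi, theta_fn, t, y, mu=(1.0, 0.1, 0.1)):
    t = t.requires_grad_(True)                     # time grid of shape (N, 1)
    x = g_theta(t)                                 # implicit representation, shape (N, n)
    x_dot = torch.stack(                           # dx/dt via automatic differentiation
        [torch.autograd.grad(x[:, i].sum(), t, create_graph=True)[0].squeeze(-1)
         for i in range(x.shape[1])], dim=1)
    h = t[1:] - t[:-1]                             # step sizes, shape (N-1, 1)
    loss_mse = torch.mean((y - x) ** 2)                              # Eq. (11)
    loss_deri = torch.mean((x_dot - theta_fn(x) @ Xi) ** 2)          # Eq. (12)
    x_next = rk4_step(theta_fn, Xi, x[:-1], h)
    loss_rk4 = torch.mean(((x[1:] - x_next) / h) ** 2)               # Eq. (13)
    return mu[0] * loss_mse + mu[1] * loss_deri + mu[2] * loss_rk4
```

In this sketch, setting mu[2] to zero removes the integral term and mu[1] to zero removes the derivative term, which is consistent with how the DeePyMoD- and RK4-SINDy-style baselines are recovered from (14) in the setup below.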
Furthermore, to observe the performance of the methodologies under noisy data, which is often the case in real-world scenarios, we artificially generate noisy data by corrupting the clean data. For this, we use white Gaussian noise \(\mathcal{N}(\mu,\sigma^{2})\) with zero mean \(\mu=0\) and variance \(\sigma^{2}\), where \(\sigma\) denotes the standard deviation. The noise level in the data is controlled by \(\sigma\), i.e., a larger \(\sigma\) implies more noise present in the data. Additionally, since iNeural-SINDy and DeePyMoD both involve neural networks, we also compare their performance sensitivity in two scenarios as follows:

* Scene_A: In the first scenario, we consider a single initial condition and a fixed number of neurons in the hidden layers but vary the amount of training data and the noise levels.

* Scene_B: In the second one, we consider a single initial condition and a fixed amount of training data but vary the number of neurons in the hidden layers and the noise levels.

In addition, in the following, we clarify common implementation and reproducibility details that are considered for all the examples.

**Data generation.** We have generated the data synthetically by using the solve_ivp function from the scipy.integrate package to solve a given set of differential equations and produce the data set. Once an identification approach (e.g., iNeural-SINDy, DeePyMoD, RK4-SINDy, or Weak-SINDy) terminates, we multiply the dictionary \(\Theta\) by the estimated coefficient matrix \(\Xi^{\texttt{set}}\) to obtain the discovered governing equations. We then make use of the solve_ivp function from scipy.integrate to obtain time-evolution dynamics. Moreover, we perform a data-processing step before feeding the data to a neural network by mapping the minimum and maximum values to \(-1\) and \(1\), respectively. The hyper-parameters \(\mu\)'s in (14) are set to \(\mu_{1}=1\), \(\mu_{2}=0.1\) and \(\mu_{3}=0.1\) for iNeural-SINDy. Note that we can recover the DeePyMoD and RK4-SINDy approaches by setting \(\mu_{3}=0\) and \(\mu_{2}=0\), respectively, in (14).

Figure 1: A schematic diagram of the approach iNeural-SINDy. (a) noisy measurement data, (b) feeding the initial condition \(\left(\mathbf{y}_{1,0},\ \mathbf{y}_{2,0}\right)\) and the time \(t\) to the DNN, (c) using the output of the DNN, construct a polynomial dictionary, (d) estimating the parameters of the DNN and the sparse vector \(\Xi\) by considering a loss function.

**Architecture.** We use multi-layer perceptron networks with periodic activation functions, namely SIREN [47], to learn an implicit representation based on measurement data. The numbers of hidden layers and neurons will be discussed for each example separately.

**Hardware.** For training neural networks and parameter estimation for discovering governing equations, we have used an Nvidia(r) RTX A4000 GPU with 16 GB RAM, and for CPU computations (e.g., for generating data), we have used a 12th Gen Intel(r) Core(tm) i5-12600K processor with 32 GB RAM.

**Training set-up.** We use the Adam optimizer [48] to update the coefficient matrix \(\Xi\), which is trained alongside the DNN parameters. The threshold value (tol), the learning rate of the optimizer, the maximum iterations (max-iter), the initial iterations (init-iter), and the iteration count \(q\) for employing sequential thresholding in Algorithm 2 will be mentioned for each example separately.
However, we note that after each thresholding step in Algorithm 2, we reset the learning rate \(5\times 10^{-6}\) for DNN parameters and \(1\times 10^{-2}\) for the coefficient matrix \(\Xi^{\texttt{set}}\) except for the Lorenz example, which is explicitly mentioned in the Lorenz example. ### Two-dimensional damped oscillators In our first example, we consider the discovery of a two-dimensional linear oscillatory damped system using data. The dynamics of the oscillator can be given by \[\begin{split}\dot{\mathbf{x}}_{1}(t)&=-0.1\mathbf{x }_{1}(t)+2.0\mathbf{x}_{2}(t),\\ \dot{\mathbf{x}}_{2}(t)&=-2.0\mathbf{x}_{1}(t)-0.1 \mathbf{x}_{2}(t).\end{split} \tag{16}\] Simulation setup:To generate the training data set, we consider three initial conditions in the range \([-2,2]\) for \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\), and for each initial condition, we take \(400\) equidistant points in the time interval \(t\in[0,\ 10]\). Our DNN architecture has three hidden layers, each having \(32\) neurons. We set the number of epochs \(\texttt{max-iter}=15,000\) and threshold value \(\texttt{tol}=0.05\). The initial iteration init-iter is set to \(5,000\) with the learning rate of \(10^{-4}\) for the DNN parameters and \(10^{-3}\) for the coefficient matrix \(\Xi^{\texttt{set}}\), and after \(q=2,000\) iterations, we employ the sequential thresholding. Moreover, we construct a dictionary containing polynomials of degrees up to two. Results:Figure 2 demonstrates the performance of different algorithms in the presence of noise. We consider additive white Gaussian noise with different standard variances \(\sigma=\{0,\ 0.02,\ 0.04,\ 0.08\}\). It shows that as we increase the noise level, the RK4-SINDy fails to estimate the coefficients. However, iNeural-SINDy and DeePyMoD are robust in discovering the underlying equations accurately, even for high noise levels, and both exhibit similar performances. In Table 1 (in the appendix), we also report learned governing equations from data with various noise levels, which again illustrate that both iNeural-SINDy and DeePyMoD have similar performance, and RK4-SINDy fails to recover governing equations from highly noisy data. Furthermore, in Figure 3, the convergence of the non-zero coefficients for the different methods is shown as the training progresses. It can be seen that iNeural-SINDy has a faster convergence rate compared to DeePyMoD and RK4-SINDy. Next, we discuss the performance of iNeural-SINDy and DeePyMoD for Scene_A and Scene_B. * Scene_A: We consider a DNN architecture with three hidden layers, each having \(32\) neurons. For comparison, we consider noise levels with standard variance \(\sigma=\{0,\ 0.02,\ 0.04,\ 0.06\}\), and take the number of samples \(\{30,\ 40,\ 50,\ 100,\ 200,\ 300,\ 400\}\) in the time interval \([0,10]\) for a single initial condition \((\mathbf{x}_{1}(0),\mathbf{x}_{2}(0))=(5,2)\). The rest of the settings are the same as mentioned earlier in the simulation setup. By varying the noise levels and the number of samples, we report the quality of the learned governing equations in Figure 4. Note that the error criterion defined in (15) is used. Each cell shows the error corresponding to sample sizes and noise levels. By comparing the simulation results, we notice that DeePyMoD performs better for low data regime, but as the number of data is increased, both iNeural-SINDy and DeePyMoD perform similarly. 
* Scene_B: In this case, we consider a DNN architecture with three hidden layers but vary the number of neurons at each layer from 2 to 64. Again, we consider various noise levels. We take 400 samples in the time interval \([0,10]\) for a single arbitrary initial condition \((\mathbf{x}_{1}(0),\mathbf{x}_{2}(0))=(5,2)\). The rest of the settings are the same as mentioned earlier in the simulation setup. By varying the noise levels and number of neurons, we report a comparison between iNeural-SINDy and DeePyMoD in Figure 5, where each cell shows the error, corresponding to a specific number of neurons and noise level. These comparisons again show that both methodologies perform comparably and learn correct coefficients with similar performance for a large number of neurons as the DNN has more capacities to capture the dynamics present in the data. More interesting, we would like to highlight that both methods do not over-fit as the capacity of the DNN is increased. ### Cubic damped oscillator The cubic oscillatory system is given by the following equation: Figure 3: Linear oscillator: Estimated coefficients during the training loop for iNeural-SINDy, DeePyMoD and RK4-SINDy. Figure 2: Linear oscillator: A comparison of the learned equations using different methods under various noise levels present in measurement with the ground truth. \[\dot{\mathbf{x}}_{1}(t) =-0.1\mathbf{x}_{1}^{3}(t)+2.0\mathbf{x}_{2}^{3}(t), \tag{17}\] \[\dot{\mathbf{x}}_{2}(t) =-2.0\mathbf{x}_{1}^{3}(t)-0.1\mathbf{x}_{2}^{3}(t).\] The system consists of two coupled, non-linear differential equations describing the time evolution of two variables, \(\mathbf{x}_{1}\) and \(\mathbf{x}_{2}\). Given noisy data, we aim to recover the governing equations and perform a similar analysis as done for the previous example. Simulation setup:To generate the training data set, we consider two initial conditions \((\mathbf{x}_{1}(0),\mathbf{x}_{2}(0))=\{(2,2),(-2,-2)\}\) and collect 800 points in the time interval \(t\in[0,10]\). Our DNN architecture has three hidden layers, each having 32 neurons. We set the number of epoch max-iter\(=30,000\), threshold value \(\texttt{tol}=0.05\). The initial training iteration (init-iter) is set to \(15,000\) with the learning rate \(10^{-4}\) for the DNN parameters and \(10^{-3}\) for the coefficient matrix \(\Xi^{\texttt{next}}\). After the initial training, for Figure 4: Linear oscillator: A comparison of iNeural-SINDy and DePyMoD under Scene_A. Figure 5: Linear oscillator: A comparison of iNeural-SINDy and DePyMoD under Scene_B every \(q=5,000\) iterations afterward, we employ the sequential thresholding and update the DNN parameters and \(\Xi^{\texttt{next}}\). The dynamical system is estimated in the space of polynomials up to order three. Results:To see the performance of these different methodologies under the presence of noise, we consider a Gaussian noise with the standard variance \(\sigma=\{0,\ 0.02,\ 0.04,\ 0.06\}\). We report the obtained results in Figure 6 and in Table 2 (see Appendix), and we notice that RK4-SINDy performs poorly for high noise levels, but iNeural-SINDy and DeePyMoD have competitive performance. Further, in Figure 7, we plot the convergence of the non-zero coefficients as the training progresses for the noise-free case. Here, we again observe a faster convergence for iNeural-SINDy as compared to the other two approaches. Next, we investigate performances of iNeural-SINDy and DeePyMoD for Scene_A and Scene_B, which are discussed in the following. 
* Scene_A: We fix a DNN architecture with three hidden layers, each having 32 neurons. We consider a set of noise level with \(\sigma=\{0,\ 0.02,\ 0.04,\ 0.06\}\) and a set of sample size \(\{30,40,\ 50,\ 100,\ 200,\ 300,\ 400\}\). The data are collected using a random initial condition in the interval \([1,4]\) for \(\{\mathbf{x}_{1},\mathbf{x}_{2}\}\). The rest of the settings are the same as mentioned earlier in the simulation setup. The results are shown in Figure 8, where we notice that for a smaller data set, iNeural-SINDy performs slightly better as compared to DeePyMoD, whereas for larger data set, it is otherwise. * Scene_B: For this case, we fix the sample size to 400 but consider a DNN architecture with three hidden layers with the number of neurons ranging from 2 to 64. Furthermore, we consider a set of noise levels with \(\sigma=\{0,\ 0.02,\ 0.04,\ 0.06\}\). The data are generated as in Scene_A and the training setting is also to be as above. The results are depicted in Figure 9, where we observe that iNeural-SINDy performs better as compared to DeePyMoD for fewer neurons, and as we increase the number of neurons, both methods perform similarly. \[\dot{\mathbf{x}}_{1}(t) =1.0\mathbf{x}_{1}(t)-1.0\mathbf{x}_{2}(t)-\frac{1}{3}\mathbf{x}_{1} ^{3}(t)+0.1, \tag{18}\] \[\dot{\mathbf{x}}_{2}(t) =0.1\mathbf{x}_{1}(t)-0.1\mathbf{x}_{2}(t).\] Simulation setup:For this simulation example, we consider two initial conditions \((\mathbf{x}_{1}(0),\mathbf{x}_{2}(0))=\{(2,1.5),(1.5,2)\}\) and take \(400\) data points in the time interval \(t\in[0,\ 200]\). The DNN architecture has three hidden layers with \(32\) neurons. We set the number of epoch max-iter\(=50,000\) and threshold value tol\(=0.05\). The number of iterations for the initial training is set to \(15,000\) with the learning rate \(10^{-4}\) for the DNN parameters and \(10^{-3}\) for the coefficient matrix \(\Xi^{\texttt{next}}\). After the initial training, we employ the sequential thresholding after each \(q=5,000\) iterations. We aim to learn the underlying governing equations in the space of polynomials with degrees up to order three. Results:Converse to the results that we earned in the previous two examples, for the FHN, iNeural-SINDy has a slower convergence rate compared to DeePyMoD and RK4-SINDy, see Figure 10. For this example, we again make a similar observation (see Figure 11, and Table 3 in Appendix), where we notice that iNeural-SINDy and DeePyMoD exhibit similar performances for lower noise levels, but for the higher noise values, (see the results for \(\sigma=0.08\) Table 3), iNeural-SINDy tends to outperform DeePyMoD. Moreover, RK4-SINDy clearly fails for high noise levels. However, converse to the results reported in the previous two examples, for this example, we notice a slower convergence Figure 8: Cubic oscillator: A comparison of iNeural-SINDy and DeePyMoD under Scene_A. Figure 7: Cubic oscillator: Estimated coefficients during the training loop for iNeural-SINDy, DeePyMoD and RK4-SINDy. of iNeural-SINDy compared to DeePyMoD and RK4-SINDy; see, Figure 10. But we highlight that iNeural-SINDy can identify governing equations for highly noisy data, as stated earlier. Next, we compare the performances of iNeural-SINDy and DeePyMoD under Scene_A and Scene_B. * Scene_A: In this case, we consider a fixed DNN architecture with three hidden layers, each consisting of 32 neurons. 
The different noise levels \(\{0.0,\ 0.02,\ 0.04,0.06\}\) are considered, while the sample size is considered in the range from 150 to 450 with an increment of 50. Here, a single initial condition is used for data collection; that is, \((\mathbf{x}_{1}(0),\mathbf{x}_{2}(0))=(3,2)\). The rest of the settings are the same as mentioned earlier for this example. The results are shown in Figure 12, where we notice that DeePyMoD outperforms iNeural-SINDy and has a better performance. * Scene_B: Here, we conduct a study where we keep the number of samples fixed at 400, obtained Figure 10: Fitz-Hugh Nagumo: Estimated coefficients during the training loop for iNeural-SINDy, DeePyMoD and RK4-SINDy. Figure 9: Cubic oscillator: A comparison of iNeural-SINDy and DeePyMoD under Scene_B using the initial condition \((\mathbf{x}_{1}(0),\mathbf{x}_{2}(0))=(3,2)\). The DNN architecture is designed to have three hidden layers. We aim to explore how iNeural-SINDy and DeePyMoD perform under different combinations of neurons for each layer and noise level. The training settings for each case remain the same, as mentioned earlier. The outcomes are presented in the heat-map depicted in Figure 13, where we notice that both DeePyMoD and iNeural-SINDy almost have the same performance in all the settings. ### Chaotic Lorenz system The chaotic Lorenz system is a set of three differential equations as follows [49]: \[\dot{\mathbf{x}}_{1}(t) =\gamma\big{(}\mathbf{x}_{2}(t)-\mathbf{x}_{1}(t)\big{)}, \tag{19a}\] \[\dot{\mathbf{x}}_{2}(t) =\mathbf{x}_{1}(t)\big{(}\rho-\mathbf{x}_{3}(t)\big{)}-\mathbf{x} _{2}(t),\] (19b) \[\dot{\mathbf{x}}_{3}(t) =\mathbf{x}_{1}(t)\mathbf{x}_{2}(t)-\beta\mathbf{x}_{3}(t), \tag{19c}\] Figure 11: Fitz-Hugh Nagumo: Comparison of the estimation with different techniques and noise level Figure 12: Fitz-Hugh Nagumo: A comparison of iNeural-SINDy and DeePyMoD under Scene_A. where the parameters \(\gamma\), \(\rho\), and \(\beta\) are positive constants with associated standard values \(\gamma=10,\ \rho=28,\ \beta=\frac{8}{3}\). The Lorenz system is a classic example of a chaotic system, which means that small differences in the initial conditions can lead to vastly different outcomes over time. It is a widely used benchmark example for discovering governing equations [11]. Simulation setup:We collect our data in the time interval \(t\in[0,10]\) with a sample size of \(200\) for three different initial conditions \((\mathbf{x}_{1}(0),\mathbf{x}_{2}(0),\mathbf{x}_{3}(0))=\{(-8,7,27),(-6,6,25),(-9,8,22)\}\). The DNN architecture has three hidden layers, each having \(64\) neurons. We set the number of iterations max-iter\(=35,000\), and threshold value \(\mathtt{tol}=0.2\). We set the number of iterations for the initial training to init-iter\(=10,000\), and the learning rate \(7\cdot 10^{-4}\) for the DNN parameters and \(10^{-2}\) for the coefficient matrix \(\Xi^{\mathtt{est}}\). After finishing the initial iterations, we employ sequential thresholding after every \(q=3,000\) iterations. Moreover, after each sequential thresholding, we reset the learning rate for DNN parameters to \(5\cdot 10^{-6}\) and for the coefficient matrix \(\Xi^{\mathtt{est}}\) to \(10^{-2}\). The governing equations are estimated by constructing a dictionary with polynomials up to degree two. Since the magnitude of \(\{\mathbf{x}_{1},\mathbf{x}_{2},\mathbf{x}_{3}\}\) for the Lorenz example can be large, we consider scaling the \(\mathbf{x}\)'s using a scaling factor \(\alpha\). 
Note that such scaling does not affect the interaction between different \(\mathbf{x}\)'s; thus, the sparsity pattern remains the same as well. However, it is observed that improving the condition number of the dictionary matrix enhances the estimate of the coefficients and helps us to determine the right governing equations. Results:We conduct experiments using a scaling factor \(\alpha=0.1\). Further, we aim to learn governing equations from the noisy data with noise levels of \(\sigma=\{0,\ 0.04,\ 0.1,\ 0.2,\ 0.4\}\). We report the obtained results in Table 4, where we notice that iNeural-SINDy and DeePyMoD yield similar performance except for the case of higher noise level (e.g., see the results for \(\sigma=0.4\)), where iNeural-SINDy recovered the equations better. However, RK4-SINDy performs poorly for the higher noise levels. Next, we conduct a performance analysis of iNeural-SINDy and DeePyMoD for Scene_A and Scene_B. We note that in both scenarios, the training data are generated using a single initial condition \((\mathbf{x}_{1}(0),\mathbf{x}_{2}(0),\mathbf{x}_{3}(0))=(-8,\ 7,\ 27)\). * Scene_A: We compare iNeural-SINDy and DeePyMoD under Scene_A. We also investigate the effect of the scaling factor \(\alpha\) and consider two values of it, i.e., \(\alpha=\{0.1,\ 1\}\). We fix the DNN architecture to have three hidden layers, each having \(64\) neurons. We consider different sample sizes and noise levels to compare the performance of iNeural-SINDy and DeePyMoD. For \(\alpha=0.1\), Figure 13: Fitz-Hugh Nagumo: A comparison of iNeural-SINDy and DeePyMoD under Scene_B we show the results in Figure 14, where we notice that iNeural-SINDy outperforms DeePyMoD in most cases. A similar observation is made for \(\alpha=1\), which is reported in Figure 15. For these experiments, it is hard to conclude the effect of the scaling factors, as we notice that in some cases, the scaling improves the performance, and in some cases, it is not the case. * Scene_B: In this case, we fix the sample size to \(400\). We also fix the number of hidden layers for the DNN architecture to three but vary the number of neurons in each layer. We also conduct experiments to see the effect of the scaling factor in this case as well. The results for \(\alpha=1\) and \(\alpha=0.1\) are shown in Figure 16 and Figure 17, respectively. These heat maps indicate the outperformance of iNeural-SINDy in most cases. We also observe that for a larger number of neurons, the scaling factor \(\alpha=0.1\) slightly performs better compared to scaling factor \(\alpha=1\) in both iNeural-SINDy as well as DeePyMoD. A comparison of iNeural-SINDy with WEAK-SINDy:Beside our previous comprehensive study, we next compare iNeural-SINDy with Weak-SINDy, which also does not require any estimate of Figure 16: Lorenz example: A comparison of iNeural-SINDy and DeePyMoD under Scene_B with the scaling factor \(\alpha=0.1\). Figure 14: Lorenz example: A comparison of iNeural-SINDy and DeePyMoD under Scene_A with the scaling factor \(\alpha=0.1\) Figure 15: Lorenz example: A comparison of iNeural-SINDy and DeePyMoD under Scene_A with the scaling factor \(\alpha=1\) derivatives using noisy data; for more details on Weak-SINDy, we refer to [28]. For this study, we again consider the same initial condition as used for Scene_A and Scene_B. We take 2000 data points in the time interval \([0,10]\), which are corrupted using different noise levels \(\sigma=\{0,0.02,0.08,0.1\}\). 
To discover the governing equations, we consider a dictionary of polynomials up to degree two. For training iNeural-SINDy we use the same setting as discussed in Section 5.4. For Weak-SINDy, we consider the code provided by the authors1. We report the results in Table 5, which indicate that iNeural-SINDy outperforms Weak-SINDy in the presence of high noise. Footnote 1: [https://github.com/MathBioCU/WSINDy_ODE/tree/master](https://github.com/MathBioCU/WSINDy_ODE/tree/master)

## 6 Conclusions

In this work, we proposed a methodology, namely iNeural-SINDy, to discover governing equations from noisy and scarce data. It consists of three main components: (a) learning an implicit representation of the given noisy data using a deep neural network, (b) setting up a sparse regression problem, inspired by SINDy[11], to discover the governing equations, and (c) utilizing an integral form of the differential equations. We have combined these components in a novel way to learn governing equations from noisy data. In particular, we highlight that we leverage the implicit neural-network representation to estimate derivatives via automatic differentiation, thereby avoiding any numerical derivative estimation from noisy data. We have also shown how iNeural-SINDy can be employed when data are collected from multiple trajectories. Furthermore, we have presented an extensive comparison of the proposed methodology with RK4-SINDy[21] and DeePyMoD[31], where we noticed that iNeural-SINDy clearly outperformed RK4-SINDy and, in many cases, also yielded better or comparable results compared to DeePyMoD, except for the FHN example. We also compared iNeural-SINDy with Weak-SINDy on the Lorenz example, where we observed a better performance of iNeural-SINDy. In the future, we would like to extend the proposed framework to the identification of parametric and control-driven dynamical systems. We would also like to combine the ensembling idea discussed in [29] to further improve the quality of the learned governing equations.
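Although the full iNeural-SINDy pipeline combines a neural-network surrogate, automatic differentiation, and the integral form of the dynamics, the sequential-thresholding step at its core can be illustrated in isolation. The following self-contained Python sketch is an illustration under simplifying assumptions rather than the authors' implementation: it recovers the linear damped oscillator of Eq. (16) from a single simulated trajectory (arbitrary initial condition) using sequentially thresholded least squares on a degree-two polynomial dictionary with threshold 0.05, and it uses exact derivatives in place of the network-based estimates, so noise robustness is not demonstrated here.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Linear damped oscillator of Eq. (16): dx1/dt = -0.1 x1 + 2 x2, dx2/dt = -2 x1 - 0.1 x2
A = np.array([[-0.1, 2.0], [-2.0, -0.1]])
rhs = lambda t, x: A @ x

# Simulate one trajectory (400 samples on [0, 10], as in the paper's setup)
t = np.linspace(0.0, 10.0, 400)
sol = solve_ivp(rhs, (0.0, 10.0), [2.0, -1.0], t_eval=t, rtol=1e-9, atol=1e-9)
X = sol.y.T                       # states, shape (400, 2)
dX = X @ A.T                      # exact derivatives (stand-in for the NN-based estimates)

def dictionary(X):
    """Polynomial dictionary up to degree 2: [1, x1, x2, x1^2, x1*x2, x2^2]."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])

def stlsq(Theta, dX, tol=0.05, n_iter=10):
    """Sequentially thresholded least squares (classic SINDy-style pruning)."""
    Xi, _, _, _ = np.linalg.lstsq(Theta, dX, rcond=None)
    for _ in range(n_iter):
        small = np.abs(Xi) < tol          # prune coefficients below the threshold
        Xi[small] = 0.0
        for k in range(dX.shape[1]):      # refit the surviving terms of each equation
            big = ~small[:, k]
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dX[:, k], rcond=None)[0]
    return Xi

Xi = stlsq(dictionary(X), dX)
terms = ["1", "x1", "x2", "x1^2", "x1*x2", "x2^2"]
for k, name in enumerate(["dx1/dt", "dx2/dt"]):
    expr = " + ".join(f"{c:.3f}*{s}" for c, s in zip(Xi[:, k], terms) if c != 0.0)
    print(name, "=", expr)   # expect -0.1*x1 + 2*x2 and -2*x1 - 0.1*x2
```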
2309.17380
Testing neutrino electromagnetic properties at current and future dark matter experiments
We analyze data from the dark matter direct detection experiments PandaX-4T, LUX-ZEPLIN and XENONnT to place bounds on neutrino electromagnetic properties (magnetic moments, millicharges, and charge radii). We also show how these bounds will improve at the future facility DARWIN. In our analyses we implement a more conservative treatment of background uncertainties than usually done in the literature. From the combined analysis of all three experiments we can place very strong bounds on the neutrino magnetic moments and on the neutrino millicharges. We show that even though the bounds on the neutrino charge radii are not very strong from the analysis of current data, DARWIN could provide the first measurement of the electron neutrino charge radius, in agreement with the Standard Model prediction.
Carlo Giunti, Christoph A. Ternes
2023-09-29T16:29:21Z
http://arxiv.org/abs/2309.17380v2
# Testing neutrino electromagnetic properties at current and future dark matter experiments ###### Abstract We analyze data from the dark matter direct detection experiments PandaX-4T, LUX-ZEPLIN and XENONnT to place bounds on neutrino electromagnetic properties (magnetic moments, millicharges, and charge radii). We also show how these bounds will improve at the future facility DARWIN. In our analyses we implement a more conservative treatment of background uncertainties than usually done in the literature. From the combined analysis of all three experiments we can place very strong bounds on the neutrino magnetic moments and on the neutrino millicharges. We show that even though the bounds on the neutrino charge radii are not very strong from the analysis of current data, DARWIN could provide the first measurement of the electron neutrino charge radius, in agreement with the Standard Model prediction. ## I Introduction The investigation of neutrino properties is one of the most active research fields in particle physics. In the Standard Model neutrinos are massless particles which interact only via weak interactions. Through the observation of neutrino oscillations we know, however, that at least two neutrinos are massive particles. Therefore, the Standard Model has to be extended to account for neutrino masses. In some of the extensions of the Standard Model neutrinos can acquire electromagnetic properties through quantum loop effects. Therefore, some models of physics beyond the Standard Model predict the interaction of neutrinos with electromagnetic fields and electromagnetic interactions with charged particles. Moreover, even the Standard Model predicts non-zero neutrino charge radii due to radiative corrections. For detailed reviews on the theories of neutrino electromagnetic properties we refer the reader to Refs. [1; 2]. In this paper we test the neutrino electromagnetic properties (magnetic moments, millicharges, and charge radii) by analyzing data from dark matter direct detection experiments. These experiments aim to measure the nuclear or electron recoils from dark matter interacting with the material in the detector, which is Xenon for all experiments under consideration in this work. In this paper we consider the data from the PandaX-4T experiment [3], from LUX-ZEPLIN (LZ) [4], and from XENONnT [5]. We also show the sensitivity of the future experiment DARWIN [6]. One of the background sources for the dark matter search in these experiments is the elastic scattering of solar neutrinos on the electrons in the detector material. Therefore, if some new physics model changes the standard electron-neutrino elastic scattering (E\(\nu\)ES) cross section, it can be tested at dark matter direct detection experiments. One possibility to alter this process is the presence of neutrino electromagnetic properties. Our paper is structured as follows: In Section II we discuss how neutrino magnetic moments, neutrino millicharges and neutrino charge radii alter the E\(\nu\)ES process. In Section III we detail the analysis procedure of the experiments under consideration. We proceed to present and discuss our results in Section IV, before concluding in Section V. ## II Theoretical framework In this paper we obtain bounds on neutrino electromagnetic properties from the data of Xenon dark matter experiments through the elastic scattering of solar neutrinos with electrons (E\(\nu\)ES). 
These experiments are sensitive to the neutrino electromagnetic properties through their contributions to E\(\nu\)ES in addition to the Standard Model E\(\nu\)ES cross section. Therefore, we first present the Standard Model E\(\nu\)ES cross section in Subsection II.1, and then we discuss the cross sections due to the neutrino electromagnetic properties that we consider: magnetic moments in Subsection II.2, electric charges in Subsection II.3, and charge radii in Subsection II.4. These three electromagnetic properties are the observable effective electromagnetic properties of ultrarelativistic neutrinos, for which the effective magnetic moments include possible electric moments and the effective charge radii include possible anapole moments [1; 2; 7]. Solar neutrinos oscillate and arrive at a detector on Earth as a mixture of \(\nu_{e}\), \(\nu_{\mu}\), and \(\nu_{\tau}\), whose fluxes are given by \[\Phi^{i}_{\nu_{e}}=\Phi^{i\,\odot}_{\nu_{e}}P_{ee},\quad\Phi^{i}_{\nu_{\mu}}= \Phi^{i\,\odot}_{\nu_{e}}\left(1-P_{ee}\right)\cos^{2}\vartheta_{23},\quad \Phi^{i}_{\nu_{\tau}}=\Phi^{i\,\odot}_{\nu_{e}}\left(1-P_{ee}\right)\sin^{2} \vartheta_{23},, \tag{1}\] where \(\Phi^{i\,\odot}_{\nu_{e}}\) are the fluxes of \(\nu_{e}\) produced by thermonuclear reactions in the center of the Sun, with \(i=pp\), \({}^{7}\)Be, etc., and \(P_{ee}\) is the survival probability of \(\nu_{e}\) at the Earth. In our analysis we use the solar spectra taken from Ref. [8; 9; 10; 11] using the normalizations for the high metallicity model taken from the review in Ref. [12]. We consider \(pp\) and \({}^{7}\)Be neutrinos, which give the main contribution to the event rates of the experiments under consideration. For these low-energy solar neutrinos \[P_{ee}\simeq\left(1-\frac{1}{2}\,\sin^{2}2\vartheta_{12}\right)\cos^{4} \vartheta_{13}+\sin^{4}\vartheta_{13}. \tag{2}\] Using \(\sin^{2}\vartheta_{12}=0.318\pm 0.016\) and \(\left.\sin^{2}\vartheta_{13}\right|_{\text{NO}}=0.02200^{+0.00069}_{-0.00062}\) or \(\left.\sin^{2}\vartheta_{13}\right|_{\text{IO}}=0.02225^{+0.00064}_{-0.00070}\) obtained in the global fit of Ref. [13] (see also the similar results obtained in Refs. [14; 15]) in the case of normal ordering and inverted ordering of the neutrino masses, we obtain \(P_{ee}=0.542\pm 0.011\). The fluxes of \(\nu_{\mu}\) and \(\nu_{\tau}\) depend on the mixing angle \(\vartheta_{23}\), which is close to maximal mixing (\(\vartheta_{23}=\pi/4\)) [13; 14; 15]. For simplicity, we consider \(\sin^{2}\vartheta_{23}=0.5\), which implies \(\Phi^{i}_{\nu_{\mu}}=\Phi^{i}_{\nu_{\tau}}\). Therefore, we obtain equal constraints on the electromagnetic properties of \(\nu_{\mu}\) and \(\nu_{\tau}\). ### The Standard Model E\(\nu\)ES cross section The Standard Model E\(\nu\)ES cross section per Xenon atom is given by \[\frac{d\sigma^{\text{SM}}_{\nu_{\ell}-\text{Xe}}}{dT_{\text{e}}}(E_{\nu},T_{ \text{e}})=Z^{\text{Xe}}_{\text{eff}}(T_{e})\,\frac{G^{2}_{\text{F}}m_{e}}{2 \pi}\left[\left(g^{\nu_{\ell}}_{V}+g^{\nu_{\ell}}_{A}\right)^{2}+\left(g^{\nu _{\ell}}_{V}-g^{\nu_{\ell}}_{A}\right)^{2}\left(1-\frac{T_{e}}{E_{\nu}}\right) ^{2}-\left((g^{\nu_{\ell}}_{V})^{2}-(g^{\nu_{\ell}}_{A})^{2}\right)\frac{m_{e} T_{e}}{E^{2}_{\nu}}\right], \tag{3}\] where \(G_{\text{F}}\) is the Fermi constant, \(m_{e}\) is the electron mass, \(E_{\nu}\) is the neutrino energy, and \(T_{e}\) is the observable electron recoil energy. 
The neutrino-electron couplings \(g^{\nu_{\ell}}_{V,A}\) depend on the neutrino flavor: \[g^{\nu_{e}}_{V}=2\sin^{2}\vartheta_{W}+1/2, g^{\nu_{e}}_{A}=1/2, \tag{4}\] \[g^{\nu_{\mu,\tau}}_{V}=2\sin^{2}\vartheta_{W}-1/2, g^{\nu_{\mu,\tau}}_{A}=-1/2, \tag{5}\] with \(\sin^{2}\vartheta_{W}=0.23863\pm 0.00005\)[16]. The coefficient \(Z_{\rm eff}^{\rm Xe}(T_{e})\) quantifies the effective number of electrons which can be ionized at \(T_{e}\)[17]. We calculate \(Z_{\rm eff}^{\rm Xe}(T_{e})\) using Table II of Ref. [18]. ### Neutrino magnetic moments Neutrino magnetic moments are predicted by many theories with massive neutrinos beyond the Standard Model (BSM) and have been constrained by many observations (see, e.g., Refs. [1; 2; 7; 16]). The most stringent experimental limit \(|\mu_{\nu_{e}}|<2.9\times 10^{-11}\,\mu_{B}\) at 90% C.L., where \(\mu_{B}\) is the Bohr magneton, was obtained in the GEMMA experiment [19] through the E\(\nu\)ES of reactor \(\bar{\nu}_{e}\). This bound is more than eight orders of magnitude larger than the prediction of neutrino magnetic moments in the minimal extension of the SM with right handed neutrinos and Dirac neutrino masses [20; 21; 22]. However, in more elaborate models (see, e.g., the review in Ref. [1]), the neutrino magnetic moments can be larger and can be probed in current experiments. The contribution of the neutrino magnetic moments to E\(\nu\)ES is incoherent with the Standard Model contribution because the neutrino magnetic moment interaction flips the helicity of ultrarelativistic neutrinos, whereas the Standard Model weak interaction is helicity-conserving. Therefore, for each flavor neutrino \(\nu_{\ell}\) the total cross section is given by the sum of the Standard Model cross section (3) and the magnetic moment cross section \[\frac{d\sigma^{\rm MM}_{\nu_{\ell}\times{\rm Xe}}}{dT_{\rm e}}(E_{\nu},T_{ \rm e})=Z_{\rm eff}^{\rm Xe}(T_{\rm e})\frac{\pi\alpha^{2}}{m_{e}^{2}}\left( \frac{1}{T_{\rm e}}-\frac{1}{E_{\nu}}\right)\left|\frac{\mu_{\nu_{\ell}}}{\mu_ {\rm B}}\right|^{2}, \tag{6}\] where \(\alpha\) is the fine-structure constant. For the low values of \(T_{\rm e}\) in the experiments under consideration (\(T_{\rm e}\lesssim 30\,\)keV), the cross section (6) is approximately proportional to \(T_{\rm e}^{-1}\). Therefore, the effects of the neutrino magnetic moments are probed by the observation of an event excess near the \(T_{\rm e}\) threshold. ### Neutrino electric charges In the Standard Model neutrinos are neutral, but in BSM theories they can have small electric charges, often called "millicharges" (see, e.g., the review in Ref. [1]), which can be probed in neutrino scattering experiments. In general, the three flavor neutrinos can have the millicharges \(q_{\nu_{e}}\), \(q_{\nu_{\mu}}\), and \(q_{\nu_{\tau}}\), and there can be also the three transition electric charges \(q_{\nu_{e}\mu}\), \(q_{\nu_{e}\tau}\), and \(q_{\nu_{\mu}\tau}\). Since the electric charge interaction conserves the helicity of ultrarelativistic neutrinos and the millicharges of the flavor neutrinos conserve the neutrino flavor, they contribute coherently with the Standard Model interaction, which is helicity and flavor conserving. On the other hand, since the transition electric charges induce a change of flavor they contribute incoherently with the Standard Model interaction. 
Therefore, the total E\(\nu\)ES cross section is given by \[\frac{d\sigma^{\rm SM+EC}_{\nu_{\ell}\rightarrow{\rm Xe}}}{dT_{\rm e}}=\left( \frac{d\sigma^{\rm SM+EC}_{\nu_{\ell}\rightarrow{\rm Xe}}}{dT_{\rm e}}\right) _{q_{\nu_{\ell}}}+\sum_{\ell^{\prime}\neq\ell}\left(\frac{d\sigma^{\rm EC}_{\nu _{\ell}\rightarrow{\rm Xe}}}{dT_{\rm e}}\right)_{q_{\nu_{\ell}\ell^{\prime}}}, \tag{7}\] where \(\left(d\sigma^{\rm SM+EC}_{\nu_{\ell}\rightarrow{\rm Xe}}/dT_{\rm e}\right)_{q _{\nu_{\ell}}}\) is given by Eq. (3) with \[g_{V}^{\nu_{\ell}}\to g_{V}^{\nu_{\ell}}-\frac{\sqrt{2}\pi \alpha}{G_{\rm F}m_{e}T_{e}}\,q_{\nu_{\ell}}, \tag{8}\] and \[\left(\frac{d\sigma^{\rm EC}_{\nu_{\ell}-{\rm Xe}}}{dT_{\rm e}}\right)_{q_{\nu_{ \ell}\ell^{\prime}}}=Z^{\rm Xe}_{\rm eff}(T_{e})\,\frac{\pi\alpha^{2}}{m_{e}T_{ \rm e}^{2}}\left[1+\left(1-\frac{T_{\rm e}}{E_{\nu}}\right)^{2}-\frac{m_{e}T_{ \rm e}}{E_{\nu}^{2}}\right]|q_{\nu_{\ell\ell^{\prime}}}|^{2}, \tag{9}\] for \(\ell^{\prime}\neq\ell\). Hence, E\(\nu\)ES gives full information on the charges \(q_{\nu_{\ell}}\) of the flavor neutrinos, including their sign, whereas only the absolute values of the transition electric charges \(q_{\nu_{\ell}\nu^{\prime}}\) can be probed. Note also that the effects of the electric charges are enhanced at small values of \(T_{e}\), leading to the possibility to probe very small electric charges in low-threshold experiments. ### Neutrino charge radii Even if neutrinos are neutral, they can have charge radii. Indeed, even in the Standard Model the massless and neutral flavor neutrinos have tiny charge radii induced by radiative corrections, which are given by [23; 24; 25] (with the definition of the charge radii in Refs. [26; 27]) \[\langle r_{\nu_{\ell}}^{2}\rangle_{\rm SM}=-\frac{G_{\rm F}}{2\sqrt{2}\pi^{2}} \left[3-2\ln\left(\frac{m_{\ell}^{2}}{m_{W}^{2}}\right)\right], \tag{10}\] where \(m_{W}\) and \(m_{\ell}\) are, respectively, the \(W\) boson and charged lepton masses (\(\ell=e,\mu,\tau\)). Numerically, we have \[\langle r_{\nu_{e}}^{2}\rangle_{\rm SM} =-0.83\times 10^{-32}\,{\rm cm}^{2}, \tag{11}\] \[\langle r_{\nu_{\mu}}^{2}\rangle_{\rm SM} =-0.48\times 10^{-32}\,{\rm cm}^{2},\] (12) \[\langle r_{\nu_{\tau}}^{2}\rangle_{\rm SM} =-0.30\times 10^{-32}\,{\rm cm}^{2}. \tag{13}\] These diagonal charge radii of the flavor neutrinos are the only charge radii that exist in the Standard Model, where neutrino flavor is conserved. In BSM theories neutrinos can have also the transition charge radii \(\langle r_{\nu_{e\mu}}^{2}\rangle\), \(\langle r_{\nu_{e\tau}}^{2}\rangle\), and \(\langle r_{\nu_{\mu\tau}}^{2}\rangle\) which induce flavor transitions in scattering processes. We consider the general case with both diagonal and transition charge radii. As for the electric charges discussed in Subsection II.3, the diagonal charge radii contribute to the E\(\nu\)ES cross section coherently with the Standard Model interaction, because both conserve the helicity of ultrarelativistic neutrinos and neutrino flavors, whereas the transition charge radii contribute incoherently. 
Therefore, the total E\(\nu\)ES cross section is given by \[\frac{d\sigma^{\rm SM+CR}_{\nu_{\ell}-{\rm Xe}}}{dT_{\rm e}}=\left(\frac{d \sigma^{\rm SM+CR}_{\nu_{\ell}-{\rm Xe}}}{dT_{\rm e}}\right)_{\langle r_{\nu_ {\ell}^{2}}^{2}\rangle}+\sum_{\ell^{\prime}\neq\ell}\left(\frac{d\sigma^{\rm CR }_{\nu_{\ell}-{\rm Xe}}}{dT_{\rm e}}\right)_{\langle r_{\nu_{e}\ell^{\prime}}^ {2}\rangle}, \tag{14}\] where \(\left(d\sigma^{\rm SM+CR}_{\nu_{\ell}-{\rm Xe}}/dT_{\rm e}\right)_{\langle r_{ \ell}^{2}\rangle}\) is given by Eq. (3) with \[g_{V}^{\nu_{\ell}}\to g_{V}^{\nu_{\ell}}+\frac{\sqrt{2}\pi\alpha}{3G_{\rm F}} \,\langle r_{\nu_{\ell\ell^{\prime}}}^{2}\rangle, \tag{15}\] and \[\left(\frac{d\sigma^{\rm CR}_{\nu_{\ell}-{\rm Xe}}}{dT_{\rm e}}\right)_{ \langle r_{\ell\ell^{\prime}}^{2}\rangle}=Z^{\mathcal{A}}_{\rm eff}(T_{e})\, \frac{\pi\alpha^{2}m_{e}}{9}\left[1+\left(1-\frac{T_{e}}{E_{\nu}}\right)^{2}- \frac{m_{e}T_{e}}{E_{\nu}^{2}}\right]|\langle r_{\nu_{\ell\ell^{\prime}}}^{2} \rangle|^{2}, \tag{16}\] for \(\ell^{\prime}\neq\ell\). As for the electric charges discussed in Subsection II.3, E\(\nu\)ES gives full information on the diagonal charge radii \(\langle r_{\nu_{\ell}}^{2}\rangle\) of the flavor neutrinos, including their sign, and only information on the absolute values of the transition electric charges \(\langle r_{\nu_{\ell\ell^{\prime}}}^{2}\rangle\). ## III Data analysis ### Current experiments In the analyses presented in this paper we include data from LZ [4], XENONnT [5] and PandaX-4T [3]. In this section we discuss the details of each analysis. For an experiment \(X\) the overall predicted number of events in a given energy-bin \(k\) is given by \[R_{k}^{X}=R_{k}^{E\nu ES}+\sum_{i}R_{k}^{i}\,, \tag{17}\] where \(R_{k}^{E\nu ES}\) is the contribution from solar neutrinos which elastically scatter on electrons, while \(R_{k}^{i}\) are the remaining background components. We have extracted the contributions \(R_{k}^{i}\) for the different experiments from Refs. [3; 5; 28] and \(R_{k}^{E\nu ES}\) is obtained from \[R_{k}^{E\nu ES}=N\ \int_{T_{e}^{k}}^{T_{e}^{k+1}}dT_{e}\ \int_{0}^{\infty}dT_{e}^{ \prime}\ R(T_{e},T_{e}^{\prime})\ A(T_{e}^{\prime})\sum_{i=pp,^{\prime}\text{ Be}}\int_{E\text{$\rho$}^{\text{max}}}^{E_{\nu,i}^{\text{max}}}dE_{\nu}\ \sum_{\ell}\ \Phi_{\nu_{\ell}}^{i}(E_{\nu})\ \frac{d \sigma_{\nu_{\ell}}}{dT_{e}^{\prime}}\,. \tag{18}\] In this expression \(T_{e}\) and \(T_{e}^{\prime}\) are the reconstructed and true electron recoil energies, while \(E_{\nu}\) is the neutrino energy. The electron-neutrino cross section for a given neutrino flavor \(\nu_{\ell}\) is given by \(d\sigma_{\nu_{\ell}}/dT_{e}^{\prime}\) and \(\Phi_{\nu_{\ell}}^{i}(E_{\nu})\) are the solar neutrino fluxes from Eq. (1). The minimal neutrino energy to produce an electron recoil of \(T_{e}^{\prime}\) is given by \(E_{\nu}^{\text{min}}=(T_{e}^{\prime}+\sqrt{2m_{e}T_{e}^{\prime}+T_{e}^{\prime 2 }})/2\), while the maximal neutrino energy \(E_{\nu,i}^{\text{max}}\) depends on the production process, indicated with the index \(i\). Next, \(R(T_{e},T_{e}^{\prime})\) and \(A(T_{e}^{\prime})\) are the detector resolution and efficiency and are different for all experiments. Finally, \(N\) is a normalization constant which takes into account the exposure and detector volume. Also this quantity is different for each experiment. For the analysis of the experiments under consideration we use the detector efficiencies from Refs. [3; 5; 29]. 
For the energy resolution at LZ we use the same function that has been used in Refs. [18; 30]. In the case of XENONnT we use the resolution function of Ref. [31] and for PandaX-4T we use the one from Ref. [3]. The predicted number of events in Eq. (17) has to be compared with the data \(D^{X}\) accumulated in each experiment. We use the data presented in Fig. 6 of Ref. [4] for LZ1 and the data in Fig. 3 of Ref. [3] for PandaX-4T. For XENONnT we use the data from Fig. 4 (5) of Ref. [5] for recoil energies above (below) 30 keV. Footnote 1: We do not use the timing data from Ref. [29], which could result only in a 5% improvement in the bounds. Due to the low statistics in some of the bins, for LZ and PandaX-4T we use the Poissonian least-squares function \[\chi_{X}^{2}=\min_{\vec{\alpha},\vec{\beta}}\left\{2\left(\sum_{k}R_{k}^{X}- D_{k}^{X}+D_{k}^{X}\ \log D_{k}^{X}/R_{k}^{X}\right)+\sum_{i}(\alpha_{i}/\sigma_{\alpha_{i}})^{2}+ \sum_{i}(\beta_{i}/\sigma_{\beta_{i}})^{2}\right\}\,, \tag{19}\] where \(\alpha_{i}\) are normalization constants multiplied to each single background component in Eq. (17). For LZ, the uncertainties \(\sigma_{\alpha_{i}}\) are extracted from Ref. [28]. For PandaX-4T, they are taken from Ref. [3]. Note that some of them are left to vary freely in the analysis. Also included are uncertainty coefficients \(\beta_{i}\) of the solar neutrino fluxes, with uncertainties \(\sigma_{\beta_{i}}\) taken from Ref. [12]. In the case of XENONnT data, we use instead \[\chi_{\text{XENONnT}}^{2}=\min_{\vec{\alpha},\vec{\beta}}\left\{\sum_{k}\left( \frac{R_{k}^{\text{XENONnT}}-D_{k}^{\text{XENONnT}}}{\sigma_{k}}\right)^{2}+ \sum_{i}(\alpha_{i}/\sigma_{\alpha_{i}})^{2}+\sum_{i}(\beta_{i}/\sigma_{\beta_{ i}})^{2}\right\}\,, \tag{20}\] where the uncertainties in each bin \(\sigma_{k}\) are extracted from Ref. [5]. The remaining components are equivalent to the corresponding ones for LZ and PandaX-4T. We also perform a combined analysis of all three experiments. In this case we correlate the uncertainties regarding the neutrino flux among the experiments. In addition, several of the background components are common to at least two of the three experiments. In these cases we also correlate the normalizations of the background components. While such a combined analysis has not been performed yet, we have noticed that this correlated analysis produces only slightly tighter bounds than simply summing up the individual \(\chi^{2}\). ### DARWIN sensitivity We also compute the sensitivity to electromagnetic neutrino properties for the future experiment DARWIN. The calculation of the event rate at DARWIN is basically the same as for the current experiments, given in Eqs. (17) and (18), but we also include the contributions from solar N, O and \(pep\) neutrinos. Note, however, that their contribution is mostly negligible in comparison with some of the background contributions, as shown in Fig. 1 of Ref. [6]. We include them, nevertheless, since we use the full spectrum as shown in Ref. [6]. The energy dependence of the neutrino oscillation probability is taken into account for these higher-energy neutrinos. For the individual background components we use the spectra given in Ref. [6], which need to be normalized to the considered exposure. Due to lack of more detailed information, we use the same resolution function and detector efficiency as for XENONnT. We assume the efficiency to remain constant for \(T_{e}>T_{e,\mathrm{max}}^{\mathrm{XENONnT}}\). 
With these assumptions, we are able to reproduce the E\(\nu\)ES spectra for all five neutrino species in Fig. 1 of Ref. [6], which validates our choice of efficiency and resolution. As was done in Ref. [6], we consider different scenarios for the DARWIN sensitivity. First we assume an exposure of 30 ty. Next, we assume an exposure of 300 ty. Finally, we assume an exposure of 300 ty again, but also that the \({}^{136}\)Xe background component is depleted to 1%. When generating the mock data, always compatible with the Standard Model, we use 51 logarithmically spaced bins between 1 and 1500 keV recoil energy. Note that the spectrum at higher energies is not sensitive to any BSM effect considered in this paper, because the E\(\nu\)ES rate is much smaller than some of the background rates. We still use the full spectrum, since the inclusion of events at high energies can help to control the effect of background uncertainties. ## IV Results In this section we present the bounds that can be obtained from current data and the sensitivity at DARWIN to neutrino electromagnetic properties. ### Neutrino magnetic moments We first discuss neutrino magnetic moments. Our results are presented in Fig. 1 and Table 1. In the first column of Table 1 we present the bounds on the magnetic moment \(\mu_{\nu_{e}}\). In the second we present the results for \(\mu_{\nu_{\mu}}\) and \(\mu_{\nu_{\tau}}\), which are the same in our analysis. Slightly different bounds for the two moments have been found, e.g., in Refs. [18; 30], where non maximal atmospheric mixing (\(\sin^{2}\theta_{23}\neq 0.5\)) was assumed to calculate the neutrino oscillation probability. Since we chose maximal atmospheric mixing, the expected number of events is the same for a non-zero \(\mu_{\nu_{\mu}}\) or \(\mu_{\nu_{\tau}}\), hence producing the same bound. Finally, we consider the case of a single effective magnetic moment, \(\mu_{\nu}^{eff}=\mu_{\nu_{e}}=\mu_{\nu_{\mu}}=\mu_{\nu_{\tau}}\). Our analysis of PandaX-4T updates the analysis of Ref. [32], where the Panda collaboration analyzed the data from a smaller version of the current detector. In any case, due to the larger background rates observed in this experiment the bounds are always weaker than those obtained from LZ or XENONnT. It should be noted that the LZ bound obtained in our analysis is a bit weaker than those obtained in Refs. [18; 30]. This is due to the more conservative use of systematic uncertainties in our analysis, since we treat each background component individually with an individual nuisance parameter. Also our XENONnT bound is slightly weaker than that from the collaboration [5] and the one from Ref. [33]. This is not worrying since we used a different statistical method and since our bound lies in any case within the XENONnT sensitivity band, see Ref. [5]. Also in the case of XENONnT, we are more conservative in the treatment of systematic uncertainties than other phenomenological analyses [30; 33]. The bounds obtained from the combined analysis, which at 90% confidence level read \[|\mu_{\nu_{e}}| < 10.3\times 10^{-12}\ \mu_{B}\,, \tag{21}\] \[|\mu_{\nu_{\mu/\tau}}| < 15.6\times 10^{-12}\ \mu_{B}\,,\] (22) \[|\mu_{\nu}^{eff}| < 7.5\times 10^{-12}\ \mu_{B}\,, \tag{23}\] Figure 1: \(\Delta\chi^{2}\) profiles for the different neutrino magnetic moments obtained from the analyses of the data of PandaX (magenta), LZ (blue), XENONnT (orange) and from the combination of all experiments (green). 
Also shown is the sensitivity range for DARWIN (red shaded), where the worst (best) case scenario corresponds to the analysis with 30 ty (300 ty) exposure without (with) depleted \({}^{136}\)Xe background. The dashed line corresponds to 300 ty exposure without depleted \({}^{136}\)Xe background. are among the strongest laboratory bounds on neutrino magnetic moments [1; 2; 7; 16]. They are about one or two orders of magnitude stronger than bounds from COHERENT or Dresden-II data [34; 35; 36; 37; 38], or Borexino data [39]. They are also stronger than older bounds reported in the literature [40; 41; 42; 43; 44; 45; 46], as the best bound on \(\mu_{\nu_{e}}\) obtained in the GEMMA experiment (\(|\mu_{\nu_{e}}|<2.9\times 10^{-11}\,\mu_{B}\) at 90% C.L.) [19], the best bound on \(\mu_{\nu_{\mu}}\) obtained in the LSND experiment (\(|\mu_{\nu_{\mu}}|<6.8\times 10^{-10}\,\mu_{B}\) at 90% C.L.) [44], and the best bound on \(\mu_{\nu_{\tau}}\) obtained in the DONUT experiment (\(|\mu_{\nu_{\tau}}|<3.9\times 10^{-7}\,\mu_{B}\) at 90% C.L.) [45]. As also shown in Fig. 1 and Table 1, these bounds can be further improved by about a factor of 2-5 by the DARWIN experiment (see also the sensitivity analysis in Ref. [47]) which would make dark matter direct detection experiments competitive with astrophysical observations, which constrain the neutrino magnetic moments below a few \(10^{-12}\,\mu_{B}\)[48; 49; 50; 51; 52]. ### Neutrino millicharge Dark matter direct detection experiments are sensitive to all of the neutrino millicharges. Note that due to the choice \(\sin^{2}\theta_{23}=0.5\) the bounds on the muon and tau neutrino millicharges are going to be the same for the same reason as explained in the previous section. The results from our analyses are presented in Figs. 2 and 3 and in Table 2. An interesting feature can be seen in Fig. 2. While the preferred regions of the current experiments have circular shape (left panel), the region of DARWIN does not (right panel). The reason for these shapes is the following: If one substitutes Eq. (8) into Eq. (3), one sees that the cross section depends on terms which are proportional to \(q_{\nu_{\alpha}}\) and to \(q_{\nu_{\alpha}}^{2}\). In the case of \(q_{\nu_{\mu}/\tau}\), the dominating new physics contribution to the cross section comes always from the terms which are proportional to \(q_{\nu_{\mu}/\tau}^{2}\). Also in the case of electron neutrinos the dominating contribution comes from the terms which are proportional to \(q_{\nu_{e}}^{2}\) when considering current experiments. Therefore, no cancellations can occur among the new physics parameters and the preferred regions obtain the circular shape. However, in the case of DARWIN, which is sensitive to smaller millicharges, the contributions of both terms are of similar strength for electron neutrinos. Hence, correlations can appear among the parameters, which is reflected in the shape of the allowed regions. In our analyses we have taken the possible correlations among the parameters into account. As can be seen in Figs. 2 and 3, again the PandaX-4T bound is weaker than the one from LZ, while this one is slightly weaker than that from XENONnT. Due to the conservative approach of background treatment, our bounds are again slightly weaker than those obtained in Refs. [18; 30]. 
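To give a quantitative feeling for the competition between the terms linear and quadratic in \(q_{\nu_{e}}\) discussed above, the short Python sketch below (purely illustrative, not part of the analysis code used here) evaluates the millicharge-induced shift of \(g_{V}^{\nu_{e}}\) from Eq. (8) as a function of the recoil energy, using standard values of the physical constants and the value of \(\sin^{2}\vartheta_{W}\) quoted in Sec. II; the millicharge value \(q=10^{-13}\,e\) and the function names are illustrative choices.

```python
import numpy as np

# Physical constants in natural units (GeV)
G_F = 1.1663787e-5        # Fermi constant [GeV^-2]
m_e = 0.51099895e-3       # electron mass [GeV]
alpha = 1.0 / 137.036     # fine-structure constant
sin2_w = 0.23863          # weak mixing angle (value quoted in the text)

g_V_e = 2.0 * sin2_w + 0.5    # SM vector coupling for nu_e, Eq. (4)

def millicharge_shift(T_e_keV, q_over_e):
    """Shift of g_V from Eq. (8): -sqrt(2) pi alpha q / (G_F m_e T_e)."""
    T_e = T_e_keV * 1e-6                      # keV -> GeV
    return -np.sqrt(2.0) * np.pi * alpha * q_over_e / (G_F * m_e * T_e)

q = 1e-13                                     # millicharge in units of e (illustrative)
for T in [1.0, 3.0, 10.0, 30.0]:              # recoil energies in keV
    shift = millicharge_shift(T, q)
    print(f"T_e = {T:5.1f} keV: g_V = {g_V_e:.3f}, shift = {shift:+.3f}")
```

For \(q_{\nu_{e}}\) of the order of \(10^{-13}\,e\) the shift at \(T_{e}=1\) keV evaluates to roughly \(-0.5\), comparable to \(g_{V}^{\nu_{e}}\approx 0.98\) itself, so neither the linear nor the quadratic term can be neglected, consistent with the non-circular shape of the regions discussed above.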
\begin{table} \begin{tabular}{|c||c|c|c|} \hline Experiment & \(|\mu_{\nu_{e}}|\)\([10^{-12}\mu_{B}]\) & \(|\mu_{\nu_{\mu/\tau}}|\)\([10^{-12}\mu_{B}]\) & \(|\mu_{\nu}^{eff}|\)\([10^{-12}\mu_{B}]\) \\ \hline PandaX-4T & \(<38.7\) & \(<58.6\) & \(<28.3\) \\ \hline LZ & \(<17.1\) & \(<25.9\) & \(<12.5\) \\ \hline XENONnT & \(<11.5\) & \(<17.5\) & \(<8.4\) \\ \hline combined & \(<10.3\) & \(<15.6\) & \(<7.5\) \\ \hline \hline DARWIN 30 ty & \(<4.0\) & \(<6.0\) & \(<2.9\) \\ \hline DARWIN 300 ty & \(<2.3\) & \(<3.5\) & \(<1.7\) \\ \hline DARWIN 300 ty depl. & \(<2.1\) & \(<3.2\) & \(<1.5\) \\ \hline \end{tabular} \end{table} Table 1: The 90% bounds (\(\Delta\chi^{2}=2.71\)) that can be obtained on the different neutrino magnetic moments. At 90% confidence level we obtain from the combined analysis: \[q_{\nu_{e}} \in (-2.0,7.0)\times 10^{-13}\ e\,, \tag{24}\] \[q_{\nu_{\mu/\tau}} \in (-7.5,7.3)\times 10^{-13}\ e\,,\] (25) \[|q_{\nu_{e\mu/\tau}}| < 4.1\times 10^{-13}\ e\,,\] (26) \[|q_{\nu_{\mu\tau}}| < 5.2\times 10^{-13}\ e\,. \tag{27}\] Our bound on \(q_{\nu_{e}}\) is stronger than the bounds from the analyses of the data from TEXONO (\(|q_{\nu_{e}}|<1.0\times 10^{-12}\,e\) at 90% C.L.) [53; 54; 55; 41] or GEMMA (\(|q_{\nu_{e}}|<1.5\times 10^{-12}\,e\) at 90% C.L.) [56; 19]. Regarding \(q_{\nu_{\mu/\tau}}\), the bound is more than one order of magnitude more stringent than that obtained from solar neutrinos by the XMASS collaboration (\(|q_{\nu_{\mu/\tau}}|<1.1\times 10^{-11}\,e\) at 90% C.L.) [57]. In the case of the non-diagonal millicharges, we improve the DRESDEN-II CE\(\nu\)NS bound [34] on \(|q_{\nu_{e\mu/\tau}}|\) by more than one order of magnitude and we improve the CE\(\nu\)NS COHERENT bound [34] on \(|q_{\nu_{\mu\tau}}|\) by about three orders of magnitude. All of these bounds will be further improved significantly by the DARWIN experiment, up to a factor of more than 20 in the most optimistic scenario in the case of \(q_{\nu_{e}}\), making dark matter direct detection experiments again competitive with astrophysical observations which constrain the neutrino millicharges below a few \(10^{-14}\,e\)[58]. ### Neutrino charge radius Finally, we can use the data to place bounds on the neutrino charge radii. Regarding the charge radii, similar correlations as discussed in the context of millicharges have to be taken into account, as can be seen in Fig. 4. The results of the analyses of PandaX-4T, LZ and XENONnT and from the combined analysis are shown in Table 3, in the left panels of Figs. 4 and 5 for the diagonal charge radii, and Fig. 6 for the non-diagonal ones. As can be seen in Table 3, the bound on \(\langle r_{\nu_{e}}^{2}\rangle\) is stronger than that on \(\langle r_{\nu_{\mu/\tau}}^{2}\rangle\). Unfortunately, current dark matter direct detection experiments are not competitive Figure 2: Left: The 90% C.L. (2 d.o.f.) allowed regions in the \(q_{\nu_{e}}-q_{\nu_{\mu}}\)-plane from PandaX (magenta), LZ (blue), XENONnT (orange) and from the combination of all experiments (green) in comparison with the DARWIN sensitivity assuming 30 ty of exposure. Right: The 90% C.L. (2 d.o.f.) expected sensitivity of DARWIN for different exposures and background assumptions. The star denotes the SM value. with the bounds from other experiments [34; 42; 59] for neither of the two charge radii and they are far from the Standard Model values in Eqs. (11)-(13). Also the bounds on the non-diagonal parameters are weaker than those obtained in CE\(\nu\)NS experiments [34]. 
However, we have found that DARWIN will improve significantly the allowed region of parameter space. In the left panel of Fig. 4 we show the expected allowed region from DARWIN for a 30 ty exposure. As can be seen, DARWIN could significantly reduce the volume of the allowed parameter space. With this relatively small exposure it would be possible to set the strongest upper limit on \(\langle r_{\nu_{e}}^{2}\rangle\), although the lower limit would remain weaker than the most stringent current bound obtained in the TEXONO experiment (\(\langle r_{\nu_{e}}^{2}\rangle\in(-4.2,6.6)\times 10^{-32}\) cm\({}^{2}\) at 90% C.L.) [59]. The size of the sensitivity region shrinks when considering a larger exposure or the better background model, as shown in the right panel of Fig. 4. In the analysis with a 300 ty exposure, values around \(\langle r_{\nu_{e}}^{2}\rangle\approx-10\times 10^{-32}\) cm\({}^{2}\) and \(\langle r_{\nu_{\mu/\tau}}^{2}\rangle\approx-35\times 10^{-32}\) cm\({}^{2}\) become more disfavored, but there is still a secondary minimum present, as can be seen in the right panels of Figs. 4 and 5. The secondary solution remains even when considering the better background model. The bounds that could be Figure 3: \(\Delta\chi^{2}\) profiles for the different neutrino millicharges obtained from the analysis of data of PandaX (magenta), LZ (blue), XENONnT (orange) and from the combination of all experiments (green). Also shown is the sensitivity range for DARWIN (red shaded), where the worst (best) case scenario corresponds to the analysis with 30 ty (300 ty) exposure without (with) depleted \({}^{136}\)Xe background. The dashed line corresponds to 300 ty exposure without depleted \({}^{136}\)Xe background. obtained at 90% confidence level are \[\langle r_{\nu_{e}}^{2}\rangle \in (-45.3,0.6)\times 10^{-32}\ \mathrm{cm}^{2},\ \mathrm{DARWIN\ 30\ \mathrm{ty}}\,, \tag{28}\] \[\langle r_{\nu_{e}}^{2}\rangle \in \{(-32.9,-14.8)\ \&\ (-3.6,-0.2)\}\times 10^{-32}\ \mathrm{cm}^{2},\ \mathrm{DARWIN\ 300\ \mathrm{ty}}\,,\] (29) \[\langle r_{\nu_{e}}^{2}\rangle \in \{(-29.1,-20.7)\ \&\ (-1.6,-0.3)\}\times 10^{-32}\ \mathrm{cm}^{2},\ \mathrm{DARWIN\ 300\ \mathrm{ty}},\,\mathrm{depleted}\,. \tag{30}\] Thus, DARWIN 300 try could improve the current best limit on \(\langle r_{\nu_{e}}^{2}\rangle\) obtained in the TEXONO experiment (\(\langle r^{2}\rangle_{\nu_{e}}\in(-4.2,6.6)\times 10^{-32}\) at 90% C.L.) [59] (taking into account that the secondary solution is excluded by TEXONO and other bounds; see, e.g., Refs. [1; 2; 7; 16]) and could indicate that \(\langle r_{\nu_{e}}^{2}\rangle\) is negative. This would be the first indication of a non-zero value of a neutrino charge radius, in agreement with the Standard Model prediction (11). In the lower panels of Fig. 5 we show the sensitivity of DARWIN to the charge radius \(\langle r_{\nu_{\mu}}^{2}\rangle\). 
Although not as strong as for \(\langle r_{\nu_{e}}^{2}\rangle\), DARWIN could provide strong bounds on this quantity, which \begin{table} \begin{tabular}{|c||c|c|c|c|} \hline Experiment & \(q_{\nu_{e}}\)\([10^{-13}\ e]\) & \(q_{\nu_{\mu}}\)\([10^{-13}\ e]\) & \(|q_{\nu_{e\mu/\pi}}|\)\([10^{-13}\ e]\) & \(|q_{\nu_{\mu\pi}}|\)\([10^{-13}\ e]\) \\ \hline \hline \(\mathrm{PandaX\text{-}4T}\) & \((-12.6,16.4)\) & \((-22.3,22.2)\) & \(<12.2\) & \(<15.7\) \\ \hline \(\mathrm{LZ}\) & \((-4.6,9.9)\) & \((-11.5,11.3)\) & \(<6.3\) & \(<8.1\) \\ \hline \(\mathrm{XENONnT}\) & \((-2.5,7.4)\) & \((-8.1,8.0)\) & \(<4.4\) & \(<5.7\) \\ \hline combined & \((-2.0,7.0)\) & \((-7.5,7.3)\) & \(<4.1\) & \(<5.2\) \\ \hline \hline \(\mathrm{DARWIN\ 30\ \mathrm{ty}}\) & \((-0.4,1.0)\) & \((-4.1,4.1)\) & \(<2.3\) & \(<2.9\) \\ \hline \(\mathrm{DARWIN\ 300\ \mathrm{ty}}\) & \((-0.2,0.4)\) & \((-2.4,2.5)\) & \(<1.3\) & \(<1.7\) \\ \hline \(\mathrm{DARWIN\ 300\ \mathrm{ty}}\) & \((-0.1,0.3)\) & \((-2.2,2.3)\) & \(<1.2\) & \(<1.6\) \\ \hline \end{tabular} \end{table} Table 2: The 90% bounds (\(\Delta\chi^{2}=2.71\)) that can be obtained on the different neutrino millicharges. Figure 4: Left: The 90% C.L. (2 d.o.f.) allowed regions in the \(\langle r_{\nu_{e}}^{2}\rangle-\langle r_{\nu_{\mu}}^{2}\rangle\)-plane from \(\mathrm{PandaX}\) (magenta), \(\mathrm{LZ}\) (blue), \(\mathrm{XENONnT}\) (orange) and from the combination of all experiments (green) in comparison with the DARWIN sensitivity assuming 30 ty of exposure. Right: The 90% C.L. (2 d.o.f.) expected sensitivity of DARWIN in the \(\langle r_{\nu_{e}}^{2}\rangle-\langle r_{\nu_{\mu}}^{2}\rangle\)-plane for different exposures and background assumptions. The star denotes the SM value, while the black circle denotes the position of a secondary minimum found in the analysis. read at 90% confidence level \[\langle r_{\nu_{\mu}}^{2}\rangle \in (-62.9,30.4)\times 10^{-32}\ \mathrm{cm}^{2},\ \mathrm{DARWIN}\ 30\ \mathrm{ty}\,, \tag{31}\] \[\langle r_{\nu_{\mu}}^{2}\rangle \in \{(-59.5,-44.6)\ \&\ (-19.9,11.7)\}\times 10^{-32}\ \mathrm{cm}^{2},\ \mathrm{DARWIN}\ 300\ \mathrm{ty}\,,\] (32) \[\langle r_{\nu_{\mu}}^{2}\rangle \in \{(-57.8,-51.4)\ \&\ (-8.6,5.7)\}\times 10^{-32}\ \mathrm{cm}^{2},\ \mathrm{DARWIN}\ 300\ \mathrm{ty},\,\mathrm{depleted}\,. \tag{33}\] Such bounds would be complementary to the bounds obtained in other experiments [60; 61; 34; 42]. Note that the secondary minimum obtained in our DARWIN analyses requires both charge radii to be different from the Standard Model values. We could eliminate the secondary minimum by using external constraints on \(\langle r_{\nu_{e}}^{2}\rangle\) (e.g the TEXONO bound [59]) or \(\langle r_{\nu_{\mu}}^{2}\rangle\) (e.g. the one from Ref. [42]). Regarding the transition charge radii, the DARWIN sensitivity is shown in comparison with the current bounds in Fig. 6 and in the last two columns of Table 3. In this case the bounds that DARWIN could set assuming an exposure of 30 ty are similar in strength to those obtained with CE\(\nu\)NS in Ref. [34]. Only with a larger exposure and a better background model the current bounds could be improved by a factor of about 2-3. Figure 5: \(\Delta\chi^{2}\) profiles for the diagonal charge radii obtained from the analyses of current data (left panels) and the DARWIN sensitivity (right panels). For comparison, we also show in the left panels the DARWIN-sensitivity corresponding to an exposure of 30 ty. 
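To make the spectral shapes behind these bounds concrete, the rough numerical sketch below evaluates the Standard Model E\(\nu\)ES cross section of Eq. (3) together with the magnetic-moment term of Eq. (6) for \(\nu_{e}\) at a fixed neutrino energy. It is only an illustration under simplifying assumptions: it replaces the tabulated \(Z_{\rm eff}^{\rm Xe}(T_{e})\) of Ref. [18] by the constant value 54, ignores the detector response and efficiency of Sec. III, fixes a representative neutrino energy of 300 keV, and uses an illustrative magnetic moment of \(10^{-11}\,\mu_{B}\); it is meant only to display the characteristic \(1/T_{e}\) rise of the magnetic-moment contribution near threshold.

```python
import numpy as np

# Constants in natural units (GeV); (hbar*c)^2 converts GeV^-2 to cm^2
G_F = 1.1663787e-5            # [GeV^-2]
m_e = 0.51099895e-3           # [GeV]
alpha = 1.0 / 137.036
sin2_w = 0.23863
HBARC2 = 3.8938e-28           # [GeV^2 cm^2]
Z_EFF = 54.0                  # crude constant stand-in for Z_eff^Xe(T_e) of Ref. [18]

def dsigma_sm(E_nu, T_e, g_V, g_A):
    """Standard Model cross section of Eq. (3) with constant Z_eff, in cm^2/GeV."""
    pref = Z_EFF * G_F**2 * m_e / (2.0 * np.pi)
    bracket = ((g_V + g_A)**2
               + (g_V - g_A)**2 * (1.0 - T_e / E_nu)**2
               - (g_V**2 - g_A**2) * m_e * T_e / E_nu**2)
    return pref * bracket * HBARC2

def dsigma_mm(E_nu, T_e, mu_over_muB):
    """Magnetic-moment term of Eq. (6) with constant Z_eff, in cm^2/GeV."""
    pref = Z_EFF * np.pi * alpha**2 / m_e**2
    return pref * (1.0 / T_e - 1.0 / E_nu) * mu_over_muB**2 * HBARC2

E_nu = 0.3e-3                         # 300 keV, representative low-energy solar neutrino
g_V, g_A = 2.0 * sin2_w + 0.5, 0.5    # nu_e couplings, Eq. (4)
mu = 1e-11                            # magnetic moment in Bohr magnetons (illustrative)

for T_keV in [1.0, 2.0, 5.0, 10.0, 20.0]:
    T = T_keV * 1e-6                  # keV -> GeV
    sm, mm = dsigma_sm(E_nu, T, g_V, g_A), dsigma_mm(E_nu, T, mu)
    print(f"T_e = {T_keV:4.1f} keV: SM = {sm:.2e}, MM = {mm:.2e} cm^2/GeV")
```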
Figure 6: \(\Delta\chi^{2}\) profiles for the non-diagonal charge radii obtained from the analyses of current data (left panels) and the DARWIN sensitivity (right panels). For comparison, we also show in the left panels the DARWIN sensitivity corresponding to an exposure of 30 ty.

\begin{table} \begin{tabular}{|c||c|c|c|c|} \hline Experiment & \(\langle r_{\nu_{e}}^{2}\rangle\) \([10^{-32}\ \mathrm{cm}^{2}]\) & \(\langle r_{\nu_{\mu/\tau}}^{2}\rangle\) \([10^{-32}\ \mathrm{cm}^{2}]\) & \(|\langle r_{\nu_{e\mu/\tau}}^{2}\rangle|\) \([10^{-32}\ \mathrm{cm}^{2}]\) & \(|\langle r_{\nu_{\mu\tau}}^{2}\rangle|\) \([10^{-32}\ \mathrm{cm}^{2}]\) \\ \hline PandaX-4T & \((-134.5,48.2)\) & \((-135.3,141.3)\) & \(<76.2\) & \(<97.8\) \\ \hline LZ & \((-110.4,26.4)\) & \((-101.8,105.5)\) & \(<57.1\) & \(<73.3\) \\ \hline XENONnT & \((-113.7,34.1)\) & \((-112.9,112.3)\) & \(<62.2\) & \(<79.9\) \\ \hline combined & \((-99.5,12.8)\) & \((-82.2,88.7)\) & \(<47.3\) & \(<60.7\) \\ \hline \hline DARWIN 30 ty & \((-45.3,0.6)\) & \((-62.9,30.4)\) & \(<28.6\) & \(<37.9\) \\ \hline DARWIN 300 ty & \((-32.9,-14.8)\&(-3.6,-0.2)\) & \((-59.5,-44.6)\&(-19.9,11.7)\) & \(<28.6\) & \(<37.9\) \\ \hline DARWIN 300 ty depl. & \((-29.1,-20.7)\) & \((-57.8,-51.4)\&(-8.6,5.7)\) & \(<12.0\) & \(<15.7\) \\ \hline \end{tabular} \end{table} Table 3: The 90% bounds (\(\Delta\chi^{2}=2.71\)) that can be obtained on the different neutrino charge radii.

## V Conclusions

In this paper we have analyzed the electron recoil data from the dark matter direct detection experiments PandaX-4T, LZ and XENONnT to place constraints on neutrino electromagnetic properties. We also explored the sensitivity of the future experiment DARWIN. We implemented a more realistic treatment of systematic uncertainties than in other phenomenological analyses previously performed in the literature, following more closely what is done by the collaborations. We have set strong limits on the neutrino magnetic moments and on the neutrino millicharges. These are still weaker than those from astrophysical probes. However, we have shown that the next-generation experiment DARWIN will put the sensitivity of dark matter direct detection experiments into the same ballpark. We have shown that in the case of the neutrino charge radii current dark matter direct detection experiments are not competitive with other types of experiments. However, this situation will change dramatically for DARWIN. Under ideal circumstances a measurement of \(\langle r_{\nu_{e}}^{2}\rangle\) could be possible for the first time. An interesting feature of dark matter direct detection experiments which can detect solar neutrinos is that part of the neutrinos arrives as \(\nu_{\tau}\), hence allowing us to set bounds also on the \(\nu_{\tau}\)-related quantities, which is not possible with other experiments. Summarizing, we have shown that current and, particularly, future dark matter direct detection experiments provide a powerful tool to test neutrino electromagnetic properties.

###### Acknowledgements.

We are very thankful to Dimitris Papoulias for helpful discussions. C.G. and C.A.T. acknowledge support from the Departments of Excellence grant awarded by MIUR and the research grant TAsP (Theoretical Astroparticle Physics) funded by Istituto Nazionale di Fisica Nucleare (INFN).
2309.11929
Index Modulation-based Information Harvesting for Far-Field RF Power Transfer
While wireless information transmission (WIT) is evolving into its sixth generation (6G), maintaining terminal operations that rely on limited battery capacities has become one of the most paramount challenges for Internet-of-Things (IoT) platforms. In this respect, there exists a growing interest in energy harvesting technology from ambient resources, and wireless power transfer (WPT) can be the key solution towards enabling battery-less infrastructures referred to as zero-power communication technology. Indeed, eclectic integration approaches between WPT and WIT mechanisms are becoming a vital necessity to limit the need for replacing batteries. Beyond the conventional separation between data and power components of the emitted waveforms, as in simultaneous wireless information and power transfer (SWIPT) mechanisms, a novel protocol referred to as information harvesting (IH) has recently emerged. IH leverages existing WPT mechanisms for data communication by incorporating index modulation (IM) techniques on top of the existing far-field power transfer mechanism. In this paper, a unified framework for the IM-based IH mechanisms has been presented where the feasibility of various IM techniques are evaluated based on different performance metrics. The presented results demonstrate the substantial potential to enable data communication within existing far-field WPT systems, particularly in the context of next-generation IoT wireless networks.
M. Ertug Pihtili, Mehmet C. Ilter, Ertugrul Basar, Risto Wichman, Jyri Hämäläinen
2023-09-21T09:43:42Z
http://arxiv.org/abs/2309.11929v2
# Index Modulation-based Information Harvesting for Far-Field RF Power Transfer ###### Abstract While wireless information transmission (WIT) is evolving into its sixth generation (6G), maintaining terminal operations that rely on limited battery capacities has become one of the most paramount challenges for Internet-of-Things (IoT) platforms. In this respect, there exists a growing interest in energy harvesting technology from ambient resources, and wireless power transfer (WPT) can be the key solution towards enabling battery-less infrastructures referred to as zero-power communication technology. Indeed, eclectic integration approaches between WPT and WIT mechanisms are becoming a vital necessity to limit the need for replacing batteries. Beyond the conventional separation between data and power components of the emitted waveforms, as in simultaneous wireless information and power transfer (SWIPT) mechanisms, a novel protocol referred to as information harvesting (IH) has recently emerged. IH leverages existing WPT mechanisms for data communication by incorporating index modulation (IM) techniques on top of the existing far-field power transfer mechanism. In this paper, a unified framework for the IM-based IH mechanisms has been presented where the feasibility of various IM techniques are evaluated based on different performance metrics. The presented results demonstrate the substantial potential to enable data communication within existing far-field WPT systems, particularly in the context of next-generation IoT wireless networks. Wireless power transfer, information harvesting, green communication, index modulation, energy harvesting. ## I Introduction In the 6G era, IoT platforms are required to support a massive number of devices and to tackle energy demands in addition to existing challenges in data transmission. In this respect, existing discussions in standardization frameworks are underway in ambient IoT scenarios [1], where IoT devices are able to harvest ambient energy from external sources. In this direction, zero-power communication technology [2] has been emerged as the next logical step in future networks and promises information transmission where its energy is harvested from surrounding radio frequency energy, thus reducing the need for replacing batteries [3]. To reach greater sustainability and environmentally friendly architectures, the synergistic integration of various frameworks connecting energy harvesting with communication infrastructure appears to be an inevitable development in the near future. The origin of WPT can be traced back to the seminal work of N. Tesla [4]. His idea was based on radiating the energy through a medium (i.e., air) with the help of antenna elements. Nowadays, the WPT has been extensively studied for radio frequency (RF) signals [5, 6] and for visible light communications [7], which is an emerging 6G paradigm. Particularly, the WPT can be categorized into two main classes: near-field and far-field. The former applies power transfer over short distances by magnetic fields using inductive coupling or electric fields between the energy harvester and the power transmitter, which is out of the scope of this paper. The latter utilizes ambient RF waves in the surrounding indoor/outdoor environment resulting from existing RF transmission along with backscattering techniques and provides an ideal green power source for longer ranges [8]. 
In this regard, the far-field power transfer introduces viable solutions to tackle the battery depletion problem for wireless nodes/devices. Energy harvesting enables wireless nodes to scavenge energy from the environment without requiring battery replacement and tethering to electricity grids [9]. The ongoing surge in interest prevails across a broad spectrum of applications, which are remote environmental monitoring, consumer electronics, biomedical implants, building/home automation, Industry 4.0, and logistics solutions [10]. In the far-field power transfer, the primary emphasis has centered on enhancing RF-to-DC conversion efficiency where the rectennas, which combine rectifiers and antennas, are key elements in the energy harvester. Consequently, the development of efficient rectennas was a longstanding focus in earlier literature [8]. These investigations have highlighted the significance of tailoring rectenna designs to match precise operating frequencies and input power levels, thus presenting a significant challenge due to the inherent nonlinear characteristics in practical rectenna implementations [11]. Furthermore, the efficiency of RF energy harvesting also depends on the choice of a selected WPT waveform at the power transmitter. For instance, it was shown that deploying a multi-sine waveform increases the efficiency of RF-to-DC conversion, so the output DC power [12] for flat-fading channels. Aside from grid-based and lattice-based constellations used in conventional communication systems where low peak-to-average-power-ratio (PAPR) is typically desired due to nonlinearity in power amplifiers, real Gaussian signals, flash signaling, and linear frequency-modulated signals are preferred in earlier far-field power harvesting mechanisms [8] at the cost of higher complexity and power consumption in the transmitter due to their higher PAPR. Interestingly, the effects of different modulation techniques, such as amplitude shift keying (ASK), quadrature amplitude modulation (QAM), phase-shift keying (PSK), and frequency shift keying (FSK) on battery charging time, were measured and theoretically analyzed in [13]. Wireless powered communication networks (WPCN) were initially proposed for reducing the operational workload of battery replacement/recharging over RF-based energy harvesting systems [14]. The network elements in WPCN first harvest energy from the signals transmitted by RF energy sources and then consume this harvested energy for their upcoming communication periods. Rather than only radiating energy via RF signaling, incorporating data communication into wireless power transfer emerged during the last decade. This is mainly referred to as simultaneous wireless information and power transfer (SWIPT) over RF-based mechanisms [15] and, more recently, as simultaneous light information and power transfer (SLIPT) for optical-based ones have been introduced [16]. In those systems, the power and information components are mostly separable from each other over different domains, which can be the power domain (power splitting), time domain (time splitting), and space domain (antenna splitting) [17]. From this aspect, there exists a trade-off between information transfer and energy transfer based on the design preferences of the data and power transmission. There is an extensive body of work exploring these preferences within the trade-off [18], and SWIPT studies can also be found in commercial RFID systems, particularly in the context of communication from reader to RFID tags [19]. 
Additionally, the SWIPT mechanism has been extended into multi-user cases, referred to as multi-user (MU)-SWIPT, where a multi-antenna transmitter simultaneously transmits wireless information and energy through spatial multiplexing to multiple single-antenna receivers [20]. In order to secure ongoing data transmission in conjunction with a long-range wireless power transfer mechanism, it should be kept in mind that most devices are simple nodes, i.e., RedCap devices [21] at the level of several kbps, having limited computational capabilities, thus sending information and power simultaneously makes the SWIPT mechanism disadvantageous in practice due to different device capability requirements. The results presented in [22] demonstrate that power transfer efficiency also drops dramatically over the transmission distance in such systems when security enhancement was added to the design. To overcome this constraint, a distributed antenna-based SWIPT protocol was proposed in [23], and [24], where information bits are embedded in the tone index of multi-sine waveforms. Alternatively, an approach involving the creation of a modulation signal within a vacant resource block of communication in an orthogonal frequency division multiplexing (OFDM) block has also been introduced in [25]. Thus, there is a need for complementary solutions to combat this potential interception of confidential information at unintended users. Although conventional encryption techniques are useful in many cases for protecting data transmission, there are some use cases/protocols in which additional encryption cannot be applied [27]. In this respect, artificial noise (AN)-based physical layer security (PLS) techniques [28] turned into a powerful tool due to its ability to generate orthogonal noise through the channel state information between the transmitter and intended receiver. Considering its advantages against eavesdropping, the AN technique has been applied to multiple-input single-output (MISO) [29] and multiple-input multiple-output (MIMO) systems [30]. To address the power transfer efficiency losses and implementation concerns arising from varying sensitivities between the energy harvester and the data recipient mentioned above, an alternative approach has been proposed in [26], allowing for information transfer alongside existing wireless power transfer without disrupting the ongoing WPT mechanism between the power transmitter and the energy harvester. This approach is referred to as _Information Harvesting (IH)_, and Fig. 1 illustrates the differences in the integration of data and power elements in these scenarios compared with the SWIPT scenarios. As seen in the figure, the SWIPT employs a splitting mechanism, while IH focuses on harnessing the available WPT mechanism. This approach enables information transfer through WPT without compromising the range of the wireless power transfer service area, a common limitation in SWIPT systems. This is accomplished by encoding information into the indices of transmitter entities through the implementation of index modulation (IM) techniques. The generalized space shift keying (GSSK)-based IH mechanism and its performance in terms of secrecy capacity and average harvested power was first investigated in [31]. Then, [32] introduced a new IH mechanism that relies on quadrature spatial shift keying (QSSK). 
Fig. 1: The design aspects of (a) SWIPT and (b) Information Harvesting [26] mechanisms, which show how the data and power parts are integrated together.
Another superiority of the IM-based IH lies in its reduced pilot overhead since the MU-SWIPT system necessitates the channel state information (CSI) of both the information receiver (IR) and the energy harvester (EH) links. In contrast, the IH systems do not require CSI of the EH link, potentially leading to a reduced pilot overhead for channel estimation. Additionally, the IH mechanism distributes the total power among the active transmit antennas, a strategy arising from embedding information within the transmit entities of the system. This distinctive feature positions the IH mechanism as a promising solution, expanding the service area of the far-field WPT mechanism without excluding simple nodes, especially IoT devices in need of energy. In this paper, we introduce the IH architecture within a unified and comprehensive framework that encompasses a broader spectrum of IM techniques. We investigate the feasibility of these techniques while establishing unified performance criteria. In this aspect, our contributions can be summarized as follows: * To date, the IH mechanisms have been limited to space shift keying-based schemes and lacked a wider perspective. In this paper, IM-based IH mechanisms are presented in a comprehensive manner, covering not only space shift keying-based techniques with no modulated symbol but also spatial modulation ones. * Rather than concentrating on certain performance metrics only, this paper introduces a wider scope for performance evaluation, where the different IM-based IH mechanisms are investigated through energy harvesting capability, bit error rates, and ergodic secrecy rates. In addition, we provide a unified framework for the calculation of the theoretical error performance of the proposed schemes, offering useful insights. The simulated error rate results are validated by analytical error rate expressions derived for the proposed IM-based IH mechanisms. * The superiority over the existing multi-user SWIPT mechanism [20] is presented along with other practical advantages of the IM-based IH mechanisms. The remainder of this paper is organized as follows: In Section II, we present the general structure of the IM-based IH model, introducing the different power transmitter implementations for each IM technique and describing how the energy harvester and information receiver operate under the chosen scheme. The unified error performance analysis is given in Section III, where the average bit error rates at the information receiver and the eavesdropper are derived, respectively. Section IV presents the ergodic capacity analysis in a similar manner, and the feasibility of the IM-based IH mechanism is investigated in Section V via a variety of simulation results along with the validation from the analysis. Section VI completes the paper with concluding remarks. For clarity, the abbreviations and notations in the text are listed in Table I. ## II IM-based Information Harvesting Model The block diagram of the IM-based IH mechanism is illustrated in Fig. 2. In this configuration, it is assumed that the energy harvester (EH) and information receiver (IR) are located within the service area of the wireless power transmitter (WPTx) node, along with a potential malicious node, Eve.
Initially, WPTx, comprising a total of \(N_{\mathrm{T}}\) transmit antennas, serves as the node responsible for transmitting multitone WPT waveforms, which utilize \(N\) distinct subbands, into the area for RF energy harvesting. It is assumed that identification and synchronization between WPTx and an EH device are successfully established. When an IR device enters the service area, WPTx aims to transmit available information blocks to the IR. This is achieved by mapping the information blocks into vectors corresponding to active transmit antennas, a process called _information seeding_. Meanwhile, the EH device, equipped with \(N_{\mathrm{EH}}\) receive antennas, conducts recharging operations without any disruption. Notably, when the IR device enters the service area, there already exists a far-field WPT mechanism facilitated by a wireless link denoted as \(\mathbf{G}_{\mathrm{EH}}\in\mathbb{C}^{N_{\mathrm{EH}}\times N_{\mathrm{T}}\times N}\). Subsequently, upon deployment, the IR device, which is equipped with \(N_{\mathrm{IR}}\) receive antennas, initiates an information seeding cycle. This initiation is performed by transmitting a request-for-information (RFI) signal over a dedicated link represented by \(\mathbf{H}_{\mathrm{IR}}\in\mathbb{C}^{N_{\mathrm{IR}}\times N_{T}\times N}\). The RFI signal may also include configuration parameters on the information seeding cycle, such as the frequency of changing the transmitting entity based on the required data rate within the IR and the necessity for additional PLS protocols [26]. The information seeding process is activated upon detecting the RFI signal at the WPTx. During this phase, the IR aims to capture variations resulting from the activation of various transmit antenna sets, which are determined by the information bits. This process is referred to as _information harvesting_. Importantly, it was demonstrated in [26] that activating more transmit antennas in WPTx during the power transfer period reduces the probability of detecting information seeding activity by devices located even in close proximity to the EH. Note that the channels between the WPTx and other network elements in Fig. 2 are assumed to be stationary, so the time dependency of the channel coefficients is omitted from the channel coefficients and weight factors in the rest of the paper. ### _EH: Energy harvesting_ During the period when the WPTx emits power transfer waveforms into a service area (regardless of the activation of the information seeding cycle), the EH converts the received RF signal into DC output power thanks to its rectennas. Particularly, the EH consists of a receive antenna chain along with \(N_{\mathrm{EH}}\) receive antennas, a battery charging unit, and a battery. Herein, the battery charging unit is configured to establish a link between the receive antenna chain and the battery, wherein the manner in which power is transferred from the wireless power transceiver is controlled according to the parameters and/or state information assigned by the power management unit [32]. The EH device operates in two stages: firstly, the received RF power is converted into DC power using rectennas. Note that, due to the nature of the wireless medium, there exists a fraction of time where the rectenna cannot perform harvesting since the input RF power lies below a certain RF power level called the _rectenna sensitivity_; beyond a certain received signal power level at the harvester, the _rectenna saturation power_, the harvested energy stays constant, as shown in [33].
After considering these practicalities as in [34], the output DC power for rectenna \(q\) can be formulated as a function of the received RF signal, which is, \[v_{out,q}=\left\{\begin{aligned} 0,&\quad P_{r}^{t}\in[0,\Gamma_{in}]\\ \beta_{2}P_{r}^{t}+\beta_{4}{P_{r}^{t}}^{2},&\quad P_{r}^{t}\in[\Gamma_{in},\Gamma_{sat}]\\ \beta_{2}\Gamma_{sat}+\beta_{4}\Gamma_{sat}^{2},&\quad P_{r}^{t}\in[\Gamma_{sat},\infty)\end{aligned}\right. \tag{1}\] where \(P_{r}^{t}\) is the received input power during coherence period \(t\), \(P_{r}^{t}=\mathbb{E}[\left|y\left(t\right)\right|^{2}]\), \(\Gamma_{in}\) refers to the harvester sensitivity, \(\Gamma_{sat}\) denotes the saturation level, and \(\beta_{2}\) and \(\beta_{4}\) are the parameters of the nonlinear rectifier model. Then, the outputs of all rectennas are aggregated to obtain the cumulative harvested DC power at the combiner's output. This technique is referred to as DC combining, as discussed in [34]. Then, the total DC output power can be obtained from \(\sum_{q=1}^{N_{\mathrm{EH}}}v_{out,q}^{2}/R_{L}\) where \(R_{L}\) refers to the resistive load used to determine the output DC power. In the nonlinear rectenna model, the calculation of the harvested DC power is not straightforward. For this purpose, [35] introduced a new technique where a Taylor series expansion was applied to the nonlinear diode model. This approach provides insight into how the choices of the signal modulation or the input distributions affect the energy harvesting capability along with the configuration of the rectenna. Inspired by it, in this paper, the energy harvesting capability relates to a variable \(z_{DC}\), which establishes a direct connection with (1) and reflects the harvested power when the signal operates within the bounds of the linear and saturation regions. Mathematically, \(z_{DC}\) is formulated as [35] \[z_{DC}=k_{2}R_{ant}\mathbb{E}[|y(t)|^{2}]+k_{4}R_{ant}^{2}\mathbb{E}[|y(t)|^{4}]. \tag{2}\] In a steady-state response, an ideal rectifier maintains a constant output voltage over time, and the level of this output voltage depends on the peaks of the input voltage. Therefore, \(z_{DC}\) can be improved as the number of subbands increases in the emitted waveform from the WPTx. This enhancement is attributed to the generation of larger peaks facilitated by multitone signals, despite their average power being equivalent to that of continuous wave (CW) signals. The impact of multitone signals, which results in the generation of larger peaks, leads to a higher PAPR. Consequently, higher PAPR enhances the efficiency of the RF-to-DC conversion process in the nonlinear rectifier, contributing to an improved energy harvesting capability. This enhancement is particularly associated with the incorporation of a greater number of subbands in the emitted waveform. In this respect, (2) demonstrates a positive correlation with PAPR, suggesting that modulation schemes or input distributions characterized by higher PAPR values offer advantages to the nonlinear rectenna model [36].
Fig. 2: The block diagram of the IM-based IH mechanism where the WPTx serves the EH while sending its information to the IR by utilizing IM techniques with the existence of Eve in a service area.
### _WPTx: Information seeding_ In this subsection, we discuss the integration of different IM schemes into the IH mechanism along with a detailed description of the information seeding cycle.
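Before turning to the IM schemes themselves, a minimal numerical sketch of the harvesting model may help fix ideas: it evaluates the per-rectenna output in (1) and the DC-combining rule described above for a set of received RF power levels. The rectifier parameters, sensitivity and saturation thresholds, load resistance, and power levels below are illustrative placeholders rather than the values used in the paper's simulations.

```python
import numpy as np

# Minimal sketch of the nonlinear rectenna model in (1) and DC combining.
# All parameter values here are illustrative assumptions, not the paper's.
BETA2, BETA4 = 0.0034, 0.3829          # rectifier model parameters (assumed)
GAMMA_IN, GAMMA_SAT = 1e-6, 1e-3       # sensitivity / saturation levels in W (assumed)
R_LOAD = 50.0                          # resistive load R_L in ohms (assumed)

def rectenna_output(p_rf):
    """Output of a single rectenna for received RF power p_rf, following (1)."""
    if p_rf < GAMMA_IN:                # below the rectenna sensitivity: no harvesting
        return 0.0
    p = min(p_rf, GAMMA_SAT)           # clip at the saturation level
    return BETA2 * p + BETA4 * p ** 2  # second- and fourth-order rectifier terms

def total_dc_power(p_rf_per_antenna):
    """DC combining: sum_q v_out_q^2 / R_L, as in the combining rule above."""
    return sum(rectenna_output(p) ** 2 for p in p_rf_per_antenna) / R_LOAD

# Example: N_EH = 4 rectennas with unequal received RF power levels (watts).
received_rf = np.array([2e-4, 5e-5, 8e-7, 1.2e-3])
print(f"Harvested DC power: {total_dc_power(received_rf):.3e} W")
```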
IM represents a unique approach to information transmission, achieved by selectively activating specific elements for conveying information. When information seeding is initiated in WPTx alongside the specified IM scheme, the resulting set of waveforms emitted from the \(N_{a}\) activated WPTx antennas, obtained by combining an IM waveform at the WPTx, is as follows: \[\mathbf{s}=\left[\frac{s_{1}(t)}{\sqrt{N_{a}}},0,\ldots,0,\frac{s_{m}(t)}{ \sqrt{N_{a}}},0,\ldots,0,\frac{s_{N_{a}}(t)}{\sqrt{N_{a}}}\right], \tag{3}\] at a given time instant \(t\). Hereby, \(s_{m}(t)\) is directly associated with the selection of the WPT waveform, \(m\in\{1,\ldots,N_{T}\}\). For instance, when employing a multitone signal with \(N\) distinct subbands, the expression for \(s_{m}(t)\) can be formulated as: \[s_{m}(t)=\Re\Big{\{}\sum_{n=1}^{N}\omega_{n,m}e^{j2\pi f_{n}t}\Big{\}}. \tag{4}\] Hereby, \(N\) represents the number of subbands in the generation of multitone WPT signals and \(f_{n}\) denotes the center frequency of the \(n\)th subband waveform. More specifically, \(\omega_{n,m}=a_{n,m}e^{j\theta_{n,m}}\) where \(a_{n,m}\) and \(\theta_{n,m}\) are the \(n\)th subband amplitude and phase components of baseband WPT signal and these values are based on the chosen IM scheme. Subsequently, the received passband signal at the IR, without the additive white Gaussian noise (AWGN) term, can be mathematically expressed as given by [35], \[\mathbf{y}_{\text{IR}}(t)=\Re\Big{\{}\sum_{n=1}^{N}\mathbf{G}_{\text{EH}a} \mathbf{w}_{n}e^{j2\pi f_{n}t}\Big{\}}, \tag{5}\] where \(\mathbf{G}_{\text{EH},n}\) is the channel between WPTx and IR for \(n\)th subband and \(\mathbf{w}_{n}=\left[\omega_{n,1}\ldots\omega_{n,N_{T}}\right]^{T}\), where the power of \(\mathbf{w}_{n}\), denoted as \(\lambda_{2}^{2}/N\) (i.e., \(\mathbb{E}[\mathbf{w}_{n}^{H}\mathbf{w}_{n}]=\lambda_{2}^{2}/N\) ), is arranged in accordance with the values of \(N\) and \(N_{a}\). #### Iii-B1 GSSK/SSK-based IH In the GSSK-based IH mechanism, available information bits are only mapped into a transmit antenna vector, which determines the number of active transmit antennas, denoted as \(N_{\text{a}}\), with the constraint \(N_{\text{a}}\leq N_{\text{T}}\)[37], when \(N_{\text{a}}=1\), it simplifies to the SSK-based IH mechanism. The number of information bits that can be mapped into the transmit antenna indices for GSSK modulation is given by: \[\eta_{GSSK}=\left\lfloor\log_{2}\begin{pmatrix}N_{T}\\ N_{a}\end{pmatrix}\right\rfloor, \tag{6}\] where \(\lfloor\cdot\rfloor\) indicates floor operation. Under this assumption, the emitted WPT waveform does not contain any information components itself; instead, the antennas simply emit power transfer waveforms to the service area. From the EH perspective, there is no difference in the GSSK/SSK-based IH mechanism, as no signal modulation is employed, leading to \(a_{n,m}=1\) and \(\theta_{n,m}=0\). Note that \(L=2^{q}\) different active antenna combinations exist when \(N_{a}\) antennas are active out of total \(N_{T}\) transmit antennas, which is equal to the cardinality of GSSK codebook. #### Iii-B2 GSM/SM-based IH In the case of GSM-based IH, WPTx begins embedding information within the emitted symbol. This differs from the SSK-based techniques mentioned earlier, where only unmodulated WPT waveforms exist. 
Consequently, the number of transmitted information bits can be expressed as: \[\eta_{GSM}=\left\lfloor\log_{2}\begin{pmatrix}N_{T}\\ N_{a}\end{pmatrix}\right\rfloor+\log_{2}\left(M\right), \tag{7}\] where \(M\) represents the modulation order of \(M\)-QAM constellation used in the WPTx. In the GSM-based IH mechanism, \(a_{n,m}\) and \(\theta_{n,m}\) should be configured according to the employed QAM. For instance, in case of 4-QAM is utilized for GSM/SM-based IH mechanism, the \(a_{n,m}\) and \(\theta_{n,m}\) values can be set to \(\left\{\sqrt{2},\,7\pi/4\right\}\) for a bit sequence of \(\left[01\right]\) in addition to the bits mapped into active antenna indices. #### Iii-B3 GSSK/QSSK-based IH Quadrature-based implementations require additional signal processing before transmission to the information seeding stage. The transmitted symbols are decomposed into in-phase and quadrature components, resulting in two transmit antenna vectors. The first transmit antenna vector corresponds to the real part of a wireless power symbol, while the second corresponds to the imaginary part [38]. Therefore the number of information bits increases to \[\eta_{GQSSK}=2\eta_{GSSK}. \tag{8}\] Assuming \(s_{m}\left(t\right)\) is a complex WPT signal, and its the real part, \(\Re\{s_{m}\left(t\right)\}\), and its imaginary part, \(\Im\{s_{m}\left(t\right)\}\), are transmitted separately through the different antennas over cosine and sine carriers respectively [38]. In the GQSSK/QSSK-based IH mechanism, the transmitted symbol is fixed and equal to \(1+j\). Consequently, the corresponding \(a_{n,m}\) and \(\theta_{n,m}\) values are \(\left\{\sqrt{2},\pi/4\right\}\). #### Iii-B4 GQSM/QSM-based IH Similar to (7), now WPT signal carries the modulated symbol from \(M\)-ary constellation, and the real part modulates the in-phase part of the carrier, whereas the imaginary part modulates the quadrature component of the carrier signal. Then, the spectral efficiency of GQSM is given by \[\eta_{GQSM}=2\left\lfloor\log_{2}\begin{pmatrix}N_{T}\\ N_{a}\end{pmatrix}\right\rfloor+\log_{2}\left(M\right). \tag{9}\] Without loss of generality, \(a_{n,m}\) and \(\theta_{n,m}\) values are drawn from \(M-\)QAM constellation points in this paper. #### Iii-B5 AN generation AN is intentionally generated to safeguard against information leakage across the communication channel connecting WPTx and a potential eavesdropper (Eve) and characterized by the channel matrix \(\mathbf{G}_{\text{Eve}}\in\mathbb{C}^{N_{\text{IR}}\times T_{T}\times N}\). The primary aim of introducing this deliberately produced interference is to enhance the robustness of the PLS within the system. Later, we will provide comprehensive insights into how this interference also plays a pivotal role in advancing the energy harvesting process. For the AN generation, it is assumed that the IR has \(N_{\text{IR}}\) receive antennas such that \(N_{\text{IR}}<N_{T}\). Then, the singular value decomposition (SVD) can be implemented through each subband such that \(\mathbf{H}_{\text{IR},n}=\mathbf{U}_{n}\mathbf{\Lambda}_{n}\mathbf{V}_{n}^{H}\) where \(\mathbf{H}_{\text{IR},n}\) is a \(N_{\text{IR}}\times N_{T}\) channel matrix with \(r=\text{rank}\left(\mathbf{H}_{\text{IR},n}\right)\) and \(\left[\mathbf{v}_{1}^{n}\mathbf{v}_{2}^{n}\ldots\mathbf{v}_{r}^{n}\mathbf{v}_{r+1}^ {n}\ldots\mathbf{v}_{Nr}^{n}\right]\) includes the nullspaces of \(\mathbf{H}_{\text{IR},n}\), which are \(\mathbf{V}_{\perp,n}=\left[\mathbf{v}_{r+1}^{n}\ldots\mathbf{v}_{Nr}^{n}\right]\). 
Then, the AN waveform can be expressed as \[\boldsymbol{\varepsilon}_{n}=\sum_{i=r+1}^{N_{i}}\delta_{i}\mathbf{v}_{i}^{n}u _{i}. \tag{10}\] Herein, (10) is the jamming signal obtained from independent identically distributed (i.i.d.) Gaussian distribution, \(u_{i}\sim\mathcal{CN}\left(0,\lambda_{u}^{2}\right)\) along with \(\sum_{i=r+1}^{N_{i}}\delta_{i}^{2}=1\). Since \(\mathbf{H}_{\text{IR},n}\cdot\mathbf{V}_{\perp,n}=0\) holds, the generated AN on top of the existing WPT waveform does not affect on the received signal in IR while it leads to additional jamming power in Eve side due to \(\mathbf{G}_{\text{Ew},n}\cdot\mathbf{V}_{\perp,n}\neq 0\). Given that AN is generated based on the link between WPTx and IR, it also contributes to additional jamming power on the EH side; hence, \(\mathbf{G}_{\text{EH},n}\cdot\mathbf{V}_{\perp,n}\neq 0\). As a result, the AN-added received passband waveform at the EH can be expressed as \[\mathbf{y}_{\text{EH}}=\Re\left\{\sum_{n=1}^{N}\mathbf{G}_{\text{EH},n}\bigg{[} \mathbf{w}_{n}e^{j2\pi f_{n}t}+\boldsymbol{\varepsilon}_{n}e^{j2\pi f_{1}t} \bigg{]}\right\}. \tag{11}\] Herein, the AN waveform aligns with the frequency of the first subband, \(f_{1}\), and its power defined as \(\lambda_{u}^{2}/N\) for \(n\)th subband (i.e., \(\mathbb{E}[\boldsymbol{\varepsilon}_{n}^{H}\boldsymbol{\varepsilon}_{n}]= \lambda_{u}^{2}/N\)). In (11), AWGN term is omitted, assuming that the antenna noise is negligible and not substantial enough to be harvested. ### _IR: Information harvesting_ The primary objective of the IH mechanism is to extract information from the received WPT-IM waveform. To conduct this, the IR leverages both the mapping rule employed by the IM scheme and the CSI of the WPT-IR link. While energy can be directly harvested from the received RF signal, the IR must acquire the baseband version of the received signal for a specific subband \(n\). Following downconversion and filtering processes, the resulting baseband signal at the IR is mathematically expressed as described in [39] \[\mathbf{y}_{\text{IR},n}=\mathbf{H}_{\text{IR},n}\mathbf{x}_{n}+\mathbf{z}_{ \text{IR},n}. \tag{12}\] Here, \(\mathbf{x}_{n}\) includes the WPT-IM waveform \(\mathbf{w}_{n}\) along with the AN waveform \(\boldsymbol{\varepsilon}_{n}\) such that \(\mathbf{x}_{n}=\mathbf{w}_{n}+\boldsymbol{\varepsilon}_{n}\). In (12) \(\mathbf{z}_{\text{IR},n}\) denotes the noise vector, where its each element \(z_{\text{IR},n}\sim\mathcal{CN}\left(0,\sigma_{n}^{2}\right)\) has a zero mean and a variance \(\sigma_{n}^{2}\) for the \(n\)th subband, and \(\mathbf{z}_{\text{IR},n}\in\mathbb{C}^{N_{\text{IR}}\times 1}\). Consequently, the total received baseband signal at the IR can be represented as: \[\mathbf{y}_{\text{IR}}=\sum_{n=1}^{N}\left[\mathbf{H}_{\text{IR},n}\mathbf{x} _{n}+\mathbf{z}_{\text{IR},n}\right]. \tag{13}\] The IR employs a maximum likelihood (ML) detector to estimate transmitted bits from the WPT-IM waveform. Since various IM techniques can be employed for conveying information, the resulting ML representations vary based on the utilized IM scheme. Consequently, the optimum ML decoder representations are provided separately below. #### Iii-C1 GSSK/SSK-based IH The GSSK technique involves selecting specific columns from each subband component of the channel matrix, \(\mathbf{H}_{\text{IR},n}\) to transmit information bits. 
Accordingly, the product \(\mathbf{H}_{\text{IR},n}\mathbf{x}_{n}\) yields the term \(\mathbf{h}_{\text{IR},n}^{l}\), which corresponds to the channel coefficient set of the \(l\)th GSSK codebook out of \(L\) possible combinations in total. It is worth noting that in the case of SSK, only a single column of \(\mathbf{H}_{\text{IR},n}\) is chosen. Therefore, for mechanisms based on GSSK/SSK, the ML detector is designed to estimate the transmit antenna indices when given the value of \(N_{a}\). The ML procedure can be implemented as follows: \[\hat{l}_{\text{IR}}=\operatorname*{argmin}_{l\in\{1,\ldots,L\}}\sum_{n=1}^{N} \left\|\mathbf{y}_{\text{IR},n}-\mathbf{h}_{\text{IR},n}^{l}\right\|^{2}. \tag{14}\] Here, \(\hat{l}_{\text{IR}}\) represents the estimated antenna index. #### Iii-C2 GSM/SM-based IH GSM/SM schemes differ from GSSK/SSK ones through the use of signal modulation, specifically QAM, to enhance spectral efficiency. In cases where the WPT waveform includes a modulated symbol, (14) can be rewritten as follows: \[(\hat{l}_{\text{IR}},\hat{\omega})=\operatorname*{argmin}_{l\in\{1,\ldots,L\}, \boldsymbol{\omega}\in\psi}\sum_{n=1}^{N}\left\|\mathbf{y}_{\text{IR},n}- \mathbf{h}_{\text{IR},n}^{l}\omega\right\|^{2}. \tag{15}\] Here, \(\omega_{n,m}\) representation in (4) simplifies to a \(\omega\) as IM schemes transmit identical symbols from \(N_{a}\) active antennas across all subbands. Thus, \(\omega=ae^{j\theta}\) represents a symbol selected from an \(M\)-QAM constellation, \(\psi\), and \(\hat{\omega}\) is decoded transmitted symbol. #### Iii-C3 QSSK/GSSK-based IH Similar to the GSSK/SSK-based mechanism, the ML detector only needs to estimate the active antenna indices without performing symbol decoding. The distinction is that QSSK scheme activates two antennas at a time to convey quadrature components, whereas GQSSK might activate more than two antennas. Hence, the ML detector should jointly estimate in-phase and quadrature components, and it is given by \[\left(\hat{l}_{\text{IR}}^{\text{IR}},\hat{l}_{\text{IR}}^{\alpha}\right)= \operatorname*{argmin}_{\begin{subarray}{c}\mathbf{h}_{\text{IR}}^{\alpha,l},\\ l\in\{1,\ldots,L\}\end{subarray}}\sum_{n=1}^{N}\left\|\mathbf{y}_{\text{IR},n}- \left[\mathbf{h}_{\text{IR},n}^{\Re,l}+j\mathbf{h}_{\text{IR},n}^{\Im,l} \right]\right\|^{2}. \tag{16}\] Herein, \(\mathbf{h}_{\text{IR}}^{\Re,l}\) and \(\mathbf{h}_{\text{IR}}^{\Im,l}\) correspond to the \(l\)th potential codebook channel coefficient set of the WPTx-IR link out of \(L\) candidates for real part and imaginary part transmission, respectively. \(\left(\hat{l}_{\text{IR}}^{\text{IR}},\hat{l}_{\text{IR}}^{\Im}\right)\) refers the estimated indices of active transmit antennas at the IR. For a toy example, \(\left(\hat{l}_{\text{IR}}^{\text{IR}},\hat{l}_{\text{IR}}^{\Im}\right)=\left( \left[110\right],\left[011\right]\right)\) implies the \(1\)st and \(2\)nd transmit antennas are active for the real part and \(2\)nd and \(3\)rd for the imaginary part, respectively. #### Iii-C4 GSM/QSM-based IH GQSM/QSM augments QSSK/GQSSK techniques by incorporating \(M\)-QAM constellation symbols to transmit quadrature components, separately containing the in-phase and quadrature part of the \(M\)-QAM symbol. 
If the WPT-IM waveform contains a modulated symbol, (16) can be rewritten as follows: \[\left(\hat{l}_{\text{IR}}^{\Re},\hat{l}_{\text{IR}}^{\Im},\hat{\omega}\right)=\operatorname*{argmin}_{l^{\Re},l^{\Im}\in\{1,\ldots,L\},\,\omega\in\psi}\sum_{n=1}^{N}\left\|\mathbf{y}_{\text{IR},n}-\left[\mathbf{h}_{\text{IR},n}^{\Re,l^{\Re}}\Re\{\omega\}+j\,\mathbf{h}_{\text{IR},n}^{\Im,l^{\Im}}\Im\{\omega\}\right]\right\|^{2}. \tag{17}\] mechanism was considered. In this respect, each transmit antenna set for the real and the imaginary parts is selected with the same probability, \(\dfrac{1}{L}\). Then, the received signal at the IR through \(N\) bands, \(\mathbf{y}_{\mathrm{IR}}\), obeys the following distribution [41] \[p\left(\mathbf{y}_{\mathrm{IR}}\right)=\dfrac{1}{L^{2}}\sum_{j=1}^{L}\sum_{k=1}^{L}\dfrac{1}{\pi\sigma_{n}^{2}}e^{-\frac{\|\mathbf{r}_{\mathrm{IR}}\|^{2}}{\sigma_{n}^{2}}}. \tag{23}\] Here, \(\mathbf{r}_{\mathrm{IR}}=\mathbf{y}_{\mathrm{IR}}-\mathbf{h}_{\mathrm{IR}}^{\Re,j}\Re\{\omega\}-j\mathbf{h}_{\mathrm{IR}}^{\Im,k}\Im\{\omega\}\) for a transmitted WPT signal, \(\omega\). Then, the mutual information, \(\mathcal{I}_{\mathrm{IR}}\left(\mathbf{y}_{\mathrm{IR}}\right)\), can be expressed as in (24), which is a special case of [Eq. (14), 41]. Therein, \(\mathbf{d}_{l_{1},l_{2}}^{r_{1},r_{2}}\) is defined as: \[\mathbf{d}_{l_{1},l_{2}}^{r_{1},r_{2}}=\mathbf{h}_{\mathrm{IR}}^{\Re,l_{1}\text{-eff}}\Re\{\omega\}+j\mathbf{h}_{\mathrm{IR}}^{\Im,l_{2}\text{-eff}}\Im\{\omega\}-\mathbf{h}_{\mathrm{IR}}^{\Re,r_{1}\text{-eff}}\Re\{\omega\}-j\mathbf{h}_{\mathrm{IR}}^{\Im,r_{2}\text{-eff}}\Im\{\omega\}\] where \(\mathbf{h}_{\mathrm{IR}}^{\mathrm{eff}}\) is the effective channel after incorporating only the active transmit antenna set at the IR such that \(\mathbf{h}_{\mathrm{IR}}^{i\text{-eff}}=\mathbf{h}_{\mathrm{IR}}^{i,1}+\cdots+\mathbf{h}_{\mathrm{IR}}^{i,N_{a}}\) [42]. Similar to (23), the received signal at the Eve obeys the following distribution [41] \[p\left(\mathbf{y}_{\mathrm{Eve}}\right)=\dfrac{1}{L^{2}}\sum_{j=1}^{L}\sum_{k=1}^{L}\dfrac{1}{\pi\sigma_{n}^{2}}e^{-\frac{\|\mathbf{r}_{\mathrm{Eve}}\|^{2}}{\sigma_{n}^{2}}}. \tag{25}\] Therein, \(\mathbf{r}_{\mathrm{Eve}}=\mathbf{y}_{\mathrm{Eve}}-\mathbf{g}_{\mathrm{Eve}}^{\Re,j}\Re\{\omega\}-j\mathbf{g}_{\mathrm{Eve}}^{\Im,k}\Im\{\omega\}\).
After accounting for the existence of the AN at the Eve side, the mutual information at Eve, \(\mathcal{I}_{\mathrm{Eve}}\left(\mathbf{y}_{\mathrm{Eve}}\right)\), is given in (25), which is a special case of [Eq. (15), 41]. Here, \(\mathbf{z}_{\mathrm{Eve}}\) is the Gaussian noise at the Eve and \(\boldsymbol{\delta}_{l_{1},l_{2}}^{r_{1},r_{2}}\) can be expressed as: \[\boldsymbol{\delta}_{l_{1},l_{2}}^{r_{1},r_{2}}=\mathbf{C}_{\epsilon}^{-\frac{1}{2}}\left(\mathbf{g}_{\mathrm{Eve}}^{\Re,l_{1}\text{-eff}}\Re\{\omega\}+j\mathbf{g}_{\mathrm{Eve}}^{\Im,l_{2}\text{-eff}}\Im\{\omega\}\right)-\mathbf{C}_{\epsilon}^{-\frac{1}{2}}\left(\mathbf{g}_{\mathrm{Eve}}^{\Re,r_{1}\text{-eff}}\Re\{\omega\}+j\mathbf{g}_{\mathrm{Eve}}^{\Im,r_{2}\text{-eff}}\Im\{\omega\}\right), \tag{26}\] where \(\mathbf{g}_{\mathrm{Eve}}^{i\text{-eff}}\) is the effective channel and \(\mathbf{C}_{\epsilon}\) refers to the covariance matrix of the interference-plus-noise term at Eve, which is, \[\mathbf{C}_{\epsilon}=\dfrac{\lambda_{u}^{2}}{N(N_{T}-r)}\mathbf{G}_{\mathrm{Eve},n}\left(\sum_{i=r+1}^{N_{T}}\mathbf{v}_{i}^{(n)}\mathbf{v}_{i}^{(n)^{H}}\right)\mathbf{G}_{\mathrm{Eve},n}^{H}+\sigma_{n}^{2}\mathbf{I}. \tag{27}\] Note that the channel between the WPTx and the IR relies on GQSSK-based modulated antenna indices, so the capacity analysis reduces to a capacity analysis for the discrete-input continuous-output memoryless channel (DCMC) [43], and it might not be straightforward to obtain a closed-form expression in most cases. In this respect, the secrecy rate of the IR can be expressed as [42] \[\mathcal{R}_{\mathrm{IR}}=\max\{0,\ \mathcal{I}_{\mathrm{IR}}\left(\mathbf{y}_{\mathrm{IR}}\right)-\mathcal{I}_{\mathrm{Eve}}\left(\mathbf{y}_{\mathrm{Eve}}\right)\}. \tag{28}\] Note that a positive secrecy rate from (28) implies a communication opportunity on top of the existing WPT mechanism, even if some information can be leaked to Eve in the service area. For the other IM schemes given in Section II.B, the ergodic secrecy rates can be obtained from (28) after modifying (24) and (25). For instance, in the case of GQSM/QSM-based IH, the summation limits should be replaced with \(L\to LM\), where \(M\) is the modulation order, and \(\omega\) seen in (24) becomes a variable of the summation indices rather than a constant waveform, i.e., \(\omega_{l_{1}}\).
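Since the mutual information expressions involve expectations over the noise, they are typically evaluated by Monte Carlo averaging. The sketch below, written under simplifying assumptions (an SSK-type codebook whose codewords are single effective channel columns, no AN whitening at Eve, and arbitrary illustrative dimensions, SNR, and fading draws), estimates the DCMC mutual information at the IR and at Eve and combines them through the secrecy rate in (28); it is not the paper's simulation code.

```python
import numpy as np

rng = np.random.default_rng(0)

def dcmc_mutual_info(H_eff, sigma2, n_noise=2000):
    """Monte Carlo estimate of the DCMC mutual information (bits/channel use) for an
    SSK-type codebook whose l-th codeword is the channel column H_eff[:, l],
    following the log-sum-exp structure of the expressions above."""
    n_rx, L = H_eff.shape
    total = 0.0
    for l in range(L):
        # channel distances between the transmitted codeword l and every candidate m
        d = H_eff - H_eff[:, [l]]                              # shape (n_rx, L)
        z = np.sqrt(sigma2 / 2) * (rng.standard_normal((n_noise, n_rx))
                                   + 1j * rng.standard_normal((n_noise, n_rx)))
        # ||d_m + z||^2 - ||z||^2 for every noise draw and candidate m
        arg = (np.abs(z[:, :, None] + d[None, :, :]) ** 2).sum(axis=1) \
              - (np.abs(z) ** 2).sum(axis=1, keepdims=True)
        total += np.mean(np.log2(np.exp(-arg / sigma2).sum(axis=1)))
    return np.log2(L) - total / L

# Illustrative setup (assumed, not the paper's exact parameters):
# L = 4 codewords, 2 receive antennas at both IR and Eve, Rayleigh fading.
L, n_rx, sigma2 = 4, 2, 0.1
H_ir  = (rng.standard_normal((n_rx, L)) + 1j * rng.standard_normal((n_rx, L))) / np.sqrt(2)
G_eve = (rng.standard_normal((n_rx, L)) + 1j * rng.standard_normal((n_rx, L))) / np.sqrt(2)

i_ir, i_eve = dcmc_mutual_info(H_ir, sigma2), dcmc_mutual_info(G_eve, sigma2)
secrecy_rate = max(0.0, i_ir - i_eve)                          # secrecy rate as in (28)
print(f"I_IR = {i_ir:.2f}, I_Eve = {i_eve:.2f}, secrecy rate = {secrecy_rate:.2f} bits")
```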
For GQSSK/SSK schemes, the secrecy calculation requires one effective channel definition without dividing real and imaginary parts, so the definition of \(\mathbf{r}_{\mathrm{IR}}\) in (23) is reformulated as \(\mathbf{r}_{\mathrm{IR},n}=\mathbf{y}_{\mathrm{IR},n}-\mathbf{h}_{j}^{n}\omega\) [42], which results in \[\mathcal{I}_{\mathrm{IR}}\left(\mathbf{y}_{\mathrm{IR}}\right)=\log_{2}\left(L\right)-\frac{1}{L}\sum_{m_{1}=1}^{L}\mathbb{E}_{\mathbf{z}_{\mathrm{IR}}}\left[\log_{2}\left(\sum_{m_{2}=1}^{L}e^{-\frac{\left\|\mathbf{d}_{m_{1}}^{m_{2}}+\mathbf{z}_{\mathrm{IR}}\right\|^{2}-\left\|\mathbf{z}_{\mathrm{IR}}\right\|^{2}}{\sigma_{n}^{2}}}\right)\right]. \tag{29}\] A similar approach is functional when considering the scenario involving Eve, where its mutual information can be expressed as follows: \[\mathcal{I}_{\mathrm{Eve}}\left(\mathbf{y}_{\mathrm{Eve}}\right)=\log_{2}\left(L\right)-\frac{1}{L}\sum_{m_{1}=1}^{L}\mathbb{E}_{\mathbf{z}_{\mathrm{Eve}}}\left[\log_{2}\left(\sum_{m_{2}=1}^{L}e^{-\frac{\left\|\boldsymbol{\varrho}_{m_{1}}^{m_{2}}+\mathbf{z}_{\mathrm{Eve}}\right\|^{2}-\left\|\mathbf{z}_{\mathrm{Eve}}\right\|^{2}}{\sigma_{n}^{2}}}\right)\right]. \tag{30}\] Herein, \(\mathbf{d}_{m_{1}}^{m_{2}}\) and \(\boldsymbol{\varrho}_{m_{1}}^{m_{2}}\) correspond to the channel distances between the \(m_{1}\)th and \(m_{2}\)th antenna index combinations at the IR and the Eve for the \(n\)th subband, where \(N_{a}\) antennas out of \(N_{T}\) are active, such that \(\mathbf{d}_{m_{1}}^{m_{2}}=\mathbf{h}_{m_{1},\text{eff}}-\mathbf{h}_{m_{2},\text{eff}}\) and \(\boldsymbol{\varrho}_{m_{1}}^{m_{2}}=\mathbf{g}_{m_{1},\text{eff}}-\mathbf{g}_{m_{2},\text{eff}}\), respectively. For GSM/SM-based IH, the summation limits should be replaced with \(L\to LM\), where \(M\) is the modulation order, and the calculation of \(\mathbf{d}_{m_{1}}^{m_{2}}\) takes \(\omega\) into account as in [42]. \[\mathcal{I}_{\text{IR}}\left(\mathbf{y}_{\text{IR}}\right)=\log_{2}\left(L^{2}\right)-\frac{1}{L^{2}}\sum_{l_{1}=1}^{L}\sum_{l_{2}=1}^{L}\mathbb{E}_{\mathbf{z}_{\text{IR}}}\left[\log_{2}\left(\sum_{r_{1}=1}^{L}\sum_{r_{2}=1}^{L}e^{-\frac{\left\|\mathbf{d}_{l_{1},l_{2}}^{r_{1},r_{2}}+\mathbf{z}_{\text{IR}}\right\|^{2}-\left\|\mathbf{z}_{\text{IR}}\right\|^{2}}{\sigma_{n}^{2}}}\right)\right]. \tag{24}\] \[\mathcal{I}_{\text{Eve}}\left(\mathbf{y}_{\text{Eve}}\right)=\log_{2}\left(L^{2}\right)-\frac{1}{L^{2}}\sum_{l_{1}=1}^{L}\sum_{l_{2}=1}^{L}\mathbb{E}_{\mathbf{z}_{\text{Eve}}}\left[\log_{2}\left(\sum_{r_{1}=1}^{L}\sum_{r_{2}=1}^{L}e^{-\frac{\left\|\boldsymbol{\delta}_{l_{1},l_{2}}^{r_{1},r_{2}}+\mathbf{z}_{\text{Eve}}\right\|^{2}-\left\|\mathbf{z}_{\text{Eve}}\right\|^{2}}{\sigma_{n}^{2}}}\right)\right]. \tag{25}\] ## V Numerical Results In this section, we investigate the feasibility of IM-based IH schemes in terms of their energy harvesting capability, bit error rate (BER) performance, and ergodic secrecy rates (ESR) under different scenarios where multitone signals are used during WPT transmissions. To do so, the channels between the WPTx and EH, as well as between the WPTx and IR, are assumed Rayleigh fading. The simulation parameters regarding the WPTx are given in Table II. Since the EH's ability to scavenge energy is directly linked to the distance between the transmitter and receiver [45], the following path loss model, formulated as \(35.3+37.6\log_{10}(d)\), where \(d\) is the distance between the WPTx and EH, is considered. In computer simulations, the EH is assumed to consist of \(N_{\text{EH}}\) pairs of receive antennas and rectennas. The rectennas are assumed to have perfect matching and an ideal low-pass filter, and \(R_{\text{ant}}\) is set to 50 \(\Omega\). The parameters \(k_{2}\) and \(k_{4}\) are selected as \(0.0034\) and \(0.3829\), respectively. In order to limit the maximum effective isotropic radiated power (EIRP) as specified in FCC Title 47, Part 15 regulations [44], the total transmit power is considered as \(P_{T}=36\) dBm at the WPTx.
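As a quick sanity check on this link budget, the short sketch below converts the stated transmit power and path loss model into the received RF power at a few WPTx-EH distances and compares it against an assumed rectenna sensitivity; the sensitivity value and distances are illustrative and not taken from the paper.

```python
import numpy as np

def received_power_dbm(p_t_dbm, d_m):
    """Received RF power using the path loss model 35.3 + 37.6*log10(d) [dB]."""
    return p_t_dbm - (35.3 + 37.6 * np.log10(d_m))

P_T_DBM = 36.0             # total transmit power at the WPTx, as stated above
SENSITIVITY_DBM = -30.0    # assumed rectenna sensitivity, for illustration only

for d in (1.0, 1.5, 3.0, 5.0):   # WPTx-EH distances in metres (illustrative)
    p_rx = received_power_dbm(P_T_DBM, d)
    status = "harvestable" if p_rx >= SENSITIVITY_DBM else "below sensitivity"
    print(f"d = {d:3.1f} m -> P_rx = {p_rx:6.2f} dBm ({status})")
```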
In a scenario where Eve possesses knowledge of the mapping rule utilized by the IM schemes, it could intercept the information transmission between WPTx and IR. In this situation, Eve is capable of conducting information harvesting, thus enabling it to decode the information originally intended for Bob. Therefore, to counter Eve's interception AN is added to the transmitted WPT-IM waveform. The overall transmit power is divided into two parts, which are AN and IM, based on the power allocation factor, \(\rho\), (\(0\leq\rho\leq 1\)). Specifically, \(\lambda_{u}^{2}=\rho P_{T}\) represents the transmit power of the AN waveform, and \(\lambda_{s}^{2}=(1-\rho)P_{T}\) represents the transmit power of the IM waveform. Consequently, as \(\rho\) increases, the transmit power allocated for the IM waveform decreases, and that of the AN waveform increases. The WPTx emits multitone IM waveforms where the available power is allocated equally between \(N_{a}\) active transmit antennas, whereas the generated AN which is emitted from all antennas. In order to boost the harvested DC power, the IM waveforms are combined with a multitone signal vector \(\mathbf{w}_{n}\). Note that WPTx-IR waveforms do not rely on CSI, although the CSI of the WPTx-IR link is available at the transmitter, and it is utilized in generating the AN. Multi-tone signals with a bandwidth denoted as \(B\), which is determined by the expression \((n-1)\Delta f_{N}\) where \(\Delta f_{N}\) corresponds to the inter-carrier frequency spacing, equivalent to 1 kHz, are utilized in simulations. The frequency of the first subband, \(f_{1}\), is set at 100 kHz, while the subsequent subbands adhere to the frequency relationship \(f_{N}=f_{1}+\Delta f_{N}\). ### _Energy harvesting capability_ To begin with, we aim to explore the benefits of the IM-based IH mechanism at the EH. To do so, Figs. 3 and 4 present comparative analysis, and they provide an exhaustive evaluation of the energy harvesting capabilities with varying parameters, different values of \(N\) and \(\rho\). The comparison involves diverse IM schemes, which are GSSK with \(N_{T}=24\), GSM with \(N_{T}=8\) and 16-QAM, SM with \(N_{T}=16\) and 16-QAM, and also SM with \(N_{T}=64\) and 4-QAM. Similarly, Fig. 4 comprises QSM with \(N_{T}=8\) and 4-QAM, QSSK with \(N_{T}\) = 16, GQSM with \(N_{T}=5\) and 4-QAM, and GQSSK with \(N_{T}=7\) and 4-QAM. Note that all IM schemes share a common spectral efficiency of \(\eta=8\) and in all scenarios, the parameters \(N_{a}\) and \(N_{\text{EH}}\) are set to 2 and 4, respectively. Notably, all IM schemes exhibit nearly identical results in the case of CW signals (\(N=1\)) for RF energy harvesting. In Fig. 3, a comparison is also conducted between SM schemes utilizing 4-QAM and 16-QAM. This comparison is aimed at illustrating how the nonlinearity of the rectifier is influenced by modulations with larger high-order moments. Notably, the modulation type selected has an impact on the system's energy harvesting capability, as also observed in [13]. Specifically, it is observed that 16-QAM outperforms 4-QAM in terms of energy harvesting capability due to higher \(M\) values inducing amplitude fluctuations, in turn, substantially enhance the efficiency of RF-to-DC conversion within the rectifier [35]. In contrast, modulation schemes like 4-QAM/PSK are unable to attain the same level of efficiency due to their characteristic constant envelope behavior, as mentioned in [34]. 
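To illustrate how the number of subbands drives the PAPR (and hence, per Section II-A, the harvested DC power), the following sketch generates an unmodulated multitone waveform with the stated 1 kHz inter-carrier spacing and 100 kHz first subband and measures its PAPR; the sampling rate, duration, and equal in-phase tone weights are simplifying assumptions.

```python
import numpy as np

def multitone_waveform(n_subbands, f1=100e3, delta_f=1e3, fs=2e6, duration=2e-3):
    """Real multitone WPT waveform s(t) = sum_n a_n cos(2*pi*f_n*t) with equal,
    in-phase tone weights (the unmodulated SSK/GSSK-type case); amplitudes are
    scaled by 1/sqrt(N) so the average power does not depend on N."""
    t = np.arange(0, duration, 1 / fs)
    freqs = f1 + delta_f * np.arange(n_subbands)   # f_n = f_1 + (n-1)*delta_f
    return np.sum(np.cos(2 * np.pi * freqs[:, None] * t[None, :]), axis=0) / np.sqrt(n_subbands)

def papr_db(s):
    """Peak-to-average power ratio of the waveform in dB."""
    return 10 * np.log10(np.max(s ** 2) / np.mean(s ** 2))

# PAPR grows with the number of subbands N, which is what makes multitone
# waveforms attractive for the nonlinear rectenna model discussed earlier.
for n in (1, 3, 5, 8):
    print(f"N = {n}: PAPR = {papr_db(multitone_waveform(n)):.2f} dB")
```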
Another crucial factor significantly affecting the energy harvesting capability \(z_{DC}\) is the power allocation factor, \(\rho\). The introduction of AN in the IH system results in interference within the EH receiver. The highest attainable value of \(z_{DC}\) is achieved when \(\rho\) equals 1, which implies the exclusive transmission of AN without any concurrent information-bearing signals. This distinct interference phenomenon brings strategic benefits to the EH, as assigning a greater portion of power to AN generation enhances the energy harvesting capability. Moreover, increasing the number of antennas in the WPTx system does not lead to enhanced energy harvesting capability due to the selected signaling scheme, which does not rely on CSI. For instance, when comparing SSK with QSSK at the same spectral efficiency, QSSK utilizes \(16\) antennas, while SSK employs \(256\) antennas in the WPTx system. However, QSSK, which activates only two antennas to convey quadrature components, outperforms SSK, which uses only one transmit antenna to convey information at a time. Additionally, from the information harvesting perspective, increasing the number of transmit antennas enhances the spectral efficiency of the IH system, enabling the transmission of more information. Nonetheless, the energy harvesting performance of the IH system remains unaffected by the increase in the number of antennas.
Fig. 3: The energy harvesting capability (\(z_{DC}\)) comparison of SM schemes with \(\eta=8\) for different \(N\) values with respect to varying power allocation factors; the EH is located at \(d=1.5\,m\) from the WPTx.
_Comparison with MU-SWIPT_: In addition to the comparisons between different IM techniques, Fig. 4 illustrates a comparative analysis of IH systems and the MU-SWIPT system [20] in terms of energy harvesting capability. In the MU-SWIPT setting, \(N_{T}=6\) transmit antennas at the access point (AP) and a single receive antenna at both the IR and energy receiver (ER) are assumed along with matched filtering (MF) [36] for both the AP-IR and AP-ER links. Specifically, CSCG inputs are assumed for information signals, while complex Gaussian (CG) and real Gaussian (RG) signals are employed for energy signals. The power allocation is evenly distributed between the information and energy signals, along with the incorporation of a jamming signal into the transmit signal vector. As depicted in Fig. 4, the IH system outperforms the considered MU-SWIPT architecture as the parameter \(\rho\) increases, particularly in the scenario with \(N=5\). Within the MU-SWIPT system, allocating \(P_{T}\) between information and energy signals is required, both of which are transmitted across all the transmit antennas. Consequently, deploying MF demands an equivalent number of RF chains as the number of transmit antennas, leading to increased power consumption at the transmitter. Meanwhile, the power consumption of the IH mechanism remains unaffected with respect to the total number of antennas. Therefore, fewer RF chains can be enough to leverage the benefits of IM schemes, contributing to a simpler and power-efficient transmitter design [46]. ### _Error performance_ In this subsection, we investigate the error performance of the proposed IM-based IH mechanism and provide numerical results to support its merits. In order to demonstrate the impact of multitone signals on the information harvesting performance of IH, the BER of the WPT-IM schemes is plotted in Figs.
5 and 6 with the similar IM schemes given in Fig. 3 and Fig. 4, respectively. In WPT-IM schemes, each subband experiences the same channel gains per antenna between the WPTx and IR. Furthermore, noise is added to subbands separately for each receive antenna, as the increase in \(N\) results in more noise in the received signal vector. This, in turn, has a detrimental effect during the information decoding process. The simulation results are validated with analytical results given in (20) for both IR and EH. As expected, the analytical curves become tighter as \(P_{T}/N_{0}\) increases. The BER of each IM scheme for single subband transmission is approximately 5 dB better than in the case where \(N=3\). Hence, there is a non-trivial tradeoff between the \(z_{DC}\) and BER performance of the information harvesting mechanism. Figs. 5 and 6 also illustrate that the best performance results are attained when all bits are modulated in the spatial domain, as SSK and QSSK consistently exhibit superior results compared to the others. Thus, increasing the number of modulated bits in the spatial domain generally enhances the performance of IM schemes. Nonetheless, generalized variants of these techniques, such as GSSK, GQSM, and GQSSK, yield inferior outcomes due to the fact that generalized IM techniques introduce spatial correlation in the spatial domain, adversely affecting the IH system's performance in the end. The impact of AN on Eve's BER performance is particularly investigated in Fig. 7, considering the use of GSSK, GSM, and QSSK schemes with \(N=1\) and \(N=3\). The alignment of the analytical curves becomes more apparent as the values of \(P_{T}/N_{0}\) increase. Specifically, within the high \(P_{T}/N_{0}\) range, particularly when \(P_{T}/N_{0}\) exceeds 5 dB, the simulation and analytical curves exhibit a close match. It is observed that the presence of AN disrupts Eve's ability to decode the transmitted data, leading to a substantial deterioration in Eve's BER performance due to the introduced interference. This observation highlights the protective effect of larger subbands against eavesdropping at Eve's end.
Fig. 4: The energy harvesting capability (\(z_{DC}\)) comparison of GSM schemes with \(\eta=8\) for different \(N\) values with respect to varying power allocation factors; the EH is located at \(d=1.5\,m\) from the WPTx.
Fig. 5: The ABER comparison at the IR for SM schemes with \(\eta=8\), \(\rho=0.2\) and for varying \(N\) parameters.
Fig. 6: The ABER comparison at the IR for SM schemes with \(\eta=8\), \(\rho=0.2\) and for varying \(N\) parameters.
Fig. 7: The ABER comparison of Eve with \(\eta=4\), \(\rho=0.025\) and for varying \(N\) parameters.
### _Secrecy analysis_ Now, the ESR of the proposed IM-based IH schemes is illustrated with respect to different power allocation scenarios. Figs. 8 and 9 present the analysis of ESR for GSSK, QSSK, QSM, and GQSM schemes, incorporating multitone signals where \(\rho=0.2\) and \(\rho=0.6\), respectively. The ESR evaluation for the WPT-IM schemes is conducted under the same conditions, where \(N_{T}=4\), \(N_{a}=2\), \(N_{\rm IR}=2\), and 4-QAM is used when necessary, leading to varying spectral efficiencies among the schemes. It is observed that for the case of \(\rho=0.2\), the GSM scheme provides higher secrecy compared to the other ones, while for \(\rho=0.6\), QSSK outperforms all the schemes over low \(P_{T}/N_{0}\) values at the expense of higher decoding complexity. In the high \(P_{T}/N_{0}\) region, the GQSM scheme yields the best performance. Also, allocating more power to generating AN significantly
enhances the secrecy performance, so it prevents the eavesdropper from detecting injected data in the WPT-IM waveform during information seeding. Additionally, the ESR performance of the WPT-IM schemes is adversely affected by an increasing number of subbands. Consequently, employing a higher number of subbands proves to be advantageous for wireless power transmission. However, it is crucial to consider the cumulative impact of channel gains and of Gaussian noise at the receiver side, both of which can potentially lead to a degradation in system performance in terms of wireless information transmission.
Fig. 8: The ESR of WPT-IM schemes (30) for \(\rho=0.2\) and for varying \(N\) values.
Fig. 9: The ESR of WPT-IM schemes (30) for \(\rho=0.6\) and for varying \(N\) values.
## VI Concluding remarks As we transition into the era of 6G technology, addressing the challenge of powering IoT devices with limited or no battery capacity becomes paramount. Energy harvesting technology, particularly far-field WPT, appears to be a promising solution to achieve zero-power communication goals. Consequently, we anticipate a proliferation of combinations of WPT and WIT mechanisms in the near future. These developments are instrumental in realizing green communication architectures that significantly reduce dependence on battery levels and lifetimes. Within this context, a novel protocol known as Information Harvesting (IH) has emerged. IH leverages existing WPT mechanisms to facilitate wireless communication, offering a promising avenue for powering and enabling IoT devices. This paper has presented a comprehensive framework for the IH mechanism, incorporating various IM techniques. Our simulations and analytical work have demonstrated the advantages of IM-based IH mechanisms, particularly in terms of energy harvesting capabilities at the EH and the reliability of communication at the IR. In summary, IH introduces a novel communication channel atop existing WPT mechanisms, which holds great potential for IoT devices in the 6G era. ## Acknowledgment This study has been supported by the Academy of Finland (grant number 334000).
2309.09800
AMuRD: Annotated Arabic-English Receipt Dataset for Key Information Extraction and Classification
The extraction of key information from receipts is a complex task that involves the recognition and extraction of text from scanned receipts. This process is crucial as it enables the retrieval of essential content and organizing it into structured documents for easy access and analysis. In this paper, we present AMuRD, a novel multilingual human-annotated dataset specifically designed for information extraction from receipts. This dataset comprises $47,720$ samples and addresses the key challenges in information extraction and item classification - the two critical aspects of data analysis in the retail industry. Each sample includes annotations for item names and attributes such as price, brand, and more. This detailed annotation facilitates a comprehensive understanding of each item on the receipt. Furthermore, the dataset provides classification into $44$ distinct product categories. This classification feature allows for a more organized and efficient analysis of the items, enhancing the usability of the dataset for various applications. In our study, we evaluated various language model architectures, e.g., by fine-tuning LLaMA models on the AMuRD dataset. Our approach yielded exceptional results, with an F1 score of 97.43\% and accuracy of 94.99\% in information extraction and classification, and an even higher F1 score of 98.51\% and accuracy of 97.06\% observed in specific tasks. The dataset and code are publicly accessible for further researchhttps://github.com/Update-For-Integrated-Business-AI/AMuRD.
Abdelrahman Abdallah, Mahmoud Abdalla, Mohamed Elkasaby, Yasser Elbendary, Adam Jatowt
2023-09-18T14:18:19Z
http://arxiv.org/abs/2309.09800v3
AMuRD: Annotated Multilingual Receipts Dataset for Cross-lingual Key Information Extraction and Classification ###### Abstract Key information extraction involves recognizing and extracting text from scanned receipts, enabling retrieval of essential content, and organizing it into structured documents. This paper presents a novel multilingual dataset for receipt extraction, addressing key challenges in information extraction and item classification. The dataset comprises \(47,720\) samples, including annotations for item names, attributes like (price, brand, etc.), and classification into \(44\) product categories. We introduce the InstructLLaMA approach, achieving an F1 score of \(0.76\) and an accuracy of \(0.68\) for key information extraction and item classification. We provide code, datasets, and checkpoints.1. Footnote 1: [https://github.com/Update-For-Integrated-Business-AI/AMuRD](https://github.com/Update-For-Integrated-Business-AI/AMuRD) keywords: Receipt extraction, Multilingual, Arabic, English, Dataset, OCR, Information Extraction + Footnote †: journal: Elsevier ## 1 Introduction Receipt extraction [1; 2; 3; 4; 5; 6] is a critical task with broad implications for automating business processes, enhancing financial analysis, and enabling efficient inventory management. By capturing and extracting vital information from scanned receipts, organizations can streamline operations, gain valuable insights, and make informed decisions. The effectiveness of receipt extraction systems crucially depends on the availability of high-quality datasets that mirror the complexities found in real-world receipts. In this paper, we present an innovative and extensive dataset tailored to the task of receipt extraction, enriched with two primary objectives: key information extraction and item classification from scanned receipts. While existing datasets have significantly contributed to the field, the need for multilingual datasets that encompass diverse linguistic and contextual nuances remains pressing. Our dataset addresses this gap by providing a comprehensive collection of receipts in both Arabic and English, offering a versatile resource for the advancement of Optical Character Recognition (OCR) and information extraction techniques. Our dataset encompasses \(47,720\) samples from a wide array of sources, including retail stores, restaurants, and supermarkets, ensuring its richness and practical relevance. To facilitate granular analysis and information extraction, meticulous annotations have been conducted on various receipt fields. These annotations include item names, classes, weight, number of units, total price, packaging details, units, and brand names. our dataset includes \(37,419\) unique item names and enables classification into \(44\) distinct product categories, enhancing the organization of items. It further provides detailed information on weight, number of units, size of units, price, and total price, offering insights into quantitative aspects of purchased items. By capturing packaging information and presence indicators, researchers gain the ability to study consumer behavior, pricing trends, and promotional strategies within receipts. Additionally, receipt extraction in multilingual settings, such as Arabic and English, introduces unique challenges stemming from such documents' inherent complexity and variability. 
In the subsequent sections, we explore some of these key challenges and discuss their implications for the accuracy and efficiency of information extraction systems in this domain. The following sections delve into the dataset's details, including its annotation process and data distribution. We believe this dataset serves as a valuable resource for advancing the field of receipt extraction, fostering innovation, and contributing to the development of intelligent systems proficient in accurately analyzing and extracting information from multilingual receipts. Our contributions in this paper are summarized as follows: 1. We introduce a novel multilingual dataset for receipt extraction, encompassing both Arabic and English content and addressing the need for diverse linguistic contexts. 2. Our dataset focuses on two critical challenges in receipt understanding: key information extraction and item classification. 3. Careful annotations cover various receipt fields, including item names, classes, weight, number of units, price, brand, pack, and total price. 4. Our dataset includes classification into 44 distinct product categories, enhancing the organization of diverse items. 5. We propose a novel approach, InstructLLaMA, for information extraction and classification, achieving strong results with an F1 score of 0.76 and an accuracy of 0.68. The rest of the paper is organized as follows. Section 2 reviews related work in receipt and information extraction. Section 3 describes the dataset and annotations. Section 4 presents our InstructLLaMA method. Section 5 reports our performance evaluation. Lastly, Section 6 concludes the paper. ## 2 Related work In this section, we review existing research in the field of information extraction. Although information extraction from scanned receipts plays a vital role in numerous document analysis applications and holds great commercial potential, relatively little research has been published in this specific area. The ICDAR 2019 Competition on Scanned Receipt OCR and Information Extraction (SROIE) [7] aimed to advance scanned receipt OCR and information extraction and offered three distinct tasks. First, Scanned Receipt Text Localisation focused on localizing text regions within scanned receipt images; participants were tasked with identifying the precise regions containing textual content. Second, the Scanned Receipt OCR task required participants to accurately recognize and transcribe text from the identified regions. Third, Key Information Extraction from Scanned Receipts extended the challenge to extracting vital information from the recognized text, such as the merchant's name, the total transaction amount, and the date of the receipt. Although the SROIE competition was an important step forward, considerable room for improvement remains, especially for receipts that contain text in multiple languages. Our paper addresses the third task.
We release a large dataset of items named in both Arabic and English, annotated by experts. These receipts are distinctive because important information such as prices and product names has been carefully marked, supporting both better reading of receipts and identification of the products they contain. ## 3 Dataset and Annotations A robust methodology was followed to ensure the creation of a high-quality dataset for receipt extraction in Arabic and English. The methodology involved several key steps, including data collection, annotation guidelines development, the annotation process, and data validation. 1. Data Collection: A diverse set of one hundred thousand receipts, collected from various sources including retail stores, restaurants, and supermarkets, was uploaded by DISCO app2 users. The accepted receipts were carefully selected to cover a wide range of industries, products, and transaction types, ensuring the dataset's richness and practical relevance. Figure 1: Examples of receipts. 2. Annotation Guidelines Development: Detailed annotation guidelines were developed to ensure consistent and accurate annotations across the dataset. These guidelines provided instructions and definitions for each annotated field, including item names, classes, weight, number of units, size of units, price, total price, pack, units, and brand names. 3. Annotation Process: Domain experts were engaged to perform the meticulous annotation process. They carefully analyzed each receipt image, identifying and annotating the relevant information according to the predefined guidelines. The annotation process focused on achieving high accuracy and maintaining consistency throughout the dataset. 4. Data Validation: To ensure the quality and reliability of the dataset, a rigorous data validation process was conducted. This involved thorough reviews and cross-checking of annotations by multiple annotators and domain experts. Any discrepancies or uncertainties were discussed and resolved through consensus to ensure the dataset's integrity. Our dataset consists of \(47,720\) items, encompassing both Arabic and English content, and serves as a comprehensive resource for key information extraction. These items support various tasks, each with distinct annotations. Each receipt image contains several critical text fields, including item names, unit prices, and total prices. The annotated text predominantly comprises digits and English characters, making it a versatile dataset for a wide range of applications. For a visual representation, refer to Figure 1, which showcases sample receipt images. ### Dataset Statistics The language distribution in the dataset, as depicted in Figure 2, reveals a valuable insight into the composition of the data. Notably, the dataset exhibits clear linguistic diversity, with \(58\)% of the items in Arabic and the remaining \(42\)% in English. This language distribution reflects the multilingual nature of real-world receipts, where items and information may be presented in different languages. Understanding and processing this linguistic diversity is crucial for developing robust receipt extraction systems that can effectively handle both Arabic and English receipts. The prevalence of Arabic-language items in the dataset suggests a substantial presence of Arabic receipts, reflecting the importance of accommodating the Arabic language in receipt processing applications, especially in regions where it is commonly used.
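To make the annotation schema described above concrete, a hypothetical record for a single Arabic-named item might look as follows; the field names mirror the annotated fields listed in Section 3, but the values and the exact serialization format are illustrative assumptions rather than an excerpt from the released files.

```python
# Hypothetical AMuRD-style annotation record (illustrative only; not taken from the released data).
sample_item = {
    "item_name": "حليب المراعي كامل الدسم 1 لتر",  # Arabic item name as printed on the receipt
    "class": "Dairy & Eggs",                      # one of the 44 product categories
    "brand": "Almarai",
    "weight": "1",
    "units": "L",
    "number_of_units": 1,
    "size_of_units": "1 L",
    "pack": "single",
    "price": 6.50,
    "total_price": 6.50,
}
```

An English-named item would carry the same fields, with only the surface text changing.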
Conversely, the inclusion of English items acknowledges the global relevance of the dataset, as English is often used as a common language in various contexts. The class distribution in the dataset, as illustrated in Figure 3, provides valuable insights into the composition of items across different product categories. Understanding this distribution is crucial for various applications, including inventory management, recommendation systems, and market analysis. We highlight the key observations and implications of this class distribution: 1. The dataset encompasses a wide array of product categories, ranging from _Soft Drinks & Juices_ to _Home Textile_. This diversity reflects the richness of products found in real-world receipts, catering to various consumer needs and preferences. 2. Certain categories show a higher frequency of occurrence in the dataset. _Soft Drinks & Juices_ and _Dairy & Eggs_ are among the most common categories, suggesting their popularity and frequent inclusion in receipts. 3. The distribution reflects consumer preferences and consumption patterns. For instance, the prominence of _Sauces, Dressings & Condiments_ and _Biscuits & Cakes_ indicates the significance of these items in shopping baskets. Figure 2: Language Distribution in Item Names. 4. Categories like _Dental Care, Footwear, and Mobile & Tablets_ have relatively low representation. While these categories may not be as frequent, they still play a role in the overall shopping landscape. 5. It is important to note that some categories have a significantly lower frequency compared to others. This class imbalance can pose challenges for machine learning models, particularly when building classifiers. Techniques such as oversampling, undersampling, or using class weights may be necessary to address this issue. Analyzing the price distribution within the dataset is essential for gaining insights into the economic aspects of the items it contains. The dataset's prices exhibit a wide range, from relatively low-cost items to more expensive ones. Here are some key observations from the price distribution: The mean price across all items in the dataset is approximately \(36.69\). The median price, which is \(23.95\), represents the middle point of the price range when all prices are sorted in ascending order. The lowest recorded price in the dataset is \(0.25\), suggesting the presence of relatively inexpensive items. At the upper end of the spectrum, the dataset includes items with prices as high as \(512.00\), indicating the existence of relatively expensive products as well. Figure 3: Class Distribution. The price distribution is visually represented as a KDE plot in Fig. 4. ## 4 Method This section describes the LLaMA model used in our experiments, as illustrated in Figure 5. ### LLaMA V1 and LLaMA V2 LLaMA V1 [8] and LLaMA V2 [9] are two collections of foundation language models ranging from 7B to 65B parameters. These models are trained on trillions of tokens, all while using publicly available datasets exclusively. LLaMA V1 spans \(7\) billion to \(65\) billion parameters, demonstrating that it is possible to achieve state-of-the-art language modeling without relying on proprietary or inaccessible datasets. LLaMA V1 employs byte pair encoding (BPE) [10] for tokenization. Key features include pre-normalization with RMSNorm [11], the SwiGLU activation function [12], and rotary positional embedding [13].
This iteration laid the foundation for future advancements and outperformed larger models like GPT-3.5 \((175B)\) [14] on various benchmarks. Figure 4: Price Distribution. Figure 5: LLaMA Model. Both LLaMA V1 and V2 excel in their fine-tuning approaches. LLaMA V1 employs Supervised Fine-Tuning (SFT) [15] with careful annotation and auto-regressive training objectives. LLaMA V2 elevates the fine-tuning process with Reinforcement Learning from Human Feedback (RLHF), introducing novel techniques for data collection and reward modeling. It optimizes for both helpfulness and safety, effectively balancing the trade-off between these critical aspects. ### Training & Prompting Instruction tuning [16; 17] is the technique of fine-tuning large language models with multi-task datasets presented as natural language descriptions. We fine-tuned the LLaMA 7.6B [8] model on an instruction-tuning dataset for keyword information extraction. The model is trained with supervision using the standard objective of predicting the next token given the previous ones. The dataset includes instruction, input, and output fields, from which we construct the prompt: "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request. ### Instruction: {instruction} ### Input: {input} ### Response:". At inference time, the same prompt is used to generate the answer, and only the text generated after _### Response:_ is used as the final output. We sample from the model using _top-p_ sampling [18] with a temperature of 0.2, \(p=0.75\), \(k=50\), and beam search with 4 beams. This work demonstrates that both generic and task-specific instruction tuning are essential to LLaMA's keyword extraction and classification ability. There are also existing works that perform multitask instruction tuning of LLMs for specific scenarios, such as machine translation [19], information extraction [20], and QA [21; 22]. ## 5 Experiment ### Experiment setup This section outlines the experimental setup used for training the LLaMA V1 and V2 models, leveraging the DeepSpeed framework [23; 24]. We fine-tune the official LLaMA models with the Huggingface Transformers toolkit [25]. Our primary goal during training is to fine-tune LLaMA to produce the reference response while minimizing the cross-entropy loss. Considering time and computational resource limitations, we choose a parameter-efficient fine-tuning approach over full-model fine-tuning for most of our experiments by using DeepSpeed, which offers techniques and automated parameter tuning to optimize training efficiency and memory utilization. We tailored the training process using DeepSpeed's configuration options to achieve the best results. We enabled mixed precision training with bfloat16 (bf16), a technique that accelerates training while preserving model accuracy. We chose AdamW [26] as our optimizer and let DeepSpeed automatically determine its hyperparameters. To regulate the learning rate, we employed the WarmupDecayLR scheduler. All experiments are carried out on 4 Nvidia A100 48GB GPUs. ### Evaluation Metrics Our evaluation involves comparing the text extracted from each item to the ground truth data. We use a binary classification approach for correctness: 1. Extracted text is marked as correct if both the content and category match the ground truth. 2. If either the content or category does not match, the extracted text is marked as incorrect.
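A minimal sketch of this field-level scoring rule is given below; the helper names and the exact string-matching convention (exact match after whitespace normalization) are our assumptions, not the released evaluation code.

```python
# Minimal sketch of the binary correctness rule described above.
# Assumptions: predictions and references are dicts mapping field name -> value,
# and matching is exact after whitespace/case normalization.
def normalize(text) -> str:
    return " ".join(str(text).split()).strip().lower()

def is_correct(pred: dict, gold: dict, field: str) -> bool:
    # Correct only if both the extracted content and the item category match.
    content_ok = normalize(pred.get(field, "")) == normalize(gold.get(field, ""))
    category_ok = normalize(pred.get("class", "")) == normalize(gold.get("class", ""))
    return content_ok and category_ok

def accuracy(preds: list[dict], golds: list[dict], field: str) -> float:
    correct = sum(is_correct(p, g, field) for p, g in zip(preds, golds))
    return correct / max(len(golds), 1)
```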
Additionally, we calculate the F1 score, which balances precision and recall. The F1 score serves as our primary ranking metric, reflecting the method's overall effectiveness in accurately extracting key information from scanned receipts. Our dataset was split into three subsets: a test set comprising 4773 samples, a training set with 38177 samples, and a validation set containing 4772 samples. This division ensures robust evaluation and training processes, preventing overfitting and enabling generalization to unseen data. In our experiments, we assess the performance of the LLaMA model with a focus on key information extraction and item classification. The metrics mentioned above provide comprehensive insights into the model's accuracy, precision, and recall, enabling us to gauge its effectiveness in processing multilingual receipts in Arabic and English. ### Experiment Result In this section, we present the experimental results of our approach for information extraction and classification using the **LLaMA V1** and **LLaMA V2** models. We evaluated our method using the F1 and Accuracy. Table 1 provides a detailed overview of the results achieved by our models on various aspects of information extraction and classification. Both **LLaMA V1** and **LLaMA V2**, each with 7 billion parameters, demonstrate impressive performance across multiple categories. For the _Class_ category, both models achieve high F1 scores (0.95 for **LLaMA V1** and 0.94 for **LLaMA V2**), indicating their ability to accurately classify items. In the _Brand_ category, both models perform exceptionally well, with F1 scores of 0.92 and above, demonstrating their proficiency in brand recognition. The _Weight_ category also showcases strong performance, with F1 scores of around 0.84 for both models. In terms of _Number of Units_, both models achieve F1 scores above 0.86, indicating their effectiveness in extracting this information. For _Size of Units_ and _Total Price_, the models exhibit moderate performance, with F1 scores ranging from 0.36 to 0.43. In the _Price_ and _Pack_ categories, the models perform well, with F1 scores above 0.75. Lastly, for _Units_ and in the overall assessment, the models achieve F1 scores ranging from 0.62 to 0.68, demonstrating their overall effectiveness in information extraction and classification. These results highlight the robustness and versatility of both **LLaMA V1** and **LLaMA V2** in handling various aspects of information extraction and classification across multiple categories. ### Ablation study _Few-shot Information Extraction._ involves the task of identifying new relationships and extracting pertinent data from unstructured text, even when there is a lack of annotated examples for training [27; 28; 29; 30]. Traditional information extraction methods often encounter difficulties when faced with limited data availability, particularly in the identification of emerging relationship types and the corresponding pairs of entities. To tackle these challenges, few-shot learning techniques harness the power of a small number of labeled samples to generalize and make predictions on unseen instances [31; 32; 33]. 
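A minimal sketch of how such a few-shot prompt could be assembled for this task is shown below; the instruction wording and the example records are illustrative assumptions, not the exact prompts used in our experiments.

```python
# Illustrative few-shot prompt construction for receipt-item extraction and classification.
FIELDS = ["class", "brand", "weight", "number_of_units", "price", "total_price"]

def build_few_shot_prompt(examples: list[dict], query_item: str) -> str:
    lines = [
        "Extract the following fields from the receipt item and classify it "
        "into one of the 44 product categories: " + ", ".join(FIELDS) + "."
    ]
    for ex in examples:  # a handful of labeled demonstrations
        lines.append(f"Item: {ex['item_name']}")
        lines.append("Fields: " + ", ".join(f"{k}={ex[k]}" for k in FIELDS if k in ex))
    lines.append(f"Item: {query_item}")
    lines.append("Fields:")
    return "\n".join(lines)
```

The resulting prompt can then be sent unchanged to any of the evaluated models.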
| Models | Parameters | Class | Brand | Weight | # Units | S. Units | T. Price | Price | Pack | Units | Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|
| **LLaMA V2** | 7B | 0.94 / 0.88 | 0.92 / 0.85 | 0.83 / 0.71 | 0.93 / 0.86 | 0.91 / 0.83 | 0.36 / 0.22 | 0.43 / 0.27 | 0.79 / 0.65 | 0.77 / 0.62 | 0.76 / 0.68 |
| **LLaMA V1** | 7B | 0.95 / 0.91 | 0.92 / 0.85 | 0.84 / 0.72 | 0.94 / 0.89 | 0.87 / 0.77 | 0.36 / 0.22 | 0.43 / 0.27 | 0.76 / 0.62 | 0.78 / 0.64 | 0.73 / 0.65 |

Table 1: Experimental Results for Information Extraction and Classification Using LLaMA V1 and LLaMA V2 Models (each cell shows F1 / Acc).

| Models | Parameters | Class | Brand | Weight | # Units | S. Units | T. Price | Price | Pack | Units | Overall |
|---|---|---|---|---|---|---|---|---|---|---|---|
| **LLaMA V1** | 7B | 0.29 / 0.17 | 0.41 / 0.25 | 0.37 / 0.23 | 0.28 / 0.17 | 0.64 / 0.47 | 0.77 / 0.62 | 0.77 / 0.62 | 0.05 / 0.02 | 0.00 / 0.00 | 0.40 / 0.25 |
| **LLaMA V2** | 7B | 0.19 / 0.10 | 0.29 / 0.17 | 0.21 / 0.12 | 0.06 | 0.35 / 0.22 | 0.46 / 0.30 | 0.46 / 0.30 | 0.03 / 0.01 | 0.10 / 0.05 | 0.25 / 0.14 |
| **GPT-3.5** | 175B | 0.42 / 0.27 | 0.36 / 0.22 | 0.36 / 0.22 | 0.15 / 0.08 | 0.67 / 0.51 | 0.74 / 0.59 | 0.75 / 0.61 | 0.09 / 0.04 | 0.31 / 0.18 | 0.43 / 0.27 |
| **DeciLM** | 6B | 0.40 / 0.25 | 0.30 / 0.18 | 0.30 / 0.18 | 0.10 | 0.81 / 0.68 | 0.81 / 0.68 | 0.05 / 0.03 | 0.05 / 0.03 | 0.18 / 0.10 | 0.43 / 0.27 |

Table 2: Performance Metrics for Few-shot Information Extraction Across Multiple Categories (each cell shows F1 / Acc).

In this section, we present the results of our few-shot information extraction experiments based on GPT-3.5 [14], LLaMA V1 [8], LLaMA V2 [9], and DeciLM [34]. Table 2 provides a comprehensive overview of our experimental findings, showcasing the performance of these models across various information extraction and classification tasks. Notably, our observations reveal that the DeciLM large language model achieved equivalent accuracy compared to GPT-3.5 while demonstrating significantly improved efficiency, operating at 15 times the speed of the LLaMA model. This feat was accomplished with DeciLM's smaller parameter size of 6 billion, compared to GPT-3.5's 175 billion parameters. These results underscore the efficiency and competitiveness of DeciLM in the domain of few-shot information extraction and classification, even with reduced model complexity. ## 6 Conclusion In this paper, we introduced a novel multilingual dataset designed to tackle the challenges of key information extraction and item classification from scanned receipts. Our dataset, consisting of 47,720 samples in both Arabic and English, offers a comprehensive resource for advancing Optical Character Recognition (OCR) and information extraction techniques across diverse linguistic and product contexts.
By providing detailed annotations for item names, attributes, and classification into 44 product categories, our dataset fosters innovation in natural language processing, information extraction, and data processing. We also presented the InstructLLaMA approach, which leverages the LLaMA V1 and LLaMA V2 models, demonstrating remarkable results in key information extraction and item classification across various categories. The DeciLM model achieved competitive performance in few-shot information extraction tasks, even when compared to larger models like GPT-3 and LLaMA. ## Acknowledgments We extend our heartfelt gratitude to _Update For Data Analytics & AI_, the parent company of _DISCO_, for their invaluable support. Their provision of expert data annotators and access to receipt data significantly contributed to the success of our research. We greatly appreciate their partnership and collaboration throughout this project.
2304.00166
Thermal quantum gravity in a general background gauge
We calculate in a general background gauge, to one-loop order, the leading logarithmic contribution from the graviton self-energy at finite temperature $T$, extending a previous analysis done at $T=0$. The result, which has a transverse structure, is applied to evaluate the leading quantum correction of the gravitational vacuum polarization to the Newtonian potential. An analytic expression valid at all temperatures is obtained, which generalizes the result obtained earlier at $T=0$. One finds that the magnitude of this quantum correction decreases as the temperature rises.
F. T. Brandt, J. Frenkel, D. G. C. McKeon, G. S. S. Sakoda
2023-03-31T23:09:07Z
http://arxiv.org/abs/2304.00166v2
# Thermal quantum gravity in a general background gauge ###### Abstract We calculate in a general background gauge, to one-loop order, the leading logarithmic contribution from the graviton self-energy at finite temperature \(T\), extending a previous analysis done at \(T=0\). The result, which has a transverse structure, is applied to evaluate the leading quantum correction of the gravitational vacuum polarization to the Newtonian potential. An analytic expression valid at all temperatures is obtained, which generalizes the result obtained earlier at \(T=0\). One finds that the magnitude of this quantum correction decreases as the temperature rises. gauge theories; quantum gravity, finite temperature pacs: 11.15.-q,04.60.-m,11.10.Wx ## I Introduction Classical general relativity is a successful theory which provides a very good description of the gravitational interactions that occur at low energies. There have been many attempts to quantize gravity along the lines of other field theories and it was recognized that general relativity is not renormalizable [1; 2; 3; 4; 5; 6]. The contributions generated by Feynman loop diagrams to all orders require an infinite number of counter-terms to cancel all ultraviolet divergences, which leads to a lack of predictability of such a theory at high energies. A point of view now well established in many areas of physics is that physical predictions at low energies, that are well verified experimentally, can be made in non-renormalizable theories. The key ingredient of such predictions is the fact that these must be made within the context of an effective low energy theory, in powers of the energy divided by some characteristic heavy mass. Much work has been done to treat general relativity as an effective field theory [7; 8; 9; 10], which upon quantization may lead to predictive quantum corrections at low energies. A special class of low-energy corrections, involving non-local effects, appears to be quite important. The non-locality is manifest by a non-analytic behavior due, for example, to the presence of logarithmic corrections of the form \(\log(-k^{2})\), where \(k\) is some typical momentum transfer. Because these terms become large for small enough \(k^{2}\), they will yield the leading quantum corrections in the limit \(k^{2}\to 0\). Such terms arise from long distance propagations of massless gravitons. As shown in Refs [11; 12], these effects lead to calculable finite quantum corrections to the classical gravitational potential. (For an alternative treatment see Ref. [13]) In this framework, the background field method [14; 15; 16; 17; 18; 19] has been much employed in the computation of quantum corrections in quantum gravity since this procedure preserves the gauge invariance of the background field. It has been first shown by Hooft and Veltman [1] that on mass-shell, pure gravity is renormalizable to one-loop order. This analysis has been done in a particular background gauge, obtained by setting the gauge parameter equal to 1. In a previous work [20], we have examined this calculation in a general background gauge and deduced the corresponding effective Lagrangian. This result was then applied to evaluate, in this class of gauges, the quantum corrections generated by the gravitational vacuum polarization to the Newtonian potential at zero temperature. A useful extension of this approach is the calculation of graviton amplitudes at finite temperature \(T\). 
These are of interest in quantum gravity in their own right as well as for their cosmological applications [21; 22]. It has been shown that the \(\log(T^{2})\) contributions at high temperature have the same Lorentz covariant form as the \(\log(-k^{2})\) terms at zero temperature [23; 24; 25]. The purpose of this work is to extend these results to obtain the logarithmic contributions of the graviton self-energy at any temperature. We find an analytic expression which smoothly interpolates between the zero and the high temperature limits [Eq. (3.10)]. We use this form in the static case \(k_{0}=0\), to calculate the thermal corrections to the gravitational potential in a general background gauge. The corresponding result given in Eq. (4.9) generalizes the one previously obtained at zero temperature. In Sec. II we outline the properties of thermal quantum gravity in a general background gauge. In Sec. III we evaluate, to one-loop order, the leading logarithmic contributions of the graviton self-energy at finite temperature. As an application, we calculate in Sec. IV the corresponding quantum correction to the classical gravitational potential. We conclude the paper with a brief discussion in Sec. V. Some details of the computations are given in the Appendix. ## II Quantization in a general background gauge The theory of quantum gravity is based on the Einstein-Hilbert Lagrangian \[\mathcal{L}_{g}=\sqrt{-g}\frac{2}{\kappa^{2}}R, \tag{1}\] where \(R\) is the curvature scalar and \(\kappa^{2}=32\pi G\) (\(G\) is Newton's constant). The metric tensor \(g_{\mu\nu}\) is divided into a classical background field \(\bar{g}_{\mu\nu}\) and a quantum field, \(h_{\mu\nu}\) so that \[g_{\mu\nu}=\bar{g}_{\mu\nu}+\kappa h_{\mu\nu}, \tag{2}\] where the background field vanishes at infinity, but is arbitrary elsewhere. Expanding the Lagrangian (1) in powers of the quantum field, one obtains for the quadratic part the contribution [20] \[\mathcal{L}_{g}^{(2)} = \sqrt{-\bar{g}}\Bigg{[}\frac{1}{2}\bar{D}_{\alpha}h_{\mu\nu}\bar {D}^{\alpha}h^{\mu\nu}-\frac{1}{2}\bar{D}_{\alpha}h\bar{D}^{\alpha}h+\bar{D}_{ \alpha}h\bar{D}_{\beta}h^{\alpha\beta}-\bar{D}_{\alpha}h_{\mu\beta}\bar{D}^{ \beta}h^{\mu\alpha} \tag{3}\] \[+ \bar{R}\left(\frac{1}{4}h^{2}-\frac{1}{2}h_{\mu\nu}h^{\mu\nu} \right)+\bar{R}^{\mu\nu}\left(2h^{\alpha}_{\ \mu}h_{\nu\alpha}-hh_{\mu\nu}\right)\Bigg{]}.\] where \(h=h^{\lambda}_{\lambda}\), \(\bar{D}_{\alpha}\) is the covariant derivative with respect to the background field and \(\bar{R}_{\mu\nu}\) is the Ricci tensor associated with the background field. In order to quantize this theory one must fix the gauge of the quantum field in a way that preserves the gauge invariance under the background field transformation \[\delta\bar{g}_{\mu\nu}=\omega^{\gamma}\partial_{\gamma}\bar{g}_{\mu\nu}+\bar{ g}_{\mu\gamma}\partial_{\nu}\omega^{\gamma}+\bar{g}_{\nu\gamma}\partial_{\mu} \omega^{\gamma}=\bar{D}_{\mu}\omega_{\nu}+\bar{D}_{\nu}\omega_{\mu}, \tag{4}\] This can be accomplished by introducing the gauge-fixing Lagrangian \[\mathcal{L}_{gf}=\frac{1}{\xi}\sqrt{-\bar{g}}\left[\left(\bar{D}^{\nu}h_{\mu \nu}-\frac{1}{2}\bar{D}_{\mu}h\right)\left(\bar{D}_{\sigma}h^{\mu\sigma}- \frac{1}{2}\bar{D}^{\mu}h\right)\right], \tag{5}\] where \(\xi\) is a generic gauge parameter. When \(\xi=1\), the above Lagrangian reduces to the background harmonic gauge-fixing Lagrangian used in [1]. 
The corresponding ghost Lagrangian may be written in the form \[\mathcal{L}_{gh}=\sqrt{-\bar{g}}\;c^{\star\mu}\left[\bar{D}_{\lambda}\bar{D}^{ \lambda}\bar{g}_{\mu\nu}-\bar{R}_{\mu\nu}\right]c^{\nu}. \tag{6}\] We note that the above expressions are invariant under the background field transformation (4). The Feynman rules for propagators and interaction vertices are given in Appendix A of Ref. [20]. In order to extend this theory at finite temperature, we will employ the imaginary time formalism introduced by Matsubara and developed by several authors [26; 27; 28; 29]. The calculation of an amplitude in this formulation is rather similar to that at zero temperature. The only difference is that the energy, instead of taking continuous values, takes discrete values, that ensures the correct periodic boundary conditions for bosonic amplitudes (and anti-periodic for fermionic amplitudes). For example, in the case of graviton self-energy, one has \(p_{0}=2\pi inT\), \(n=0,\pm 1,\pm 2,\dots\). Consequently, when evaluating Feynman loops, the loop energy variable, rather than being integrated, is summed over all possible discrete values. This sum, to one loop, gives rise to a single Bose-Einstein statistical factor \[N\left(\frac{|\vec{p}|}{T}\right)=\frac{1}{\exp\left(\frac{|\vec{p}|}{T} \right)-1} \tag{7}\] The amplitude naturally separates into a zero-temperature and a temperature dependent part. The thermal part can be represented as a forward scattering amplitude, where the internal line is cut open to be on-shell with the corresponding statistical factor [30; 31; 32]. The real-time result can be obtained by an analytical continuation of the external energy \(k_{0}\rightarrow(1+i\epsilon)k_{0}\). This method is calculationally convenient, as will be illustrated in the next section for the graviton self-energy at finite temperature. ## III The thermal graviton self-energy The Feynman diagrams contributing at one loop to the graviton self-energy are indicated in Fig. (1). As shown in [20], at zero temperature the singular terms for \(d=4-2\epsilon\) may be written in the transverse form \[\Pi^{div}_{\mu\nu,\,\alpha\beta}(k)=\frac{\kappa^{2}}{32\pi^{2}}\left[\frac{1}{ \epsilon}-\log(-k^{2})\right]k^{4}\left\{4\,c_{1}(\xi)\,L_{\mu\nu}L_{\alpha \beta}+c_{2}(\xi)\left[L_{\mu\nu}L_{\alpha\beta}+\frac{1}{2}\left(L_{\alpha \mu}L_{\beta\nu}+L_{\alpha\nu}L_{\beta\mu}\right)\right]\right\}, \tag{11}\] where \(L_{\mu\nu}=\frac{k_{\mu}k_{\nu}}{k^{2}}-\eta_{\mu\nu}\) and \(c_{1}(\xi)\), \(c_{2}(\xi)\) are gauge-dependent constants given by \[c_{1}(\xi)=\left[\frac{1}{120}+\frac{(\xi-1)^{2}}{6}\right];\quad c_{2}(\xi)= \left[\frac{7}{20}+\frac{\xi(\xi-1)}{3}\right]. \tag{12}\] This expression has been obtained by using the fact that in a general background gauge, the result can be expressed in terms of combinations of the following three types of integrals (with \(a,b=1,2\)) \[I_{ab}(k)\equiv\int\frac{d^{d}p}{i(2\pi)^{d}}\frac{1}{(p^{2})^{a}[(p+k)^{2}] ^{b}}=\frac{(-k^{2})^{d/2-a-b}}{((4\pi)^{d/2}}\frac{\Gamma(a+b-d/2)}{\Gamma(a) \Gamma(b)}\frac{\Gamma(d/2-a)\Gamma(d/2-b)}{\Gamma(d-a-b)} \tag{13}\] and noticing that their singular contributions may be related in the following way \[I^{div}_{12}(k)=I^{div}_{21}(k)=\frac{k^{2}}{2}I^{div}_{22}(k)=-\frac{1}{k^{2} }I^{div}_{11}(k). \tag{14}\] Thus, the singular coefficient in Eq. (11) can be expressed just in terms of \(I^{div}_{11}\), where \[I^{div}_{11}(k)=\frac{1}{16\pi^{2}}\left[\frac{1}{\epsilon}-\log(-k^{2})\right]. 
\tag{15}\] In order to extend these results at finite temperature, we will express the corresponding contributions from the diagrams in Fig. 1 in terms of the forward scattering amplitudes shown in Fig. 2. These thermal contributions may be similarly evaluated in terms of the following three types of temperature dependent integrals (\(a,b=1,2\)) \[I^{T}_{ab}(k)=-\int\frac{\mathrm{d}^{3-2\epsilon}p}{(2\pi)^{3-2 \epsilon}}\left[\frac{1}{(a-1)!}\frac{\partial^{a-1}}{\partial p_{0}^{a-1}} \left(\frac{N(p_{0}/T)}{(p_{0}+|\vec{p}|)^{a}}\frac{1}{[(p+k)^{2}]^{b}}\right)\right.\] \[\left.\qquad\qquad\qquad\qquad+\frac{1}{(b-1)!}\frac{\partial^{b -1}}{\partial p_{0}^{b-1}}\left(\frac{N(p_{0}/T)}{(p_{0}+|\vec{p}|)^{b}} \frac{1}{[(p+k)^{2}]^{a}}\right)\right]_{p_{0}=|\vec{p}|}+(k\to-k) \tag{16}\] Figure 1: One-loop contributions to \(\langle\bar{h}\bar{h}\rangle\). The curly, wavy and dashed lines are associated with the background fields, the quantum fields and the ghost fields respectively. The arrows indicate the direction of momenta. Figure 2: Forward scattering amplitudes corresponding to Fig. 1. Crossed graphs (\(k\to-k\)) are to be understood. It is possible to evaluate exactly these integrals in terms of logarithmic functions and of Riemann's zeta functions [33; 34]. Here, our basic interest is to determine the leading thermal logarithmic contribution which reduces in the zero temperature limit to the \(\log(-k^{2})\) term in Eq. (3.1). As shown in Appendix A, it turns out that for such a contribution one finds analogous relations to those given in Eq. (3.4), namely \[I_{12}^{\log T}(k)=I_{12}^{\log T}(k)=\frac{k^{2}}{2}I_{22}^{\log T}(k)=-\frac {1}{k^{2}}I_{11}^{\log T}(k), \tag{3.7}\] where \[I_{11}^{\log T}(k)=-\frac{2}{(2\pi)^{3-2\epsilon}}\int\frac{d^{3-2\epsilon}p}{ 2|\vec{p}|}\left(\frac{1}{k^{2}+2k\cdot p}+\frac{1}{k^{2}-2k\cdot p}\right)_{p _{0}=|\vec{p}|}N\left(\frac{|\vec{p}|}{T}\right). \tag{3.8}\] This shows that in a general background gauge, the leading thermal logarithmic contribution of the graviton self-energy may be expressed just in terms of that arising from \(I_{11}^{\log T}\). After a straightforward calculation, outlined in the Appendix, we obtain for the corresponding contribution, the result \[I_{11}^{\log T}(k)=\frac{1}{16\pi^{2}}\left[\log(-k^{2})-\log(-k^{2}-8\pi ik_{ 0}T+16\pi^{2}T^{2}).\right] \tag{3.9}\] We note that the expression (3.9) vanishes in the zero temperature limit, as expected due to the behavior of the statistical factor in Eq. (3.8). Adding the contributions from Eqs. (3.5) and (3.9), one can see that the \(\log(-k^{2})\) terms cancel out since this thermal logarithmic contribution has the same Lorentz form as the one at \(T=0\)[23; 24; 25]. This property, together with the relations (3.4) and (3.7), lead to the conclusion that the total leading logarithmic contribution coming from the graviton self-energy can be directly obtained by the following extension of the zero-temperature result (3.1) \[\Pi_{\mu\nu,\,\alpha\beta}^{\log T}(k)=-\frac{G}{\pi}\log(-k^{2}-8\pi ik_{0}T +16\pi^{2}T^{2})k^{4}\left\{4\,c_{1}(\xi)\,L_{\mu\nu}L_{\alpha\beta}+c_{2}(\xi )\left[L_{\mu\nu}L_{\alpha\beta}+\frac{1}{2}\left(L_{\alpha\mu}L_{\beta\nu}+L_ {\alpha\nu}L_{\beta\mu}\right)\right]\right\}, \tag{3.10}\] where we have used that \(\kappa^{2}=32\pi G\). This expression has been explicitly verified for the \(\log T^{2}\) contribution which arises at high temperatures. 
## IV Quantum corrections to the Newtonian potential As an application of the above result, we will evaluate the corrections generated by the thermal graviton self-energy to the classical gravitational potential. To this end, we will proceed similarly to the method used at zero temperature in Ref. [20]. Thus, we couple the external background field to the energy-momentum tensor \(T^{\mu\nu}\) of the matter fields as \[\mathcal{L}_{I}=-\frac{\kappa}{2}\bar{h}_{\mu\nu}T^{\mu\nu}, \tag{4.1}\] where we have defined \(\bar{g}_{\mu\nu}=\eta_{\mu\nu}+\kappa\bar{h}_{\mu\nu}\). For scalar fields described by the Lagrangian \[\mathcal{L}_{M}=\frac{\sqrt{-g}}{2}\left(g^{\mu\nu}\partial_{\mu}\phi\partial _{\nu}\phi-M^{2}\phi^{2}\right), \tag{4.2}\] the energy-momentum tensor is given by \[T_{\mu\nu}=\partial_{\mu}\phi\,\partial_{\nu}\phi-\frac{1}{2}\eta_{\mu\nu}( \partial_{\lambda}\phi\,\partial^{\lambda}\phi-M^{2}\phi^{2}). \tag{4.3}\] Using this result in Eq. (4.1), we obtain in momentum space the graviton-matter coupling \[V_{\mu\nu}(p,p^{\prime})=-\frac{\kappa}{2}\left[p_{\mu}p^{\prime}_{\nu}+p^{ \prime}_{\mu}p_{\nu}-\eta_{\mu\nu}(p\cdot p^{\prime}-M^{2})\right]. \tag{4.4}\] We can now calculate the quantum correction coming from the diagram shown in Fig. (3-a) This graph yields the contribution (compare with Eq. (3.10) in [20]) \[\Delta V^{T}_{self}(k) = \frac{V_{\mu\nu}(p,p^{\prime})}{2p_{0}}\bar{\mathcal{D}}^{\mu\nu, \rho\sigma}\Pi^{\log T}_{\rho\sigma,\,\lambda\delta}\bar{\mathcal{D}}^{\lambda \delta,\alpha\beta}\frac{V_{\alpha\beta}(q,q^{\prime})}{2q_{0}} \tag{4.5}\] \[= \frac{G}{\pi}\ln(-k^{2}-8\pi ik_{0}T+16\pi^{2}T^{2})\frac{V_{\mu \nu}(p,p^{\prime})}{2p_{0}}\left[c_{1}(\xi)\eta^{\mu\nu}\eta^{\alpha\beta}+c_{2 }(\xi)\frac{\eta^{\alpha\mu}\eta^{\beta\nu}+\eta^{\alpha\nu}\eta^{\beta\mu}}{2 }\right]\frac{V_{\alpha\beta}(q,q^{\prime})}{2q_{0}},\] where \(\bar{\mathcal{D}}^{\mu\nu,\rho\sigma}\) is the background field propagator, \(p_{0}\) and \(q_{0}\) are normalization factors and we have used the transversality of the graviton self-energy. The thermal part of this propagator involves a \(N(|\vec{k}|/T)\delta(k^{2})\) term which yields a vanishing contribution because \(\Pi^{\log T}_{\rho\sigma,\,\lambda\delta}\) is proportional to \((k^{2})^{2}\). We will evaluate the above quantity in the case involving two heavy particles with mass \(M\), by taking the non-relativistic static limit \(p\approx p^{\prime}\approx(M,0)\) in Eq. (4.5). We then get \[\Delta V^{T}_{self}(k)\approx G^{2}M^{2}\ln(\vec{k}^{2}+16\pi^{2}T^{2})\left[ \frac{43}{15}+\frac{4}{3}(\xi-1)(3\xi-1)\right], \tag{4.6}\] where we used Eq. (4.4), the constants \(c_{1}(\xi)\) and \(c_{2}(\xi)\) given in Eq. (3.2) and we have set \(k_{0}=0\). This can be transformed to coordinate space by using the relation [35] \[\int\frac{d^{3}k}{(2\pi)^{3}}e^{-i\vec{k}\cdot\vec{r}}\ln(\vec{k}^{2}+16\pi^{2} T^{2})=-\frac{1}{2\pi}\frac{1}{r^{3}}(1+4\pi rT)\exp(-4\pi rT). \tag{4.7}\] We thus obtain for the correction generated by the graviton self-energy, the result (reinstating factors of \(\hbar\) and \(c\)) \[\Delta V^{T}_{self}(r)=-\left[\frac{43}{30}+\frac{2}{3}(\xi-1)(3\xi-1)\right] \frac{GM^{2}}{r}\frac{G\hbar}{\pi c^{3}r^{2}}\left(1+\frac{4\pi rT}{\hbar c} \right)\exp\left(-\frac{4\pi rT}{\hbar c}\right), \tag{4.8}\] which generalizes the equation (3.13) obtained at zero temperature in Ref. [20]. 
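As a quick consistency check (our observation, following directly from the form of Eq. (4.8) rather than a statement made in Ref. [20]), the thermal factor interpolates between the two expected regimes: with \(x=4\pi rT/\hbar c\),

\[(1+x)\,e^{-x}\;\longrightarrow\;\begin{cases}1, & x\to 0 \quad (T\to 0),\\ x\,e^{-x}\to 0, & x\gg 1 \quad (4\pi rT\gg\hbar c),\end{cases}\]

so the zero-temperature correction is recovered as \(T\to 0\), while at high temperature the thermal correction is exponentially suppressed.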
As explained in this reference, the correction given by the graviton self-energy in the special gauges \(\xi=(2\pm\sqrt{13})/3\) matches, at zero temperature, the complete result obtained in the gauge \(\xi=1\) in Refs. [11; 12]. The full correction to the gravitational potential is a physical quantity which is necessarily gauge independent. Moreover, the Fourier transform (4.7) is, like in the case at \(T=0\), common to all diagrams contributing to the full result. Thus, in these special gauges, the thermal contribution (4.8) generated by the graviton self-energy yields \[\Delta V^{T}(r)=-\frac{41}{10}\frac{GM^{2}}{r}\frac{G\hbar}{\pi c^{3}r^{2}}\left(1+\frac{4\pi rT}{\hbar c}\right)\exp\left(-\frac{4\pi rT}{\hbar c}\right), \tag{4.9}\] which gives the complete leading thermal correction to the Newtonian potential. We note that, in a general background gauge, the physical result obtained in Eq. (4.9) arises only by taking into account the contributions coming from a large number of Feynman diagrams. A plot of the ratio \(R\) between the finite temperature and the zero temperature corrections is shown in Fig. (4), as a function of the variable \(x=4\pi rT/\hbar c\). Figure 3: Examples of Feynman diagrams which yield corrections to the gravitational potential. ## V Discussion We extended the work done to one-loop order in Ref. [20] at zero temperature in a general background gauge, to any finite temperature. We obtained for the leading logarithmic contribution of the thermal graviton self-energy the result given in Eq. (3.10), which reduces to that found earlier for the graviton self-energy at \(T=0\). The transversality of this term is a consequence of the invariance of the theory under background field transformations. We note that the logarithmic factor in this equation is gauge-independent, which indicates that its branch cuts may correspond to physical processes that occur at finite temperatures. We have applied Eq. (3.10) to evaluate the leading correction to the Newtonian potential generated by the gravitational vacuum polarization at all temperatures. In the special background gauges \(\xi=(2\pm\sqrt{13})/3\), we obtained the analytic expression (4.9) which generalizes the full result previously obtained at \(T=0\). The quantum factor \(G\hbar/c^{3}r^{2}\) is usually very small, being about \(10^{-38}\) at \(r=10^{-15}\) m, but may become appreciable at much shorter distances. One can see from Eq. (4.9) and from Fig. (4) that the quantum correction lessens as the temperature increases. This behavior may be understood by adapting an argument given by Feynman [36]. As the temperature rises, the field lines connecting the two particles spread out, because the entropy increases. This broadening of the field configuration reduces the gravitational force between the particles, which leads to a decrease in the magnitude of such corrections. Thus, in spite of the lack of predictability of quantum gravity at high energy, due to higher-order loops, one can make in this theory calculable physical predictions at low energies. This confirms the general effective low energy strategy implemented in the literature [7; 8; 9; 10; 37]. (We note parenthetically that there is a proposal for an alternative method of quantizing general relativity, that leads to a renormalizable and unitary theory [38; 39]. This approach employs a Lagrange multiplier field which restricts the radiative corrections in pure quantum gravity to one-loop order.
Some aspects of thermal quantum gravity have been examined in this context in [40]). ###### Acknowledgements. We thank CNPq (Brazil) for financial support. ## Appendix A The temperature-dependent integrals \(I_{ab}^{T}(k)\) We examine here the behavior of the integrals \(I_{ab}^{T}(k)\) defined in Eq. (3.6). We begin by considering the integral \(I_{11}^{T}(k)\) given in Eq. (3.8). In terms of \(x=\cos\theta\), where \(\theta\) is the angle between \(\vec{k}\) and \(\vec{p}\), we find by setting \(\epsilon=0\) and \(p\equiv|\vec{p}|\), that \[I_{11}^{T}(k)=\frac{k^{2}}{16\pi^{2}}\int_{-1}^{1}\frac{dx}{(k_{0}-|\vec{k}|x )^{2}}\int_{0}^{\infty}dpp\frac{N\left(\frac{p}{T}\right)}{p^{2}-\frac{1}{4} \left(\frac{k^{2}}{k_{0}-|\vec{k}|x}\right)^{2}}.\] (A1) It is now convenient to make the change of variable \[K(x)=\frac{1}{4\pi i}\frac{k^{2}}{k_{0}-|\vec{k}|x};\,\,\,K_{+}\equiv\frac{k_{0}+ |\vec{k}|}{4\pi i};\,\,\,K_{-}\equiv\frac{k_{0}-|\vec{k}|}{4\pi i} \tag{10}\] so that the above integral may be written in the form \[I_{11}^{T}(k)=-\frac{1}{2\pi i|\vec{k}|}\int_{K-}^{K_{+}}dK\int_{0}^{\infty}dpp \frac{N\left(\frac{p}{T}\right)}{p^{2}+4\pi^{2}K^{2}}. \tag{11}\] Performing the \(p\) integration [35], leads to the expression \[I_{11}^{T}(k)=-\frac{1}{4\pi i|\vec{k}|}\int_{K-}^{K_{+}}dK\left[\log\left( \frac{K}{T}\right)+\frac{T}{2K}-\psi\left(1+\frac{K}{T}\right)\right] \tag{12}\] where \(\psi(x)=d\log\Gamma(x)/dx\) is the digamma function. The \(K\) integration may be done by noticing that the \(\psi\) function leads to a surface term. We thus obtain the result \[I_{11}^{\log T}(k)=-\frac{1}{16\pi^{2}}\log\left(\frac{T^{2}}{-k^{2}}\right)+ \frac{T}{4\pi i|\vec{k}|}\log\frac{\Gamma(1+K_{+}/T)}{\Gamma(1+K_{-}/T)}. \tag{13}\] We next consider the integral \(I_{12}^{T}(k)\) (see Eq. (3.6)) \[I_{12}^{T}(k)=\frac{1}{(2\pi)^{3-2\epsilon}}\int\frac{d^{3-2\epsilon}p}{4p^{ 2}}\left[\left(\frac{N\left(\frac{p}{T}\right)}{p}-\frac{dN\left(\frac{p}{T} \right)}{dp}\right)\frac{1}{k^{2}+2k\cdot p}+\frac{4k_{0}N\left(\frac{p}{T} \right)}{(k^{2}+2k\cdot p)^{2}}\right]+(k\rightarrow-k) \tag{14}\] It turns out that the leading logarithmic contribution arises only from the first term in Eq. (14). This may be evaluated in a similar way to that employed above, which leads to the following equation \[I_{12}^{\log T}(k)=\frac{1}{8\pi i|\vec{k}|}\int_{K_{-}}^{K_{+}}dK\int_{0}^{ \infty}dpp^{-1-2\epsilon}\frac{N\left(\frac{p}{T}\right)}{p^{2}+4\pi^{2}K^{2}}. \tag{15}\] This expression is infrared divergent. Such a divergence arises due to the use of the integral reduction method, which allows to express the tensor integrals in terms of the scalar integrals \(I_{ab}(k)\). These divergences cancel in the final result since the graviton self-energy is infrared finite. Thus, we subtract and add to the last term in Eq. (15) the part with \(p=0\) in the denominator that leads to an infrared divergence, which will be disregarded due to the above consideration. In the remaining part, we can set \(\epsilon=0\), getting \[I_{12}^{\log T}(k)=-\frac{1}{32\pi^{3}i|\vec{k}|}\int_{K-}^{K_{+}}\frac{dK}{K^ {2}}\int_{0}^{\infty}dpp\frac{N\left(\frac{p}{T}\right)}{p^{2}+4\pi^{2}K^{2}}. \tag{16}\] Performing the \(p\) integration, we then obtain [35] \[I_{12}^{\log T}(k)=-\frac{1}{64\pi^{3}i|\vec{k}|}\int_{K-}^{K_{+}}\frac{dK}{K^ {2}}\left[\log\left(\frac{K}{T}\right)+\frac{T}{2K}-\psi\left(1+\frac{K}{T} \right)\right] \tag{17}\] We can no longer integrate the last term in this equation in closed form. 
But it turns out that the leading logarithmic contribution comes, similarly to the previous case, from the surface term which arises from an integration by parts. Thus, we obtain \[I_{12}^{\log T}(k)=\frac{1}{k^{2}}\left[\frac{1}{16\pi^{2}}\log\left(\frac{T^ {2}}{-k^{2}}\right)-\frac{T}{4\pi i|\vec{k}|}\log\frac{\Gamma(1+K_{+}/T)}{ \Gamma(1+K_{-}/T)}\right]. \tag{18}\] We finally consider the integral \(I_{22}^{T}(k)\) in Eq. (3.6) which leads to \[I_{22}^{T}(k)=\frac{1}{(2\pi)^{3-2\epsilon}}\int\frac{d^{3-2\epsilon}p}{2p^{2} }\left[\left(\frac{N\left(\frac{p}{T}\right)}{p}-\frac{dN\left(\frac{p}{T} \right)}{dp}\right)\frac{1}{(k^{2}+2k\cdot p)^{2}}+\frac{4(k_{0}+p)N\left(\frac {p}{T}\right)}{(k^{2}+2k\cdot p)^{3}}\right]+(k\rightarrow-k) \tag{19}\] Like in the Eq. (14), only the first term turns out to be relevant for our purpose. This can be computed in a similar way to that used above. After some calculation, we obtain for the leading logarithmic contribution \[I_{22}^{\log T}(k)=\frac{2}{k^{4}}\left[\frac{1}{16\pi^{2}}\log\left(\frac{T^{2} }{-k^{2}}\right)-\frac{T}{4\pi i|\vec{k}|}\log\frac{\Gamma(1+K_{+}/T)}{\Gamma(1 +K_{-}/T)}\right]. \tag{15}\] From the equations (13), (12) and (15), one can verify the relation given in Eq. (12). Thus, we can write the relevant logarithmic contributions just in terms of those appearing in \(I_{11}^{\log T}(k)\). To proceed, we express the \(\log\frac{\Gamma(1+K_{+}/T)}{\Gamma(1+K_{-}/T)}\) term using the series representation [35] \[\log\Gamma(z)=z\log z-z-\frac{1}{2}\log z+\log\sqrt{2\pi}+\frac{1}{2}\sum_{m= 1}^{\infty}\frac{m}{(m+1)(m+2)}\sum_{n=1}^{\infty}\frac{1}{(z+n)^{m+1}} \tag{16}\] where \(z=1+K_{\pm}/T\). This yields the following logarithmic contributions \[\frac{1}{2}\left(1+\frac{k_{0}}{2\pi iT}\right)\log\left(\frac{K_{+}+T}{K_{-} +T}\right)+\frac{|\vec{k}|}{4\pi iT}\log\left[\left(1+\frac{K_{+}}{T}\right) \left(1+\frac{K_{-}}{T}\right)\right]. \tag{17}\] Substituting this expression in Eq. (13), we obtain for the leading logarithmic term \[I_{11}^{\log T}(k)\approx-\frac{1}{16\pi^{2}}\log\frac{-k^{2}-8\pi ik_{0}T+16 \pi^{2}T^{2}}{-k^{2}}. \tag{18}\] We note that this expression vanishes in the zero temperature limit, as expected for the purely thermal contributions due to the statistical factor (7). This term yields the contribution shown in Eq. (11). After the cancellation of the \(\log(-k^{2})\) with that present in Eq. (12), the remaining \(\log(-k^{2}-8\pi ik_{0}T+16\pi^{2}T^{2})\) term can become very large for very small values of \(k^{2}\) and \(T^{2}\). In the static limit, \(k_{o}=0\), such a contribution would dominate over the other contributions arising from Eqs. (16) and (17).
2310.20329
InstructCoder: Instruction Tuning Large Language Models for Code Editing
Code editing encompasses a variety of pragmatic tasks that developers deal with daily. Despite its relevance and practical usefulness, automatic code editing remains an underexplored area in the evolution of deep learning models, partly due to data scarcity. In this work, we explore the use of Large Language Models (LLMs) to edit code based on user instructions. Evaluated on a novel human-written execution-based benchmark dubbed EditEval, we found current models often struggle to fulfill the instructions. In light of this, we contribute InstructCoder, the first instruction-tuning dataset designed to adapt LLMs for general-purpose code editing, containing high-diversity code-editing tasks such as comment insertion, code optimization, and code refactoring. It consists of over 114,000 instruction-input-output triplets and covers multiple distinct code editing scenarios. The collection process starts with filtered commit data sourced from GitHub Python repositories as seeds. Subsequently, the dataset is systematically expanded through an iterative process, where both seed and generated tasks are used to prompt ChatGPT for more data. Our findings reveal that open-source LLMs fine-tuned on InstructCoder can significantly enhance the accuracy of code edits, exhibiting superior code-editing performance matching advanced proprietary LLMs. The datasets and the source code are publicly available at https://github.com/qishenghu/CodeInstruct.
Kaixin Li, Qisheng Hu, Xu Zhao, Hui Chen, Yuxi Xie, Tiedong Liu, Qizhe Xie, Junxian He
2023-10-31T10:15:35Z
http://arxiv.org/abs/2310.20329v3
# InstructCoder: Empowering Language Models for Code Editing ###### Abstract Code editing encompasses a variety of pragmatic tasks that developers deal with daily. Despite its relevance and practical usefulness, automatic code editing remains an underexplored area in the evolution of deep learning models, partly due to data scarcity. In this work, we explore the use of large language models (LLMs) to edit code based on user instructions, covering a broad range of implicit tasks such as comment insertion, code optimization, and code refactoring. To facilitate this, we introduce InstructCoder, the first dataset designed to adapt LLMs for general-purpose code editing, containing high-diversity code-editing tasks. It consists of over 114,000 instruction-input-output triplets and covers multiple distinct code editing scenarios. The dataset is systematically expanded through an iterative process that commences with code editing data sourced from GitHub commits as seed tasks. Seed and generated tasks are used subsequently to prompt ChatGPT for more task data. Our experiments demonstrate that open-source LLMs fine-tuned on InstructCoder can edit code correctly based on users' instructions most of the time, exhibiting unprecedented code-editing performance levels. Such results suggest that proficient instruction-finetuning can lead to significant amelioration in code-editing abilities. The dataset and the source code are available at [https://github.com/qishenghu/CodeInstruct](https://github.com/qishenghu/CodeInstruct). ## 1 Introduction Developers typically engage in a cyclic routine of writing and revising code. As a crucial element, automatic code editing could potentially enhance development efficiency significantly. However, the intricacy of this task has hampered substantial progress by deep learning models. This is attributable to the fact that code editing encapsulates diverse subtasks, such as code optimization, comment insertion, and bug fixing. Each of these diverse subtasks presents distinct challenges and requires unique capabilities to solve, thereby posing considerable hurdles for modeling. Recent development of large language models (LLMs) has made remarkable progresses in NLP, demonstrating strong few-shot and zero-shot abilities Brown et al. (2020); Scao et al. (2022); Chowdhery et al. (2022); Ouyang et al. (2022); OpenAI (2022); Touvron et al. (2023). Beyond text models, code LLMs have also elicited significant interest, highlighting their immense potential in code generation Nijkamp et al. (2023); Chen et al. (2021); Li et al. (2023). Inspired by these advancements, we explore the proficiency of LLMs in editing code based on user instructions, for instance, "add docstring to the function for clarity", "remove redundant code", or "refactor it into reusable functions". To this end, we curate a code editing dataset, dubbed InstructCoder, for improving and evaluating code editing abilities of LLMs. InstructCoder is an instructional dataset containing diverse code-editing tasks. The dataset is primarily generated by ChatGPT OpenAI (2022). Specifically, we first collect and manually scrutinize git commit data from public repositories on GitHub as the seed code editing tasks, then we utilize the seed data to prompt ChatGPT to generate new instructions and input-output pairs respectively, where a scenario (e.g. web development) is randomly sampled from a list of scenarios and specified to ensure diversity of the data. This process resembles the Self-Instruct Wang et al. 
(2022) and Alpaca Taori et al. (2023) frameworks. By innovatively incorporating scenarios during the generation process, our approach ensures that the code-editing instances in the InstructCoder dataset are diverse and relevant to real-world programming situations. This approach enables ChatGPT to synthesize input-output code snippets that are more diverse in terms of variable naming and functionality given the code-editing instructions and scenarios, resulting in a robust dataset for instruction finetuning in the code editing domain. After proper deduplication and postprocessing, we retain over 114,000 samples in the dataset. Our empirical studies reveal that LLMs display notable gains in code editing abilities after finetuning on InstructCoder. The largest model used in the experiment, LLaMA-33B (Touvron et al., 2023), performs on par with ChatGPT, achieving an edit accuracy of 89.3% and 76.3% as evaluated by GPT-4 (OpenAI, 2023) and humans respectively. Further findings signify that edit accuracy improves log-linearly with data scale. ## 2 Related Work ### 2.1 Instruction Finetuning Datasets Previous studies have concluded that instruction finetuning LLMs on a diverse collection of instructional tasks can further improve their ability to generalize to unseen tasks (Ouyang et al., 2022; Mishra et al., 2022; Wei et al., 2022; Chung et al., 2022; Wang et al., 2023). To support these tasks, datasets consisting of a large number of code snippets with corresponding annotations are necessary. These instructions can be reformulated from existing datasets (Aribandi et al., 2022; Wei et al., 2022; Mishra et al., 2022; Longpre et al., 2023), or human-written with crowd-sourcing efforts (Ouyang et al., 2022; Wang et al., 2022). Machine generation of instruction data has also been explored to reduce human labour (Wang et al., 2022; Honovich et al., 2022; Taori et al., 2023; Xue et al., 2023). Despite the elevated noise levels within such data, its effectiveness has been demonstrated. Figure 1: Data collection pipeline of InstructCoder (left) and a qualitative example from the dataset (right, best viewed with zoom). Initial seed tasks are selected from GitHub commits, and inspire ChatGPT to generate new instructions. Plausible scenarios where the filtered instructions may be used are then generated. Finally, corresponding code input and output are obtained conditioned on both the instruction and scenario. High-quality samples are manually selected and recurrently added to the task pool for further generation. ### 2.2 Code Synthesis Code generation is an extensively studied area. Language models pretrained on large collections of code have demonstrated strong abilities in a variety of programming tasks. A number of general LLMs gain code generation abilities due to the mixture of code in the pretraining corpus (e.g. The Pile (Gao et al., 2020)), such as GPT-3 (Brown et al., 2020), ChatGPT, GPT-4 (OpenAI, 2023), LLaMA (Touvron et al., 2023), BLOOM (Scao et al., 2022), GPT-NeoX (Black et al., 2022), and Pythia (Biderman et al., 2023). LLMs specifically trained on code and optimized for code generation are also studied, e.g. CodeX (Chen et al., 2021), CodeGen (Nijkamp et al., 2023), CodeGeeX (Zheng et al., 2023) and StarCoder (Li et al., 2023). These models all adopt the decoder-only transformer architecture, but differ in size and specific model design (e.g. positional embedding, norm layer placement) as well as the selection and preprocessing of the pretraining corpus.
On the other hand, relatively little literature addresses the objective of code editing. Previous works focus on a subset of code editing tasks, such as code infilling (Fried et al., 2023) and debugging (Just et al., 2014; Tarlow et al., 2020; Ding et al., 2020). The PIE (Madaan et al., 2023) dataset is a concurrent work most relevant to ours, which focuses on speeding up programs. Other works (Yin et al., 2018; Wei et al., 2023; Chakraborty et al., 2020) cannot accept natural language as edit intentions, rendering them less user-friendly. Nevertheless, datasets particularly tailored for general-purpose code editing are absent. To fill this gap, we introduce InstructCoder, a novel dataset aimed at further advancing the capabilities of code editing with LLMs.
Figure 2: Distribution of code edit intent categories.
Figure 3: Visualizations of InstructCoder data. Best viewed in zoom.
## 3 InstructCoder Dataset Collection
To generate instructional data for code editing, we employed a method based on Self-Instruct Wang et al. (2022), which expands instruction finetuning data by bootstrapping off language model generation. This methodology requires minimal human-labeled data as seed tasks while maintaining the quality and relevance of the tasks in the dataset. Through an iterative process of generating instructions and refining them with deduplication, we create a dataset covering a wide range of code-editing tasks. Figure 1 illustrates the data collection pipeline of InstructCoder.
### Seed Data Collection
GitHub is a code hosting platform whose version control service naturally records code edits as commits, which can be converted to instructions. The repositories on GitHub provide diverse, human-generated data. However, the data cannot be directly utilized. First, commit messages are mostly brief and result-oriented, lacking detailed descriptions; they can also be imprecise or even absent. Second, commits can be huge, involving multiple files, which is beyond the scope of this work. In light of this, we direct our attention towards LLMs as a means to generate data, instead of using the collected data directly. Initially, raw GitHub commit data were collated through BigQuery.1 The task instructions were derived from the commit message, while the input and output corresponded to the code versions before and after the commit. We came across many imprecise or emotionally charged commit messages. To convert commit messages into proper instructions, we employed Codex Chen et al. (2021) to clarify the changes made between versions and improve the commit messages, resulting in more precise and informative instructions. A total of 768 seed tasks were processed from the commit data through manual efforts; 634 tasks were used for self-instruct purposes while 134 were reserved for evaluation. Footnote 1: [https://cloud.google.com/bigquery](https://cloud.google.com/bigquery) In addition to GitHub commit data, we leverage high-quality generated samples as seed tasks. With manual inspection, we compiled a batch of 592 high-quality samples as additional seed tasks. This set of seed data covers a wide range of code-editing scenarios and forms the very basis on which InstructCoder is created, ensuring that the tasks are rooted in plausible real-world code-editing cases.
### Instruction Bootstrapping
Self-Instruct Wang et al. (2022) serves as an effective automated framework for instruction data generation.
It works by iterative bootstrapping off LLM's generation, presenting a way to enrich the instructional dataset while maintaining task quality and relevance from a small set of human-evaluated seed tasks. We leveraged a similar approach to generate diverse code editing instructional data. In each iteration, seven seed task instructions and one ChatGPT-generated task instruction are sampled and combined in a few-shot manner to prompt ChatGPT for more instructions. To generate more diverse and practically applicable instructions, we also generate tasks across multiple sub-domains by specifying the editing intent in the prompt provided. Relevant prompt used can be found in Table 3 in Appendix A. ### Scenario-conditional Generation We originally found many generated samples share similar codebases and variable names despite different instructions and few-shot examples provided. Such similarity could largely diminish the dataset's research value. Empirical analysis suggests the issue could be attributed to LLM generating general codebases for input/output snippets when insufficient context provided. To mitigate this, we introduce scenarios to input/output generation. As an illustration of the effects of scenario generation, we present some examples in Figure 7,8,9 in Appendix B, where we observe that instances generated with scenario demonstrate higher quality in terms of richer context and code structure compared to those without. For each generated instruction, we first prompt ChatGPT to generate practical events as "real world" scenarios where the editing instruction could be performed, and randomly selected one for instance generation in the next step. Subsequently, the LLM is instructed to generate samples that correspond with the instruction and scenario, ensuring the codebases and variable names are appropriate. The prompt used can be found in Table 3 in Appendix A. By incorporating scenario-conditional generation, the resulting samples exhibit increased variability in regards to codebases and variable naming, thus augmenting the diversity of InstructCoder. ### Postprocessing Following Self-Instruct (Wang et al., 2022), deduplication is applied on the generated instructions to remove instructions that have a ROUGE-L (Lin, 2004) overlap score larger than 0.7 with the existing instructions. We also employ MinHash with Locality Sensitive Hashing (LSH) indexing using datasketch2 to remove instances with an input code Jaccard similarity greater than 0.75, in order to deduplicate at the code level. More heuristic rules were used to clean the generated data. With postprocessing, we achieved a high level of effectiveness in eliminating erroneous and redundant data. Footnote 2: [http://ekzhu.com/datasketch/](http://ekzhu.com/datasketch/) We kept 95% of the dataset as the train set and assigned 5% of the dataset as the validation set. The test set is built with held-out seed samples from real GitHub data to better reflect the real-world edit cases. Since commit messages from GitHub code edits are noisy, we conducted manual quality filtering. As a result, InstructCoder consists of 108391 training samples, 5708 validation samples and 134 test samples. ## 4 Data Analysis We analyze InstructCoder in terms of 1) diversity, 2) complexity, and 3) correctness. We provide distribution and complexity analyses of the task instances. Finally, we demonstrate through human investigation that our data is highly reliable. 
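Before turning to the analysis, we note that the deduplication step of Section 3.4 is straightforward to reproduce. The sketch below is an assumed implementation, not the code released with the dataset: it applies the instruction-level ROUGE-L threshold of 0.7 and the code-level MinHash/LSH Jaccard threshold of 0.75 using the `rouge-score` and `datasketch` libraries, while the whitespace tokenization and the sample format are our own simplifications.

```python
# Minimal sketch of the Section 3.4 deduplication filters (assumed implementation,
# not the authors' released code). Thresholds follow the text: ROUGE-L > 0.7 on
# instructions, approximate Jaccard > 0.75 on input code via MinHash + LSH.
from rouge_score import rouge_scorer
from datasketch import MinHash, MinHashLSH

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=False)
lsh = MinHashLSH(threshold=0.75, num_perm=128)
kept, kept_instructions = [], []

def minhash(text: str) -> MinHash:
    # Hypothetical whitespace tokenization; the actual tokenizer is unspecified.
    mh = MinHash(num_perm=128)
    for token in text.split():
        mh.update(token.encode("utf8"))
    return mh

def is_duplicate(sample: dict) -> bool:
    # Instruction-level filter: ROUGE-L overlap with any previously kept instruction.
    for prev in kept_instructions:
        if scorer.score(prev, sample["instruction"])["rougeL"].fmeasure > 0.7:
            return True
    # Code-level filter: any kept input snippet with estimated Jaccard > 0.75.
    return len(lsh.query(minhash(sample["input"]))) > 0

generated_samples = [  # toy stand-in for the ChatGPT-generated pool
    {"instruction": "Add a docstring to the function.", "input": "def f(x):\n    return x + 1"},
    {"instruction": "Add a docstring to this function.", "input": "def f(x):\n    return x + 1"},
]
for i, sample in enumerate(generated_samples):
    if not is_duplicate(sample):
        kept_instructions.append(sample["instruction"])
        lsh.insert(f"sample-{i}", minhash(sample["input"]))
        kept.append(sample)
print(f"kept {len(kept)} of {len(generated_samples)} samples")
```

In this toy run the second, near-identical sample is filtered out by the instruction-level check; on the full generated pool the same two filters are what reduce the raw generations to the retained 114K samples.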
### Statistic Overview InstructCoder comprises over 114 thousand code editing instructions, each paired with an input/output instance. The token length distribution of input/output can be viewed in Figure 4 and Table 4 in Appendix C. Most of the data falls within a reasonable range in terms of length, while there are also some extreme values that reflect the breadth of our dataset. ### Instruction Diversity To explore the diversity of tasks in InstructCoder and their practical applicability, we present various instruction intents i.e. _what_ the code edits intend to accomplish, and instruction verbs, i.e. _how_ the code edit is accomplished. Instruction Intents.We asked ChatGPT to classify the types of code edits in our dataset, and manually identified 27 empirical genres. Figure 2 shows the distribution of the code edit intent categories in InstructCoder, which include adding functionalities, optimizing code, improving readability, etc. These objectives underscore the extensive range of InstructCoder. Instruction Verbs.The diversity of instruction verbs is also portrayed in Figure 2(a). We demonstrate the top-20 root verbs and their top-4 direct nouns both ranked by frequency. While a great portion of the instructions can be roughly clustered as _creation_ (e.g. "add", "implement", "creat") and _modification_ (e.g. "modify", "replace", "change"), InstructCoder presents a long-tail distribution with less common verbs other than the top-20 taking up 25.0% percentage. This demonstrates that the dataset contains a wide spectrum of instructions. ### Scenario Diversity InstructCoder is designed to cover a wide range of scenarios. Each instruction is prompted to generate different scenarios where the editing instruction Figure 4: Token length distribution of InstructCoder could be performed. This approach ensures that the generated samples exhibit greater diversity in terms of codebases and contexts. A wordcloud is provided to show some of the scenario domains in our dataset, as illustrated in Figure 2(b), with each sector referring to a different domain. The diversity of the dataset is emphasized by the presence of a wide range of domains such as image processing, web development, and cybersecurity. ### Complexity We reflect the complexity of a code edit task by the number of differing lines and its ratio in the input/output pair, which are defined as: \[n_{\mathit{diff}}=|I\cup O\setminus I\cap O|, \tag{1}\] \[r_{\mathit{diff}}=\frac{n_{\mathit{diff}}}{|I\cup O|}, \tag{2}\] where \(I\) and \(O\) are sets of input/output code with single lines as elements. We measure the differing lines of a code-editing task instance using the Python library _difflib_.3 We found that the average number of differing lines in InstructCoder is 11.9 and the average ratio is 0.52. These values suggest a fairly acceptable level of complexity, indicating that the dataset is neither too easy nor too hard. _InstructCoder_ strikes a balance in terms of complexity, making it well-suited for finetuning and evaluating LLMs in a wide range of code editing tasks. Figure 10 in Appendix C illustrates the distribution of the number of differing lines. Footnote 3: [https://docs.python.org/3/library/difflib.html](https://docs.python.org/3/library/difflib.html) ### Correctness We further randomly sample 200 instances and invite three co-authors to evaluate the instances based on two criteria: the validity of the instruction and the correctness of the instances. 
The validity assessment focuses on deciding if the instructions clearly exhibit editing intent and are appropriate for code editing. The correctness evaluation examines if the input-output pairs reflect the changes specified by the instructions. The results in Table 1 indicate that most instructions in the InstructCoder dataset are valid. A few instances exhibit noise and occasional failure to follow the instruction, but overall high correctness is achieved. Out of the 200 evaluated instances, 180 were successfully solved, showcasing the overall quality and reliability of InstructCoder. ## 5 Experiments ### Setup Training.We experiment with two families of open-source language models: LLaMA (7B, 13B, 33B) (Touvron et al., 2023) and BLOOM (560M, 3B, 7B) (Scao et al., 2022). LLaMA is a series of large language models with parameter counts ranging from 7B to 65B, and pretrained with an excessive amount of tokens, wherein code takes up approximately 4.5%. BLOOM is a multilingual LLM capable of generating human-like outputs in 46 languages and 13 programming languages. A full finetuning which updates all the parameters in an LLM can be computationally expensive. Instead, we adopt LoRA (Hu et al., 2022), a parameter-efficient finetuning method which optimizes an approximated low-rank delta matrix of the fully-connected layers. Though the number of parameters updated in LoRA is typically several magnitudes lower than that of the full model, many works have demonstrated its effectiveness comparable to full finetuning (Hu et al., 2022; Wang et al., 2023). In this way we could finetune a 33B model in a single A100-80GB GPU card. Across all our experiments, LoRA is applied on the query, key, value and output transform weights of the Transformer architecture (Vaswani et al., 2017). All hyperparameters can be found in Table 5 in Appendix D. Baselines.We select zero-shot ChatGPT (OpenAI, 2022) as a strong baseline. We also include open-source models, LLaMA (Touvron et al., 2023) and Alpaca (Taori et al., 2023), and report their zero-shot and one-shot performance. We do not experiment on k-shot with larger k, because these prompts take up too many tokens. \begin{table} \begin{tabular}{p{113.8pt} p{113.8pt}} \hline \hline **Question** & **Pass** \\ \hline Determine if the instruction is valid. & 97\(\%\) \\ Is the output an acceptable edited code response to the instruction and input? & 90\(\%\) \\ \hline \hline \end{tabular} \end{table} Table 1: Quality check questions and results on a randomly sampled subset with 200 data points. Concurrent to our work, CodeAlpaca4 is a popular dataset generated with the pipeline of Alpaca (Taori et al., 2023). Its seed data is replaced by hand-written easy instructions with short programs. We finetune LLaMA models with CodeAlpaca and compare the results. Footnote 4: [https://github.com/sahil280114/codealpaca](https://github.com/sahil280114/codealpaca) ### Metrics Evaluating the accuracy of code edits presents a complex challenge due to the potential for incomplete code snippets and the existence of multiple valid modifications. Evaluating correctness using conventional metrics proves arduous, hence our reliance on human evaluation. Each sample is annotated by three examiners, and the average accuracy is reported. We also endeavored to prompt GPT-4 (OpenAI, 2023) in inspecting the modifications. Human Scoring.We establish a rule indicating three scoring levels: _correct_, _partial_ and _wrong_. 
To ensure impartiality, output samples from different models are shuffled and each is evaluated by three co-authors using a tool that guarantees the anonymity of the models was used. The edit is assigned _correct_ if it correctly reflects the instruction demands and _wrong_ if it fails to follow the instruction. We introduce a _partial_ class to contain subtle situations where the edit is correct but unexpected modifications are made, such as removal of comments or redundant continuous generation. Gpt-4 (OpenAI, 2023) Evaluation.We leverage GPT-4 as an automatic evaluator to alleviate the need of human effort and ensure fair evaluation. Using LLMs as generation evaluators has been demonstrated effective in NLG tasks (Liu et al., 2023; Wang et al., 2023; Fu et al., 2023), and especially in code generation (Zhuo, 2023). We prompt GPT-4 to evaluate if the code edit is an acceptable response to the input and instruction. The prompts can be found in Table 3 in Appendix A. ## 6 Results ### Finetuning Efficacy with InstructCoder Table 2 provides a comprehensive comparison across models finetuned with InstructCoder and the baselines. We leave the discussion of the validity of using GPT-4 as an evaluator and human scoring results in Appendix E. The average of three runs was taken for each score. We also showcase human-evaluated model performance finetuned with InstructCoder in Table 6. While low accuracy are observed in plain open-source models and only marginal improvement is achieved through few-shot prompting, finetuning with InstructCoder significantly boost the accuracy, suggesting the effectiveness of efficient instruction finetuning with machine-generated code edit pairs. It is noteworthy that our largest finetuned LLaMA-33B exhibits a performance comparable with the strong baseline ChatGPT on the test set. Some qualitative results are shown in Appendix F. Despite the noise present in the data points collected through git-diff,5 which might entail incomplete contextual information and some disparity in code structure, the finetuned LLaMA-33B achieves an accuracy of 89.3% under GPT-4 evaluation, with a 65% increase over its plain counterpart. Footnote 5: [https://git-scm.com/docs/git-diff](https://git-scm.com/docs/git-diff) The ability of the underlying LLM also serves as a significant determinant in the code-editing \begin{table} \begin{tabular}{c c c} \hline \hline **Model** & **Size** & **Accuracy (\%)** \\ \hline \multicolumn{3}{c}{_Baselines_} \\ ChatGPT (0-shot) & - & 90.5 \\ BLOOM (0-shot) & 3B & 3.0 \\ BLOOM (1-shot) & 3B & 3.0 \\ BLOOM (0-shot) & 7B & 5.2 \\ BLOOM (1-shot) & 7B & 11.7 \\ LLaMA (0-shot) & 7B & 12.4 \\ LLaMA (1-shot) & 7B & 14.2 \\ LLaMA (0-shot) & 13B & 18.7 \\ LLaMA (1-shot) & 13B & 25.6 \\ LLaMA (0-shot) & 33B & 24.1 \\ LLaMA (1-shot) & 33B & 54.5 \\ \hline \multicolumn{3}{c}{_Finetuned with Alpaca Dataset_} \\ \multicolumn{3}{c}{_7B_} & 39.3 \\ LLaMA & 13B & 55.2 \\ \multicolumn{3}{c}{33B} & 70.6 \\ \hline \multicolumn{3}{c}{_Finetuned with CodeAlpaca Dataset_} \\ LLaMA & 13B & 48.5 \\ LLaMA & 33B & 74.6 \\ \hline \multicolumn{3}{c}{_Finetuned with CodeInstruct Dataset_} \\ \multicolumn{3}{c}{560M} & 20.9 \\ BLOOM & 3B & 51.2 \\ \multicolumn{3}{c}{7B} & 56.2 \\ \multicolumn{3}{c}{7B} & 69.2 \\ LLaMA & 13B & 75.9 \\ \multicolumn{3}{c}{33B} & **89.3** \\ \hline \hline \end{tabular} \end{table} Table 2: Main experimental results on various models evaluated by accuracy. capacity. 
While enhancements are evident across all finetuned models, the LLaMA models exhibit superior accuracies when compared to BLOOM models of comparable sizes. The models finetuned with CodeAlpaca yield unsatisfactory results. The 13B model achieves a mere 48.5% accuracy, markedly inferior to finetuning with InstructCoder and even lower than Alpaca. In the case of the 33B model, CodeAlpaca surpasses Alpaca in performance; however, it remains substantially worse than InstructCoder. This finding validates our methodology of employing GitHub seed data to produce a more challenging and diverse dataset. The observation suggests a considerable domain gap between CodeAlpaca and authentic real-world test data, rendering CodeAlpaca suboptimal, though the gap can be partially alleviated by scaling up model size.
### Dataset Scaling
InstructCoder has a scale considerably smaller than what LLMs are typically pretrained on. To ascertain the sufficiency of this scale, we finetuned the LLaMA family of models using varying proportions (1%, 10%, 50%, and 100%) of the dataset. The data subsets corresponding to smaller proportions are guaranteed to be encompassed within the larger data subsets. The results are shown in Figure 5. The identified trend demonstrates a positive correlation between the model's accuracy and the scale of the training set. However, this relationship exhibits diminishing returns as the dataset size continues to expand. Utilizing just 10% of the data brings a significant increase and surpasses the corresponding zero-shot and one-shot accuracies without finetuning (see Table 2) by considerable margins. With over 10% of the training data, larger models demonstrate superior performance to smaller models trained on the full data, except for LLaMA-13B@10% and LLaMA-7B@100%. While we empirically observed that the training time grows approximately linearly with parameter count in our experiments, the results reveal that larger models should be preferred under a limited training compute budget.
### Edit Ratios
Figure 6 shows the accuracy of finetuned LLaMA models across five levels of edit ratio. Larger models consistently outperform smaller ones within each bin. Interestingly, the models' edit accuracy is generally lower as the edit ratio decreases. One plausible reason is that, since the finetuning loss is the average cross-entropy over the label tokens, a shortcut of copying the inputs is easily learnt by the model to achieve a fairly low loss value, especially when the absolute number of modifications is small. Our observations indicate that this issue can be alleviated by scaling up the models: larger models perform better in capturing subtle differences in low edit-ratio cases.
## 7 Conclusion
We introduce InstructCoder, the first instruction-tuning dataset for general-purpose code-editing tasks. The dataset comprises generations from large language models, where real GitHub commits serve as seed tasks to guide the generation process. A scenario-conditional approach is introduced to ensure both diversity and high quality of the data.
Figure 5: Data scaling performance of InstructCoder on LLaMA evaluated by GPT-4, using 1%, 10%, 50% and 100% training data.
Figure 6: GPT-4 evaluation results at different edit ratios on 2000 validation samples.
Our experiments show that with computationally lightweight parameter-efficient finetuning, open-source models can gain large improvements and even yield ChatGPT-like performance. We also reveal that the underlying base LLM and the scale of the finetuning data both strongly influence code-editing ability. We hope the dataset can benefit and inspire more research in this area towards building more powerful coding models.
## Limitations
While we chose genuine GitHub commits as the source of our seed tasks, the data produced may still exhibit biases that deviate from real-world applications. Moreover, our approach did not encompass code changes involving cross-file contexts, which are common in development. We hope to explore these aspects further and incorporate additional programming languages in our future research.
## Ethics Statement
This research paper adheres to the ethical guidelines and principles set forth by the Conference on Empirical Methods in Natural Language Processing (EMNLP) and the wider scientific community. All real-world data were collected only from public repositories on GitHub.
2309.13680
Observational Signatures: Shadow cast by the effective metric of photons for black holes with rational non-linear electrodynamics
This study explores spherically symmetric non-linear electrodynamics black holes and their effects on light propagation. We derive the governing metric, revealing radial coordinate dynamics within the event horizon. We analyze photon trajectories, finding that increasing magnetic charge expands the horizon and emission range. Furthermore, with the help of the Event Horizon Telescope results, we constrain parameters and emission profiles. Direct emission dominates, while lensing rings play a lesser role. Comparing with Schwarzschild black holes, we observe higher intensity but a wider emission region in non-linear electrodynamics black holes. This work enhances our understanding of modified spacetimes and their impact on black hole properties.
Akhil Uniyal, Sayan Chakrabarti, Mohsen Fathi, Ali Övgün
2023-09-24T16:08:15Z
http://arxiv.org/abs/2309.13680v2
Observational Signatures: Shadow cast by the effective metric of photons for black holes with rational non-linear electrodynamics ###### Abstract This study explores spherically symmetric non-linear electrodynamics black holes and their effects on light propagation. We derive the governing metric, revealing radial coordinate dynamics within the event horizon. We analyze photon trajectories, finding that increasing magnetic charge expands the horizon and emission range. Using data from the Event Horizon Telescope, we constrain parameters and emission profiles. Direct emission dominates, while lensing rings play a lesser role. Comparing with Schwarzschild black holes, we observe higher intensity but a wider emission region in non-linear electrodynamics black holes. This work enhances our understanding of modified spacetimes and their impact on black hole properties. Black holes; non-linear electrodynamics; shadow cast; deflection angle; thin accretion disk pacs: 95.30.Sf, 04.70.-s, 97.60.Lf, 04.50.+h ## I Introduction Black holes (BHs) continue to captivate scientists due to their enigmatic nature and their ongoing challenge to our understanding. The theoretical framework for BHs originates from the pioneering works of Schwarzschild [1] and subsequent contributions by Finkelstein [2]. The exploration of BHs gained significant momentum after the discovery of Cygnus X-1 and its subsequent identification [3; 4]. However, the groundbreaking observations of M87* [5] and Sgr A* [6] by the Event Horizon Telescope (EHT) propelled the BH research forward. These observations provided profound insights into the behavior of light in the strong gravitational fields of BHs. Furthermore, the EHT observations of M87* unveiled the presence of a mysterious magnetic field that may hold clues to the origin of its powerful jets [7; 8; 9]. Similarly, assessments of Sgr A* by the EHT explored alternative theories of gravity beyond general relativity (GR), offering constraints on modified gravity theories [10], which have been recently employed in Ref. [11] to constrain modified theories of gravity. These assessments also shed light on the possibility of compact objects at the cores of galaxies, potentially constituting active galactic nuclei (AGNs). Understanding BHs can provide invaluable insights into the fundamental nature of the Universe, as their extreme gravitational fields serve as unique testing grounds for theories that extend beyond terrestrial laboratories. Indeed, despite the remarkable successes of GR in passing astrophysical tests [12], the theory still leaves unanswered questions, such as the origins of the accelerated expansion of the universe, the flat galactic rotations curves, anti-lensing, anisotropies on the cosmic microwave background radiation, the coincidence problem and etc. [13; 14; 15; 16; 17; 18]. Many scientists believe that the aforementioned phenomena arise from the mysterious aspects of the universe, which have not been adequately explained thus far. In order to account for these phenomena, it is believed that modifications to GR are necessary [19; 20; 21]. Our study is motivated by the need to understand deviations from the linear superposition of electromagnetic fields, which are well-established at macroscopic and atomic levels but become significant at the subatomic level due to the intense fields near charged particles. This departure from linearity challenges the classical Maxwell electromagnetic theory, leading to singularities. 
When subjected to strong electromagnetic fields (EM), the behavior of light can be effectively likened to its passage through a dispersive medium. As the electromagnetic field strength nears critical thresholds, such as the critical electric field (\(E_{\rm cr}\approx 10^{18}\,{\rm V/m}\)) or the critical magnetic field (\(B_{\rm cr}\approx 10^{9}\,{\rm T}\)), the influence of external fields on the quantum properties of the vacuum becomes notably pronounced [22]. These effects can be phenomenally described by classical theories characterized by Lagrangian that exhibit non-linear dependence on the two fundamental electromagnetic invariants. In the presence of extremely strong electromagnetic fields, such as near critical values, the effects on vacuum quantum properties become notable. Additionally, there have been discussions regarding the possibility of removing BH singularities in the framework of GR by employing the Born-Infeld non-linear electrodynamics (NED) model [23]. This approach enables the generation of regular BH spacetimes. The idea was originally sparked by Bardeen, who introduced a regular spherically symmetric BH with a purely magnetic charge using the linear Maxwell theory [24]. We are particularly interested in the rational NED. This concept has given rise to the creation of numerous regular BH spacetimes and continues to be a significant area of research in the field of BH studies (see for examples Refs. [25; 26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36; 37; 38; 39; 40; 41; 42]). In the present study, the investigation includes the consideration of a particular model presented in Ref. [43], adding to the existing body of research in this field. Therefore, through the removal of singularities and incorporation of quantum corrections, these models aim to shed light on the behavior of the electromagnetic field near extremely gravitating systems. As a result, they have become a subject of great interest within the scientific community, as they have the potential to explore unresolved phenomena in modern cosmology, such as the Big Bang singularity, cosmic inflation, and the universe's accelerated expansion (see, for example, Refs. [44; 45; 46; 47; 48]). In this study, driven by the same research interest, our focus lies on investigating the observational signatures of a NED BH. Specifically, we aim to provide precise constraints on the shadow size and the angle of light deflection associated with such a BH. The investigation of the gravitational lensing effect caused by BHs is an active research area in the fields of astrophysics and cosmology. Weak gravitational lensing refers to the phenomenon where the trajectory of light is slightly deflected when it traverses a region affected by a gravitational field. As a consequence, distant objects such as galaxies and quasars can appear distorted in their images, and in some cases, multiple images of the same object can be formed. The verification of Einstein's theory of relativity through the Eddington experiment, which involved observing gravitational lensing, established this method as a crucial tool in astrophysics. As a result, numerous studies and papers have since focused on utilizing weak gravitational lensing for various astrophysical investigations [49; 50; 51; 52; 53; 54; 55; 56; 57; 58]. 
In fact, the theoretical study of BH shadows and their constraints based on observational data has garnered significant interest among scientists, leading to numerous dedicated publications (see, for instance, References [11; 59; 12; 13; 14]). Recently, the advent of silhouette imaging by the EHT has amplified the importance of reliable methods for visualizing BHs with accretion disks as their sources of illumination. This significance was initially recognized by Luminet in 1979 [115], who computed the radiation emitted from a thin accretion disk surrounding a Schwarzschild BH and proposed a ray-traced image of the disk. Generally, this type of accretion is based on models such as the Shakura-Sunyaev [116], Novikov-Thome [117], and Page-Thorne [118] models, where the disk is assumed to be thin both geometrically and optically. In light of these assumptions and the growing interest in BH imaging, a new method for simulating higher-order light rings in BHs with thin accretion disks was proposed in Ref.[119]. Since then, this method has been employed in several publications (see, for example, Refs.[120; 121; 122; 123; 124; 125; 126; 127; 128; 129; 130; 131; 132; 133]), and is also of importance in our paper for the analysis of the shadow of the aforementioned NED BH. This BH possesses a distinctive and intriguing characteristic: it alters the geometric background through which photons propagate. In the context of linear electrodynamics and vacuum, it is well-established that electromagnetic waves travel along the null geodesics of spacetime. However, in the case of self-interacting or NED theories, this is no longer true. Instead, light rays deviate from the null geodesics and traverse the effective metric background, which is modified from the original metric [134; 22]. This phenomenon is also observed in perturbative theories, where in the high-energy limit, the perturbative effective potential of NED coincides with a function governing the motion of photons in the gravitational field of a central object [135; 136; 137]. Thus, the NED BH exhibits a remarkable interplay between its gravitational and electromagnetic properties, leading to deviations from the expected behavior based on linear electrodynamics. This paper focuses on investigating the detailed impact of the effective metric background on the trajectories of photons, as well as its potential influence on observational signatures in the vicinity of a magnetically charged spherically symmetric NED BH. This effective metric is obtained by modifying the original spacetime metric. To achieve our objective, we structure this paper as follows: Sect. II introduces the physical origin and spacetime structure of the NED BH, providing essential background information. The derivation of the effective metric is presented in Section III, which also includes an analysis of the behavior of light rays in the BH's exterior. In Sect. IV, we calculate the diameter of the BH's shadow and compare it with observations of M87* and Sgr A*, enabling us to constrain the NED parameters of the spacetime. Next, in Sect. V, we employ fully algebraic methods to compute the weak deflection angle of light near the BH. Section VI employs ray-tracing techniques to visualize the BH's shadow when an optically thin accretion disk with different emission profiles is present. Furthermore, in Sect. VII, we extend the same procedure to visualize the BH's shadow under the condition of infalling accretion. Finally, we conclude our study in Sect. 
VIII, summarizing our findings and discussing potential future research directions. Bhs with rational NED In this study, we consider the rational NED theory proposed by Kruglov in Ref. [138]. The Lagrangian density employed in this framework is considered such that it obeys the correspondence principle. This principle states that in the weak field limit, the non-linearity should be absent, ensuring that the field equations align with the classical Maxwell equations of classical electrodynamics, as stated in Ref. [139]. Hence, we opt the form \[\mathcal{L}=-\frac{\mathcal{F}}{2\beta\mathcal{F}+1}. \tag{1}\] In the given expression, the parameter \(\beta\) is a non-negative quantity with dimensions of (length)\({}^{4}\). The quantity \(\mathcal{F}\) is defined as \(\mathcal{F}=(1/4)F_{\mu\nu}F^{\mu\nu}=(B^{2}-E^{2})/2\), where \(F_{\mu\nu}=\partial_{\mu}A_{\nu}-\partial_{\nu}A_{\mu}\) represents the field tensor. The symmetrical energy-momentum tensor is given by [140] \[T_{\mu\nu}=-\frac{F_{\mu}^{\ \alpha}F_{\nu\alpha}}{(1+2\beta\mathcal{F})^{2}}- g_{\mu\nu}\mathcal{L}, \tag{2}\] by means of which, we derive the energy density \[\rho=T_{0}^{\ 0}=\frac{\mathcal{F}}{1+2\beta\mathcal{F}}+\frac{E^{2}}{(1+2 \beta\mathcal{F})^{2}}. \tag{3}\] For a healthy theory, the general principles of causality and unitarity must be upheld. According to the principle of causality, the group velocity of excitations over the background must be less than the speed of light, ensuring the absence of tachyons in the theory. The unitarity principle guarantees the absence of ghosts. These principles are satisfied in the case of \(\mathbf{E}\cdot\mathbf{B}=0\), if the following inequalities are upheld [141]: \[\mathcal{L}_{\mathcal{F}}\leq 0,\quad\mathcal{L}_{\mathcal{FF}} \geq 0, \tag{4a}\] \[\mathcal{L}_{\mathcal{F}}+2\mathcal{F}\mathcal{L}_{\mathcal{FF}} \leq 0, \tag{4b}\] where \(\mathcal{L}_{\mathcal{F}}\equiv\partial\mathcal{L}/\partial\mathcal{F}\). Therefore, utilizing Eq. (1), we can derive \[\mathcal{L}_{\mathcal{F}}=-\frac{1}{(1+2\beta\mathcal{F})^{2}}, \tag{5a}\] \[\mathcal{L}_{\mathcal{FF}}=\frac{4\beta}{(1+2\beta\mathcal{F})^{ 3}},\] (5b) \[\mathcal{L}_{\mathcal{F}}+2\mathcal{F}\mathcal{L}_{\mathcal{FF}} =\frac{6\beta\mathcal{F}-1}{(1+2\beta\mathcal{F})^{3}}. \tag{5c}\] Based on Eqs. (4) and (5), we can deduce that the principles of causality and unitarity are satisfied when \(6\beta\mathcal{F}\leq 1\) (\(\beta\geq 0\)). Consequently, when \(\mathbf{E}=0\), we have \(\beta B^{2}\leq 1/3\). Considering a static magnetic BH and taking into account the absence of electric charge (\(q_{e}=0\)) and assuming \(\mathcal{F}=q_{m}^{2}/(2r^{4})\) (where \(q_{m}\) represents the magnetic charge), we can derive the expression for the magnetic energy density from Eq. (3) as follows: \[\rho_{M}=\frac{B^{2}}{2(\beta B^{2}+1)}=\frac{q_{m}^{2}}{2(r^{4}+\beta q_{m}^{ 2})}. \tag{6}\] Now, let us consider the line element \[\mathrm{d}s^{2}=-A(r)\mathrm{d}t^{2}+\frac{1}{A(r)}\mathrm{d}r^{2}+r^{2}\left( \mathrm{d}\theta^{2}+\sin^{2}\theta\mathrm{d}\phi^{2}\right), \tag{7}\] with the lapse function being defined by \[A(r)=1-\frac{2M(r)}{r}, \tag{8}\] in which the mass function is \[M(r)=m_{0}+\int_{0}^{r}\rho(r)r^{2}\mathrm{d}r=m_{0}+m_{M}-\int_{r}^{\infty} \rho(r)r^{2}\mathrm{d}r. \tag{9}\] In this context, the BH's total mass is given by the sum of the Schwarzschild mass \(m_{0}\) and the magnetic mass \(m_{M}=\int_{0}^{\infty}\rho(r)r^{2}\mathrm{d}r\). Thus, utilizing Eqs. 
(6) and (9), we can express the mass function as \[M(x)=m_{0}+\frac{q_{m}^{3/2}}{8\sqrt{2}\beta^{1/4}}\left[\ln\frac{x^{2}-\sqrt{2 }x+1}{x^{2}+\sqrt{2}x+1}+2\arctan\left(\sqrt{2}x+1\right)-2\arctan\left(1- \sqrt{2}x\right)\right], \tag{10}\] where \(x=r/\sqrt[4]{\beta q_{m}^{2}}\). On the other hand, the magnetic mass of the BH is determined as follows: \[m_{M}=\int_{0}^{\infty}\rho_{M}(r)r^{2}\mathrm{d}r=\frac{\pi q_{m}^{3/2}}{4 \sqrt{2}\beta^{1/4}}\approx 0.56\frac{q_{m}^{3/2}}{\beta^{1/4}}. \tag{11}\] As expected, when \(q_{m}=0\), the magnetic mass \(m_{M}\) becomes zero, resulting in the Schwarzschild BH. Then, the lapse function can be obtained by employing Eqs. (8) and (7), resulting in the following expression: \[A(x)=1-\frac{2m_{0}G}{\sqrt[4]{\beta q_{m}^{2}}x}-\frac{q_{m}G}{4\sqrt{2} \beta x}\left[\ln\frac{x^{2}-\sqrt{2}x+1}{x^{2}+\sqrt{2}x+1}+2\arctan\left( \sqrt{2}x+1\right)-2\arctan\left(1-\sqrt{2}x\right)\right]. \tag{12}\] In the limit of \(r\rightarrow\infty\), the lapse function (12) can be approximated as \[A(r)=1-\frac{2mG}{r}+\frac{q_{m}^{2}G}{r^{2}}+\mathcal{O}(r^{-5}), \tag{13}\] where \(m=m_{0}+m_{M}\). Consequently, one can deduce from Eq. (13) that the correction to the Reissner-Nordstrom (RN) solution, is of the order \(\mathcal{O}(r^{-5})\). Additionally, when \(m_{0}=0\) and \(r\to 0\), Eq. (12) indicates the presence of the asymptotic \[A(r)=1-\frac{Gr^{2}}{\beta}+\frac{Gr^{6}}{7\beta^{2}q_{m}^{2}}-\frac{Gr^{10}}{ 11\beta^{3}q_{m}^{4}}+\mathcal{O}(r^{12}), \tag{14}\] possessing a de Sitter core. Note that, the solution given by Eq. (14) is regular, as it approaches unity as \(r\) tends to zero. However, when \(m_{0}\neq 0\), the solution becomes singular, leading to \(A(r)\) diverging to infinity. With the introduction of the geometrical structure of the BH spacetime under consideration, we can now move forward to the main objectives of this study. We begin by examining the dynamics of light rays within the effective geometry of the BH. ## III Light propagation around the NED BH exterior in the effective metric In the context of NED, electromagnetic fluctuations travel along an _effective_ light cone [142; 143; 144; 145; 146; 147; 148; 149; 150; 151], which generally differs from the standard geometrically-defined light cones [152; 153]. Notably, for a general theory of NED, characterized by two independent four-dimensional relativistic invariants, \(\mathbf{F}\) (as defined above) and \(\mathbf{F}\star\mathbf{F}\), there exist (in general) two effective light cones, each associated with a specific polarization. This phenomenon is referred to as _birefringence_, and it supports the interpretation of electromagnetic fluctuations propagating on a NED background as a medium (independent of their coupling to gravity). In the case of NED models solely dependent on \(\mathbf{F}\) (with no dependence on \(\mathbf{F}\star\mathbf{F}\)), birefringence does not occur in general1. In such scenarios, the single effective light cone can be geometrically described by considering photons propagating along null geodesics of an effective metric tensor \(g_{\text{eff}}^{\mu\nu}\), which relies on the contributions of the NED source to the energy-momentum tensor [152; 153; 156; 22]. The expression for the effective metric tensor is given as [22] Footnote 1: However, birefringence phenomena can arise in NED models that solely depend on \(\mathbf{F}\) when external magnetic fields are present [154; 155]. 
\[g_{\text{eff}}^{\mu\nu}=g^{\mu\nu}\mathcal{L}_{\mathcal{F}}-4\mathcal{L}_{ \mathcal{F}\mathcal{F}}F_{\alpha}^{\mu}F_{\nu}^{\alpha}. \tag{15}\] And, therefore, for a magnetically charged spherically symmetric BH, the line element will take the following form: \[\mathrm{d}s_{\text{eff}}^{2}=g_{\mu\nu}^{\text{eff}}\mathrm{d}x^{\mu}\mathrm{d} x^{\nu}=\frac{1}{\mathcal{L}_{\mathcal{F}}}\left(g_{tt}\mathrm{d}t^{2}+g_{rr} \mathrm{d}r^{2}\right)+\frac{g_{\theta\theta}}{\Phi}\mathrm{d}\theta^{2}+\frac {g_{\phi\phi}}{\Phi}\mathrm{d}\phi^{2}, \tag{16}\] where \[\Phi=\mathcal{L}_{\mathcal{F}}+2\mathcal{F}\mathcal{L}_{\mathcal{FF}}. \tag{17}\] In this case, the Lagrangian associated with the geodesic motion in the spacetime described by the line element (16) is defined as \[\mathcal{L}=\frac{1}{\mathcal{L}_{\mathcal{F}}}\left(g_{tt}\dot{t}^{2}+g_{rr} \dot{r}^{2}\right)+\frac{g_{\theta\theta}}{\Phi}\dot{\theta}^{2}+\frac{g_{ \phi\phi}}{\Phi}\dot{\phi}^{2}, \tag{18}\] where the dot represents the derivative with respect to the affine parameter. Now, considering the equatorial plane (i.e. \(\theta=\pi/2\)), we can express the equations of motion for null geodesics as \[\dot{t}=-\frac{\mathcal{E}\mathcal{L}_{\mathcal{F}}}{g_{tt}}, \tag{19}\] \[\dot{\phi}=\frac{L\Phi}{g_{\phi\phi}},\] (20) \[\frac{1}{\mathcal{L}_{\mathcal{F}}}\left(g_{tt}\dot{t}^{2}+g_{rr }\dot{r}^{2}\right)+\frac{g_{\phi\phi}}{\Phi}\dot{\phi}^{2}=0, \tag{21}\] where \(\mathcal{E}\) and \(L\) represent the energy and angular momentum associated with the null geodesics, respectively. Combining these equations yields \[\left(\frac{dr}{d\phi}\right)^{2}=-\frac{g_{\phi\phi}\mathcal{L}_{\mathcal{F }}}{g_{rr}\Phi}-\frac{\mathcal{E}^{2}g_{\phi\phi}^{2}\mathcal{L}_{\mathcal{F }}^{2}}{L^{2}g_{tt}g_{rr}\Phi^{2}}. \tag{22}\] To determine the turning point where circular orbits occur, we initially employ the condition of \(\dot{r}=0\). By utilizing Eqs. (19)-(21), we can derive the impact parameter associated with the null geodesics as \[b=\frac{L}{\mathcal{E}}=\sqrt{-\frac{g_{\phi\phi}\mathcal{L}_{ \mathcal{F}}}{g_{tt}\Phi}}. \tag{23}\] The impact parameter plays a crucial role in determining the size of the BH shadow. Considering the effective metric (15), and the relationship \(g_{rr}=-g_{tt}^{-1}=A(r)^{-1}\), one can recast Eq. (22) as \[\left(\frac{\mathrm{d}r}{\mathrm{d}\phi}\right)^{2}=\frac{r^{2} \mathcal{L}_{\mathcal{F}}A(r)}{\Phi}\left[\frac{h^{2}(r)}{b^{2}}-1\right], \tag{24}\] in which \[h^{2}(r)=-\frac{\mathcal{L}_{\mathcal{F}}}{\Phi}\frac{r^{2}}{A( r)}. \tag{25}\] Note that for marginally stable circular orbits, an additional condition must be imposed, namely \(\ddot{r}=0\), which leads to the following result: \[\frac{2b^{2}\Phi}{r^{3}\mathcal{L}_{\mathcal{F}}A(r)}-\frac{2A^{ \prime}(r)}{A^{3}(r)}+\frac{b^{2}\Phi A^{\prime}(r)}{r^{2}\mathcal{L}_{ \mathcal{F}}A^{2}(r)}+\frac{b^{2}\Phi\mathcal{L}_{\mathcal{F}}^{\prime}}{r^{2 }\mathcal{L}_{\mathcal{F}}^{2}A(r)}-\frac{b^{2}\Phi^{\prime}}{r^{2}\mathcal{L }_{\mathcal{F}}A(r)}=0. \tag{26}\] in which prime denotes differentiation with respect to \(r\). 
Substituting the expression (23) into the above relation yields \[\left[r\Phi\mathcal{L}_{\mathcal{F}}^{\prime}+\mathcal{L}_{\mathcal{F}} \Big{(}2\Phi-r\Phi^{\prime}\Big{)}\right]A(r)-r\mathcal{L}_{\mathcal{F}}\Phi A ^{\prime}(r)=0, \tag{27}\] which is equivalent to the condition \[\frac{\mathrm{d}}{\mathrm{d}r}h^{2}(r)=0, \tag{28}\] that governs the radius of the photon sphere, denoted as \(r_{\mathrm{ph}}\), where unstable circular orbits occur. Adopting the lapse function (12), and expanding up to the fourth order of \(r\), the condition (28) yields the depressed quartic \[r^{4}+ar+b=0, \tag{29}\] where \[\mathrm{a} =-\frac{3\beta q_{m}^{2}}{52Gm_{0}}, \tag{30a}\] \[\mathrm{b} =\frac{9Gm_{0}\beta q_{m}^{2}}{52Gm_{0}}. \tag{30b}\] The above equation has the four solutions \[r_{p_{1}} =\frac{1}{2}\left[-\bar{\mathrm{c}}+\sqrt{\bar{\mathrm{c}}^{2}-4 \bar{\mathrm{d}}}\,\right], \tag{31}\] \[r_{p_{2}} =\frac{1}{2}\left[-\bar{\mathrm{c}}-\sqrt{\bar{\mathrm{c}}^{2}-4 \bar{\mathrm{d}}}\,\right],\] (32) \[r_{p_{3}} =\frac{1}{2}\left[-\bar{\mathrm{e}}+\sqrt{\bar{\mathrm{e}}^{2}+4 \bar{\mathrm{f}}}\,\right],\] (33) \[r_{p_{4}} =\frac{1}{2}\left[-\bar{\mathrm{e}}-\sqrt{\bar{\mathrm{e}}^{2}-4 \bar{\mathrm{f}}}\,\right], \tag{34}\] where \[\bar{\mathrm{c}} =\sqrt{\bar{\mathrm{u}}_{1}}, \tag{35a}\] \[\bar{\mathrm{e}} =-\sqrt{\bar{\mathrm{u}}_{1}},\] (35b) \[\bar{\mathrm{d}} =\frac{\bar{\mathrm{u}}_{1}}{2}-\frac{\mathrm{a}}{2\sqrt{\bar{ \mathrm{u}}_{1}}},\] (35c) \[\bar{\mathrm{f}} =\frac{\bar{\mathrm{u}}_{1}}{2}+\frac{\mathrm{a}}{2\sqrt{\bar{ \mathrm{u}}_{1}}}, \tag{35d}\] in which \[\bar{\mathrm{u}}_{1}=\sqrt{\frac{\bar{\xi}_{2}}{3}}\cosh\left(\frac{1}{3} \arccos\left(3\bar{\xi}_{3}\sqrt{\frac{3}{\bar{\xi}_{2}^{3}}}\,\right)\right), \tag{36}\] with \(\bar{\xi}_{2}=16\mathrm{b}^{2}\) and \(\bar{\xi}_{3}=4\mathrm{a}^{2}\). Note that, when we expand the differential equation (28) up to the third order of \(r\), it simplifies to the first order equation \(r=3Gm_{0}\). This equation provides the radius of the photon sphere around a Schwarzschild BH. Upon checking the solutions in Eqs. (31)-(34), it becomes evident that \(r_{p_{3}}\) and \(r_{p_{4}}\) are of complex values (i.e. \(r_{p_{3}},r_{p_{4}}\in\mathbb{C}\)), and therefore, we disregard them. Plotting the two remaining solutions in Fig. 1, we observe that \(r_{p_{1}}<0\), while \(r_{p_{2}}>0\). Thus, we can define the radius of the photon sphere for the BH as \(r_{\mathrm{ph}}=r_{p_{2}}\). It is also essential to highlight that preserving the sign of the effective metric background in Eq. (16), requires the imposition of a specific condition on the lapse function. To achieve this, we rigorously solve the equations and derive the precise condition governing the radial distance. This condition leads to the establishment of a minimum permissible value for the radial coordinate, denoted by \(r_{\mathrm{eff}}=(3\beta q^{2})^{1/4}\). We created a three-dimensional plot to explore the relationship between the parameters \(\beta\) and \(q\) concerning \(r_{\mathrm{eff}}\) (on the left) and the BH horizon \(r_{\mathrm{h}}\) (on the right) in Fig. 2. Our observations reveal that, in our particular scenario, \(r_{\mathrm{h}}\) consistently exceeds \(r_{\mathrm{eff}}\), thereby allowing us to study photon motion directly from the BH horizon. However, in cases where \(r_{\mathrm{eff}}>r_{\mathrm{h}}\), the photon motion should be investigated away from \(r_{\mathrm{eff}}\) rather than the horizon. 
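For readers who wish to check these radii numerically, a minimal sketch is given below. It is not the authors' code: it assumes \(G=c=1\), \(m_{0}=1\), and the sample values \(\beta=0.1\), \(q_{m}=0.6\); it evaluates the mass function of Eq. (9) by direct quadrature rather than through the closed form (10); and, using Eq. (5), it takes \(|\mathcal{L}_{\mathcal{F}}/\Phi|=(1+2\beta\mathcal{F})/(1-6\beta\mathcal{F})\) so that the quantity entering \(h^{2}(r)\) of Eq. (25) is positive outside \(r_{\rm eff}\), up to the sign convention adopted for \(g_{tt}\).

```python
# Numerical cross-check of r_eff, r_h and r_ph (a sketch under the stated
# assumptions, not the authors' code). Units: G = c = 1, Schwarzschild mass m0 = 1.
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq, minimize_scalar

m0, qm, beta = 1.0, 0.6, 0.1   # sample parameters

def rho(r):
    # magnetic energy density, Eq. (6)
    return qm**2 / (2.0 * (r**4 + beta * qm**2))

def mass(r):
    # mass function, Eq. (9), by direct quadrature instead of the closed form (10)
    return m0 + quad(lambda s: rho(s) * s**2, 0.0, r)[0]

def A(r):
    # lapse function, Eq. (8)
    return 1.0 - 2.0 * mass(r) / r

def h2(r):
    # h^2(r) of Eq. (25); |L_F / Phi| = (1 + 2*beta*F)/(1 - 6*beta*F), F = qm^2/(2 r^4)
    F = qm**2 / (2.0 * r**4)
    return r**2 / A(r) * (1.0 + 2.0 * beta * F) / (1.0 - 6.0 * beta * F)

r_eff = (3.0 * beta * qm**2) ** 0.25          # minimum radius allowed by the effective metric
r_h = brentq(A, 0.1, 10.0)                    # event horizon (single sign change in this bracket)
# photon sphere: d h^2 / dr = 0, i.e. the minimum of h^2 outside the horizon
res = minimize_scalar(h2, bounds=(1.001 * r_h, 10.0), method="bounded")
r_ph, b_ph = res.x, np.sqrt(res.fun)

print(f"r_eff = {r_eff:.3f}, r_h = {r_h:.3f}, r_ph = {r_ph:.3f}, b_ph = {b_ph:.3f}")
```

For these parameters the script returns \(r_{\rm eff}<r_{\rm h}<r_{\rm ph}\), consistent with the ordering discussed above, and \(b_{\rm ph}\) obtained this way is the critical impact parameter used below for the shadow.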
Consequently, the effective metric background imposes this additional constraint on the photon trajectories around the BH. In this context, the most important phenomenon revolves around the propagation of light rays in the effective metric, where the aforementioned parameters play a significant role. By solving the equations of motion (19)-(21) in the effective spacetime geometry, we have simulated the orbit of light rays in the equatorial plane, as depicted in the bottom panels of Fig. 3. The top panels of the figure illustrate the contribution of these rays to the formation of the photon and lensing rings. The horizontal axis represents the impact parameter, \(b\), while the vertical axis, \(n\), corresponds to the number of times the rays cross the BH's plane. In other words, it shows the number of half-orbits that the light rays undergo during their trajectories. The diagrams were generated for a fixed \(\beta\)-parameter and three different values of the magnetic charge. As evident from the figures, with an increase in the magnetic charge, the size of the horizon expands, leading to a higher likelihood of light rays being absorbed by the BH. We have categorized the photon trajectories into three groups, following Ref. [157]. For light rays with \(n<3/4\), we observe the direct emission profile, where the distant observer receives the light directly from the light source (such as the BH's accretion disk). In the range \(3/4<n<5/4\), the observer receives a lensed image of the backside of the source, as the light rays cross the BH's plane twice. This phenomenon corresponds to the formation of the lensing ring. For \(n>5/4\), the light rays cross the BH's plane more than twice, resulting in the formation of photon rings of higher order. The top panels of Fig. 3 depict the thickness of the three aforementioned categories for the adopted values of the BH parameters. We continue our discussion by validating our study with real astrophysical data from the EHT, enhancing the significance of our findings for understanding BH properties and light propagation. Figure 1: The behavior of \(r_{p_{i}}\), \(i=1,2\), concerning changes in the magnetic charge \(q_{m}\) and the \(\beta\)-parameter. Figure 2: Three-dimensional plots to illustrate the dependence of the minimum allowed values of \(r_{\rm eff}\) (left panel) and the BH horizon \(r_{\rm h}\) (right panel) on the \(\beta\)-parameter and the magnetic charge \(q_{m}\). ## IV Constraints from the M87* and Sgr A* In this section, we aim at constraining the magnetic charge and NED parameter by utilizing observed data from the EHT. For M87*, the angular diameter of the BH shadow is \(\theta_{\text{M87*}}=42\pm 3\)\(\mu\)as, the distance to the BH is \(d_{s}^{\text{M87*}}=16.8\) Mpc, and the mass is \(M_{\text{M87*}}=(6.5\pm 0.90)\times 10^{9}M_{\odot}\)[5]. For Sgr A*, the angular diameter of the shadow is \(\theta_{\text{Sgr A*}}=48.7\pm 7\mu\)as, the distance to the BH is \(d_{s}^{\text{Sgr A*}}=8277\pm 33\) pc, and the BH mass is \(M_{\text{Sgr A*}}=(4.3\pm 0.013)\times 10^{6}M_{\odot}\)[6]. By using this data and the formula from Ref. [158], the BH shadow diameter can be calculated as \[d_{\text{sh}}=\frac{d_{s}\theta}{M}. \tag{37}\] Using the above formula, one can calculate the shadow diameters for M87* and Sgr A* as \(d_{\text{sh}}^{\text{M87*}}=(11\pm 1.5)M\) and \(d_{\text{sh}}^{\text{Sgr A*}}=(9.5\pm 1.4)M\), respectively. 
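As a quick consistency check of Eq. (37), the sketch below converts the quoted EHT measurements into shadow diameters in units of the BH mass. The EHT numbers are those given above; the physical constants (parsec in metres, \(GM_{\odot}/c^{2}\)) are standard values and not taken from the paper.

```python
# Unit-conversion sketch for Eq. (37): d_sh = d_s * theta / M, expressed in units of M.
import numpy as np

PC_IN_M     = 3.0857e16                     # one parsec in metres
MSUN_IN_M   = 1.477e3                       # GM_sun / c^2 in metres
MUAS_IN_RAD = np.pi / (180.0 * 3600.0e6)    # one microarcsecond in radians

def shadow_diameter_in_M(theta_muas, distance_pc, mass_msun):
    """Angular diameter (muas), distance (pc) and mass (M_sun) -> d_sh / M."""
    physical_diameter = theta_muas * MUAS_IN_RAD * distance_pc * PC_IN_M   # metres
    return physical_diameter / (mass_msun * MSUN_IN_M)

print("M87*  :", round(shadow_diameter_in_M(42.0, 16.8e6, 6.5e9), 1))   # ~11.0
print("Sgr A*:", round(shadow_diameter_in_M(48.7, 8277.0, 4.3e6), 1))   # ~9.5
```

These values reproduce the central estimates \(d_{\rm sh}^{\rm M87*}\simeq 11M\) and \(d_{\rm sh}^{\rm Sgr\,A*}\simeq 9.5M\) quoted above.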
Next, utilizing the impact parameter (23) and the previously obtained value of \(r_{\text{ph}}\), we can calculate the shadow radius as \(r_{\text{sh}}=b_{\text{ph}}\), where \(b_{\text{ph}}=b(r_{\text{ph}})\). Consequently, the theoretical shadow diameter in the effective metric can be expressed as \(d_{\text{sh}}^{\text{theo}}=2b_{\text{ph}}\). Figure 4 illustrates the profile of this quantity over a potential range of \(\beta\) and \(q_{m}\), which could impact the photons due to the effective metric background. This comparison includes the shadow diameters of M87* and Sgr A*, for which we selected parameter values falling within the range of \(1\sigma\) and \(2\sigma\) uncertainties. Notably, Sgr A* imposes a more stringent constraint on the parameters than M87*. It is essential to note that observing BHs relies on gravitational lensing and the process of light deflection. Therefore, analyzing how the NED BH affects light ray trajectories is crucial. This analysis involves calculating the light deflection angle, which will be addressed in the following section.
Figure 3: The behavior of photons in the effective geometry plotted for \(\beta=0.1\) and, from left to right, for \(q_{m}=0.6,0.8,\) and \(1.0\). The red, orange, and black lines correspond to the photon ring, lensing ring, and direct emission, respectively. The green dotted circle represents the radius of unstable photon orbits, \(r_{\text{ph}}\). The black disk indicates the event horizon of the BH. In the top panels, the fractional number of orbits (\(n=\phi/(2\pi)\)) is displayed, where \(\phi\) represents the total change in the azimuth angle outside the horizon. The bottom panel illustrates selected photon trajectories, treating \(r\) and \(\phi\) as Euclidean polar coordinates.
## V Deflection angle of the NED BH in the effective geometry
Considering the line element (16), the canonical equations (19) and (20) together with the null condition (21) result in the angular equation of motion \[\left(\frac{dr}{d\phi}\right)^{2}=-\frac{g_{\phi\phi}^{2}\mathcal{L}_{\mathcal{F}}^{2}}{b^{2}\Phi^{2}}-\frac{g_{\phi\phi}\mathcal{L}_{\mathcal{F}}}{g_{rr}\Phi}. \tag{38}\] Assuming complete spherical symmetry, the above relation yields the integral equation \[\phi-\phi_{0}=\int_{r_{d}}^{\infty}\frac{dr}{\sqrt{-\frac{r^{4}\mathcal{L}_{\mathcal{F}}^{2}}{b^{2}\Phi^{2}}-\frac{r^{2}\mathcal{L}_{\mathcal{F}}A(r)}{\Phi}}}\equiv\int_{r_{d}}^{\infty}\frac{dr}{\sqrt{\mathcal{P}(r)}}, \tag{39}\] regarding the changes in the azimuth angle for deflecting light ray trajectories, where \(\phi_{0}\) is the initial azimuth angle, \(r_{d}\) is the minimum distance to the BH at which the deflection occurs, and \(\mathcal{P}(r)\) is the characteristic polynomial, which in the weak field limit, and taking into account the expressions in Eq. (5) and the lapse function (12), is given by \[\mathcal{P}(r)=\delta_{0}r\left(4r^{3}-g_{2}r-g_{3}\right)+\mathcal{O}(r^{5}), \tag{40}\] expanded up to the fourth order in \(r\), where \[\delta_{0}=-\frac{1}{12}\left(\frac{1}{3b^{2}}+\frac{Gq_{m}}{3\sqrt{\beta}}\right), \tag{41a}\] \[g_{2}=-\frac{1}{3\delta_{0}}, \tag{41b}\] \[g_{3}=-\frac{2Gm_{0}}{3\sqrt{q_{m}}\beta^{1/4}\delta_{0}}. \tag{41c}\] It is then readily verified for the discriminant of the cubic in the parenthesis of Eq. (40) that \(\Delta=g_{2}^{3}-27g_{3}^{2}<0\).
Hence, the quartic \(\mathcal{P}(r)=0\) has only one positive real root \(r_{d}>0\), and two complex conjugate roots \(r_{1}=r_{2}^{*}\), which are given by \[r_{d} =\sqrt{\frac{g_{2}}{3}}\cos\left(\frac{1}{3}\arccos\left(3g_{3} \sqrt{\frac{3}{g_{2}^{3}}}\right)\right), \tag{42}\] \[r_{1} =\sqrt{\frac{g_{2}}{3}}\cos\left(\frac{1}{3}\arccos\left(3g_{3} \sqrt{\frac{3}{g_{2}^{3}}}\right)+\frac{4\pi}{3}\right),\] (43) \[r_{2} =\sqrt{\frac{g_{2}}{3}}\cos\left(\frac{1}{3}\arccos\left(3g_{3} \sqrt{\frac{3}{g_{2}^{3}}}\right)+\frac{2\pi}{3}\right). \tag{44}\] (45) Now to obtain the deflection angle \[\Theta=2(\phi-\phi_{0})-\pi, \tag{46}\] for the light rays at the distance \(r_{d}\) from the BH, we directly integrate the Eq. (39), which yields \[\Theta=-\frac{1}{\sqrt{\delta_{1}}}\wp^{-1}\left(\frac{\chi_{1}}{12}\right)-\pi, \tag{47}\] where the inverse Weierstrassian \(\wp\) elliptic function with the invariants \(\zeta_{2}\) and \(\zeta_{3}\), is defined in terms of the integral \[\wp^{-1}(y)\equiv\wp^{-1}(y;\zeta_{2},\zeta_{3})=\int_{\infty}^{y}\frac{dy}{ \sqrt{4y^{3}-\zeta_{2}y-\zeta_{3}}}. \tag{48}\] In the solution (47) we have defined \[\delta_{1}=1-\frac{r_{1}}{r_{d}}-\frac{r_{2}}{r_{d}}+\frac{r_{1}r_{2}}{r_{d}^ {2}}, \tag{49}\] and the Weierstrass invariants are given as \[\tilde{g}_{2} =-\frac{1}{4}\left(\chi_{1}-\frac{\chi_{2}^{2}}{3}\right), \tag{50a}\] \[\tilde{g}_{3} =-\frac{1}{16}\left(\chi_{0}-\frac{\chi_{1}\chi_{2}}{3}+\frac{2 \chi_{2}^{3}}{27}\right), \tag{50b}\] where \[\chi_{0} =\frac{1}{\delta_{1}}, \tag{51a}\] \[\chi_{1} =\frac{1}{\delta_{1}}\left(3-\frac{r_{1}}{r_{d}}-\frac{r_{2}}{r_{ d}}\right),\] (51b) \[\chi_{2} =\frac{1}{\delta_{1}}\left(3-\frac{2r_{1}}{r_{d}}-\frac{2r_{2}}{r _{d}}+\frac{r_{1}r_{2}}{r_{d}^{2}}\right). \tag{51c}\] In Fig. 5, the behavior of the deflection angle \(\Theta\) has been demonstrated for some allowed values of the \(q_{m}\)-parameter, as constrained in Fig. 4. As expected, the deflection drops intensely with the increase in the impact parameter and approaches zero. For the special case of \(\beta=10\), however, after vanishing completely at a certain impact parameter, the deflection angle raises up to a smooth curve and then reaches a constant value as the impact parameter increases. Having completed a comprehensive analytical study of photon trajectories in the effective geometry of the NED BH, we now turn our focus to the next two sections, where we will employ a thin accretion model around the BH and utilize a ray-tracing method to simulate the shadow and rings. This approach will help us understand the observational signature of the BH. Figure 5: The evolution of the deflection angle \(\Theta\) versus the changes in the impact parameter \(b\) for the three cases of \(\beta=0.1,5\) and \(10\), plotted for \(\phi_{0}=0\) and five different values of \(q_{m}\), in accordance with the constraints obtained in Fig. 4. Observed emission profile using the direct emission, lensing, and photon ring characteristics In this section, we will examine the overall characteristics of observed emissions by considering specific emission profiles from the accretion disk in the equatorial plane. As the brightness decreases while the accretion disk extends outward from the BH, we define three distinct emitted intensity profiles (\(I_{\text{EM}}\)) based on their decay rate concerning the radial coordinate \(r\), as well as the inner disk radius. 
* Model 1: \[I_{\text{EM1}}(r)=\begin{cases}\left(\frac{1}{r-(r_{\text{ISCO}}-1)}\right)^{2},&r\geq r_{\text{ISCO}}\\ 0,&r<r_{\text{ISCO}}\end{cases}\]
* Model 2: \[I_{\text{EM2}}(r)=\begin{cases}\left(\frac{1}{r-(r_{\text{ph}}-1)}\right)^{3},&r\geq r_{\text{ph}}\\ 0,&r<r_{\text{ph}}\end{cases}\]
* Model 3: \[I_{\text{EM3}}(r)=\begin{cases}\frac{\frac{\pi}{2}-\arctan\left(r-(r_{\text{ISCO}}-1)\right)}{\frac{\pi}{2}-\arctan\left(r_{\text{ph}}\right)},&r\geq r_{\text{h}}\\ 0,&r<r_{\text{h}}\end{cases}\]

These models exhibit specific properties: the first model initiates from the ISCO (innermost stable circular orbit) position (\(r_{\text{ISCO}}\)), with the inner disk boundary set at the ISCO. The second model begins from the photon radius (\(r_{\text{ph}}\)), where the inner disk boundary is positioned at the photon sphere. Lastly, the third model originates from the horizon (\(r_{\text{h}}\)). The second model exhibits rapid decay, while the third model decays very slowly compared to the other models. To calculate the observed intensity, we utilize Liouville's theorem and express the observed intensity (\(I_{\nu^{\prime}}^{\text{obs}}\)) in terms of the emitted intensity from the disk (\(I_{\nu}^{\text{em}}\)) as \[I_{\nu^{\prime}}^{\text{obs}}=g^{3}I_{\nu}^{\text{em}}, \tag{52}\] where \(g=\nu_{o}/\nu_{e}=\sqrt{g_{tt}}\) is the redshift factor. Considering \(d\nu^{\prime}=g\,d\nu\) and integrating \(I_{\nu^{\prime}}^{\text{obs}}\) over all frequencies, we obtain \[I^{\text{obs}}=\int I_{\nu^{\prime}}^{\text{obs}}\,\mathrm{d}\nu^{\prime}=g^{4}I_{\text{em}}, \tag{53}\] where we have used \(I_{\text{em}}=\int I_{\nu}^{\text{em}}\,\mathrm{d}\nu\). Hence, the intensity observed by a distant observer is \[I(b)=\sum_{m}I^{\text{obs}}(r)\big{|}_{r=r_{m}(b)}, \tag{54}\] where the transfer function \(r_{m}(b)\) gives the radial coordinate of the \(m^{\text{th}}\) intersection of the light ray with the equatorial plane. The slope of this function provides the demagnification factor, which reveals the contributions of the direct emission, photon ring, and lensing ring to the observed emission intensity profile. This factor has been extensively studied in Refs. [129, 157, 159, 130]. Figures 6, 7, and 8 were plotted to understand the behavior with fixed \(\beta=0.1\) and varying \(q_{m}=0.6,0.8\) and \(1.0\), for the described emission profiles emitted from the thin accretion disk around the BH. Each row in the figures corresponds to a different model. For model 1, the emission profile starts from the ISCO and decays as the radial distance increases. The observed intensity exhibits multiple peaks corresponding to direct emission, lensing, and photon rings. The photon and lensing rings' peaks are narrower and smaller than the direct emission peak. The two-dimensional shadow image in the first row and third column shows a single bright ring inside the accretion disk's inner boundary, which is formed due to the existence of the photon sphere inside the ISCO. Direct emission contributes significantly to the brightness, and the shadow image displays additional rings resulting from lensing and photon ring effects. In model 2, the emission intensity profile starts from the photon sphere position and decreases with increasing radial distance. The observed emission profile also exhibits multiple peaks, with a prominent narrow peak and a subsequent decrease with the impact parameter. The shadow image in the second row, third column, shows the superimposed contribution of the lensing and photon rings on the direct emission, leading to an increased observed intensity area.
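To make the three emission profiles and the redshift scaling of Eqs. (52) and (53) concrete, the sketch below evaluates them numerically. The radii \(r_{\rm h}\), \(r_{\rm ph}\), \(r_{\rm ISCO}\) and the Schwarzschild-like lapse used here are illustrative placeholders only; in the paper these quantities follow from the effective NED metric for the chosen \((\beta,q_{m})\).

```python
import numpy as np

# Placeholder radii (in units of the BH mass); these are assumptions for the sketch.
r_h, r_ph, r_isco = 2.0, 3.0, 6.0

def I_em1(r):
    """Model 1: ~ 1/(r - (r_ISCO - 1))^2 outside the ISCO, zero inside."""
    out = np.zeros_like(r)
    m = r >= r_isco
    out[m] = (r[m] - (r_isco - 1.0)) ** -2
    return out

def I_em2(r):
    """Model 2: ~ 1/(r - (r_ph - 1))^3 outside the photon sphere, zero inside."""
    out = np.zeros_like(r)
    m = r >= r_ph
    out[m] = (r[m] - (r_ph - 1.0)) ** -3
    return out

def I_em3(r):
    """Model 3: slowly decaying arctan profile, starting at the horizon."""
    out = np.zeros_like(r)
    m = r >= r_h
    out[m] = (np.pi / 2 - np.arctan(r[m] - (r_isco - 1.0))) / (np.pi / 2 - np.arctan(r_ph))
    return out

def I_obs(r, lapse, I_em):
    """Observed intensity I_obs = g^4 I_em (Eqs. 52-53), with g = sqrt(g_tt) = sqrt(A(r))."""
    return lapse(r) ** 2 * I_em(r)   # g^4 = A(r)^2

# Purely illustrative Schwarzschild lapse, A(r) = 1 - 2/r, standing in for the NED lapse
A = lambda r: 1.0 - 2.0 / r
r = np.linspace(2.05, 20.0, 400)
profiles = {name: I_obs(r, A, f) for name, f in
            [("model 1", I_em1), ("model 2", I_em2), ("model 3", I_em3)]}
print({k: float(v.max()) for k, v in profiles.items()})
```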
Despite this superimposed contribution, direct emission remains dominant. Model 3 displays a peak at the horizon with a subsequent slow decrease to a certain value as the radial coordinate increases. The observed intensity has a large observational area due to the superimposed contribution of the photon and lensing rings on the direct emission. The lensing ring's contribution increases, but direct emission still dominates, and the photon ring has minimal influence. The two-dimensional shadow image in the third row, third column, demonstrates this behavior. As \(q_{m}\) increases with a fixed \(\beta=0.1\), the positions of the peaks shift significantly, distinguishing the NED BH from the Schwarzschild spacetime [157]. The background metric plays a crucial role in determining the peaks and the deflection angle, influencing the observational features of the BH. In the next section, we will explore the observational features of the infalling accreting matter.

Figure 6: The appearance of a face-on observed thin disk with varying emission profiles for \(q_{m}=0.6\) and \(\beta=0.1\). Each row corresponds to a different model: top row - model 1, second row - model 2, and third row - model 3. The emitted and observed intensities (\(I_{\rm EM}\) and \(I_{\rm obs}\)) in the plots are normalized to the maximum emitted intensity value outside the horizon (\(I_{0}\)).

Figure 7: The face-on observed thin disk for \(q_{m}=0.8\) and \(\beta=0.1\).

## VII Observational features of the shadow with infalling spherical accretion

In this section, we explore the observational characteristics of the BH accretion disk with infalling matter in the context of the effective metric background. We adopt the optically thin accretion disk model, where matter falls directly into the BH with negligible angular momentum [160]. During this spherical infall, the gas heats up and emits radiation due to the strong gravitational field around the BH. Consequently, we consider the observed specific intensity from the perspective of an observer at infinity as \[I_{\rm obs}=\int_{\Gamma}g^{3}j(\nu_{e})\,{\rm d}l_{\rm prop}, \tag{55}\] where \(g\), \(\nu_{e}\), and \(\nu_{\rm obs}\) are the redshift factor, the photon frequency at emission, and the observed photon frequency at infinity, respectively. To compute the above expression, it is necessary to define the emissivity per unit volume, \(j(\nu_{e})\), in the emitter's rest frame. For this work, we adopt a \(1/r^{2}\) profile, represented as \(j(\nu_{e})\propto\delta(\nu_{e}-\nu_{f})/r^{2}\), where the delta function \(\delta\) enforces monochromatic emission at the frequency \(\nu_{f}\), and \({\rm d}l_{\rm prop}\) denotes the infinitesimal proper length. The redshift factor can be expressed as \[g=\frac{\mathcal{K}_{\rho}u_{o}^{\rho}}{\mathcal{K}_{\sigma}u_{e}^{\sigma}}, \tag{56}\] where \(\mathcal{K}^{\mu}\) represents the four-velocity of the photon, and \(u_{o}^{\mu}=(1,0,0,0)\) is the four-velocity of the static observer at infinity. In this scenario, the components of the four-velocity of the infalling matter can be expressed as \[u_{e}^{t}=A(r)^{-1},\quad u_{e}^{r}=-\sqrt{1-A(r)},\quad u_{e}^{\theta}=u_{e}^{\phi}=0. \tag{57}\] Therefore, the four-velocity components for photons originating from the spherical disk are \[\mathcal{K}_{t}=\frac{1}{b},\quad\frac{\mathcal{K}_{r}}{\mathcal{K}_{t}}=\pm\frac{1}{A(r)}\sqrt{1-A(r)\frac{b^{2}}{r^{2}}}. \tag{58}\]
Here, the \(+\) and \(-\) signs correspond to the photons traveling towards or away from the BH. Hence, the redshift factor for the infalling accretion is given by \[g=\left[u_{e}^{t}+\left(\frac{\mathcal{K}_{r}}{\mathcal{K}_{t}}\right)u_{e}^{r}\right]^{-1}. \tag{59}\] In this case, the proper distance has the following expression: \[\mathrm{d}l_{\text{prop}}=\mathcal{K}_{\mu}u_{e}^{\mu}\mathrm{d}\lambda=\frac{\mathcal{K}_{t}}{g|\mathcal{K}_{r}|}\mathrm{d}r. \tag{60}\] Using all these expressions, we can now calculate the observed intensity for the static observer at infinity by integrating Eq. (55) over all frequencies, which yields \[I_{\text{obs}}\propto\int_{\Gamma}\frac{g^{3}\mathcal{K}_{t}\mathrm{d}r}{r^{2}|\mathcal{K}_{r}|}. \tag{61}\] Using this expression, we can explore the effect of the effective metric background on photons reaching the observer at infinity. In Fig. 9, we compare the observed intensity (\(I_{\rm obs}\)) profiles for the effective metric case with the Schwarzschild spacetime. In all cases, the intensity increases to a peak value at \(b\sim b_{\rm ph}\) and then decreases as \(b\) increases, eventually reaching zero. We also observe that the intensity decreases with the effective metric, but the observational area increases, indicating a decrease in brightness in the shadow image. In the figure, we fixed \(\beta=0.1\) and varied \(q_{m}\), together with the Schwarzschild case \(\beta=q_{m}=0\) (in black), \(q_{m}=0.6\) (in green), \(q_{m}=0.8\) (in blue), and \(q_{m}=1.0\) (in red). To observe this effect in the two-dimensional shadow plot, we present Fig. 10. A similar trend is evident, where increasing \(q_{m}\) results in decreased brightness but an increased observational area.

Figure 9: The intensity of infalling spherical accreting matter observed at various values of \(q_{m}\), each represented by a different color. The color scheme includes Schwarzschild in black, \(q_{m}=0.6\) in green, \(q_{m}=0.8\) in blue, and \(q_{m}=1.0\) in red, all with \(\beta=0.1\).

Figure 10: The BH shadow with infalling spherical accretion shown for \(q_{m}=0.6,0.8\) and \(1.0\), all with \(\beta=0.1\).

## VIII Summary and conclusions

In this paper, we examined a spherically symmetric NED BH solution. We considered the deviation of light rays from conventional null geodesics in NED, as they instead follow the null geodesics of the effective metric, obtained through modifications to the original spacetime. Consequently, we derived the effective metric, and in Fig. 2, we demonstrated that the minimum allowed condition for the radial coordinate (\(r_{\rm eff}\)) in our specific solution lies inside the horizon (\(r_{\rm h}\)). Subsequently, we investigated the photon trajectories around the BH, whose spacetime is defined by the effective background geometry for the light rays. In Fig. 3, we presented two-dimensional images of the photon trajectories for a specific value of the NED parameter \(\beta=0.1\) and different magnetic charge values of \(q_{m}=0.6,0.8\) and \(1.0\). We observed that as the magnetic charge increases, the BH horizon expands, along with an increased range for the direct emission, lensing, and photon rings. Furthermore, we constrained the spacetime parameters using data from the EHT for M87* and Sgr A* in Fig. 4. It was found that Sgr A* provides a more stringent constraint on the parameters. In our study, we selected parameter values within the \(1\sigma\) and \(2\sigma\) uncertainties and investigated the emission profiles around the BH.
Subsequently, before delving into the observational signatures of the BH, we focused on the analytical derivation of the deflection angle for the passing light rays. This step is important as it forms the foundation of the gravitational lensing process. We used the constraints obtained earlier for the magnetic charge, but we varied the NED parameter to emphasize its effect on the deflection angle, especially for larger values. This allowed us to gain insights into how the NED parameter influences the bending of light around the BH and its impact on the overall gravitational lensing phenomenon. Regarding the observational signatures, we focused on the scenario where the BH is illuminated by an optically thin accretion disk. We studied three specific models, as described in Section VI, for the emission profiles. For each model, we calculated the observed profile for the observer at infinity, while keeping the NED parameter fixed and varying the magnetic charge. The results were presented in Figures 6, 7, and 8. The observed emission profiles were then depicted in the two-dimensional images. In all the models, the direct emission consistently appeared prominent. However, in models 2 and 3, the contribution of the lensing ring increased in the observed emission, while the photon ring remained relatively small. These characteristics were evident in the form of brightness and rings in the respective figures. The results indicate that the direct emission plays a significant role in the observed brightness, while the lensing ring's contribution increases, and the photon ring remains relatively minor in all three models. This outcome is consistent across different values of the magnetic charge. Furthermore, we investigated the spherical infalling accretion around the NED BH and compared it with the Schwarzschild case. In Fig. 9, we demonstrated that the observed intensity peak is higher for the Schwarzschild BH compared to the NED BH; however, the observed angular region is larger for the NED BH. In all cases, the intensity peak occurs at \(b\sim b_{\rm ph}\), and as the impact factor \(b\) increases, the intensity gradually decreases until it eventually reaches zero. The impact of this effect can also be observed in the two-dimensional shadow images shown in Fig. 10, where we fixed the \(\beta\)-parameter and varied the magnetic charge. It is worth noting that the model retains its realism even when the NED parameter and magnetic charge are set to zero, as it corresponds to the well-known Schwarzschild case. However, the impact of NED on particle dynamics and light propagation in BH spacetimes presents an intriguing and potentially fruitful area for further exploration. Particularly fascinating is the investigation of strong gravitational lensing effects in NED BHs with accretion disks, especially as seen from the perspective of edge-on observers. Such research involves meticulous examination of higher-order photon rings for both static and stationary NED BHs. To ensure astrophysical reliability, all these studies must consider the constraints provided by the outcomes of the EHT. Subsequently, comparing the lensing features of NED BHs, including caustics, with observational expectations will be a crucial step. These investigations are left as topics for future works. ## Acknowledgement The paper was funded by the National Natural Science Foundation of China 11975145. The work of S.C. is supported by the SERB-MATRICS grant MTR/2022/000318, Govt. of India. M.F. 
acknowledges financial support from Vicerrectoria de Investigacion, Desarrollo e Innovacion - Universidad de Santiago de Chile (USACH), Proyecto DICYT, Codigo 042331CM_Postdoc. A.O. would like to acknowledge the contribution of the COST Action CA18108 - Quantum gravity phenomenology in the multi-messenger approach (QG-MM) and the COST Action CA21106 - COSMIC WISPers in the Dark Universe: Theory, astrophysics and experiments (CosmicWISPers). A.O. is funded by the Scientific and Technological Research Council of Turkey (TUBITAK).
2309.09814
Convolutional Deep Kernel Machines
Standard infinite-width limits of neural networks sacrifice the ability for intermediate layers to learn representations from data. Recent work (A theory of representation learning gives a deep generalisation of kernel methods, Yang et al. 2023) modified the Neural Network Gaussian Process (NNGP) limit of Bayesian neural networks so that representation learning is retained. Furthermore, they found that applying this modified limit to a deep Gaussian process gives a practical learning algorithm which they dubbed the deep kernel machine (DKM). However, they only considered the simplest possible setting: regression in small, fully connected networks with e.g. 10 input features. Here, we introduce convolutional deep kernel machines. This required us to develop a novel inter-domain inducing point approximation, as well as introducing and experimentally assessing a number of techniques not previously seen in DKMs, including analogues to batch normalisation, different likelihoods, and different types of top-layer. The resulting model trains in roughly 77 GPU hours, achieving around 99% test accuracy on MNIST, 72% on CIFAR-100, and 92.7% on CIFAR-10, which is SOTA for kernel methods.
Edward Milsom, Ben Anson, Laurence Aitchison
2023-09-18T14:36:17Z
http://arxiv.org/abs/2309.09814v3
# Convolutional Deep Kernel Machines ###### Abstract Deep kernel machines (DKMs) are a recently introduced kernel method with the flexibility of other deep models including deep NNs and deep Gaussian processes. DKMs work purely with kernels, never with features, and are therefore different from other methods ranging from NNs to deep kernel learning and even deep Gaussian processes, which all use features as a fundamental component. Here, we introduce convolutional DKMs, along with an efficient inter-domain inducing point approximation scheme. Further, we develop and experimentally assess a number of model variants, including 9 different types of normalisation designed for the convolutional DKMs, two likelihoods, and two different types of top-layer. The resulting models achieve around 99% test accuracy on MNIST, 92% on CIFAR-10 and 71% on CIFAR-100, despite training in only around 28 GPU hours, 1-2 orders of magnitude faster than full NNGP / NTK / Myrtle kernels, whilst achieving comparable performance. ## 1 Introduction Deep kernel machines (DKMs) (Yang et al., 2023) are a new family of kernel methods, obtained by taking the infinite-width limit of a deep Gaussian process (DGP). Infinite width limits are a widely used theoretical tool for understanding Bayesian neural networks (BNNs) and deep Gaussian processes (Lee et al., 2017; Jacot et al., 2018; Aitchison, 2020). In traditional infinite-width limits, BNNs and DGPs become equivalent to single-layer Gaussian processes (GPs) (Neal, 1995; Lee et al., 2017; Matthews et al., 2018; Pleiss & Cunningham, 2021). Critically, the resulting single-layer GP kernel is a fixed function of the inputs; computing this kernel involves computing a simple dot-product kernel of the raw inputs, then computing the transformation of the kernel implied by each network layer (Neal, 1995; Cho & Saul, 2009). This highlights a key flaw of traditional infinite-width limits: that the kernel and hence the "representation" is a fixed, deterministic function of the inputs that is not learned from data (e.g. see Aitchison, 2020; Pleiss & Cunningham, 2021). In contrast, representations in BNNs and DGPs are learned. The ability to use flexible, deep architectures to learn representations is critical to the excellent performance of modern deep learning (Bengio et al., 2013; LeCun et al., 2015). DKMs fix this issue, allowing representation learning in the infinite-width limit by modifying the likelihood (see Yang et al. 2023 for details; also see Yang & Hu 2021; Yang et al. 2022 for related work in the NTK / NN dynamics setting). Practically, computations in a DKM look very similar to computations in an infinite neural network. Specifically, the DKM starts by computing an initial dot-product kernel, then each layer transforms that kernel. The key difference is that in an infinite neural network, each kernel transformation is deterministic, whereas in a DKM, these transformations have parameters, and are learned. Naive, full-rank DKMs require \(\mathcal{O}(P^{3})\) compute, where \(P\) is the number of training examples. This is of course intractable for larger datasets, so Yang et al. (2023) additionally used methods from the DGP literature to develop sparse DKMs, which learn a small set of inducing points that summarise the training set. This allowed DKMs to be scaled up to larger datasets. However, they only considered fully-connected architectures. 
To explore the potential of DKMs as a practical machine learning method, it is important to consider other architectures that have already had success in deep learning. Hence in this paper, we introduce convolutional DKMs, which are much better suited to image tasks than the original DKM. While convolutional DKMs themselves are not overly challenging to define, it turns out that developing an efficient inducing point scheme in the convolutional setting is highly non-trivial. Standard inducing points live in the same space as the inputs, which in the convolutional setting might be images. However, images are too large, resulting in expensive \(\mathcal{O}(P_{i}^{3}W^{3}H^{3})\) operations, where \(P_{i}\) is the number of inducing points, and \(W,H\) are the width and height of the images. At the same time, efficient inducing patch schemes from the GP literature (van der Wilk et al., 2017; Blomqvist et al., 2018; Dutordoir et al., 2020) cannot be applied to DKMs, as they rely on intermediate features being available to perform comparisons using the kernel function, whereas DKMs only propagate kernel/Gram matrices through the layers, _not_ features. Therefore, we are forced to develop an entirely new inter-domain (Lazaro-Gredilla and Figueiras-Vidal, 2009; Hensman et al., 2017; Rudner et al., 2020) convolutional inducing point scheme, designed specifically for the DKM setting. To summarise, our contributions are: * Introducing convolutional DKMs, which achieve SOTA performance for kernel methods on CIFAR-10, with an accuracy of 92%. * Introducing an efficient inter-domain inducing point scheme for convolutional DKMs, which allows our largest model to train in 28 GPU hours, 1-2 orders of magnitude faster than full NNGP / NTK / Myrtle kernels. * Developing a number of different model variants, empirically investigating their performance, including 7 different normalisation schemes (Appendix C), two likelihoods (Appendix D.2) and two different top-layers (Appendix F.3). ## 2 Related Work Perhaps the most closely related work is Yang et al. (2023), which introduces deep kernel machines and their connections to DGPs and neural networks. However, they only consider fully-connected networks, and do not consider convolutional structure. One way to define a deep kernel machine is using the infinite-width limit of a deep kernel process. Deep kernel processes are fully Bayesian deep nonlinear function approximators that optimise a variational approximate posterior over Gram matrices (Aitchison et al., 2021; Ober and Aitchison, 2021; Ober et al., 2023). DKPs are similar to DKMs in the sense that they learn a flexible kernel over data, though DKMs are deterministic and correspond to a type of infinitely-wide NN. There is a body of literature on infinite width convolutional neural networks (CNNs) (Novak et al., 2018; Garriga-Alonso et al., 2018). These networks are sometimes known as "neural network Gaussian processes" (NNGPs; Neal, 1995; Lee et al., 2017; Matthews et al., 2018), as they can be understood as a Gaussian process over the outputs, with a kernel that is computed by recursively applying transformations corresponding to the nonlinearity and the convolution operations. However, infinite CNNs have a Gaussian process kernel that is a fixed, deterministic function of the inputs. They therefore do not allow for flexibility in the learned kernels/representations, unlike real, finite CNNs and unlike our convolutional DKMs (Aitchison, 2020; Yang et al., 2023). 
One source of inspiration for our efficient inducing point approximation scheme might be convolutional DPGs, which use inducing patches (van der Wilk et al., 2017; Blomqvist et al., 2018; Dutordoir et al., 2020). However, as discussed in the introduction, it is not possible to use their inducing point scheme in the convolutional DKM setting, as it requires access to features at each layer, while we work solely with Gram matrices. Instead, we must develop an alternative scheme. Our inducing points have no spatial component, but act like fully-connected features. As such, our DKM inducing point scheme is inter-domain (Lazaro-Gredilla and Figueiras-Vidal, 2009; Hensman et al., 2017; Rudner et al., 2020), in the sense that the inducing points do not mirror data-points (images), but instead store information about the function in a different domain. Of course, convolutional DKMs are very different from work on inter-domain inducing points in GPs, in that we are in the DKM, not GP setting, and also in that this past work on inter-domain inducing points is not in the convolutional setting. An alternative approach to getting a flexible kernel is to take the inputs, transform them through a NN (e.g. 10-40 layer CNN), then to use the outputs of the NN as inputs to a standard kernel. This approach is known as deep kernel learning (DKL; Calandra et al., 2016; Wilson et al., 2016, 2016; Bradshaw et al., 2017; Ober et al., 2021), and should inherit most of the properties and flexibility of the underlying neural network. Recent work has extended DKL by using infinite width neural networks in the optimisation process to act as a regulariser (Achituve et al., 2023). DKMs are very different to DKL in that there is no underlying neural network. Instead the DKM directly transforms and allows flexibility in the kernels, without ever needing to use neural network weights/features. Finally, there are a few other interesting modern approaches to using kernels. Wu et al. (2022) uses a kernel viewpoint to design a closed-form, greedy layerwise training procedure for neural network weights. Their approach is very different if for no other reason than they are not in the convolutional setting, and they ultimately end up with NN weights, whereas DKMs work entirely with kernels. ## 3 Background **Deep Kernel Machines.** A detailed derivation of DKMs can be found in Appendix A or Yang et al. (2023). In a nutshell, DKMs are derived from deep Gaussian processes where the intermediate layers \(\mathbf{F}^{\ell}\in\mathbb{R}^{P\times N_{\ell}}\), with \(P\) datapoints and \(N_{\ell}\) features, have been made infinitely wide (\(N_{\ell}\rightarrow\infty\)). However, in the traditional infinite-width limit, representations become fixed (e.g. Aitchison, 2020; Yang et al., 2023). In contrast, the DKM (Yang et al., 2023) modifies the likelihood function so that, in the infinite-width setting, representations remain flexible and are learned from data. While the full derivation is somewhat involved, the key idea of the resulting algorithm is to consider the Gram matrices (which are training-example by training-example matrices like kernels) at each layer of the model as _parameters_, and optimise them according to an objective. Here we give a short, practical introduction to DKMs. 
As DKMs are ultimately infinite-width DGPs, and we cannot practically work with infinite-width features, we must instead work in terms of Gram matrices: \[\mathbf{G}^{\ell}=\tfrac{1}{N_{\ell}}\mathbf{F}^{\ell}(\mathbf{F}^{\ell})^{T} \in\mathbb{R}^{P\times P}. \tag{1}\] Note that from this point on we will always refer to \(\mathbf{G}^{\ell}\) in Eq. (1) as a "Gram matrix", whilst the term "kernel matrix" will carry the usual meaning of a matrix of similarities computed using a kernel function such as a squared exponential. Ordinarily, at each layer, a DGP computes the kernel matrix \(\mathbf{K}_{\text{features}}(\mathbf{F}^{\ell-1})\in\mathbb{R}^{P\times P}\) from the previous layer's features. Since we no longer have access to \(\mathbf{F}^{\ell-1}\), but instead propagate \(\mathbf{G}^{\ell-1}\), computing \(\mathbf{K}_{\text{features}}(\mathbf{F}^{\ell-1})\) is not possible in general. However, it turns out that for many kernels of interest, we can compute the kernel from the Gram matrix, \(\mathbf{G}^{\ell-1}\). For example, isotropic kernels (such as the squared exponential) only depend on \(\mathbf{F}\) through the normalised squared distance \(R_{ij}\) between points. These normalised squared distances can be computed from the Gram matrices, \[R_{ij}=\frac{1}{N_{\ell}}{\sum_{\lambda=1}^{N_{\ell}}}(F_{i\lambda}-F_{j \lambda})^{2}=\frac{1}{N_{\ell}}{\sum_{\lambda=1}^{N_{\ell}}}F_{i \lambda}^{2}-2F_{i\lambda}F_{j\lambda}+F_{j\lambda}^{2}=G_{ii}^{\ell}-2G_{ij}^ {\ell}+G_{jj}^{\ell}, \tag{2}\] and hence we can instead compute the kernel matrix as \[\mathbf{K}(\mathbf{G}^{\ell-1})=\mathbf{K}_{\text{features}}(\mathbf{F}^{\ell -1}). \tag{3}\] Here, \(\mathbf{K}(\cdot)\) and \(\mathbf{K}_{\text{features}}(\cdot)\) are functions that compute the same kernel, but \(\mathbf{K}(\cdot)\) takes a Gram matrix as input, whereas \(\mathbf{K}_{\text{features}}(\cdot)\) takes features as input. Notice that \(\mathbf{K}(\mathbf{G}^{\ell-1})\in\mathbb{R}^{P\times P}\) has the same shape as \(\mathbf{G}^{\ell-1}\), and so it is possible to recursively apply the kernel function in this parametrisation, taking \(\mathbf{G}^{0}=\frac{1}{N_{0}}\mathbf{XX}^{T}\) where \(\mathbf{X}\in\mathbb{R}^{P\times N_{0}}\) is the input data: \[\mathbf{G}^{\ell}=\mathbf{K}(\mathbf{G}^{\ell-1})=\underbrace{\mathbf{K}( \mathbf{K}(\cdots\mathbf{K}(\mathbf{G}^{0})))}_{\ell\text{ times}}). \tag{4}\] It turns out Eq. (4) exactly describes the traditional infinite-width limit of a DGP. Notice that \(\mathbf{G}^{\ell}\) is a fixed function of \(\mathbf{G}^{0}\) and thus cannot adapt to data (perhaps with the exception of a small number of tunable kernel hyperparameters). This is a major disadvantage compared to finite NNs and DGPs, which flexibly learn representations at each layer from the data. The DKM solves this problem by taking a slightly different infinite limit, the "Bayesian representation learning limit" (Yang et al., 2023). Under this alternative limit, the Gram matrices have some flexibility, and are no longer fixed and equal to those given by Eq. (4). Instead, the \(L\) Gram matrices, \(\mathbf{G}^{1},\ldots,\mathbf{G}^{L}\), become parameters which are optimised by maximising the DKM objective, \[\mathcal{L}(\mathbf{G}^{1},\ldots,\mathbf{G}^{L})=\log\mathrm{P}\left(\mathbf{ Y}|\mathbf{G}^{L}\right)-\sum_{\ell=1}^{L}\nu_{\ell}\operatorname{D_{KL}} \left(\mathcal{N}\left(\mathbf{0},\mathbf{G}^{\ell}\right)\middle\|\mathcal{N }\left(\mathbf{0},\mathbf{K}(\mathbf{G}^{\ell-1})\right)\right). 
\tag{5}\] The first term encourages good predictions by maximising the log-likelihood of the training data \(\mathbf{Y}\), whilst the other KL-divergence terms form a regulariser that encourages \(\mathbf{G}^{\ell}\) to be close to \(\mathbf{K}(\mathbf{G}^{\ell-1})\). As such, the model reduces to the standard infinite-width limit (Eq. 4) when no data is observed, since the KL-divergence is minimised by setting \(\mathbf{G}^{\ell}=\mathbf{K}(\mathbf{G}^{\ell-1})\). Formally, the DKM objective can be understood as the evidence lower bound (ELBO) for an infinite-width DGP in the Bayesian representation learning limit. After optimising the training Gram matrices, prediction of unseen test points (which requires us to predict the test-test and test-train Gram matrix blocks for each layer progressively before predicting the output value at the final layer) can proceed using an algorithm inspired by prediction in DGPs (see Yang et al. 2023 for details). **Sparse DKMs.** As with all naive kernel methods, directly computing the DKM objective for all data is \(\mathcal{O}(P_{\mathrm{t}}^{3})\), where \(P_{\mathrm{t}}\) is the number of datapoints, which is typically infeasible. As a DKM is ultimately an infinite-width DGP parameterised by Gram matrices rather than features, Yang et al. (2023) developed an inducing point scheme inspired by the DGP literature. In the GP context, inducing point schemes reduce computational complexity by using variational inference with an approximate posterior that replaces the training set (of size \(P_{\mathrm{t}}\)) with a set of \(P_{\mathrm{i}}\) pseudo-inputs, called inducing points, where \(P_{\mathrm{i}}\) is usually much smaller than \(P_{\mathrm{t}}\). The inducing points can be treated as parameters of the variational approximate posterior and learned. In DGPs, each layer is a GP, so we learn one set of inducing points for each layer. When using inducing points, the features at each layer are partitioned into inducing points \(\mathbf{F}_{\mathrm{i}}^{\ell}\) and training points \(\mathbf{F}_{\mathrm{t}}^{\ell}\). Likewise, the kernel matrix is partitioned into blocks corresponding to inducing and training points. In a DKM, we partition \(\mathbf{G}^{\ell}\) similarly, \[\mathbf{G}^{\ell}=\begin{pmatrix}\mathbf{G}_{\mathrm{ii}}^{\ell}&\mathbf{G}_{\mathrm{it}}^{\ell}\\ \mathbf{G}_{\mathrm{ti}}^{\ell}&\mathbf{G}_{\mathrm{tt}}^{\ell}\end{pmatrix}\qquad\mathbf{K}(\mathbf{G}^{\ell})=\begin{pmatrix}\mathbf{K}_{\mathrm{ii}}(\mathbf{G}_{\mathrm{ii}}^{\ell})&\mathbf{K}_{\mathrm{it}}(\mathbf{G}^{\ell})\\ \mathbf{K}_{\mathrm{ti}}(\mathbf{G}^{\ell})&\mathbf{K}_{\mathrm{tt}}(\mathbf{G}_{\mathrm{tt}}^{\ell})\end{pmatrix}, \tag{6}\] with e.g. \(\mathbf{G}_{\mathrm{ii}}^{\ell}\in\mathbb{R}^{P_{\mathrm{i}}\times P_{\mathrm{i}}}\), \(\mathbf{G}_{\mathrm{tt}}^{\ell}\in\mathbb{R}^{P_{\mathrm{t}}\times P_{\mathrm{t}}}\), and \(\mathbf{G}_{\mathrm{it}}^{\ell}\in\mathbb{R}^{P_{\mathrm{i}}\times P_{\mathrm{t}}}\). At each layer \(\ell\), \(\mathbf{G}_{\mathrm{ii}}^{\ell}\) is learnt, whilst \(\mathbf{G}_{\mathrm{it}}^{\ell}\) and \(\mathbf{G}_{\mathrm{tt}}^{\ell}\) are predicted using \(\mathbf{G}_{\mathrm{ii}}^{\ell}\) and \(\mathbf{K}(\mathbf{G}^{\ell-1})\) (see Yang et al., 2023).
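To make the regularisation term concrete, here is a minimal PyTorch sketch of a single KL term from Eq. (5) (the same term, restricted to the inducing block, appears in the sparse objective below). The Gram matrices are treated as plain tensors, and the jitter constant is an assumption added purely for numerical stability.

```python
import torch

def dkm_kl_term(G, K, jitter=1e-6):
    """KL( N(0, G) || N(0, K) ) for P x P Gram matrices G and K;
    one summand of the regulariser in Eq. (5)."""
    P = G.shape[0]
    eye = torch.eye(P, dtype=G.dtype, device=G.device)
    Gj, Kj = G + jitter * eye, K + jitter * eye
    L_G, L_K = torch.linalg.cholesky(Gj), torch.linalg.cholesky(Kj)
    trace_term = torch.cholesky_solve(Gj, L_K).diagonal().sum()  # tr(K^{-1} G)
    logdet_G = 2.0 * L_G.diagonal().log().sum()
    logdet_K = 2.0 * L_K.diagonal().log().sum()
    return 0.5 * (trace_term - P + logdet_K - logdet_G)

# Sanity check: the KL vanishes when G equals K, i.e. at the fixed-kernel solution
K = torch.eye(4) + 0.1 * torch.ones(4, 4)
print(dkm_kl_term(K.clone(), K))  # ~0
```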
The objective in the sparse DKM is \[\mathcal{L}_{\mathrm{ind}}(\mathbf{G}_{\mathrm{ii}}^{1},\ldots,\mathbf{G}_{\mathrm{ii}}^{L})=\text{GP-ELBO}\left(\mathbf{Y}_{\mathrm{t}};\mathbf{G}_{\mathrm{ii}}^{L},\mathbf{G}_{\mathrm{it}}^{L},\mathbf{G}_{\mathrm{tt}}^{L}\right)-\sum_{\ell=1}^{L}\nu_{\ell}\operatorname{D_{KL}}\left(\mathcal{N}\left(\mathbf{0},\mathbf{G}_{\mathrm{ii}}^{\ell}\right)\middle\|\mathcal{N}\left(\mathbf{0},\mathbf{K}(\mathbf{G}_{\mathrm{ii}}^{\ell-1})\right)\right), \tag{7}\] where \(\text{GP-ELBO}\left(\mathbf{Y}_{\mathrm{t}};\mathbf{G}_{\mathrm{ii}}^{L},\mathbf{G}_{\mathrm{it}}^{L},\mathbf{G}_{\mathrm{tt}}^{L}\right)\) is the ELBO for a sparse Gaussian process with data \(\mathbf{Y}_{\mathrm{t}}\) and kernel \(\mathbf{K}(\mathbf{G}^{L})\), which measures the performance of the model on training data (see Appendix D for further details). The likelihood function for this GP can take many forms; we consider two different likelihoods: categorical and Gaussian (Gaussian likelihoods are analogous to the squared-error regression used in the NNGP literature, such as Novak et al. (2018); Matthews et al. (2018); see Appendix D.2). This scheme is efficient, with time complexity \(\mathcal{O}(L(P_{\mathrm{i}}^{3}+P_{\mathrm{i}}^{2}P_{\mathrm{t}}))\), both because the KL terms in the sparse DKM objective depend only on \(\mathbf{G}_{\mathrm{ii}}\), and because obtaining predictions and computing the performance term in the DKM objective requires only the diagonal of \(\mathbf{G}_{\mathrm{tt}}^{\ell}\) (which mirrors the requirements for the GP ELBO; see Yang et al., 2023 for details).

## 4 Methods

The overall approach to defining a convolutional DKM is to build convolutional structure into our kernel, \(\mathbf{K}(\cdot)\), by taking inspiration from the infinite convolutional neural network literature (Novak et al., 2018; Garriga-Alonso et al., 2018). Specifically, we will define the kernel to be the covariance between features induced by a single convolutional BNN layer. We start by defining the convolutional BNN layer. Noting that features will now have a spatial component, let \(\mathbf{H}^{\ell-1}\in\mathbb{R}^{PS_{\ell-1}\times N_{\ell-1}}\) be the inputs to our convolutional layer, where \(P\) is the number of images, \(S_{\ell-1}\) is the number of spatial locations in the feature map (for a 2D image we would have \(S_{\ell-1}=W_{\ell-1}H_{\ell-1}\), where \(W_{\ell-1}\) is the width and \(H_{\ell-1}\) is the height of the feature map), and \(N_{\ell-1}\) is the number of channels. Let \(\mathbf{W}^{\ell}\in\mathbb{R}^{D\times N_{\ell-1}\times N_{\ell}}\) be the weights for our convolutional layer, where \(N_{\ell-1}\) is the number of channels at the previous layer, \(N_{\ell}\) is the number of channels at the next layer, and \(D\) is the number of spatial locations in a convolutional patch (for instance, for a very common \(3\times 3\) convolutional patch, we would have \(D=9\)). Finally, let \(\mathbf{F}^{\ell}\in\mathbb{R}^{PS_{\ell}\times N_{\ell}}\) denote the output of our convolution layer, where \(S_{\ell}\) is the number of spatial locations and \(N_{\ell}\) is the number of channels at the output. In a typical neural network, we would apply a nonlinearity \(\phi\), e.g. relu, to the output of the previous layer to obtain the input to the next layer, i.e. \(\mathbf{H}^{\ell-1}=\phi(\mathbf{F}^{\ell-1})\). The BNN convolution operation (i.e.
Conv2d in PyTorch) can be written as \[F^{\ell}_{ir,\lambda}=\sum_{d\in\mathcal{D}}\sum_{\mu=1}^{N_{\ell-1}}H^{\ell-1 }_{i(r+d),\mu}W^{\ell}_{d\mu\lambda}. \tag{8}\] Here, \(i\) indexes the image, \(r\) the spatial location in the image, \(d\) the location in the patch, and \(\mu\) and \(\lambda\) the input and output channels, respectively. This is natural for 1D convolutions, e.g. \(r\in\{1,\dots,S_{\ell}\}\) and \(d\in\mathcal{D}=\{-1,0,1\}\) (with \(D=3\) in this example), but generalises to higher dimensions if we take the spatial location within the image, \(r\), and the displacement within the patch, \(d\), to be 2D integer vectors. We will define the kernel as the covariance between the outputs of the convolutional layer (Eq. 8) under an IID Gaussian prior on the weights: \[\mathrm{P}\left(W^{\ell}_{d\mu\lambda}\right)=\mathcal{N}\left(W^{\ell}_{d\mu \lambda};0,\tfrac{1}{DN_{\ell-1}}\right). \tag{9}\] More specifically, we need the covariance \(\mathbb{E}\left[F^{\ell}_{ir,\lambda}F^{\ell}_{js,\lambda}|\mathbf{H}^{\ell-1 }\right]\) for channel \(\lambda\) between spatial location \(r\) in image \(i\), and spatial location \(s\) in image \(j\). Note that the prior on weights (Eq. 9) has zero-mean, so the induced distribution over features (Eq. 8) has zero mean, and hence the second moment of \(F^{\ell}_{ir,\lambda}\) equals the covariance. To compute this covariance it will be useful to define a Gram-like matrix \(\mathbf{\Omega}^{\ell-1}\in\mathbb{R}^{PS_{\ell-1}\times PS_{\ell-1}}\) from the input \(\mathbf{H}^{\ell}\), \[\mathbf{\Omega}^{\ell-1}=\tfrac{1}{N_{\ell-1}}\mathbf{H}^{\ell-1}\left( \mathbf{H}^{\ell-1}\right)^{T}\in\mathbb{R}^{PS_{\ell-1}\times PS_{\ell-1}}, \qquad\Omega^{\ell-1}_{ir,js}=\tfrac{1}{N_{\ell-1}}\sum_{\mu=1}^{N_{\ell-1}}H^ {\ell-1}_{ir,\mu}H^{\ell-1}_{js,\mu} \tag{10}\] For the matrix multiplication to make sense, we have interpreted \(\mathbf{H}_{\ell-1}\in\mathbb{R}^{PS_{\ell-1}\times N_{\ell-1}}\) as a matrix with \(PS_{\ell-1}\) rows and \(N_{\ell-1}\) columns. With this in place, the covariance \(\Gamma_{ir,js}\) between the outputs of the convolutional layer (Eq. 8) for spatial location \(r\) in image \(i\) and spatial location \(s\) in image \(j\), for channel \(\lambda\), can be computed (Appendix B.3) as, \[\Gamma_{ir,js}(\mathbf{\Omega}^{\ell-1})=\mathbb{E}\left[F^{\ell}_{ir,\lambda} F^{\ell}_{js,\lambda}|\mathbf{H}^{\ell-1}\right]=\tfrac{1}{D}\sum_{d\in \mathcal{D}}\Omega^{\ell-1}_{i(r+d),j(s+d)}. \tag{11}\] This is not yet a valid kernel function for our convolutional DKM since it does not take the previous layer Gram matrix \(\mathbf{G}^{\ell-1}\) (Eq. 1) as input. To complete the choice of kernel, we remember that \(\mathbf{H}^{\ell-1}\) arises from applying e.g. a ReLU nonlinearity to \(\mathbf{F}^{\ell-1}\). Taking inspiration from infinite NNs, that would suggest we can compute \(\mathbf{\Omega}^{\ell-1}\) from \(\mathbf{G}^{\ell-1}\) using e.g. an arccos kernel (Cho & Saul, 2009), \[\mathbf{\Omega}^{\ell-1}=\mathbf{\Omega}(\mathbf{G}^{\ell-1}), \tag{12}\] where \(\mathbf{\Omega}(\cdot)\) is a function that applies e.g. an arccos kernel to a Gram matrix, while \(\mathbf{\Omega}^{\ell-1}\) is the specific value at this layer. Combining this with Eq. 
(11), we obtain the full convolutional DKM kernel, \[\mathbf{K}_{\text{Conv}}(\mathbf{G}^{\ell-1})=\mathbb{E}\left[\mathbf{f}^{\ell}_{\lambda}\left(\mathbf{f}^{\ell}_{\lambda}\right)^{T}|\mathbf{H}^{\ell-1}\right]=\mathbf{\Gamma}(\mathbf{\Omega}(\mathbf{G}^{\ell-1}))\in\mathbb{R}^{PS_{\ell}\times PS_{\ell}}, \tag{13}\] where in Eq. (13) we have written the equation for the entire matrix instead of a specific element as in Eq. (11). Here, \(\mathbf{f}^{\ell}_{\lambda}\in\mathbb{R}^{PS_{\ell}}\) denotes a single feature vector, i.e. the \(\lambda^{\text{th}}\) column of \(\mathbf{F}^{\ell}\). From this perspective, \(\mathbf{\Gamma}(\cdot)\) (Eq. 11) can be viewed as the kernelised version of the convolution operation. **Sparse convolutional DKMs.** Computing the full kernel for all training images is intractable in the full-rank case. For example, CIFAR-10 has \(50\,000\) training images of size \(32\times 32\), so we would need to invert a kernel matrix with \(50\,000\cdot 32\cdot 32=51\,200\,000\) rows and columns. Hence, we need to develop a more efficient scheme. Following Yang et al. (2023), we consider a sparse DKM, involving inducing points. However, inducing points usually live in the same space as datapoints. Here, datapoints are images, and if we take inducing points as images, we would end up with \(\mathbf{G}_{\mathrm{ii}}^{\ell}\in\mathbb{R}^{S_{\ell}P_{\mathrm{i}}\times S_{\ell}P_{\mathrm{i}}}\). This is still intractable, even for a small number of images. For CIFAR-10, \(S=32\cdot 32=1024\), so with as few as \(P_{\mathrm{i}}=10\) inducing points, \(\mathbf{G}_{\mathrm{ii}}^{\ell}\) is a \(10,240\times 10,240\) matrix, which is impractical to invert. Hence we resort to inter-domain inducing points. However, we cannot use the usual inducing patch scheme (van der Wilk et al., 2017), as it requires us to have access to test/train feature patches at intermediate layers, but we only have Gram matrices. Therefore we propose a new scheme where the inducing points do not have image-like spatial structure. In particular, the Gram matrix blocks have sizes \(\mathbf{G}_{\mathrm{ii}}^{\ell}\in\mathbb{R}^{P_{\mathrm{i}}^{\ell}\times P_{\mathrm{i}}^{\ell}}\), \(\mathbf{G}_{\mathrm{it}}^{\ell}\in\mathbb{R}^{P_{\mathrm{i}}^{\ell}\times S_{\ell}P_{\mathrm{t}}}\) and \(\mathbf{G}_{\mathrm{tt}}^{\ell}\in\mathbb{R}^{S_{\ell}P_{\mathrm{t}}\times S_{\ell}P_{\mathrm{t}}}\), so that the full Gram matrices are \(\mathbf{G}^{\ell}\in\mathbb{R}^{(P_{\mathrm{i}}^{\ell}+S_{\ell}P_{\mathrm{t}})\times(P_{\mathrm{i}}^{\ell}+S_{\ell}P_{\mathrm{t}})}\), with one row/column for each inducing point, and an additional row/column for each location in each of the train/test images. Note that \(P_{\mathrm{i}}^{\ell}\) will be able to vary by layer in our inducing-point scheme. All kernel/Gram matrix like objects (specifically, the outputs of \(\mathbf{\Gamma}(\cdot)\) and \(\mathbf{\Omega}(\cdot)\)) have this same structure. We now show how we define these inducing points and compute the relevant covariances, forming the primary contribution of this paper. Previously, we considered only spatially structured \(\mathbf{H}^{\ell-1}\) and \(\mathbf{F}^{\ell}\).
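Before moving to inducing points, the following sketch illustrates one layer of this kernel in the spatially structured case just described: an arc-cosine \(\mathbf{\Omega}(\cdot)\) applied to a Gram matrix, followed by the kernelised convolution \(\mathbf{\Gamma}(\cdot)\) of Eq. (11), composed as in Eq. (13). The 1D spatial layout, zero padding at the borders, and the particular arc-cosine normalisation used here are simplifying assumptions for illustration, not the paper's exact choices.

```python
import torch

def omega_arccos(G, eps=1e-12):
    """First-order arc-cosine kernel (Cho & Saul, 2009) applied to a Gram matrix G,
    playing the role of Omega(.) in Eq. (12). The scale is chosen so the diagonal is
    preserved; the paper's exact normalisation may differ."""
    diag = torch.clamp(torch.diagonal(G), min=eps)
    norm = torch.sqrt(diag[:, None] * diag[None, :])
    cos_t = torch.clamp(G / norm, -1.0, 1.0)
    theta = torch.arccos(cos_t)
    return norm * (torch.sin(theta) + (torch.pi - theta) * cos_t) / torch.pi

def gamma_conv_1d(Omega, P, S, offsets=(-1, 0, 1)):
    """Kernelised convolution of Eq. (11) for a 1D spatial layout with zero padding:
    Gamma[ir, js] = (1/D) * sum_d Omega[i(r+d), j(s+d)]."""
    Om = Omega.reshape(P, S, P, S)
    Gam = torch.zeros_like(Om)
    for d in offsets:
        lo, hi = max(0, -d), min(S, S - d)  # output locations where r+d stays in range
        Gam[:, lo:hi, :, lo:hi] += Om[:, lo + d:hi + d, :, lo + d:hi + d]
    return (Gam / len(offsets)).reshape(P * S, P * S)

# Eq. (13): one layer of the convolutional DKM kernel, K_Conv(G) = Gamma(Omega(G))
P, S, N0 = 2, 5, 7
X = torch.randn(P * S, N0)
G0 = X @ X.T / N0
G1 = gamma_conv_1d(omega_arccos(G0), P, S)
print(G1.shape)  # (P*S, P*S)
```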
Now, we consider \(\mathbf{H}^{\ell-1}\in\mathbb{R}^{(P_{\mathrm{i}}^{\ell-1}+S_{\ell-1}P_{\mathrm{t}})\times N_{\ell-1}}\), formed by concatenating non-spatial \(\mathbf{H}_{\mathrm{i}}^{\ell-1}\in\mathbb{R}^{P_{\mathrm{i}}^{\ell-1}\times N_{\ell-1}}\) (subscript "\(\mathrm{i}\)" for inducing points) with spatially structured \(\mathbf{H}_{\mathrm{t}}^{\ell-1}\in\mathbb{R}^{S_{\ell-1}P_{\mathrm{t}}\times N_{\ell-1}}\) (subscript "\(\mathrm{t}\)" for test/train points). Likewise, we have \(\mathbf{F}^{\ell}\in\mathbb{R}^{(P_{\mathrm{i}}^{\ell}+S_{\ell}P_{\mathrm{t}})\times N_{\ell}}\), formed by combining non-spatial \(\mathbf{F}_{\mathrm{i}}^{\ell}\in\mathbb{R}^{P_{\mathrm{i}}^{\ell}\times N_{\ell}}\) with spatially structured \(\mathbf{F}_{\mathrm{t}}^{\ell}\in\mathbb{R}^{S_{\ell}P_{\mathrm{t}}\times N_{\ell}}\), \[\mathbf{H}^{\ell-1}=\begin{pmatrix}\mathbf{H}_{\mathrm{i}}^{\ell-1}\\ \mathbf{H}_{\mathrm{t}}^{\ell-1}\end{pmatrix}\qquad\mathbf{F}^{\ell}=\begin{pmatrix}\mathbf{F}_{\mathrm{i}}^{\ell}\\ \mathbf{F}_{\mathrm{t}}^{\ell}\end{pmatrix}. \tag{14}\] As before (Eq. 10), we define the full \(\mathbf{\Omega}^{\ell-1}\) as the normalised product of the full \(\mathbf{H}^{\ell-1}\) with itself, \[\mathbf{\Omega}^{\ell-1}=\tfrac{1}{N_{\ell-1}}\mathbf{H}^{\ell-1}(\mathbf{H}^{\ell-1})^{T}, \tag{15}\] or, breaking this expression down into blocks (with the bottom right block \(\mathbf{\Omega}_{\mathrm{tt}}^{\ell-1}\) matching Eq. (10)), \[\begin{pmatrix}\mathbf{\Omega}_{\mathrm{ii}}^{\ell-1}&\mathbf{\Omega}_{\mathrm{it}}^{\ell-1}\\ \mathbf{\Omega}_{\mathrm{ti}}^{\ell-1}&\mathbf{\Omega}_{\mathrm{tt}}^{\ell-1}\end{pmatrix}=\tfrac{1}{N_{\ell-1}}\begin{pmatrix}\mathbf{H}_{\mathrm{i}}^{\ell-1}\left(\mathbf{H}_{\mathrm{i}}^{\ell-1}\right)^{T}&\mathbf{H}_{\mathrm{i}}^{\ell-1}\left(\mathbf{H}_{\mathrm{t}}^{\ell-1}\right)^{T}\\ \mathbf{H}_{\mathrm{t}}^{\ell-1}\left(\mathbf{H}_{\mathrm{i}}^{\ell-1}\right)^{T}&\mathbf{H}_{\mathrm{t}}^{\ell-1}\left(\mathbf{H}_{\mathrm{t}}^{\ell-1}\right)^{T}\end{pmatrix}. \tag{16}\] As in Eq. (13), the kernel matrix \(\mathbf{K}_{\text{Conv}}(\mathbf{G}^{\ell-1})\) is formed as the covariance of features, with \(\mathbf{f}_{\lambda}^{\ell}\in\mathbb{R}^{P_{\mathrm{i}}^{\ell}+S_{\ell}P_{\mathrm{t}}}\) a single feature vector / column of \(\mathbf{F}^{\ell}\).
Broken into the inducing and test/train blocks, \[\begin{pmatrix}\mathbf{K}_{\mathrm{ii}}^{\text{Conv}}&\mathbf{K}_{\mathrm{it}}^{\text{Conv}}\\ \mathbf{K}_{\mathrm{ti}}^{\text{Conv}}&\mathbf{K}_{\mathrm{tt}}^{\text{Conv}}\end{pmatrix}=\begin{pmatrix}\mathbf{\Gamma}_{\mathrm{ii}}(\mathbf{\Omega}_{\mathrm{ii}}^{\ell-1})&\mathbf{\Gamma}_{\mathrm{it}}(\mathbf{\Omega}_{\mathrm{it}}^{\ell-1})\\ \mathbf{\Gamma}_{\mathrm{ti}}(\mathbf{\Omega}_{\mathrm{ti}}^{\ell-1})&\mathbf{\Gamma}_{\mathrm{tt}}(\mathbf{\Omega}_{\mathrm{tt}}^{\ell-1})\end{pmatrix}=\mathbb{E}\left[\begin{pmatrix}\mathbf{f}_{\lambda}^{\mathrm{i},\ell}\left(\mathbf{f}_{\lambda}^{\mathrm{i},\ell}\right)^{T}&\mathbf{f}_{\lambda}^{\mathrm{i},\ell}\left(\mathbf{f}_{\lambda}^{\mathrm{t},\ell}\right)^{T}\\ \mathbf{f}_{\lambda}^{\mathrm{t},\ell}\left(\mathbf{f}_{\lambda}^{\mathrm{i},\ell}\right)^{T}&\mathbf{f}_{\lambda}^{\mathrm{t},\ell}\left(\mathbf{f}_{\lambda}^{\mathrm{t},\ell}\right)^{T}\end{pmatrix}\middle|\mathbf{H}^{\ell-1}\right].\] The test/train block \(\mathbf{\Gamma}_{\mathrm{tt}}(\mathbf{\Omega}_{\mathrm{tt}}^{\ell-1})\) is the kernelised convolution of Eq. (11). For the inducing blocks, we define the inducing features \(\mathbf{F}_{\mathrm{i}}^{\ell}\) through learned "mixup" parameters \(\mathbf{C}^{\ell}\in\mathbb{R}^{DP_{i}^{\ell}\times P_{i}^{\ell-1}}\), which, for each patch position \(d\), take linear combinations of the previous layer's inducing points (their role is made explicit in Eqs. 19 and 20 below); changing \(P_{i}^{\ell}\) in \(\mathbf{C}^{\ell}\) lets us vary the number of inducing points at each layer. The parameters \(\mathbf{C}^{\ell}\) are learned by optimising Eq. (7).
Using \(\mathbf{F}_{\mathrm{i}}^{\ell}\), we can then compute the covariances (Appendix B) to obtain \(\mathbf{\Gamma}_{\mathrm{ii}}(\mathbf{\Omega}_{\mathrm{ii}}^{\ell-1})\) and \(\mathbf{\Gamma}_{\mathrm{it}}(\mathbf{\Omega}_{\mathrm{it}}^{\ell-1})\): \[\Gamma_{i,j}^{\mathrm{ii}}(\mathbf{\Omega}_{\mathrm{ii}}^{\ell-1})=\mathbb{E}\left[F_{i,\lambda}^{\mathrm{i}}F_{j,\lambda}^{\mathrm{i}}\middle|\mathbf{H}^{\ell-1}\right]=\tfrac{1}{D}\sum_{d\in\mathcal{D}}\sum_{i^{\prime},j^{\prime}=1}^{P_{\mathrm{i}}^{\ell-1}}C_{di,i^{\prime}}^{\ell}C_{dj,j^{\prime}}^{\ell}\,\Omega_{i^{\prime}j^{\prime}}^{\mathrm{ii};\ell-1}, \tag{19}\] \[\Gamma_{i,js}^{\mathrm{it}}(\mathbf{\Omega}_{\mathrm{it}}^{\ell-1})=\mathbb{E}\left[F_{i,\lambda}^{\mathrm{i}}F_{js,\lambda}^{\mathrm{t}}\middle|\mathbf{H}^{\ell-1}\right]=\tfrac{1}{D}\sum_{d\in\mathcal{D}}\sum_{i^{\prime}=1}^{P_{\mathrm{i}}^{\ell-1}}C_{di,i^{\prime}}^{\ell}\,\Omega_{i^{\prime}j,(s+d)}^{\mathrm{it};\ell-1}. \tag{20}\] This fully defines our inducing point scheme. At the final layer, we collapse the spatial dimension (either using a linear layer or global average pooling, see Appendix E) and then perform classification using the final Gram matrix \(\mathbf{G}^{\ell}\) as the covariance of a GP. For efficiency, we only store the diagonal of the test/train "tt" block at all layers, which is sufficient for IID likelihoods (Appendix E.2).

## 5 Experiments

**Model variants** To fully evaluate our algorithm, it was necessary to introduce and test a variety of model variants and new techniques. In particular, we consider 7 hyperparameters/architectural choices: the regularisation strength constant \(\nu\) in the objective function (Eq. 7), the normalisation and rescaling schemes (Appendix C), the top layer, i.e. linear (Appendix E.1) vs. global average pooling (Appendix E.3), the likelihood (Appendix D.2), and the initial learning rate used in the scheduler. We vary each hyperparameter in turn, starting from a reasonable "default" model, which uses \(\nu=0.001\) (we use the same \(\nu_{\ell}\) for all layers), "Local / Local" normalisation and "Batch / Batch" scaling (we can apply different variants to the inducing and test-train blocks of the Gram matrices, see Appendix C), global average pooling, a categorical likelihood, and an initial learning rate of 0.001. We used a model architecture that mirrors the ResNet20 architecture of He et al. (2016), meaning we use the same number of convolution layers with the same strides, we use normalisation in the same places, and we use skip connections, which compute a convex combination of the Gram matrix outputted by a block with the inputted Gram matrix, thereby allowing information to "skip" over blocks. To mirror the ReLU, we used a first-order arccos kernel (Cho & Saul, 2009) for \(\mathbf{\Omega}(\cdot)\) everywhere. Where the original ResNet20 architecture uses {16,32,64} channels in its blocks, we initially used {128,256,512} inducing points (i.e. we set \(P_{i}^{\ell}=128\) for layers \(\ell\) contained in the first block, \(P_{i}^{\ell}=256\) in the second block, and \(P_{i}^{\ell}=512\) in the third block). Note that this is only possible since our inducing point scheme allows \(P_{i}^{\ell}\) to vary across layers, which is not true of the original sparse DKM. We vary these numbers in the final benchmarks, but needed to balance model capacity and computational resources for the model selection experiments. The model was implemented2 in PyTorch (Paszke et al., 2019).
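As a small illustration of the Gram-matrix skip connections just described, the sketch below forms a convex combination of a block's output Gram matrix with its input. How the mixing weight is parameterised (here, a sigmoid of a learnable scalar) is an assumption, and identity_block merely stands in for the Gram-to-Gram map computed by one convolutional DKM block.

```python
import torch

def dkm_skip(G_in, block_fn, alpha_param):
    """Gram-matrix skip connection: a convex combination of a block's output Gram
    matrix with its input Gram matrix. The sigmoid parameterisation of the mixing
    weight is an assumption made for this sketch."""
    alpha = torch.sigmoid(alpha_param)
    return alpha * block_fn(G_in) + (1.0 - alpha) * G_in

# Usage sketch: block_fn stands for the Gram-to-Gram map of one block,
# e.g. Gamma(Omega(.)) from Eqs. (11)-(13).
G = torch.eye(6)
alpha_param = torch.zeros(())       # learnable in practice (requires_grad=True)
identity_block = lambda G: G        # placeholder block for illustration
print(dkm_skip(G, identity_block, alpha_param))
```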
We optimised all parameters using Adam, with \(\beta_{1}=0.9\), \(\beta_{2}=0.999\), and used plateau scheduling for the learning rate, training for 100 epochs. We used data augmentation (random cropping and horizontal flips), and a batch size of 256. We also used ZCA (zero-phase component analysis) whitening, which is commonly used in kernel methods (e.g. Shankar et al., 2020; Lee et al., 2020) (for the ZCA regularisation parameter, \(\epsilon\), we used 0.1). Footnote 2: Anonymised code base: [https://anonymous.4open.science/r/convdkm_neurips2023/](https://anonymous.4open.science/r/convdkm_neurips2023/) We initialise the inducing inputs by randomly selecting patches from the training set, and then initialise the inducing Gram blocks \(\mathbf{G}_{ii}^{\ell}\) at the NNGP case (i.e. where regularisation strength is infinity) by propagating the inducing inputs through the model and setting \(\mathbf{G}_{ii}^{\ell}\) at each layer equal to the incoming kernel matrix block \(\mathbf{K}_{ii}^{\ell}\). The inducing output parameters mu and A are initialised randomly from a Gaussian distribution (for A we initialise L as Gaussian and then compute \(A=LL^{T}\) to ensure A is positive-definite). We initialise the mixup parameters \(\mathbf{C}^{\ell}\) randomly. \begin{table} \begin{tabular}{c c c c c} \hline \hline Category & Setting & MNIST & CIFAR-10 & CIFAR-100 \\ \hline \multirow{8}{*}{Regularisation Strength \(\nu_{\ell}\)} & \(\infty\) (inducing NNGP) & 71.60 \(\pm\) 0.38 & 45.24 \(\pm\) 0.55 & 20.73 \(\pm\) 0.34 \\ & \(10^{5}\) & 99.12 \(\pm\) 0.02 & 87.27 \(\pm\) 0.22 & 63.47 \(\pm\) 0.35 \\ & \(10^{4}\) & 99.10 \(\pm\) 0.02 & 88.16 \(\pm\) 0.36 & 63.89 \(\pm\) 0.22 \\ & \(10^{3}\) & 99.12 \(\pm\) 0.02 & 87.86 \(\pm\) 0.25 & 63.98 \(\pm\) 0.26 \\ & \(10^{2}\) & 99.15 \(\pm\) 0.01 & 88.23 \(\pm\) 0.21 & 63.69 \(\pm\) 0.50 \\ & \(10^{1}\) & **99.19 \(\pm\) 0.02** & 88.09 \(\pm\) 0.26 & 64.15 \(\pm\) 0.42 \\ & \(10^{0}\) & **99.19 \(\pm\) 0.02** & **88.65 \(\pm\) 0.21** & 64.39 \(\pm\) 0.38 \\ & \(10^{-1}\) & **99.21 \(\pm\) 0.02** & **88.80 \(\pm\) 0.27** & **64.55 \(\pm\) 0.32** \\ & \(10^{-2}\) & **99.19 \(\pm\) 0.02** & **89.13 \(\pm\) 0.26** & 64.61 \(\pm\) 0.18 \\ & \(10^{-3}\) & **99.20 \(\pm\) 0.02** & **88.62 \(\pm\) 0.17** & 64.60 \(\pm\) 0.17 \\ & \(10^{-4}\) & **99.21 \(\pm\) 0.02** & 88.27 \(\pm\) 0.33 & **64.66 \(\pm\) 0.35** \\ & \(10^{-5}\) & **99.20 \(\pm\) 0.02** & 88.54 \(\pm\) 0.15 & **65.29 \(\pm\) 0.29** \\ & \(10^{-6}\) & **99.18 \(\pm\) 0.02** & 88.32 \(\pm\) 0.33 & **64.82 \(\pm\) 0.27** \\ & 0 & **99.20 \(\pm\) 0.02** & **88.72 \(\pm\) 0.25** & **64.58 \(\pm\) 0.32** \\ \hline \multirow{8}{*}{Normalisation (Inducing / Local / Image & **99.21 \(\pm\) 0.02** & **88.60 \(\pm\) 0.12** & **66.44 \(\pm\) 0.39** \\ & Batch / Location & 98.88 \(\pm\) 0.02 & 88.04 \(\pm\) 0.41 & **66.13 \(\pm\) 0.31** \\ & Local / Image & **99.19 \(\pm\) 0.02** & **88.98 \(\pm\) 0.27** & 65.34 \(\pm\) 0.38 \\ & Local / Local & **99.20 \(\pm\) 0.02** & **88.62 \(\pm\) 0.17** & 64.60 \(\pm\) 0.17 \\ & None / None & 10.35 \(\pm\) 0.23 & 10.00 \(\pm\) 0.00 & 1.00 \(\pm\) 0.00 \\ \hline \multirow{8}{*}{Rescaling (Inducing / Train-Test)} & Batch & **99.20 \(\pm\) 0.02** & **88.62 \(\pm\) 0.17** & **64.60 \(\pm\) 0.17** \\ & Batch / Location & **99.20 \(\pm\) 0.03** & **88.31 \(\pm\) 0.41** & **64.40 \(\pm\) 0.34** \\ & Local / Batch & **99.19 \(\pm\) 0.02** & **88.53 \(\pm\) 0.24** & **64.45 \(\pm\) 0.37** \\ & Local / Location & 99.17 \(\pm\) 0.02 & **88.54 \(\pm\) 0.20** & 63.85 \(\pm\) 0.55 \\ 
& Local / None & **99.23 \(\pm\) 0.02** & 88.08 \(\pm\) 0.17 & **64.87 \(\pm\) 0.40** \\ & None / None & **99.19 \(\pm\) 0.02** & 87.97 \(\pm\) 0.23 & **65.19 \(\pm\) 0.36** \\ \hline \multirow{2}{*}{Top Layer} & Global Average Pooling & **99.20 \(\pm\) 0.02** & **88.62 \(\pm\) 0.17** & **64.60 \(\pm\) 0.17** \\ & Linear & **99.19 \(\pm\) 0.02** & **88.41 \(\pm\) 0.42** & 61.90 \(\pm\) 0.92 \\ \hline \multirow{2}{*}{Likelihood} & Gaussian & 99.05 \(\pm\) 0.03 & 87.35 \(\pm\) 0.39 & 6.36 \(\pm\) 1.66 \\ & Categorical & **99.20 \(\pm\) 0.02** & **88.62 \(\pm\) 0.17** & **64.60 \(\pm\) 0.17** \\ \hline \multirow{3}{*}{Initial Learning Rate} & \(10^{-1}\) & **88.10 \(\pm\) 10.97** & 41.70 \(\pm\) 12.15 & 29.24 \(\pm\) 8.55 \\ & \(10^{-2}\) & **99.26 \(\pm\) 0.01** & **89.04 \(\pm\) 0.23** & **64.55 \(\pm\) 0.34** \\ \cline{1-1} & \(10^{-3}\) & 99.20 \(\pm\) 0.02 & **88.62 \(\pm\) 0.17** & **64.60 \(\pm\) 0.17** \\ \cline{1-1} & \(10^{-4}\) & 98.87 \(\pm\) 0.05 & 85.08 \(\pm\) 0.35 & 60.62 \(\pm\) 0.79 \\ \hline \multirow{3}{*}{Jitter Size \(\epsilon\)} & \(10^{-4}\) & **99.20 \(\pm\) 0.02** & **88.62 \(\pm\) 0.17** & 64.60 \(\pm\) 0.17 \\ \cline{1-1} & \(10^{-5}\) & **99.21 \(\pm\) 0.02** & **88.20 \(\pm\) 0.41** & **65.25 \(\pm\) 0.30** \\ \cline{1-1} & \(10^{-6}\) & **99.22 \(\pm\) 0.02** & **88.58 \(\pm\) 0.18** & **64.88 \(\pm\) 0.21** \\ \cline{1-1} & \(10^{-7}\) & 99.18 \(\pm\) 0.02 & **88.83 \(\pm\) 0.18** & **64.85 \(\pm\) 0.22** \\ \cline{1-1} \cline{1-1} & \(10^{-8}\) & Fail & Fail & Fail \\ \hline \hline \end{tabular} \end{table} Table 1: Test accuracies (%) for model selection experiments. Base model used \(\nu_{\ell}=10^{-3}\), Local / Local normalisation, Batch / Batch rescaling, Global Average Pooling, and a categorical likelihood. Bold shows values statistically similar to the maximum in each category, according to a one-tailed Welch test. We tested on the MNIST3(LeCun et al., 1998), CIFAR-10, and CIFAR-1004(Krizhevsky and Hinton, 2009) image classification datasets. We ran all experiments for 8 different random seeds, and report mean test accuracy with standard errors in Table 1; see Appendix F for other metrics. For each hyperparameter and dataset, we identify the best setting, and then bold all those settings whose performance was statistically similar to the best, using a one-tailed Welch's t-test (a more conservative version of the Student's t-test that does not assume equal population variances) at a 5% significance level. For the normalisation and rescaling schemes, we test certain combinations for the inducing and test-train blocks that we feel are natural; see Appendix F.2 for more details. Footnote 3: [https://yann.lecun.com/exdb/mnist/](https://yann.lecun.com/exdb/mnist/) Licence: CC BY-SA 3.0 Setting \(\nu=\infty\) causes the KL-term in the objective to dominate, and forces \(\mathbf{G}^{\ell}=\mathbf{K}(\mathbf{G}^{\ell-1})\), as in a standard infinite width CNN(Novak et al., 2018; Garriga-Alonso et al., 2018). As \(\nu\) controls the strength of the regularisation, performance decreases for larger values, as this causes the model to underfit. On the other hand, we expect the model to overfit if we set \(\nu\) too small, and we indeed observe a small degradation in performance when we set \(\nu=0\) (though this is most evident in the test LL in Table 4). These experiments are subject to relatively high standard errors due to variance between different random seed runs, making it difficult to fully evaluate the effect of regularisation strength. 
However, it appears a value for \(\nu\) of around \(0.1\) should be good enough. Without normalisation (Appendix C), the model fails to learn. As with traditional batch-normalisation, allowing the model to learn a new scale parameter after normalising (rescaling) also provides advantages. We did not observe any convincing advantages to using "Local" normalisation / rescaling schemes over "Batch", or those designed specifically for image structure ("location" and "image"). In summary, it appears the simple "Batch" scheme is the most effective all-rounder. This scheme divides the Gram matrix by the mean of its diagonal elements for the normalisation, and multiplies each block by a scalar for the rescaling. \begin{table} \begin{tabular}{c c c c} \hline \hline Ind. Points & MNIST & CIFAR-10 & CIFAR-100 \\ \hline 16 / 32 / 64 & 98.50 \(\pm\) 0.05 & 72.18 \(\pm\) 0.86 & 41.17 \(\pm\) 0.58 \\ 32 / 64 / 128 & 99.04 \(\pm\) 0.02 & 81.83 \(\pm\) 0.71 & 51.40 \(\pm\) 0.42 \\ 64 / 128 / 256 & 99.19 \(\pm\) 0.03 & 86.90 \(\pm\) 0.18 & 61.14 \(\pm\) 0.42 \\ 128 / 256 / 512 & **99.24 \(\pm\) 0.03** & 89.24 \(\pm\) 0.12 & 65.70 \(\pm\) 0.34 \\ 256 / 512 / 1024 & **99.27 \(\pm\) 0.02** & 90.82 \(\pm\) 0.21 & 69.68 \(\pm\) 0.32 \\ 512 / 1024 / 2048 & **99.26 \(\pm\) 0.01** & **91.67 \(\pm\) 0.23** & **71.84 \(\pm\) 0.18** \\ \hline \hline \end{tabular} \end{table} Table 2: Test accuracies (%) using different numbers of inducing points in the ResNet blocks, selecting the best hyperparameters from Table 1. Bold shows values statistically similar to the maximum in each category, according to a one-tailed Welch test. \begin{table} \begin{tabular}{c l c} \hline \hline Paper & Method & CIFAR-10 & \\ \hline & This paper & DKM-DA-GAP & 91.67\% \\ & Novak et al. (2018) & NNGP-GAP & 77.43\% \\ & Arora et al. (2019) & NNGP-GAP & 83.75\% \\ Pure Kernel & Lee et al. (2020) & NNGP-GAP-DA & 84.8\% \\ Methods & Li et al. (2019) & NNGP-LAP-flip & 88.92\% \\ & Shankar et al. (2020) & Myrtle10 & 89.80\% \\ & Adlam et al. (2023) & Tuned Myrtle10 DA CG & 91.2\% \\ \hline & (Dutordoir et al., 2020) & Conv DGP & 74.41\% \\ Feature Based & (Ober et al., 2021) & DKL & 86\% \\ Methods & (Achituve et al., 2023) & GDKL & 95.67\% \\ & (Moreau et al., 2022) & ResNet18 & 95.55\% \\ \hline \hline \end{tabular} \end{table} Table 3: Comparison of test accuracy against other kernel methods and feature-based methods including a ResNet18 neural network. Global average pooling universally outperforms the linear method for collapsing the spatial structure at the top layer, reflecting common practice in deep learning and NNGPs (e.g. Novak et al., 2018). Using a categorical likelihood provides moderate advantages for CIFAR-10 and MNIST, which have 10 classes, and dramatic improvements for CIFAR-100, which has 100 classes. We found that an initial learning rate of \(10^{-2}\) or \(10^{-3}\) worked best with the convolutional DKM. We also experimented with varying the size of "jitter", a technique used to improve the numerical stability of kernel methods by adding a small positive constant to the diagonal of the kernel matrix. We found that relatively large values of jitter do not noticeably degrade predictive performance. **Benchmark performance.** Using what we learned from the model variants experiments, we ran a final set of experiments with \(\nu_{\ell}=10^{-1}\), "Batch / Batch" normalisation and rescaling, global average pooling, a categorical likelihood, and an initial learning rate of 0.01. 
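For reference, the "Batch" normalisation and rescaling just named correspond to the two simple operations described earlier: dividing a Gram matrix by the mean of its diagonal elements, and multiplying each block by a learned scalar. A minimal sketch of these two steps, with our own function names (not the paper's code), is:

```python
# Minimal sketch of the "Batch" scheme: normalise a Gram matrix by the mean of
# its diagonal, then rescale a block by a single learned scalar.
import numpy as np

def batch_normalise(G):
    """Divide the Gram matrix by the mean of its diagonal elements."""
    return G / np.mean(np.diag(G))

def batch_rescale(G_block, scale):
    """Multiply a (normalised) Gram block by a learned scalar `scale`."""
    return scale * G_block
```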
In these experiments, we varied the number of inducing points in each block from \(\{16,32,64\}\) (in the first, middle, and last blocks) up to \(\{512,1024,2048\}\). We trained for 75 epochs, which was adequate for convergence. Mean test accuracies are given in Table 2 with other metrics in Appendix F. As expected, increasing the number of inducing points increases observed test accuracy, with diminishing returns for MNIST, moderate improvements for CIFAR-10, and considerable improvements for CIFAR-100. Our largest model, with 2048 inducing points in the last block, achieved \(99\%\) test accuracy on MNIST, \(92\%\) on CIFAR-10, and \(71\%\) on CIFAR-100. Training this model on CIFAR-10/100 took around \(28\) hours on a single NVIDIA A100. **Comparisons against other models.** In Table 3, we compare our largest model against other pure kernel methods, including NNGP and NTK, as well as against feature-based methods like deep kernel learning and a ResNet18 neural network. Our model gives comparable accuracy to the best pure kernel methods (Adlam et al., 2023; Shankar et al., 2020; Li et al., 2019). However, these models are not directly comparable to ours, as we developed a much faster and more efficient inducing point scheme, whereas NNGP/NTK methods usually compute the full training kernel matrix. As we were not able to find runtimes for these experiments in the papers themselves, we looked at the benchmarks from Google's Neural Tangents library5, where they report that computing a single NTK entry in a 21-layer convolutional network with pooling applied to a \(32\times 32\) dataset took around 6 ms on an Nvidia V100. Computing all \(50\,000^{2}\) elements of the kernel for CIFAR-10 would therefore take around \(4000\) GPU hours, roughly two orders of magnitude more than our method (though these numbers are not directly comparable, due e.g. to differences in the choice of GPU). They also found that the more efficient Myrtle-5/7/10 kernels from Shankar et al. (2020) took 316/330/508 GPU hours for the full \(60\,000\)-image CIFAR-10 dataset, which is again around one order of magnitude more than our training time. Footnote 5: github.com/google/neural-tangents As one would expect, methods based on neural networks are still far ahead of purely kernel-based methods. This is not surprising, as these methods have been the focus of incredible amounts of research and development in the image domain, far more than kernel-based methods. ## 6 Conclusion and Limitations We have developed convolutional DKMs, and an efficient inter-domain inducing point approximation scheme. In the results, we considered varying the regularisation strength, the normalisation scheme, the top-layer, the likelihood, ZCA preprocessing, and the number of inducing points. We found that relatively small amounts of regularisation (around \(10^{-5}\)), local rescaling pixel averaging normalisation, GAP top-layer, a categorical likelihood, using ZCA preprocessing, and a larger number of inducing points gave the best performance. Our best model obtained 99% test accuracy on MNIST, 92% on CIFAR-10 and 71% on CIFAR-100. This is state-of-the-art for kernel methods / infinite NNs, and as this is the first paper introducing convolutional DKMs, there is considerable room for further development to close the gap to state-of-the-art neural networks.
One of the limitations of our work is that we did not conduct experiments on high resolution image datasets like ImageNet (Deng et al., 2009) as we have not yet developed the efficient tricks (such as lower/mixed precision, and more efficient memory utilization) to enable these larger datasets. It may be possible to adapt existing ideas from the kernel literature (Adlam et al. (2023) and Maddox et al. (2022), for example) to address some of these issues, in addition to leveraging multi-GPU setups, which we leave to future work.
2309.11083
ElasticNotebook: Enabling Live Migration for Computational Notebooks (Technical Report)
Computational notebooks (e.g., Jupyter, Google Colab) are widely used for interactive data science and machine learning. In those frameworks, users can start a session, then execute cells (i.e., a set of statements) to create variables, train models, visualize results, etc. Unfortunately, existing notebook systems do not offer live migration: when a notebook launches on a new machine, it loses its state, preventing users from continuing their tasks from where they had left off. This is because, unlike DBMS, the sessions directly rely on underlying kernels (e.g., Python/R interpreters) without an additional data management layer. Existing techniques for preserving states, such as copying all variables or OS-level checkpointing, are unreliable (often fail), inefficient, and platform-dependent. Also, re-running code from scratch can be highly time-consuming. In this paper, we introduce a new notebook system, ElasticNotebook, that offers live migration via checkpointing/restoration using a novel mechanism that is reliable, efficient, and platform-independent. Specifically, by observing all cell executions via transparent, lightweight monitoring, ElasticNotebook can find a reliable and efficient way (i.e., replication plan) for reconstructing the original session state, considering variable-cell dependencies, observed runtime, variable sizes, etc. To this end, our new graph-based optimization problem finds how to reconstruct all variables (efficiently) from a subset of variables that can be transferred across machines. We show that ElasticNotebook reduces end-to-end migration and restoration times by 85%-98% and 94%-99%, respectively, on a variety (i.e., Kaggle, JWST, and Tutorial) of notebooks with negligible runtime and memory overheads of <2.5% and <10%.
Zhaoheng Li, Pranav Gor, Rahul Prabhu, Hui Yu, Yuzhou Mao, Yongjoo Park
2023-09-20T06:18:07Z
http://arxiv.org/abs/2309.11083v4
# ElasticNotebook: Enabling Live Migration for Computational Notebooks (Technical Report) ###### Abstract. Computational notebooks (e.g., Jupyter, Google Colab) are widely used for interactive data science and machine learning. In those frameworks, users can start a _session_, then execute _cells_ (i.e., a set of statements) to create variables, train models, visualize results, etc. Unfortunately, existing notebook systems do not offer live migration: when a notebook launches on a new machine, it loses its _state_, preventing users from continuing their tasks from where they had left off. This is because, unlike DBMS, the sessions directly rely on underlying kernels (e.g., Python/R interpreters) without an additional data management layer. Existing techniques for preserving states, such as copying all variables or OS-level checkpointing, are unreliable (often fail), inefficient, and platform-dependent. Also, re-running code from scratch can be highly time-consuming. In this paper, we introduce a new notebook system, ElasticNotebook, that offers live migration via checkpointing/restoration using a novel mechanism that is reliable, efficient, and platform-independent. Specifically, by observing all cell executions via transparent, lightweight monitoring, ElasticNotebook can find a reliable and efficient way (i.e., _replication plan_) for reconstructing the original session state, considering variable-cell dependencies, observed runtime, variable sizes, etc. To this end, our new graph-based optimization problem finds how to reconstruct all variables (efficiently) from a subset of variables that can be transferred across machines. We show that ElasticNotebook reduces end-to-end migration and restoration times by 85%-98% and 94%-99%, respectively, on a variety (i.e., Kaggle, JWST, and Tutorial) of notebooks with negligible runtime and memory overheads of ~2.5% and ~10%. + Footnote †: journal: Computer Vision and Pattern Recognition + If we can provide this capability with little to no modifications to existing systems (e.g., Jupyter), we can offer benefits to a large number of data scientists and educators who use notebooks. To achieve this, we must overcome the following technical challenges. ChallengeCreating a reliable, efficient, and platform-independent replication mechanism is challenging. First, the mechanism must offer high coverage. That is, for almost all notebooks people create, we should be able to successfully replicate them across machines. Second, the mechanism should be significantly faster than straightforward approaches--errunning all the cells exactly as they were run in the past, or copying, if possible, all the variables with serialization/deserialization. Third, the mechanism should integrate with existing notebook systems with clean separation for sustainable development and easier adoption. Our ApproachOur core idea is that by observing the evolution of session states via lightweight monitoring, we can address the three important challenges--reliability, efficiency, and platform-independence--by combining program language techniques (i.e., on-the-fly code analyses) and novel algorithmic solutions (i.e., graph-based mathematical optimization). Specifically, to represent session state changes, we introduce the _application history_, a special form of bipartite graph expressing the dependencies among variables and cell executions. Using this graph, we take the following approach. 
First, we achieve _reliability_ and _platform independence_ by choosing a computational plan (or _replication plan_) that can safely reconstruct platform-dependent variables (e.g., Python generators, incompletely defined custom classes) based on the other platform-independent variables. That is, in the presence of variables that cannot be serialized for platform-independent replication, ElasticNotebook uses the application history to recompute them dynamically on a target machine. In this process, ElasticNotebook optimizes for the collective cost of recomputing all such variables while still maintaining their correctness (SS4). Second, for _efficiency_, ElasticNotebook optimizes its replication plan to determine (1) the variables that will be copied, and (2) the variables that will be recomputed based on the copied variables, to minimize the end-to-end migration (or restoration) time in consideration of serialization costs, recomputation costs, data transfer costs, etc. For example, even if a variable can be reliably transferred across machines, the variable may still be dynamically constructed if doing so results in a lower total cost. To make this decision in a principled way, we devise a new graph-based optimization problem, which reduces to a well-established min-cut problem (SS5). ImplementationWhile our contributions can apply to many dynamically analyzable languages (e.g., Python/R, LLVM-based ones), we implement our prototype (in C and Python) for the Python user interface, which is widely used for data science, machine learning, statistical analysis, etc. Specifically, ElasticNotebook provides a data management layer to Jupyter as a hidden _cell magic_(K machine to a target machine in a way that the original session state can be restored from \(\mathcal{D}\). In this process, we want to minimize the end-to-end time for creating \(\mathcal{D}\), transferring \(\mathcal{D}\) to a target machine, reconstructing the state from \(\mathcal{D}\) on the target machine. This is the first use case we empirically study (SS7.3). Fast Restart for On-demand Computing.Leveraging pay-as-you-go pricing model offered by many cloud vendors (Covington et al., 2018; Krizhevsky et al., 2017), suspending sessions (and VMs) when not in use is an effective way for reducing charges (e.g., up to 6\(\times\)(Krizhevsky et al., 2017)). With the ability to create data \(\mathcal{D}\) sufficient for reconstructing the current session state, we can persist \(\mathcal{D}\) prior to either manual or automated suspension (Krizhevsky et al., 2017; Krizhevsky et al., 2017; Krizhevsky et al., 2017), to quickly resume, when needed, the session in the same state. This achieves on-demand, granular computing with fast session restart times without impacting user experience due to frequent session suspensions (Krizhevsky et al., 2017; Krizhevsky et al., 2017). In this process, we want to restore the session as quickly as possible by minimizing the time it takes for downloading \(\mathcal{D}\) and reconstructing a state from it. This is the second use case we empirically study (SS7.4). ### How to Enable Data Management Layer? We discuss the pros and cons of several different approaches to enabling a data management layer. OS-level Checkpointing.To save the current session state, we can checkpoint the entire memory space associated with the underlying Python/R kernels. To make the process more efficient, existing tools like CRIU patch the Linux kernel to trace dirty pages. 
However, as described in SS1, this approach is platform-dependent, incurs higher space cost, and is limited to storing the state of primary memory (not GPU or other devices). We empirically compare our approach to CRIU to understand reliability and efficiency (SS7). Object wrappers. Watchpoint object wrappers (Zhu et al., 2017; Zhu et al., 2017) are commonly used for debugging purposes (Zhu et al., 2017) and program slicing (Zhu et al., 2017; Zhu et al., 2017): they maintain deep copies for objects in the session state, which are compared to check for changes after each frame execution; however, they are unsuitable for use during data science workflows due to an unacceptable ~20\(\times\) runtime overhead in our preliminary tests. Monitoring Cell Executions (Ours). In order to trace cell executions and their effects on variables, we can add a lightweight wrapper (i.e., our data management layer) that functions before and after each cell execution to monitor the cell code, runtime, and variable changes. This idea is depicted conceptually in Fig 2. Specifically, our implementation uses _cell magics_, a Jupyter-native mechanism that allows arbitrary modification to cell statements when the cell is executed. With this, we add pre-/post-processing steps to capture cell code and resulting session state modifications. ### Fast Replication with Application History This section describes our core idea for devising an efficient replication strategy by leveraging the ability to monitor cell executions. Application History. An _application history graph_ (AHG) is a bipartite graph for expressing session state changes with respect to cell runs. There are two types of nodes: variables and transformations. A transformation node connects input variables to output variables (see an example in Fig 3). The AHG aims to achieve two properties: * **Completeness:** No false negatives. All input/output variables for each transformation must be captured. * **Minimal:** Minimal false positives. The number of variables that are incorrectly identified as accessed/modified, while not actually being accessed/modified, must be minimized. These properties are required for correct state reconstruction (SS4). Core Optimization Idea. The AHG allows for efficient state replication with a combination of (1) recompute and (2) copy. _Motivating Example_. Suppose a data analyst is fitting a regression model (Fig 3). The notebook contains 4 cell runs: data load (Cell 1), train-test split (Cell 2), fitting (Cell 3), and evaluation (Cell 4). After fitting, the analyst decides to move the session to a new machine for GPU access. Simply rerunning the entire notebook incurs **33 minutes**. Alternatively, serializing/copying variables takes _20.6 minutes_. However, there is a more efficient approach. By copying only model and plot and recomputing others on a new machine (Fast-migrate), _we can complete end-to-end migration in 4.6 minutes_. Or, if we prioritize restoration time (to reduce user-perceived restart time for on-demand computing), our optimized plan (Fast-restore) takes 3.3 minutes. This example illustrates significant optimization opportunities in session replication. Our goal is to have the ability to find the best replication plan for arbitrarily complex AHGs. ## 3. System Overview This section presents ElasticNotebook at a high level by describing its components (SS3.1) and operations (SS3.2). Figure 3. Example app history (top) and different replication plan costs (bottom).
Combining recompute/copy allows faster migration (Fast-migrate). Alternatively, the optimal plan changes if the restoration is prioritized (Fast-restore). Figure 2. For every cell run, we can inject custom pre-/post-processing logic. “%%intercept” is hidden to users. ### ElasticNotebook Components ElasticNotebook introduces a unique data layer that acts as a gateway between the user and the kernel (See Fig 4): it monitors every cell execution, observing code and resulting session state changes. _Cell Execution Interceptor._ The Cell Execution Interceptor intercepts cell execution requests and adds pre-/post-processing scripts before rerouting it into the underlying kernel for regular execution. The added scripts perform (1) cell code analyses and the AHG updates, and (2) cell runtime recordings. _Application History Graph (AHG)._ The AHG is incrementally built by the Cell Execution Interceptor to record how variables have been accessed/modified by each cell execution (SS4). The AHG is used by the Optimizer to compute replication plans (SS5). _Cost Model._ The cost model stores profiled metrics (i.e., cell runtimes, variable sizes, network bandwidth), serving as the hyperparameters for the Optimizer (SS5.2). _Optimizer._ The Optimizer uses the AHG and the Cost Model to determine the most efficient replication plan consisting of (1) variables to store and (2) cells to re-run. We discuss ElasticNotebook's cost model and optimization in detail in SS5. _Session Replicator._ The Session Replicator replicates a notebook session according to the Optimizer's plan. Specifically, the Writer creates and writes a checkpoint file to storage (e.g., SSD, cloud storage), while the Notebook Replayer reads the file and restores the session, both following the replication plan. We discuss ElasticNotebook's session replication in detail in SS3.2. ### ElasticNotebook Workflow This section describes ElasticNotebook's operations. ElasticNotebook monitors every cell execution during a session lifecycle, then performs on-request replication of the session in two steps: _checkpointing_ (writing to the checkpoint file) and _restoration_. _Monitoring Cell Executions._ Upon each cell execution by the user, ElasticNotebook performs the following steps: 1. [leftmargin=*,noitemsep,topsep=0pt] 2. Accessed variables of the cell execution are identified via AST analysis (described in SS4.2). 3. The cell code is executed by the Jupyter kernel. 4. Variable changes (i.e., creation/deletion/modification) are identified within the global namespace (SS4.2). 5. The AHG is updated using (1) the cell code and (2) modified variables by the cell execution. 6. The Cost Model is updated to record cell runtime. _Initiating Replication._ When replication is requested, ElasticNotebook creates and writes a _checkpoint file_ to storage, which can be restored later to exactly and efficiently reconstruct the current session. ElasticNotebook first completes the Cost Model by profiling variable sizes and network bandwidth to storage; then, the Optimizer utilizes the AHG and Cost model to compute a replication plan, according to which the Writer creates the checkpoint file: it consists of (1) a subset of stored variables from the session state, (2) cells to rerun, (3) the AHG, and (4) the Cost Model. _Restoring a Session._ When requested, ElasticNotebook restores the notebook session from the checkpoint file according to the replication plan. 
The Notebook Replayer reconstructs variables in the order they appeared in the original session by combining (1) cell reruns and (2) data deserialization followed by variable declaration (into the kernel). Finally, ElasticNotebook loads the AHG and Cost Model for future replications. _Accuracy Guarantee:_ ElasticNotebook's state reconstructing is effectively the same as re-running all the cells from scratch exactly in the order they were run in the past. That is, ElasticNotebook shortens the end-to-end reconstruction time by loading saved variables (into the kernel namespace) if doing so achieves time savings. SS4.3 presents formal correctness analysis. SS6.1 discusses how we address external resources, side effects, and deserialization failures. ## 4. Application History Graph This section formally defines the Application History Graph (SS4.1), and describes how we achieve exact state replication (SS4.3). ### AHG Formal Definition The AHG is a directed acyclic graph expressing how a session state has changed with respect to cell executions. Fig 5 is an example. **Definition 1**.: A **variable** is a named entity (e.g., df) referencing an **object** (which can be uniquely identified by its object ID). \begin{table} \begin{tabular}{l l} \hline \hline **Symbols** & **Definition** \\ \hline \(\mathcal{X}\) & Set of Variables \\ \(\mathcal{V}\) & Set of Variable Snapshots (VSs) \\ \(\mathcal{V}_{a}\) & Set of Active Variable Snapshots \\ \(\mathcal{C}\) (\(=c_{t_{1}},c_{t_{2}},\ldots\)) & Set of Cell Executions (CEs) \\ \(\mathcal{E}_{w}\) & Set of write dependencies \\ \(\mathcal{E}_{r}\) & Set of read dependencies \\ \(\mathcal{G}\) := \(\{\mathcal{V}\cup\mathcal{C},\mathcal{E}_{w}\cup\mathcal{E}_{r}\}\) & Application History Graph (AHG) \\ \hline \(req:\mathcal{X}\rightarrow\mathbb{Z}^{C}\) & Reconstruction mapping function \\ \(w_{store}:\mathcal{X}\rightarrow\mathbb{R}^{+}\) & Variable storage cost \\ \(w_{return}:\mathcal{C}\rightarrow\mathbb{R}^{+}\) & Cell Rerun cost \\ \hline \(w_{M}:\mathcal{Z}^{X}\rightarrow\mathbb{R}^{+}\) & Migration cost function \\ \(w_{g}:\mathcal{Z}^{X}\rightarrow\mathbb{R}^{+}\) & Recomputation cost function \\ \(\mathcal{L}\subseteq\mathcal{X}\times\mathcal{X}\) & Pairs of linked variables \\ \hline \(\mathcal{H}=\{\mathcal{V}_{H},\mathcal{E}_{H}\}\) & Flow graph \\ \(c:\mathcal{E}_{H}\rightarrow\mathbb{R}^{+}\) & Flow graph edge capacity function \\ \hline \hline \end{tabular} \end{table} Table 2. Notations and their meaning Figure 4. ElasticNotebook architecture. Its data layer acts as a gateway between the user interface and the kernel: cell executions are intercepted to observe session state changes. A variable can be primitive (e.g., int, string) or complex (e.g., list, dataframe). Multiple variables may point to the same object. The set of all variables (i.e., \(\mathcal{X}\)) defined in the global namespace forms a session state. Cell executions may modify the values of variables (or referenced objects) without changes to their names, which we recognize in AHG using _variable snapshot_, as follows. Definition 2 ().: A **variable snapshot** (VS) is a name-timestamp pair, (\(x\), \(t\)), representing the variable \(x\) created/modified at \(t\). We denote the set of Vses as \(\mathcal{V}\). Definition 3 ().: A **cell execution** (CE) \(c_{t}\) represents a cell execution that finishes at timestamp \(t\). All cell executions are _linear_; that is, for each session, there is at most one cell running at a time, and their executions are totally ordered. 
We denote the list of CEs by \(\mathcal{C}\). Each CE also stores executed cell code, which can be used for re-runs (SS3.2). Definition 4 ().: A **write dependency** (\(c_{t}\rightarrow\) (\(x\), \(t\))) indicates CE \(c_{t}\) may have modified/created at time \(t\) the object(s) reachable from the variable \(x\). We denote the set of write dependencies as \(\mathcal{E}_{w}\). In Fig 5, \(c_{t_{3}}\) modifies x with "\(x\leftrightarrow 1\)"; hence, (\(c_{t_{3}}\rightarrow\) (\(x\), \(t_{3}\))). Definition 5 ().: A **read dependency** ((\(x\), \(s\)) \(\rightarrow\)\(c_{t}\)) indicates CE \(c_{t}\) may have accessed object(s) reachable from x last created/modified at time \(s\). We denote the set of read dependencies by \(\mathcal{E}_{r}\). In Fig 5, "gen=(i for i in 11)" in \(c_{t_{4}}\) accesses elements in the list 11 after its creation in \(c_{t_{3}}\); hence there is ((11, \(t_{3}\)) \(\rightarrow\) \(c_{t_{4}}\)). Note that write/read dependencies are allowed to contain false positives; nevertheless, our replication ensures correctness (SS4.3). Definition 6 ().: The \(\mathbf{AHG}=\{\mathcal{V}\cup\mathcal{C},\ \mathcal{E}_{w}\cup \mathcal{E}_{r}\}\) is a bipartite graph, where \(\mathcal{V}\) is VSes, \(\mathcal{C}\) is CEs; \(\mathcal{E}_{w}\) and \(\mathcal{E}_{r}\) are write/read dependencies, respectively. It models the lineage of the notebook session. In sum, the AHG formalizes variable accesses/modifications with respect to cell executions at the variable level (not object level), theoretically bounding the size of the AHG to scale linearly with the number of defined variables, not the number of underlying objects (which can be very large for lists, dataframes, and so on). We empirically verify the AHG's low memory overhead in SS7.5. ### Dynamic AHG Construction We describe how ElasticNotebook constructs the AHG accurately. Constructing the AHG. The AHG is incrementally built with accessed/created/modified variables by each cell execution: * A new CE \(c_{t}\) is created; \(t\) is an execution completion time. * Read dependencies are created from VSes (\(x_{1}\), \(t_{x_{1}}\)),..., (\(x_{k}\), \(t_{x_{k}}\)) to \(c_{t}\), where \(x_{1},...,x_{k}\) are variables _possibly_ accessed by \(c_{t}\). * VSes (\(y_{1}\), \(t\)),..., (\(y_{k}\), \(t\)) are created, where \(y_{1},...,y_{k}\) are variables _possibly_ modified or created by \(c_{t}\). Write dependencies are added from \(c_{t}\) to each of the newly created VSes. Fig 5 (right) shows an example AHG. Identifying accessed/modified variables is crucial for its construction, which we describe below. ID Graph. The ID Graph aims to detect changes at the reference level (in addition to values). For instance, conventional equality checks (e.g., based on serialization) will return True for "[a] == [b]" if a and b have the same value (e.g., a = [1] and b = [1]), whereas we ensure it returns True only if a and b refer to the same _object_, i.e., id(a)==id(b), where id is the object's unique ID. This is because for correct state replication, shared references (e.g., aliases) and inter-variable relationships must be captured precisely. Identifying Accessed Variables. ElasticNotebook identifies both _directly accessed_ variables (via AST (Shen et al., 2018) parsing) and _indirectly accessed_ variables (with ID Graphs), as follows. Direct Access. Cell code is analyzed with AST, stepping also into user-defined functions (potentially nested) to check for accesses to variables not explicitly passed in as parameters (e.g., global x).
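As a concrete illustration of the direct-access analysis just described, a minimal sketch is shown below. This is our simplification, not ElasticNotebook's implementation: it collects names read at the top level of a cell and names declared global, but does not step into user-defined functions as the full analysis does.

```python
# Simplified sketch of AST-based detection of directly accessed variables.
import ast

def directly_accessed_names(cell_code: str) -> set:
    tree = ast.parse(cell_code)
    accessed = set()
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and isinstance(node.ctx, ast.Load):
            accessed.add(node.id)        # variable read somewhere in the cell
        elif isinstance(node, ast.Global):
            accessed.update(node.names)  # explicit "global x" declarations
    return accessed

# Example: directly_accessed_names("y = x + 1\nlst.append(z)") == {"x", "lst", "z"}
```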
Indirect Access. The object(s) reachable from a variable X may be accessed indirectly via another variable Y if X and Y reference common object(s) (e.g., when aliases exist, Fig 5(a)), which cannot be identified via parsing only. To recognize indirect accesses, we check the existence of overlaps between the ID Graphs of X and Y. Our approach is conservative; that is, it may over-identify variables by including, for example, ones reachable from control flow branches that were not taken during cell executions. However, these false positives do not affect the accuracy of state replication (SS4.3). Identifying Modified Variables. Variable modifications are identified using a combination of (1) object hashes and (2) ID Graphs. Value Changes. ElasticNotebook identifies value modifications by comparing hashes (by xxHash (Han, 2017)) before and after each cell execution while using deep copy as a fallback. If the deep copy fails (e.g., unserializable or uncomparable variables), we consider them to be modified-on-access using results from AST and ID Graph (SS6.1). This may result in false positives; however, as previously mentioned, these false positives do not affect the accuracy. Structural Changes. The ID Graph enables detecting structural changes (Fig 5(b)). After each cell execution, the current variables' ID Graphs are compared to the ones created before to identify reference swaps. In Fig 5(b), while the value of 2dlist1 remains unchanged after executing Cell 2, the memory address of its nested list has been changed, no longer referencing list1. Figure 5. An example notebook and its corresponding Application History Graph. The AHG tells ElasticNotebook how to recompute variables; for example, rerunning \(c_{t_{1}}\) and \(c_{t_{3}}\) is necessary for recomputing x (red). ### State Reconstruction with AHG This section describes how we reconstruct variable(s). We focus on reconstructing the latest version of each variable, as defined by the _active variable snapshot_ (VS) in an AHG. Definition 7 ().: VS \((x,\,t_{i})\) is **active** if \(x\) is in the system (i.e., not deleted), and there is no VS \((x,\,t_{j})\) such that \(t_{i}<t_{j}\). An active VS, \((x,\,t_{i})\), represents the current version of \(x\). For example, even if we checkpoint after \(c_{t_{2}}\) (in Fig 5), "\((x,\,t_{3})\)" is active since \(x\) was last modified by \(c_{t_{3}}\). We denote the set of active VSes as \(\mathcal{V}_{a}\). Reconstruction Algorithm. Our goal is to identify the most efficient computation strategy for reconstructing one or more active variables. Note that we do not reconstruct non-active variables since they are not part of the current session state. In achieving this goal, the AHG allows us to avoid unnecessary cell executions (e.g., because their outcomes have been overwritten) and to learn proper execution orders. Moreover, this process can be extended to reconstruct a set of variables more efficiently than computing them one by one, while still ensuring correctness. Specifically, to recompute VS \((x,\,t)\), we traverse back to its ancestors in the AHG (e.g., using breadth-first search), collecting all CEs into a list \(req(x,t)\), until we find a _ground variable_ for every path, where the ground variable is a variable whose value is available in the system, i.e., either another active VS or a copied variable. By rerunning all the CEs in \(req(x,t)\) in the order of their completion times, we can obtain the target VS \((x,\,t)\).
To extend this algorithm to multiple VSes, say \((x1,\,t_{x1})\), \((x2,\,t_{x2})\), and \((x3,\,t_{x3})\), we obtain \(req\) for each VS and union them into a merged set (that is, identical CEs collapse into one). By rerunning all the CEs in the merged set, we obtain all target VSes. Fig 5 shows an example. To recompute \((x,\,t_{3})\), we rerun \(c_{t_{3}}\) which requires the previous version (\(x,\,t_{1}\)) as input, which in turn requires \(c_{t_{1}}\) to be rerun. Notably, it is not necessary to rerun \(c_{t_{2}}\) as its output \(z\) is available in the namespace. Finally, SS6.1 discusses how this approach can recover even if some ground variables are unexpectedly unobtainable. Why Only Use Active VSes?_ Theoretically, it is possible to use non-active variables as ground variables. That is, by preserving deleted/overwritten variables (e.g., in a cache), we may be able to speed up the recomputation of active variables (Han et al., 2016; Wang et al., 2017). However, we don't consider this approach as many data science workloads are memory-hungry with large training data and model sizes. Still, there might be cases where we can speed up recomputation by storing small overwritten variables, which we leave as future work. Correctness of ReconstructionAs stated in SS2.3, the AHG is allowed to have false positives, meaning it may indicate a cell accessed/modified variables that were not actually accessed/modified. While the false positives have a performance impact, they do not affect the correctness of identification. Theorem 4.1 ().: _Given the approximate AHG \(\mathcal{G}\) of ElasticNotebook with false positives, and the true AHG \(\mathcal{G}^{s}\), there is \(req^{s}(x,t^{s})\subseteqreq(x,t)\) for any variable \(x\in\mathcal{X}\), where \((x,t)\) and \((x,t^{s})\), \(req\) and \(req^{s}\) are the active VSs of \(x\) and reconstruction mapping functions defined on \(\mathcal{G}\) and \(\mathcal{G}^{s}\) respectively._ That is, for any arbitrary variable \(x\), while \(req(x,t)\) may contain cell executions unnecessary for recomputing \(x\), it will never miss any necessary cell executions (i.e., those in \(req(x,t^{s})\)). The proof is presented in Appendix A.2. ## 5. Correct & Efficient Replication This section covers how ElasticNotebook computes an efficient and correct plan for state replication with the AHG and profiled metrics. We describe correctness requirements in SS5.1, the cost model in SS5.2, the optimization problem in SS5.3, and our solution in SS5.4. ### Correctness Requirements ElasticNotebook aims to _correctly_ replicate session states. which we define the notion of in this section: Definition 8 ().: A replication of state \(\mathcal{X}\) is **value-equivalent** if \(\forall\alpha\in\mathcal{X},x\!=\!new(x)\), where \(new(x)\) is the value of \(x\) post-replication. A value-equivalent replication preserves the value of each individual variable and is guaranteed by the correct identification of \(req(x,t)\) for each variable \(x\) (SS4.3). However, it is additionally important that shared references are preserved, as defined below. Definition 9 ().: A value-equivalent replication of a session state \(\mathcal{X}\) is additionally **isomorphic** if \(\forall a,b,\,id(a)=id(b)\to id\_new(a)=id\_new(b)\), where \(a,\,b\) are arbitrary references (e.g., \(\mathbb{x}[\emptyset][1],\mathbb{y}.\,\text{foo}\)), and \(id(a),id\_new(a)\) are the unique IDs (i.e., memory addresses) of the objects pointed to by \(a\) before and after replication. Figure 6. 
Two uses of the ID Graph during AHG construction. Figure 7. Two variables sharing references (in Fig 5). They must be migrated/recomputed together for the correct replication, serving as constraints to our opt problem (see §5.3). ElasticNotebook defines replication as 'correct' only if it is isomorphic, requiring all shared references to be preserved: two references pointing to the same object pre-replication will still do so post-replication. That is, inter-object relations are identical (analogous to graph isomorphism). We describe how ElasticNotebook ensures isomorphic replication via its _linked variable constraint_ in SS5.3. ### Cost Model Our model captures the costs associated with (1) serializing variables, (2) writing byte data into storage (e.g., local SSD, cloud storage), and (3) rerunning cell executions. These costs are computed using the AHG and profiled system metrics. Variable Migration Cost. Migrating a variable (from one session to another) includes serializing it to the checkpoint file, then loading it into a new session. Given a subset of variables to migrate \(\mathcal{S}\subseteq\mathcal{X}\), the migration cost \(w_{M}\) can be expressed as follows: \[w_{M}(\mathcal{S})=\sum_{x\in\mathcal{S}}\left(\alpha\times w_{store}(x)+w_{load}(x)\right) \tag{1}\] where \(w_{store}(x)\) and \(w_{load}(x)\) are the time costs for serializing the value of \(x\) at checkpointing time into a file and unpacking into the new session, respectively. These times are estimated using the size of \(x\) and storage latency/bandwidth from ElasticNotebook's Profiler (SS3.1). The time costs for unserializable variables are set to infinity. \(\alpha\) is a coefficient for adjusting the time cost of storage; for example, if ElasticNotebook is to be invoked upon auto-suspension, \(\alpha\) can be set to a low value to discount the user-perceived time of storing variables prior to completely suspending a session (as the user is likely away). Variable Recomputation Cost. The Interceptor records cell runtimes during a session lifecycle (SS3.1). Combined with the reconstruction mapping \(req()\) for the AHG (SS4.3), the cost \(w_{R}\) for recomputing a subset of variables \(\mathcal{S}\subseteq\mathcal{X}\) can be defined as follows: \[w_{R}(\mathcal{S})=\sum_{c\in req(\mathcal{S})}w_{return}(c),\text{ where }req(\mathcal{S})=\bigcup_{x\in\mathcal{S}}req(x,t) \tag{2}\] where \((x,t)\) is the active VS of \(x\) and \(w_{return}(c):\mathcal{C}\rightarrow\mathbb{R}^{+}\) is the estimated time to rerun the CE \(c\) in the new session. Replication Plan Cost. Using migration and recomputation costs (i.e., Eqs. (1) and (2)), the total cost \(w\), with variables to migrate \(\mathcal{S}\) and variables to recompute \(\mathcal{X}-\mathcal{S}\), is expressed as: \[w(\mathcal{S})=w_{M}(\mathcal{S})+w_{R}(\mathcal{X}-\mathcal{S}) \tag{3}\] ### Optimization Problem for State Replication The goal is to find the variables to migrate \(\mathcal{S}\subseteq\mathcal{X}\) that minimizes the cost in Eq. (3). To ensure isomorphic replication in consideration of variable inter-dependencies, additional constraints are added. Constraint for Linked Variables. Two variables containing references to the same object (which we refer to as _linked variables_, e.g., 11 and 2dlist1 in Fig 7) must be either both migrated or recomputed, as migrating one and recomputing the other may result in their contained shared reference/alias being broken, as illustrated in Fig 7.
Let the set of linked variable pairs be denoted as \(\mathcal{L}\), then the constraint can be formally expressed as follows: \[(x_{1}\in\mathcal{S}\wedge x_{2}\in\mathcal{S})\vee(x_{1}\notin\mathcal{S} \wedge x_{2}\notin\mathcal{S})\vee(x_{1},x_{2})\in\mathcal{L} \tag{4}\] Problem definition.Using the cost model in Eq. (3) and the constraint in Eq. (4), we formally define the state replication problem: **Problem 1**.: **Optimal State Replication** Input: 1. AHG \(\mathcal{G}=\{\mathcal{V}\cup\mathcal{C},\mathcal{E}\}\) 2. Migration cost function \(w_{M}:2^{\mathcal{X}}\rightarrow\mathbb{R}^{+}\) 3. Recompute cost function \(w_{R}:2^{\mathcal{X}}\rightarrow\mathbb{R}^{+}\) 4. Linked variables \(\mathcal{L}\subseteq\mathcal{X}\times\mathcal{X}\) Output: A replication plan of subset of variables \(\mathcal{S}\subseteq\mathcal{X}\) for which we migrate (and another subset \(\mathcal{X}-\mathcal{S}\) which we recompute) Objective: Minimize replication cost \(w_{M}(\mathcal{S})+w_{R}(\mathcal{X}-\mathcal{S})\) Constraint: Linked variables are either both migrated or recomputed: \((x_{1},x_{2}\in\mathcal{S})\vee(x_{1},x_{2}\notin\mathcal{S})\vee(x_{1},x_{2})\in \mathcal{L}\) The next section (SS5.4) presents our solution to Prob 1. ### Solving State Replication Opt. Problem We solve Prob 1 by reducing it to a min-cut problem, with a \(src\)-\(sink\) flow graph constructed from the AHG such that each \(src\)-\(sink\) cut (a subset of edges, which, when removed from the flow graph, disconnects source \(s\) and sink \(t\)) corresponds to a replication plan \(\mathcal{S}\), while the cost of the cut is equal to the replication cost \(w_{M}(\mathcal{S})+w_{R}(\mathcal{X}-\mathcal{S})\). Therefore, finding the minimum cost \(src\)-\(sink\) cut is equivalent to finding the optimal replication plan. Flow Graph Construction.A flow graph \(H\coloneqq\{\mathcal{V}_{H},\mathcal{E}_{H}\}\) and its edge capacity \(\phi:\mathcal{E}_{H}\rightarrow\mathbb{R}^{+}\) are defined as follows: * \(\mathcal{V}_{H}=\mathcal{V}_{a}\cup\mathcal{C}\cup\{src,sink\}\): \(\mathcal{V}_{a}\) is active Vses, \(\mathcal{C}\) is cell executions, and \(src\) and \(sink\) are _dummy_ source and sink nodes. * \(\forall x\in\mathcal{V}_{a}\), \((src,(x,t))\in\mathcal{E}_{H}\) and \(\phi(src,(x,t))=w_{M}(x)\): We add an edge from the source to each active VS with a capacity equal to the migration cost of the variable. * \(\forall c\in\mathcal{C},(c,sink)\in\mathcal{E}_{H}\) and \(\phi(c,sink)=w_{return}(c)\): We add an edge with capacity from each CE to the sink with a capacity equal to the rerun cost of the CE. * \(\forall c\in\mathcal{C},c\in req(x,t)\rightarrow((x,t),c)\in\mathcal{E}_{H}\) and \(\phi((x,t),c)=\infty\) and \((x,t)\in\mathcal{V}_{a}\): We add an edge with infinite capacity from an active VS \((x,t)\) to a CE \(c\) if \((x,t)\) must be recomputed. * \(\forall(x_{1},x_{2})\in\mathcal{L},\ ((x_{1},t_{1})\leftrightarrow(x_{2},t_{2}))\in \mathcal{E}_{H}\) and \(\phi((x_{1},t_{1})\leftrightarrow(x_{2},t_{2}))=\infty\): We add a _bi-directional_ edge with an infinite Figure 8. Running min-cut on the flow graph constructed from the AHG in Fig 5. The partition (red) defined by the minimum cut (dashed edges) determines the replication plan. capacity between each pair of active VSe's corresponding to linked variables \(x_{1}\) and \(x_{2}\), e.g., 11 and 2dlist1. The flow graph \(\mathcal{H}\) for the AHG in Fig 5 is depicted in Fig 8. 
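The construction above maps directly onto an off-the-shelf max-flow/min-cut routine. The sketch below is our illustration (not ElasticNotebook's code) using networkx; it assumes the reconstruction mapping \(req\), the cost functions, and the linked-variable pairs are already available, and it also extracts the replication plan from the resulting cut as described next.

```python
# Sketch of the flow-graph construction and cut-based plan extraction using
# networkx. Edges added without a "capacity" attribute are treated by
# networkx's max-flow routines as having infinite capacity.
import networkx as nx

def replication_plan(active_vss, cells, req, linked_pairs, w_m, w_rerun):
    """active_vss: list of (name, t) pairs; cells: list of CE ids;
    req[(name, t)]: set of CEs needed to recompute that VS;
    linked_pairs: pairs of active VSes sharing references;
    w_m / w_rerun: per-variable migration and per-cell rerun cost functions."""
    G = nx.DiGraph()
    for vs in active_vss:
        G.add_edge("src", vs, capacity=w_m(vs))      # source -> VS: migration cost
        for c in req[vs]:
            G.add_edge(vs, c)                        # VS -> CE: infinite capacity
    for c in cells:
        G.add_edge(c, "sink", capacity=w_rerun(c))   # CE -> sink: rerun cost
    for u, v in linked_pairs:                        # linked VSes must end up together
        G.add_edge(u, v)
        G.add_edge(v, u)
    cut_cost, (src_side, sink_side) = nx.minimum_cut(G, "src", "sink")
    migrate = [vs for vs in active_vss if vs in sink_side]  # variables to store
    rerun = [c for c in cells if c in src_side]             # cells to re-run
    return migrate, rerun, cut_cost
```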
_Solution._ We can now solve Prob 1 by running a \(src\)-\(sink\) min-cut solving algorithm (i.e., Ford-Fulkerson (1997)) on \(H\). The set of edges that form the \(src\)-\(sink\) min-cut (dashed edges), when removed, disconnects \(src\) from \(sink\); therefore, it defines a partition (in red) of the nodes into nodes reachable from \(src\), \(\mathcal{V}_{H_{src}}\) and nodes unreachable from \(src\), \(\mathcal{V}_{H_{sink}}\). The replication plan can be obtained from the partition: * \(\mathcal{S}=\{x\mid(x,t)\in\mathcal{V}_{H_{sink}}\cap\mathcal{V}_{a}\}\) are the active variable snapshots (and thus variables) that we want to migrate; in the example, these variables are 11, 2dlist1, and gen. * \(\mathcal{V}_{H_{src}}\cap\mathcal{C}\) are the CFs which we will rerun post-migration to recompute \(\mathcal{X}-\mathcal{S}\). In the example, these CEs are \(t_{1}\), \(t_{2}\), and \(t_{3}\); when rerun, they recompute \(\mathbf{y}\), \(\mathbf{z}\), and \(\mathbf{x}\).2 Footnote 2: Evenning \(t_{3}\) also recomputes 11; however, it will be overwritten with the stored 11 in the checkpoint file following the procedure in §3.2. This is to preserve the link between 11 and 2dlist1. By construction of \(\mathcal{H}\), the sum of migration and recomputation costs of this configuration \(\mathrm{w}_{M}(\{x\mid(x,t)\in\mathcal{V}_{H_{sink}}\cap\mathcal{V}_{a}\}+w_ {B}(C_{a}-(\mathcal{V}_{H_{src}}\cap\mathcal{C}))\) is precisely the cost of the found \(src\)-\(sink\) min-cut. ## 6. Implementation and Discussion This section describes ElasticNotebook's implementation details (SS6.1) and design considerations (SS6.2). ### Implementation _Integrating with Jupyter._ For seamless integration, ElasticNotebook's data layer is implemented using a magic extension (Kumar et al., 2017), which is loaded into the kernel upon session initialization. The cell magic is automatically added to each cell (SS2.2) to transparently intercept user cell executions, perform code analyses, create ID Graphs and object hashes, and so on. _Serialization Protocol._ The Pickle protocol (e.g., __reduce__) is employed for (1) object serialization and (2) definition of reachable objects, i.e., an object \(y\) is reachable from a variable \(x\) if pickle(x) includes \(y\). As Pickle is the de-facto standard (in Python) observed by almost all data science libraries (e.g., NumPy, PyTorch (Kumar et al., 2017)), ElasticNotebook can be used for almost all use cases. _Handling Undeserializable variables._ Certain variables can be serialized but contain errors in its _deserialization_ instructions (which we refer to as _undeserializable_ variables), and are typically caused by oversights in incompletely implemented libraries (Kumar et al., 2017; Kumar et al., 2017). While undetectable via serializability checks prior to checkpointing, ElasticNotebook handles them via fallback recomputation: if ElasticNotebook encounters an error while deserializing a stored variable during session restoration, it will trace the AHG to determine and rerun (only) necessary cell executions to recompute said variable, which is still faster than recomputing the session from scratch. ### Design Considerations _Definition of Session State._ In ElasticNotebook, the session state is formally defined as the contents of the user namespace dictionary (user_ns), which contains key-value pairs of variable names to their values (i.e., reachable objects). The session state does not include local/module/hidden variables, which we do not aim to capture. 
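To make the cell-magic integration and the user_ns-based session state concrete, below is a minimal sketch (ours, not ElasticNotebook's released code) of a hidden cell magic that wraps each execution, times it, and inspects the user namespace; the AHG and cost-model updates are left as placeholder comments. Note that the simple identity comparison shown here only catches rebinding of names; detecting in-place modifications requires the hashing and ID Graph machinery of SS4.2.

```python
# Minimal sketch of a cell magic that intercepts execution, times the cell,
# and inspects the user namespace (user_ns) for newly bound names.
import time
from IPython.core.magic import Magics, magics_class, cell_magic

@magics_class
class InterceptMagics(Magics):
    @cell_magic
    def intercept(self, line, cell):
        before = dict(self.shell.user_ns)        # shallow snapshot of the state
        start = time.time()
        result = self.shell.run_cell(cell)       # execute the user's cell as usual
        runtime = time.time() - start
        rebound = {k for k, v in self.shell.user_ns.items()
                   if k not in before or before[k] is not v}
        # ... update the AHG with accessed/rebound variables and record `runtime` ...
        return result

def load_ipython_extension(ipython):
    ipython.register_magics(InterceptMagics)
```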
_Unobservable State / External Functions._ Although the Pickle protocol is followed by almost all libraries, there could be lesser-known ones with incorrect serialization (e.g., ignoring data defined in a C stack). To address this, ElasticNotebook can be easily extended to allow users to annotate cells/variables to inform our system that they must be recomputed for proper reconstruction. Mathematically, this has the same effect as setting their recomputation costs to infinity in Eq. (2). _Cell Executions with Side Effects._ Certain cell executions may cause external changes outside a notebook session (e.g., filesystem) and may not be desirable to rerun (e.g., uploading items to a repository). Our prototype currently does not identify these side effects as our focus is read-oriented data science and analytics workloads. Nevertheless, our system can be extended at least in two ways to prevent them. _(1: Annotation)_ We can allow users to add manual annotations to the cells that may cause side effects; then, our system will never re-run them during replications3_(2: Sandbook)_ We can block external changes by replicating a notebook into a sandbox with altered file system access (e.g., chroot (Kumar et al., 2017)) and blocked outgoing network (e.g., ufw (Kumar et al., 2017)). The sandbox can then be associated with regular file/network accesses upon successful restoration. Footnote 3: Replication may be unfeasible due to annotations, e.g., an unserializable variable requiring an cell execution annotated ’never-rerun’ to recompute. ElasticNotebook can detect these cases as they have infinite min-cut cost (§5.4), upon which the user can be warned to delete the problematic variable to proceed with replicating the remaining (majority of) variables in the state. _Non-deterministic Operations._ The replication has the same effect as rerunning the cells in the exact same order as they occurred in the past; thus, under the existence of nondeterministic operations (e.g., randint()), the reconstructed variables may have different values than the original ones. Users can avoid this by using annotations to inform ElasticNotebook to always copy them. _Library Version Compatibility._ Accurate replication is ensured when external resources (e.g., installed modules, database tables) remain the same before and after the replication. While there are existing tools (i.e., pip freeze (Kumar et al., 2017)) for reproducing computational environments on existing data science platforms (i.e., Jupyter Notebook, Colab) (Becker et al., 2016; Kumar et al., 2017), this work does not incorporate such tools. ## 7. Experimental Evaluation In this section, we empirically study the effectiveness of ElasticNotebook's session replication. We make the following claims: 1. **Robust Replication:** Unlike existing mechanisms, ElasticNotebook is capable of replicating almost all notebooks. (SS7.2) 2. **Faster Migration:** ElasticNotebook reduces session migration time to upscaled/downscaled machines by 85%-98%/84%-99% compared to rerunning all cells and is up to 2.07\(\times\)/2.00\(\times\) faster than the next best alternative, respectively. (SS7.3) 3. **Faster Resumption:** ElasticNotebook reduces session restoration time by 94%-99% compared to rerunning all cells and is up to 3.92\(\times\) faster than the next best alternative. (SS7.4) 4. **Low Runtime Overhead:** ElasticNotebook incurs negligible overhead--amortized runtime and memory overhead of <2.5% and <10%, respectively. (SS7.5) 5. 
**Low Storage Overhead:** ElasticNotebook's checkpoint sizes are up to 66% smaller compared to existing tools. (SS7.6) 6. **Adaptability to System Environments:** ElasticNotebook achieves consistent savings across various environments with different network speeds and available compute resources. (SS7.7) 7. **Scalability for Complex Notebooks:** ElasticNotebook's runtime and memory overheads remain negligible (\(<\)150ms, \(<\)4MB) even for complex notebooks with 2000 cells. (SS7.8) ### Experiment Setup _Datasets._ We select a total of 60 notebooks from 4 datasets: * Kaggle (Kaggle, 2009): We select 35 popular notebooks on the topic of EDA (exploratory data analysis) + machine learning from Kaggle created by Grandmaster/Master-level users. * JWST (Kaggle, 2010): We select 5 notebooks on the topic of data pipelining from the example notebooks provided for investigating data from the James Webb Space Telescope (JWST). * Tutorial (Kaggle, 2011): We select 5 notebooks from the Cornell Virtual Workshop Tutorial. These notebooks are lightweight and introduce tools (i.e., clustering, graph analysis) to the user. * Homework (Han et al., 2016; Kaggle, 2017; Kaggle, 2018): 15 in-progress notebooks are chosen from data science exercises. They contain out-of-order cell executions, runtime errors, and mistakes (e.g., df_backup=df4). Footnote 4: This creates a shallow copy of df, which does not serve the purpose of backup. Table 3 reports our selected notebooks' dataset sizes and runtimes. _Methods._ We evaluate ElasticNotebook against existing tools capable of performing session replication: [MISSING_PAGE_POST] ...linked components of a Matplotlib (Mikolov et al., 2017) plot (f, fig, ax); it serializes variables into individual files, which breaks object references and isomorphism. ElasticNotebook's linked variables constraint (SS5.3) ensures that it does not do so. **ElasticNotebook + Helix** fails to correctly replicate 5/60 notebooks containing variable aliases due to its lack of the linked variable constraint. **EN (No ID graph)** fails to correctly replicate 11/60 sessions due to missed indirect accesses and structural modifications causing incorrect construction of the AHG, which in turn leads it to recompute some variables with incorrect values. **CRIU** fails on one notebook (Zhou et al., 2017) which contains an invisible file; however, unlike ElasticNotebook's failures, this failure is currently a fundamental limitation in CRIU (Kumar et al., 2018). _Robust Migration across System Architectures._ We additionally performed session replication from our D32as VM (x64 architecture) to a D32pds V5 VM instance (arm64 architecture). The CRIU images cannot be replicated across machines with different architectures. In contrast, ElasticNotebook does not have such a limitation. ### Faster Session Migration This section compares the efficiency of ElasticNotebook's session migration to existing methods. We choose 10 notebooks with no unserializable variables (otherwise, existing methods fail) to compare the end-to-end session migration time achieved by different methods. We report upscaling and downscaling results in Fig 10 and Fig 16, respectively.
The design goal of ElasticNotebook is to reduce session replication time through balancing variable storage and recomputation, which is successfully reflected as follows. ElasticNotebook is able to reduce session migration time to the upscaled/downscaled VMs by 85%-98%/84%-99% compared to rerunning all cells. Compared to DumpSession, %Store, and CRIU, which store all variables in the checkpoint file, ElasticNotebook upscales/downscales up to 2.07\(\times\)/2.00\(\times\) faster than the best of the three. DumpSession, while being the next best alternative for upscaling/downscaling on 8/9 notebooks, falls short in robustness as demonstrated in SS7.2. %Store's individual reading and writing of each variable results in high overhead from multiple calls to the NFS for each migration. CRIU is the slowest non-rerun method for upscaling/downscaling on 6/7 notebooks, due to the size of its memory dump (higher I/O during migration) being up to 10\(\times\) larger compared to checkpoint files from native tools (SS7.6). ### Faster Session Restoration In this section, we compare the efficiency of ElasticNotebook's session restoration to existing methods. We generate checkpoint files using each method, then compare the time taken to restore the session from the checkpoint files on the 10 notebooks from SS7.3. For ElasticNotebook, we set the coefficient \(\alpha\) to 0.05 (SS5.2) to emphasize session restoration time heavily. We report the results in Fig 11. ElasticNotebook's restoration time is 94%-99% faster compared to full rerun. Compared to the baselines, ElasticNotebook is up to 3.92\(\times\) faster than the next best alternative. This fast restoration can be attributed to ElasticNotebook being capable of adapting to the new optimization objective, unlike the baselines: for example, on the Sklearn (Kumar et al., 2018) notebook, instead of re-running cell 3 (df = pd.read_csv(...)) to re-read the dataframe df into the session as in the migration-centric plan, the restoration-centric plan opts to store df instead. The reasoning is that despite the sum of serialization and deserialization times of df being greater than the re-reading time with pd.read_csv (6.19s + 1.17s > 5.5s), the deserialization time by itself is less than the re-reading time (1.17s < 5.5s); hence, storing df is the optimal choice. Figure 11. ElasticNotebook's session restoration time vs. existing tools. Times normalized w.r.t. **R**erunAll. **ElasticNotebook speeds up restoration by 94%-99%, and is up to 3.92\(\times\) faster compared to the next best alternative. \begin{table} \begin{tabular}{l|c c c c c c c c c c c} \hline \hline & Sklearn & NLP & StoreSales & TPS-Mar & Glove & Trading & Timeser. & Stacking & Agricultural.
& LANL & HW-LM & HW-ex3 \\ \hline Notebook runtime (s) & 58.48 & 1016.77 & 283.06 & 178.42 & 696.64 & 687.54 & 204.10 & 788.54 & 269.40 & 1437.87 & 22.54 & 27.29 \\ Total cell monitoring time (s) & 1.26 & 4.30 & 0.81 & 1.34 & 6.43 & 0.46 & 0.60 & 2.13 & 3.08 & 0.19 & 0.50 & 0.09 \\ **Runtime overhead (%)** & **2.14** & **0.42** & **0.28** & **0.78** & **0.92** & **0.07** & **0.29** & **0.27** & **1.14** & **0.01** & **2.21** & **0.32** \\ \hline User Namespace memory usage (MB) & 1021.45 & 325.82 & 6732.17 & 1558.52 & 347.16 & 1363.32 & 130.27 & 20211.51 & 5026.48 & 7641.19 & 31.28 & 19.06 \\ ElasticNotebook memory usage (MB) & 19.16 & 4.73 & 0.14 & 1.69 & 33.25 & 4.09 & 0.28 & 0.33 & 0.06 & 0.14 & 0.99 & 0.47 \\ **Memory overhead (%)** & **1.88** & **1.45** & **0.002** & **0.11** & **9.58** & **0.30** & **0.21** & **0.002** & **0.001** & **0.001** & **3.16** & **2.45** \\ \hline \hline \end{tabular} \end{table} Table 5. Runtime and memory overhead of ElasticNotebook’s workflow monitoring on selected notebooks. Figure 10. ElasticNotebook’s session upscaling time (D32as v5 VM\(\rightarrow\)D64as v5 VM) vs. existing tools. Times normalized w.r.t. **R**erunAll. **ElasticNotebook speeds up migration by 85%-98% and is up to 2.07\(\times\) faster than the next best alternative. ### Low Runtime Overhead This section investigates the overhead of ElasticNotebook's notebook workflow monitoring. We measure ElasticNotebook's total time spent in pre/post-processing steps before/after each cell execution for updating the AHG and cell runtimes (_Total cell monitoring time_), and total storage space taken to store the AHG, ID Graphs, and hashes at checkpoint time (ElasticNotebook _memory usage_). We report the results in Table 5. ElasticNotebook's cell monitoring incurs a maximum and median runtime overhead of (only) 2.21% and 0.6%; thus, ElasticNotebook can be seamlessly integrated into existing workflow. ElasticNotebook is similarly memory-efficient as its stored items (AHG, ID Graphs, and hashes) are all metadata largely independent of the size of items in the session: the median memory overhead is 0.25%, with the worst case being 9.58%. _Fine-grained Analysis_. To study the per-cell time and memory overheads during experimental notebook usage, we examined three notebooks from Homework category to confirm the maximum time and memory overheads were 92ms and 4.9MB, respectively. We report details in Appendix A.1. ### Lower Storage Overhead This section measures the storage cost of ElasticNotebook's checkpoint files: we compare the migration-centric checkpoint file sizes from ElasticNotebook and those from other baseline methods. We report select results in Fig 12. ElasticNotebook's AHG allows it to choose between storing and recomputing each variable, reflected in ElasticNotebook's checkpoint files being up to 67% smaller compared to DumpSession's. For example, on the Agriculture (Yeh et al., 2017) notebook, ElasticNotebook recomputes the train-test splits of the input dataframes X and Y (Cell 5, x_train, x_test,... = train_test_split(X, Y)) instead of storing them in the checkpoint file: this saves considerable storage space (2.5GB) in addition to speeding up migration. Conversely, CRIU's checkpoint file sizes can be 10\(\times\) larger than ElasticNotebook's as it additionally dumps memory occupied by the Python process itself and imported modules, no matter necessary or not, into the checkpoint file. 
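The storage savings above and the restoration-centric plan of §7.4 both come down to the same per-variable trade-off between storing a serialized copy and recomputing it from its cell. The sketch below is a deliberately simplified, hypothetical reading of that trade-off: the real system solves a global min-cut over the AHG, and the exact role of the coefficient \(\alpha\) from §5.2 is assumed here rather than taken from the paper. The timings are the df figures from the Sklearn example in §7.4; the helper name is ours.

```python
def plan_for_variable(serialize_s, deserialize_s, recompute_s, alpha=1.0):
    """Pick the cheaper way to move one variable into the new session.

    Hypothetical reading of the objective: `alpha` discounts the checkpoint-writing
    (serialization) time, so alpha = 0.05 makes the decision hinge almost entirely
    on how quickly the restored session becomes usable. Recomputation happens
    entirely on the restoring side, so it is never discounted.
    """
    cost_store = alpha * serialize_s + deserialize_s
    cost_recompute = recompute_s
    if cost_store < cost_recompute:
        return ("store", cost_store)
    return ("recompute", cost_recompute)

# Timings reported for the dataframe `df` of the Sklearn notebook (cell 3, pd.read_csv).
ser, deser, rerun = 6.19, 1.17, 5.5

print(plan_for_variable(ser, deser, rerun, alpha=1.0))    # ('recompute', 5.5): 6.19 + 1.17 > 5.5
print(plan_for_variable(ser, deser, rerun, alpha=0.05))   # ('store', ~1.48): 1.17 alone beats 5.5
```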
Output sizes from RerunAll (i.e., notebook metadata size consisting of cell code and outputs) are provided for comparison. While metadata are significantly smaller than checkpoint files, the storage benefit is offset by significantly slower session recovery times (§7.4).

### Performance Gains Across Environments

This section demonstrates ElasticNotebook's operation in environments with varying specifications. We perform a parameter sweep on the NFS network bandwidth via rate limiting (Krizhevsky et al., 2017) and compare the migration time of ElasticNotebook, DumpSession (migrating all variables), and RerunAll. We report the results in Fig 13. ElasticNotebook's balancing of variable storage and recomputation ensures that it is always at least as fast as the faster of DumpSession and RerunAll. Notably, ElasticNotebook can adapt to the relative availability of network bandwidth and compute power: as the bandwidth decreases, the replication plan is changed accordingly to migrate more variables through recomputation rather than storage. For example, on the Stacking (Krizhevsky et al., 2017) notebook, at regular bandwidth (\(>\)400Mbps), ElasticNotebook's replication plan includes migrating most of the session state, opting only to recompute certain train/test splits (i.e., Cell 37, Y_train, Y_validation). At \(<\)400 Mbps, ElasticNotebook modifies its plan to recompute instead of store a computationally expensive processed dataframe (Cell 39, latest_record). At \(<\)100 Mbps, ElasticNotebook modifies its plan again to only store the imported class and function definitions (i.e., XGBRegressor, mean_squared_error in Cell 1) while recomputing the rest of the notebook.

### Scaling to Complex Workloads

In this section, we test the scalability of ElasticNotebook's session replication on complex notebook sessions with a large number of cell executions and re-executions. Specifically, we choose 3 tutorial notebooks, on which we randomly re-execute cells and measure (1) the size of ElasticNotebook's AHG and (2) the optimization time for computing the replication plan at up to 2000 cell re-executions.

Figure 14. Scalability of ElasticNotebook with cell execution count. The size of the AHG increases linearly. Replication plan optimization time increases sub-linearly.

Figure 12. ElasticNotebook's checkpoint file size vs. existing tools. Sizes normalized w.r.t. output from DumpSession. ElasticNotebook's checkpoint file size is up to 67% smaller compared to those from existing tools (excluding RerunAll).

Figure 13. ElasticNotebook adapts to different environments for its replication plan. The lower the network bandwidth, the more variables are recomputed.

We report the results in Fig 14. The memory consumption of ElasticNotebook's AHG exhibits linear scaling vs. the number of cell executions, reaching only \(<\)4MB at 2000 cell re-executions, which is negligible compared to the memory consumption of the notebook session (\(>\)1GB) itself. ElasticNotebook's optimization time for computing the replication plan similarly exhibits linear scaling, reaching a negligible \(<\)150ms at 2000 cell re-executions: ElasticNotebook's chosen algorithm for solving min-cut, Ford-Fulkerson (Fulkerson, 1983), has time complexity \(O(Ef)\), where \(E\) is the number of edges in the AHG and \(f\) is the cost of the optimal replication plan; the former scales linearly while the latter is largely constant. ## 8.
Related Work _Intermediate Result Reuse in Data Science._ The storage of intermediate results has been explored in various contexts in Data Science due to the incremental and feed-forward nature of tasks, which allows outputs from prior operations to be useful for speeding up future operations (Kraus et al., 2017; Kraus et al., 2018; Kraus et al., 2019; Kraus et al., 2020; Kraus et al., 2020; Kraus et al., 2020). Examples include caching to speed up model training replay for ML model diagnosis (Kraus et al., 2019; Kraus et al., 2020), caching to speedup materialized view refresh workloads (Kraus et al., 2020), caching to speed up anticipated future dataframe operations in notebook workflows (Kraus et al., 2020), and storage of cell outputs to facilitate graphical exploration of the notebook's execution history for convenient cell re-runs (Kraus et al., 2017; Kraus et al., 2020). There are related works (Kraus et al., 2020; Kraus et al., 2020) which algorithmically explore the most efficient way to (re)compute a state given currently stored items; compared to our work, while Helix (Helix, 2020) similarly features balancing loading and recomputation, its model lacks the linked variable constraint which may result in silently incorrect replication if directly applied to the computational notebook problem setting. _Data-level Session Replication._ Session replication on Jupyter-based platforms can be performed with serialization libraries (Kraus et al., 2017; Kraus et al., 2018; Kraus et al., 2020; Kraus et al., 2020). There exists a variety of checkpoint tools built on these serialization libraries: IPython's %Store (Kraus et al., 2020) is a Pickle-based (Kraus et al., 2020) interface for saving variables to a key-value store; however, it breaks object references as linked variables are serialized into separate files. The Dill-based (Dill-based, 2017) DumpSession (Dill-Dill, 2017) correctly resolves object references, yet it still fails if the session contains unserializable objects. Tensorflow (Kraus et al., 2018) and Pytorch (Kraus et al., 2020) offer periodical checkpointing during ML model training limited to objects within the same library. Jupyter's native checkpointing mechanism (Dill-Dill, 2017) only saves cell metadata and often fails to exactly restore a session due to the common presence of hidden states. Compared to existing data-level tools, session replication with ElasticNotebook is both more efficient and robust: the Application History Graph enables balancing state storage and recomputation, which achieves considerable speedup while avoiding failure on unserializable objects. _System-Level Session Replication._ Session replication can similarly be performed using system-level checkpoint/restart (C/R) tools, on which there is much existing work (Kraus et al., 2017; Kraus et al., 2018; Kraus et al., 2020; Kraus et al., 2020; Kraus et al., 2020; Kraus et al., 2020; Kraus et al., 2020). Applicable tools include DMTCP (Brock et al., 2018) and CRIU (Kraus et al., 2020); recently, CRUM (Kraus et al., 2020) and CRAC (Kraus et al., 2020) have explored extending C/R to CUDA applications. Elsa (Kraus et al., 2020) integrates CRIU with JupyterHub to enable C/R of JupyterHub servers. Compared to ElasticNotebook, system-level tools are less efficient and robust due to their large memory dump sizes and limited cross-platform portability, respectively. 
_Lineage Tracing._ Lineage tracing has seen extensive use in state management to enable recomputation of data for more efficient storage of state or fault tolerance (Kraus et al., 2017; Kraus et al., 2020; Kraus et al., 2020; Kraus et al., 2020; Kraus et al., 2020; Kraus et al., 2020). Recently, the usage of data lineage in computational notebooks has enabled multi-version notebook replay (Kraus et al., 2020), recommending notebook interactions (Kraus et al., 2020), creating reproducible notebook containers (Brock et al., 2018), and program slicing, i.e., finding the minimal set of code to run to compute certain variable(s) (Kraus et al., 2020; Kraus et al., 2020; Kraus et al., 2020; Kraus et al., 2020). This work adopts lineage tracing techniques to capture inter-variable dependencies (the Application History Graph) for optimization; to the best of our knowledge, existing works on Python programs focus on capturing value modifications (via equality comparisons); however, our techniques additionally identify and capture _structural changes_ via the ID graph, which is crucial for preserving variable aliases and avoiding silent errors during state replication.

_Replicating Execution Environment._ An identical execution environment may be necessary for session replication on a different machine. There is some recent work exploring environment replication for Jupyter Notebook via containerizing input files and modules (Brock et al., 2018; Kraus et al., 2020). While useful in conjunction with ElasticNotebook, we consider these works to be largely orthogonal.

_Notebook Parameterization and Scripts._ There exist works on executing notebooks in parameterized form for systematic experimentation (e.g., in the form of a script (Kraus et al., 2020; Kraus et al., 2020) or papermill (Kraus et al., 2020)). While ElasticNotebook is designed for use within interactive notebook interfaces, it is similarly applicable to the migration of parameterized notebook execution results.

## 9. Conclusion

In this work, we have proposed ElasticNotebook, a new computational notebook system that newly offers elastic scaling and checkpointing/restoration. To achieve this, ElasticNotebook introduces a transparent data management layer between the user interface and the underlying kernel, enabling robust, efficient, and platform-independent state replication for notebook sessions. Its core contributions include (1) low-overhead, on-the-fly application history construction and (2) a new optimization for combining copying and re-computation of variables that comprise session states. We have demonstrated that ElasticNotebook can reduce upscaling, downscaling, and restoration times by 85%-98%, 84%-99%, and 94%-99%, respectively, on real-world data science notebooks with negligible runtime and memory overheads of \(<\)2.5% and \(<\)10%, respectively. In the future, we plan to achieve higher efficiency and usability by tracing state changes at a finer level. Specifically, we will introduce _micro-cells_ to capture code blocks inside a cell that run repeatedly (e.g., a for-loop for machine learning training). Then, the system will automatically store intermediate models (along with other metadata) that will enable live migration and checkpointing/restoration for long-running cell executions.

## Acknowledgments

The authors are grateful to Chandra Chekuri and Kent Quanrud for assistance with the derivation of the reduction to min-cut employed in ElasticNotebook.
This work is supported in part by the National Center for Supercomputing Applications and Microsoft Azure
2307.16526
No Fair Lunch: A Causal Perspective on Dataset Bias in Machine Learning for Medical Imaging
As machine learning methods gain prominence within clinical decision-making, addressing fairness concerns becomes increasingly urgent. Despite considerable work dedicated to detecting and ameliorating algorithmic bias, today's methods are deficient with potentially harmful consequences. Our causal perspective sheds new light on algorithmic bias, highlighting how different sources of dataset bias may appear indistinguishable yet require substantially different mitigation strategies. We introduce three families of causal bias mechanisms stemming from disparities in prevalence, presentation, and annotation. Our causal analysis underscores how current mitigation methods tackle only a narrow and often unrealistic subset of scenarios. We provide a practical three-step framework for reasoning about fairness in medical imaging, supporting the development of safe and equitable AI prediction models.
Charles Jones, Daniel C. Castro, Fabio De Sousa Ribeiro, Ozan Oktay, Melissa McCradden, Ben Glocker
2023-07-31T09:48:32Z
http://arxiv.org/abs/2307.16526v1
# No Fair Lunch: A Causal Perspective on Dataset Bias in Machine Learning for Medical Imaging ###### Abstract As machine learning methods gain prominence within clinical decision-making, addressing fairness concerns becomes increasingly urgent. Despite considerable work dedicated to detecting and ameliorating algorithmic bias, today's methods are deficient with potentially harmful consequences. Our causal perspective sheds new light on algorithmic bias, highlighting how different sources of dataset bias may appear indistinguishable yet require substantially different mitigation strategies. We introduce three families of causal bias mechanisms stemming from disparities in prevalence, presentation, and annotation. Our causal analysis underscores how current mitigation methods tackle only a narrow and often unrealistic subset of scenarios. We provide a practical three-step framework for reasoning about fairness in medical imaging, supporting the development of safe and equitable AI prediction models. ## 1 Introduction Machine learning (ML) algorithms for medical image analysis are rapidly becoming powerful tools for supporting high-stakes clinical decision-making. However, these algorithms can reflect or amplify problematic biases present in their training data [1, 2, 3, 4]. While ML methods have potential to improve outcomes by improving diagnostic accuracy and throughput, they often generalise poorly across clinical environments [5] and may exhibit algorithmic bias leading to worse performance in underrepresented populations [6, 7, 8, 9, 10, 11]. In recent years, many methods for mitigating algorithmic bias in image analysis have been proposed [12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23], however, it remains unclear when each method is most appropriate or even valid to employ [24, 25, 26]. Bias mitigation methods may show convincing results on one benchmark yet appear useless or even harmful on others [13], raising the question: how should we effectively tackle bias in ML for medical imaging? In this perspective, we discuss how the language of causality can illuminate the study of fairness. Inspired by causal work in dataset shift [27, 28, 29, 30, 31] and domain adaptation [32, 33, 34], we argue that understanding the mechanisms of dataset bias which may _cause_ predictive models to become unfair is critical for reasoning about what steps to take in practice. We discuss how bias mitigation techniques may successfully combat one bias mechanism but be theoretically ruleit against others. We identify three key families of clinically relevant bias mechanisms: (i) _prevalence disparities_, (ii) _presentation disparities_, and (iii) _annotation disparities_. We highlight how careful analysis of the underlying characteristics of these mechanisms is critical for informing the selection of appropriate methods and metrics. Our causal considerations are intended to help guide future research in the field. ## 2 Dataset Bias in Imaging We consider image classification problems where we aim to learn a fair disease classifier, given a potentially biased training dataset of images and targets (e.g. class labels). Where much work focuses on algorithmic bias [35], concerning errors introduced by imperfect models, we take a step back and focus on dataset bias. From a machine learning perspective, dataset bias would be reflected even by a 'perfect' model with no error on its biased training dataset. 
By focusing on what such a model would learn from a given dataset, we can use tools from causal reasoning to study how the underlying mapping between images and targets shifts across groups and settings. This is especially relevant in today's paradigm of high-capacity deep learning models, which often achieve near-zero training error when trained with the standard approach of empirical risk minimization (ERM) [36]. Throughout this article, we take a graphical approach to causality, relying on the principles explained in our brief primer in Appendix A. We refer readers looking for further introductory material to Peters et al. [37] and to Pearl [38]. Causality and sensitive informationWe begin with a causal formulation of the image classification problem, examining the data-generating process behind medical datasets. In medical image analysis, we deal with medical scans \(X\) from patients with an underlying condition \(Z\), causing pathological structures to be visible in their scans (\(Z\to X\), read as '\(Z\)_causes_\(X\)'). We wish to learn a model which may classify the condition of unlabelled images in deployment. However, since \(Z\) is not directly observable, we must train our models to predict a proxy target \(Y\), collected as part of the training dataset. For didactic clarity, our examples in this article primarily focus on tasks where \(Y\) derives from \(Z\), such as confirmed diagnoses or patient outcomes. We may view this setup as generalising the common assumption of anticausal classification [27, 39]. Where anticausal tasks traditionally assume that the target is a gold standard for the underlying condition (\(Y\coloneqq Z\)), our formulation allows for the more general case where there may be label noise (\(Y\gets Z\)). Finally, we include a selection node \(S\) in our diagrams to consider possible selection biases, where all observed data points are conditioned on \(S\). When selection is random, we may omit \(S\) for simplicity. The defining theme of fair image analysis is that images contain sensitive information about individuals, which models may learn to exploit inappropriately. In group fairness, this information encodes membership of a clinically or socially relevant population subgroup, denoted by a sensitive attribute \(A\) (such as self-reported race, biological sex, age, socioeconomic status etc.). It is thus helpful for fairness analysis to construct our causal diagram with the image decomposed into two causal factors: \(X_{Z}\), representing the pathological structures directly caused by the disease (\(Z\to X_{Z}\)), and \(X_{A}\), features that encode subgroup-related sensitive information (\(A\to X_{A}\))1. When these causal factors are independent, \(X_{A}\) provides no information about the disease, so it is not useful for the classification task. This constraint is expressed as \(P(Y\mid X_{Z},X_{A},S)=P(Y\mid X_{Z},S)\), equivalent to the conditional independence statement \(Y\perp X_{A}\mid\{X_{Z},S\}\). We call this the "no-bias" criterion. With a no-bias dataset, a poorly trained model may exhibit performance disparities across groups, but this would result from algorithmic bias, not dataset bias. In Fig. 2, we illustrate the simplest structure that satisfies the no-bias criterion, where the independence relationship may be derived through d-separation [40, 41] (explained in Appendix A). Footnote 1: We define sensitive information as any image features that are predictive of the sensitive attribute. 
For example, skin pigmentation may be sensitive information if self-reported race is the sensitive attribute. If geographical location is the attribute and patients in different hospitals are scanned by different machines, artefacts caused by equipment settings may be considered sensitive. In practice, we often see that models trained with ERM to predict \(Y\) also acquire the ability to predict \(A\)[42, 43, 44]. This occurs when the no-bias criterion does not hold, and thus exploiting sensitive information improves task performance on the biased training dataset. We wish to represent such situations with our causal diagram. By applying d-separation to Fig. 2, we can see that any causal pathway linking \(A\) to \(Z\), \(X_{Z}\), or \(Y\) provides a connecting path between \(X_{A}\) and \(Y\) and hence violates the no-bias criterion. These are the core causal mechanisms behind dataset bias in medical imaging2 and are shown in Fig. 3. We refer to them as _prevalence disparities, presentation disparities_, and _annotation disparities_, respectively. Footnote 2: While there may be additional settings where the no-bias criterion is violated by unobserved confounding \(U\) between \(X_{A}\) and \(X_{Z}\), notice that this subgraph (\(X_{A}\gets U\to X_{Z}\)) is near-identical to the confounding in our formulation of presentation disparities. We thus expect insights on presentation disparities to be similarly applicable in settings where \(X_{A}\) and \(X_{Z}\) are intrinsically entangled due to causal inter-relationships. Detection and mitigation, however, may be more challenging when \(U\) is unobserved, unlike \(A\). Figure 1: Basic causal structures of medical imaging tasks. An underlying condition \(Z\) influences the image \(X\) and label \(Y\) for each individual. Selection \(S\) may be random (left) or dependent on any combination of \(\{X,Y,Z\}\) (right). In all cases, we wish to learn a model from observed data to predict \(P(Y\mid X)\). Unfilled nodes represent unobserved variables, and concentric nodes represent selection variables, which are conditioned upon. We use this convention throughout this section. Figure 3: Causal structures of dataset bias in medical imaging. The relevant causal pathways for each are highlighted in red. Prevalence disparities involve a path between the sensitive attribute and the disease prevalence Presentation disparities involve a path between the sensitive attribute and the disease presentation. Annotation disparities involve a path between the sensitive attribute and the annotation policy. These disparities may occur due to direct causal links (left diagrams) or collider biases from selection effects (middle diagrams). Each bias causes the optimal decision boundary to depend on sensitive information (right diagrams). Fair and unfair causal pathwaysAlthough dataset bias induces models to exploit sensitive information, this is not necessarily inappropriate in all cases. If a legitimate biological difference exists between groups, sensitive information may be relevant for disease prediction[45]. In this case, we would want a trained model to capture this information by learning group-specific disease mechanisms. In contrast, models may unfairly reflect spurious correlations caused by historical biases in healthcare provision or the diagnostic process. To account for both possibilities, we must distinguish between fair and unfair causal pathways[46]. 
Fair causal pathways contain information our model should use to inform predictions, whereas unfair pathways should be ignored or mitigated by a learning algorithm. For real-world datasets, determining whether a causal pathway is fair or unfair is challenging and requires specific knowledge, including the ethical considerations of the particular application domain[45]. A simplistic guide is to imagine an unbiased deployment setting and consider whether we would expect a particular causal pathway to remain present. If not, the path is likely unfair. For example, a historic labelling policy may cause some groups to be underdiagnosed in training data - if this effect is not expected to persist in deployment, we say that the policy is unfair. This heuristic allows us to convert any fairness problem into a dataset shift problem. A fair model in this context is one with perfect classification performance when deployed on a (potentially hypothetical) dataset with all unfair causal pathways removed. This problem framing helps to simplify and clarify the consideration of fairness methods and metrics. **Measuring and mitigating bias** Group fairness metrics aim to quantify algorithmic bias by measuring how classifier properties differ across subgroups. However, they are difficult to interpret and are often mutually incompatible[47]. With our causal formulation, we may recontextualise group fairness metrics as (necessary but not sufficient) measures of properties we expect an unbiased prediction model to have. When fairness metrics are incompatible, we may often view each metric as assuming a different form of the hypothetical unbiased dataset. For example, when all causal pathways are considered fair, the unbiased deployment setting may be assumed to be independent and identically distributed (i.i.d.) to the training data. In this case, so-called 'bias-preserving' metrics[48], such as equal opportunity[49], should be favoured, and fairness is aligned with maximising performance at training time. In contrast, when our dataset contains unfair pathways, our hypothetical unbiased setting is not i.i.d. to the training dataset, so train-time performance may be misleading, and we should favour 'bias-transforming'[48] metrics such as demographic parity[50]. Causal analysis such as this may help practitioners to understand better the assumptions involved in choosing fairness metrics, as well as the elusive concept of fairness-accuracy tradeoffs under different sources of dataset bias[51, 52]. We refer readers to Plecko and Bareinboim[53] for a more detailed causal examination of fairness metrics. One advantage of framing the bias mitigation problem as one of generalisation to an unbiased dataset is that we can use causal transportability theory[54, 55, 56] to investigate the circumstances under which we may expect a model trained on the biased dataset to be appropriate for the unbiased deployment setting. Revisiting the setting where all causal pathways are considered fair, empirical risk minimisation (ERM) is an appropriate learning strategy, as we wish to maximise performance on an i.i.d. deployment setting[36]. Conversely, ERM is inappropriate in cases of unfair presentation disparities and unfair annotation disparities, as the underlying mapping between disease features and targets is expected to shift from training to deployment. The case of unfair prevalence disparities is particularly interesting, as the disparity may disappear in the limit of infinite data or with targeted data collection.
However, in practice, the class imbalance is likely to lead to ERM models becoming miscalibrated, and there may be further issues if the observed data does not cover the support of the unbiased distribution[27]. Detailed causal transportability analysis of further bias mitigation methods, such as adversarial training[14, 15, 16, 17], is beyond the scope of this article but is fertile ground for future work. We briefly discuss the applicability of prominent methods from the literature in Section 3, emphasising that no method may effectively tackle all biases. No fair lunchWhen viewing a fair model as one which generalises from a biased training dataset to an unbiased deployment one, a simple problem reveals itself: no model may attain perfect performance over all possible deployment settings[57]. While we've demonstrated in anticausal settings how different biases may require different mitigation strategies, the general case is even more challenging. For any given training dataset, we could always construct opposing causal models which are equally compatible with the observed data[58] and would render the same model or metric appropriate or inappropriate. This is the crux of this article. _All methods for mitigating bias and metrics for measuring it must make causal assumptions about the structure of the observed dataset, including ethical assumptions about which causal pathways must be preserved or mitigated_. This is closely related to the recently established fundamental problem of causal fairness analysis (FPCFA)[53]. With this in mind, we now focus on how this problem is relevant in medical imaging. ## 3 Medical Mechanisms of (un)Fairness In application areas where datasets have a common causal structure and unambiguous sources of bias, the FPCFA may not be insurmountable; it may be possible to develop a standardised toolbox of methods and metrics for mitigating bias (for example, conditional independence testing may be sufficient to identify shifts from a limited set of causal structures[59]). In medical imaging, however, we deal with a wide range of dataset characteristics stemming from the use of various imaging modalities, different patient populations, clinical tasks, diagnostic processes and workflows, each contributing to the underlying causal processes with different potential sources of bias[60]. We illustrate these complications with six clinically inspired examples of dataset biases (one pair for each disparity in Fig. 3). In each example pair, the causal structures are indistinguishable from observational data alone - without causal analysis, it would be impossible to mitigate them appropriately. **Prevalence Disparities** When there is a causal pathway between the sensitive attribute and the prevalence of the disease, we observe _individuals belonging to different subgroups being represented with different prevalence among the targets_. When a dataset exhibits an unfair prevalence disparity, ERM models trained on it will be miscalibrated when deployed in settings where this bias is not present - leading to under- or over-diagnosis of the condition in one or more groups. We demonstrate an example in Fig. 4b, involving predicting biopsy outcomes from skin lesion images. In this example, patients from different racial groups may receive disparate access to healthcare[61, 62], skewing the observed prevalence of the disease at training time. This is a well-studied situation. 
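Before turning to mitigation approaches, a small synthetic simulation illustrates the mechanism just described: the disease rate is identical in both groups, but selection into the training dataset depends on group membership, so an ERM model learns group-dependent risk and is miscalibrated on an unbiased deployment population. All quantities below (sample size, prevalence, selection rate) are invented for illustration; this is not an analysis of any clinical dataset.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200_000

def sample_population(n):
    """Unbiased world: the condition has the same 10% prevalence in both groups."""
    a = rng.integers(0, 2, n)               # sensitive attribute A (two groups)
    z = rng.binomial(1, 0.10, n)            # underlying condition Z
    x = z + rng.normal(0.0, 1.0, n)         # disease-related image feature X_Z
    return a, z, x

# Biased training sample: diseased patients from group 1 reach the clinic
# (and hence the dataset) only half as often, a selection effect.
a, z, x = sample_population(n)
keep = rng.random(n) < np.where((a == 1) & (z == 1), 0.5, 1.0)
a_tr, z_tr, x_tr = a[keep], z[keep], x[keep]
for g in (0, 1):
    print(f"observed training prevalence, group {g}: {z_tr[a_tr == g].mean():.3f}")

# ERM on the biased sample, with the group attribute available as a feature.
model = LogisticRegression().fit(np.column_stack([x_tr, a_tr]), z_tr)

# Deployment on an unbiased population: risk for group 1 is now under-predicted.
a_te, z_te, x_te = sample_population(n)
p = model.predict_proba(np.column_stack([x_te, a_te]))[:, 1]
for g in (0, 1):
    m = a_te == g
    print(f"group {g}: true rate {z_te[m].mean():.3f}, mean predicted risk {p[m].mean():.3f}")
```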
Approaches for mitigating class imbalance, such as subgroup-aware resampling[21] and data augmentation[13], may be theoretically sound for tackling prevalence disparities. Similarly, methods for fair representation learning[50], which prevent the model from using sensitive information, may also be appropriate. In particular, adversarial training methods have empirically shown promise in mitigating prevalence disparities on benchmark datasets[14, 15, 16, 17]. In some cases, prevalence disparities may not be unfair. For example, although age is often considered a sensitive attribute, in Fig. 4a, we demonstrate the task of predicting patient outcomes from brain MRI images. Here, age is a clinically meaningful risk factor for Alzheimer's disease[63, 64] - the correlation between age and disease is not spurious and is not expected to disappear when translating a trained model into clinical practice. We should thus encourage our model to extract age-related features from the image to inform and calibrate its predictions correctly. Methods such as adversarial training on the age attribute would inappropriately worsen performance by forcing models to ignore clinically meaningful information. Suppose our model cannot accurately determine age from the image alone. In that case, it may even be necessary to supply this metadata explicitly at inference time, such as in domain discriminative training[12] or decoupled classification[65]. We draw an analogy, in this case, to human experts, who often use risk factors and other background information beyond the disease features alone to inform their decisions[66]. **Presentation Disparities** When there is a causal pathway between the sensitive attribute and the disease-related physiology, we observe _individuals belonging to different subgroups presenting different disease features for the same underlying condition_. Presentation disparities are complex and often overlooked bias mechanisms and are especially relevant in medical imaging. It is not uncommon for different groups to be scanned with different equipment or to have natural variations in physical characteristics, causing the underlying condition to present differently. In Fig. 5b, we see an unfair example in diagnostic ultrasound. Here, patients in different geographic locations are referred for scans at different points in their disease progression due to differences in referral policy[67, 68]. This causes the disease to appear systematically different for groups in each location. This disparity fundamentally alters the mapping from images to targets across groups - an ERM model trained on this dataset will pick up on location-specific scanner artefacts and will not be transportable to a setting without the bias. Today, it is unclear if it is possible to mitigate unfair presentation disparities in the general case, aside from simply collecting better data. In fact, we postulate that hidden presentation disparities may be a factor behind surprising recent results demonstrating the failure of bias mitigation methods[12, 13]. For instance, Wang et al.[12] observe that adversarial training methods worsen task performance on the CIFAR-S benchmark because they are unable to disentangle the sensitive information from the task-relevant information fully. One explanation for this result may be that generating the CIFAR-S dataset (by setting some images to greyscale) introduces a presentation disparity by changing the appearance of the class-specific features across subgroups. 
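The CIFAR-S observation above can be reproduced in miniature: when the same underlying condition is measured through a group-dependent acquisition shift, the optimal decision boundary differs between groups, so a group-blind classifier cannot match the per-group optima no matter how it is trained. The numpy sketch below is purely illustrative, and the "scanner offset" value is made up.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

a = rng.integers(0, 2, n)                # group (e.g. imaging site)
z = rng.binomial(1, 0.5, n)              # underlying condition, same rate in both groups
# Presentation disparity: a group-wide acquisition shift (made-up offset of 2.0)
# moves the whole feature distribution for group 1, on top of the disease signal.
x = 2.0 * a + z + rng.normal(0.0, 0.5, n)

def best_threshold_acc(x, z):
    """Accuracy of the best single-threshold classifier on (x, z)."""
    grid = np.linspace(x.min(), x.max(), 400)
    return max(((x > t) == z).mean() for t in grid)

print("best group-blind threshold accuracy:", round(best_threshold_acc(x, z), 3))
for g in (0, 1):
    m = a == g
    print(f"best threshold accuracy within group {g}:", round(best_threshold_acc(x[m], z[m]), 3))
# The group-blind optimum (about 0.67 here) sits well below the per-group optima
# (about 0.84 each): the optimal decision boundary depends on the sensitive attribute.
```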
Fair presentation disparities may occur due to biological differences between groups, such as our example in Fig. 5a, showing how tissue density differences may affect breast cancer manifestation in men and women. Like fair prevalence disparities, empirical risk minimisation is appropriate in fair cases of presentation disparities and, again, there may even be situations where we should encourage models to use sensitive information[12, 65]. Importantly, when datasets contain presentation disparities, there is no guarantee that the disease should be equally predictable in all groups, so fairness metrics that measure performance disparities across groups are unlikely to be meaningful measures of algorithmic bias when used in isolation. Figure 4: Prevalence disparities in medical imaging. Red arrows represent unfair causal pathways. In our fair example (a), age is a clinically recognised risk factor for Alzheimer’s disease. In our unfair example (b), patient race is spuriously correlated with disease prevalence due to disparities in access to healthcare. We mark access to healthcare with * because it is a selection node – patients may only appear in the dataset if they have access to a dermatology clinic, so the dataset is conditioned upon this variable. Figure 5: Presentation disparities in medical imaging. Red arrows represent unfair causal pathways. In our fair example (a), breast cancer manifests differently in men and women due to natural differences in breast density. In our unfair example (b), patients in different locations are scanned at different stages due to inconsistent ultrasound referral policies. **Annotation Disparities** When there is a causal pathway between the sensitive attribute and the annotation policy, we observe _individuals belonging to different subgroups being diagnosed, labelled, or annotated with different criteria_. We present two examples of annotation disparities in Fig. 6. These are the most complex causal diagrams in this article, demonstrating how our causal reasoning approach may be applied in messy real-world situations beyond the simple cases of anticausal classification demonstrated so far. Our unfair example in Fig. 5(b) is particularly relevant today, showing how the practice of releasing datasets with automatically-generated labels [69, 70] may introduce dataset bias if the label quality varies across subgroups. Consider a hypothetical setting where a natural language model is used to label chest X-ray data from radiology reports. If the language model has inconsistent performance across languages (e.g. because it was trained on a larger English corpus than Spanish), we may see patients in different locations being systematically misdiagnosed based on the language of the radiology reports. Image classifiers trained on such a dataset will pick up on the spurious correlation between location-specific scanner artefacts and label quality and show disparate performance for patients in different locations [44]. While it may be possible to mitigate unfair annotation disparities by making assumptions on the function class that corrupts the labels [71], such assumptions may be too strong to justify in practice. Under suspected annotation disparities, it also becomes difficult to interpret evaluation results as there is no reliable ground truth [60]. In this case, it may be necessary to relabel the data with a consistent diagnosis policy. Our fair example of an annotation disparity (Fig. 
5(a)) involves a setting with differential diagnosis of osteoporosis from CT scans. Since the underlying prevalence of osteoporosis and similar diseases is age-dependent, patients with the same disease features may receive different diagnoses depending on their age due to the clinical practice of differential diagnosis. ERM-trained models will likely reflect this process by picking up on age-related image features such as bone density, which may be desirable, provided we expect to deploy the trained model on an i.i.d. dataset. When deploying such a model, it is crucial to consider whether we expect the deployment hospitals to use the same diagnosis criteria as the training dataset. ## 4 Discussion While the study of algorithmic bias is important and has gained significant interest in recent years, underlying dataset biases remain poorly understood. We demonstrated how the causal nature of dataset bias has profound consequences for deep learning algorithms and explored several plausible bias mechanisms in medical datasets. Today, theoretically sound methods exist for tackling a small subset of disparities, but there remains a wide world of underexplored bias mechanisms in medical data. Worse still, even if a method is theoretically appropriate for tackling dataset bias, there is no guarantee that it is easily trainable, and it may yet introduce algorithmic bias if there are issues with training or model selection [13, 26]. Importantly, no bias mitigation method or metric can be successful against all mechanisms of dataset bias; any attempt at mitigating dataset bias must make causal and ethical assumptions about the problem at hand. Causal diagrams provide a clear, mathematically principled way of making assumptions about dataset bias explicit. To encourage researchers to consider causality in future problems, we provide a simple three-step framework to aid with causal thinking, displayed in Table 1. It may be challenging to apply this approach to many problems; however, even attempting to infer causal relationships can provide valuable insights and may highlight important knowledge gaps. As a result, researchers may identify opportunities for engaging with experts from other disciplines. Oftentimes, we do not have perfect knowledge about the data-generating process, and a medical ethicist will likely be needed to evaluate the fairness of causal paths. However, we emphasise that these assumptions must be made (explicitly or implicitly) by _all_ methods, so stating them up-front will only improve transparency in machine learning solutions. The bias mechanisms we explored are particularly relevant to medical imaging but are not an exhaustive list. Our proposed mechanisms focus on the common case of anticausal classification and contain one causal pathway connecting the sensitive attribute to the primary task; however, real-world datasets may simultaneously contain multiple fair and unfair paths or have a different underlying causal structure (e.g. image segmentation may be better modelled as \(Z\to X\to Y\)). Today, there is little work combating compound bias mechanisms, and we are unaware of any image analysis methods that can simultaneously handle fair and unfair bias mechanisms. Meanwhile, we call for medical imaging datasets to be more transparent regarding the processes involved in collection and curation [45, 74, 75, 76, 77] - without such knowledge, we cannot make informed assumptions about the nature of dataset biases. 
One of the simplest yet most effective methods for bias mitigation remains the active collection of unbiased data [72], and a promising future direction may be to apply counterfactual generative models [78, 79] for leveraging causal assumptions to generate debiased synthetic data [73]. There remains a great opportunity for machine learning to improve patient outcomes in healthcare. Still, the tendency of today's methods to produce unfair and inequitable predictions will continue to restrict true progress in the field. Causal reasoning provides a principled formulation of the problems we face and should play a role in future analyses of the issues. Figure 6: Annotation disparities in medical imaging. Red arrows represent unfair causal pathways. In our fair example (a), patient age is considered a factor in epidemiology-based differential diagnosis of osteoporosis. In our unfair example (b), disease labels are generated automatically from radiology reports with a language model; reports in different languages receive different annotation quality. ###### Acknowledgements. C.J. is supported by Microsoft Research and EPSRC through the Microsoft PhD Scholarship Programme. B.G. received support from the Royal Academy of Engineering as part of his Kheiron/RAEng Research Chair.
2306.17511
Computational Complexity in Algebraic Combinatorics
Algebraic Combinatorics originated in Algebra and Representation Theory, studying their discrete objects and integral quantities via combinatorial methods which have since developed independent and self-contained lives and brought us some beautiful formulas and combinatorial interpretations. The flagship hook-length formula counts the number of Standard Young Tableaux, which also gives the dimension of the irreducible Specht modules of the Symmetric group. The elegant Littlewood-Richardson rule gives the multiplicities of irreducible GL-modules in the tensor products of GL-modules. Such formulas and rules have inspired large areas of study and development beyond Algebra and Combinatorics, becoming applicable to Integrable Probability and Statistical Mechanics, and Computational Complexity Theory. We will see what lies beyond the reach of such nice product formulas and combinatorial interpretations and enter the realm of Computational Complexity Theory, which could formally explain the beauty we see and the difficulties we encounter in finding further formulas and "combinatorial interpretations". An 85-year-old such problem asks for a positive combinatorial formula for the Kronecker coefficients of the Symmetric group, another one pertains to the plethysm coefficients of the General Linear group. In the opposite direction, the study of Kronecker and plethysm coefficients leads to the disproof of the wishful approach of Geometric Complexity Theory (GCT) towards the resolution of the algebraic P vs NP Millennium problem, the VP vs VNP problem. In order to make GCT work and establish computational complexity lower bounds, we need to understand representation theoretic multiplicities in further detail, possibly asymptotically.
Greta Panova
2023-06-30T09:56:05Z
http://arxiv.org/abs/2306.17511v1
# Computational complexity in algebraic combinatorics ###### Abstract. Algebraic Combinatorics originated in Algebra and Representation Theory, studying their discrete objects and integral quantities via combinatorial methods which have since developed independent and self-contained lives and brought us some beautiful formulas and combinatorial interpretations. The flagship hook-length formula counts the number of Standard Young Tableaux, which also gives the dimension of the irreducible Specht modules of the Symmetric group. The elegant Littlewood-Richardson rule gives the multiplicities of irreducible GL-modules in the tensor products of GL-modules. Such formulas and rules have inspired large areas of study and development beyond Algebra and Combinatorics, becoming applicable to Integrable Probability and Statistical Mechanics, and Computational Complexity Theory. We will see what lies beyond the reach of such nice product formulas and combinatorial interpretations and enter the realm of Computational Complexity Theory, that could formally explain the beauty we see and the difficulties we encounter in finding further formulas and "combinatorial interpretations". A 85-year-old such problem asks for a positive combinatorial formula for the Kronecker coefficients of the Symmetric group, another one pertains to the plethysm coefficients of the General Linear group. In the opposite direction, the study of Kronecker and plethysm coefficients leads to the disproof of the wishful approach of Geometric Complexity Theory (GCT) towards the resolution of the algebraic P vs NP Millennium problem, the VP vs VNP problem. In order to make GCT work and establish computational complexity lower bounds, we need to understand representation theoretic multiplicities in further detail, possibly asymptotically. "It's hard to look for a black cat in a dark room, especially if there is no cat..." Confucius ## 1. Introduction It was the best of times, it was the worst of times, it was the epoch of unwavering faith in mathematical conjectures, it was the era of crushing despair in the wake of disproofs. In the realm of Algebraic Combinatorics, where beauty had long flourished in the form of graceful formulas and elegant combinatorial interpretations, a shifting tide of uncanny difficulties now swept across the landscape. The hope for solely aesthetic solutions would fade in the shadows of the rigorous framework of Computational Complexity Theory and the imprecision of asymptotic analysis. What is Algebraic Combinatorics? According to Wikipedia, it "is an area of mathematics that employs methods of abstract algebra, notably group theory and representation theory, in various combinatorial contexts and, conversely, applies combinatorial techniques to problems in algebra." Here, we will narrow it down to the intersection of Representation Theory and Discrete Mathematics, and in more concrete terms the area consisting of Symmetric Function Theory, and Representation Theory of \(S_{n}\) and \(GL_{N}\) which house our favorite standard and semi-standard Young tableaux. The counterpart in our study, Computational Complexity theory, is about the classification of computational problems by how efficiently with respect to the given resource (space or time) they can be solved by an algorithm. It is home to the P vs NP problem, and its algebraic version the VP vs VNP problem. The two fields come together in two ways. 
Algebraic Combinatorics has old classical quantities (structure constants) resisting a formula or even a "combinatorial interpretation" for more than 80 years. Computational Complexity can formalize these difficulties and explain what is [not] happening. On the other side, these same structure constants appear central to the problem of showing that \(\mathsf{VP}\neq\mathsf{VNP}\) using Geometric Complexity Theory, and in particular the search for "multiplicity obstructions". ### Problems in Algebraic Combinatorics through the prism of Computational Complexity The dawn of Algebraic Combinatorics was lit by beautiful formulas and elegant combinatorial interpretations. The number \(f^{\lambda}\) of standard Young tableaux of shape \(\lambda\) is given by the hook-length formula of Frame, Robinson and Thrall. From the representation theoretic correspondences, this is also the dimension of the irreducible \(S_{n}\) representation, the Specht module \(\mathbb{S}_{\lambda}\). The tensor product of two \(GL_{N}\) irreducibles \(V_{\mu}\) and \(V_{\nu}\) factors into irreducibles with multiplicities given by the Littlewood-Richardson coefficients \(c^{\lambda}_{\mu\nu}\). While no "nice" formula is known for those numbers, they are equal to the number of certain semi-standard Young tableaux, which is their "combinatorial interpretation". Looking at the analogous structure constants for \(S_{n}\), the Kronecker coefficients \(g(\lambda,\mu,\nu)\) give the multiplicities of \(S_{n}\)-irreducible representations \(\mathbb{S}_{\lambda}\) in the tensor product of two others \(\mathbb{S}_{\mu}\otimes\mathbb{S}_{\nu}\). Yet, despite their innocuous definition about 85 years ago, mimicking the Littlewood-Richardson one, no formula nor positive combinatorial interpretation is known for them. Likewise, no positive combinatorial interpretation is known for the plethysm coefficients of \(GL_{N}\). But what is a "combinatorial interpretation"? It would be a set of easily defined combinatorial objects, whose cardinality gives our desired structure constants. Yet, what do we consider combinatorial objects? For the most part, we know them once we see them, just like the LR tableaux. But could it be that no such nice objects exist, and, if so, could we prove that formally and save ourselves the trouble of searching for black cats in dark rooms, an endeavor particularly difficult when there is no cat. This is when Computational Complexity Theory comes into play and provides the formal framework we can settle these questions in. In its classical origins, Computational Complexity Theory classifies computational problems according to usage of resources (time and/or space) needed to obtain an answer. In our context, the resource is time as measured by the number of elementary steps needed to be performed by an algorithm solving that problem. Problems are thus divided into computational complexity classes depending on how fast they can be solved. For decision problems, that is when we are looking for a Yes/No answer, the main classes are \(\mathsf{P}\), of problems solvable in polynomially (in the input size) many steps, and \(\mathsf{NP}\) is the class of problems, for which if the answer is Yes, then it could be verified in polynomial time. We have that \(\mathsf{P}\subset\mathsf{NP}\) and the \(\mathsf{P}\) vs \(\mathsf{NP}\) Millennium problem asks whether \(\mathsf{P}\neq\mathsf{NP}\), which is the widely believed hypothesis. 
The class \(\mathsf{NP}\) is characterized by its complete problems like 3SAT or HAMCYCLE, which asks, given a graph \(G\), whether it has a Hamiltonian cycle. If the graph has such a cycle, then one can specify it by its sequence of vertices, and verify in linear time that there is an edge between two consecutive ones. For the counting problems, where the answer should be a nonnegative integer, the corresponding classes are \(\mathsf{FP}\) and \(\mathsf{\#P}\). The class \(\mathsf{\#P}\) can also be defined as counting exponentially many polynomially computable discrete objects, and a \(\mathsf{\#P}\) formula is a naturally nonnegative [exponentially large] sum of counting objects computable in polynomial time. When there are "nice" formulas, like the hook-length formula, or the determinantal formulas for skew standard Young tableaux, the corresponding problem (e.g. to compute \(f^{\lambda}\)) is in \(\mathsf{FP}\). When we have a "combinatorial interpretation", the corresponding problem is in \(\mathsf{\#P}\), see [11, 12]. Thus, to show that a "reasonable combinatorial interpretation" does not exist, we may want to show that the given problem is not in #P under some widely accepted assumptions, e.g. \(\mathsf{P}\neq\mathsf{NP}\) or that the polynomial hierarchy \(\mathsf{PH}\) does not collapse. In our quest to compute the Kronecker or plethysm coefficients, we ask whether the corresponding problem is in #P. As a proof of concept we show that a similar problem, the computation of the square of an \(S_{n}\) character, is not in #P, given that the polynomial hierarchy does not collapse to second level, [17]. ### Geometric Complexity Theory In the opposite direction, Algebraic Combinatorics is used in Geometric Complexity Theory, a program aimed at finding computational lower bounds and distinguishing classes using Algebraic Geometry and Representation Theory. In his landmark paper [14] from 1979, Valiant defined algebraic complexity classes for computing polynomials in formal variables. Later these classes were denoted by \(\mathsf{VP}\) and \(\mathsf{VNP}\), and represented the algebraic analogues of the original \(\mathsf{P}\) and \(\mathsf{NP}\) classes.1 The flagship problem in arithmetic complexity theory is to show that \(\mathsf{VP}\neq\mathsf{VNP}\) and is closely related to \(\mathsf{P}\neq\mathsf{NP}\), see [1]. As with \(\mathsf{P}\) vs \(\mathsf{NP}\), the general strategy is to identify complete problems for \(\mathsf{VNP}\), i.e. complete polynomials, and show they do not belong to \(\mathsf{VP}\). Valiant identified such \(\mathsf{VNP}\)-complete polynomials, most notably the permanent of a \(n\times n\) variable matrix. At the same time he showed that the determinant polynomial is \(\mathsf{VP}\)-universal, i.e. every polynomial from \(\mathsf{VP}\) can be computed as a polynomially sized determinant of a matrix whose entries are affine linear forms in the original variables. This sets the general strategy of distinguishing \(\mathsf{VP}\) from \(\mathsf{VNP}\) by showing that the permanent is not a determinant of some poly-sized matrix. Footnote 1: As we shall see later, there is a fine distinction of the algebraic versions of \(\mathsf{P}\): \(\mathsf{VP}_{\text{ws}},\mathsf{VBP},\mathsf{VP}\), but it is often ignored. Formally, the relevant class is \(\mathsf{VBP}\). Geometric Complexity Theory aims to distinguish such algebraic complexity classes via the algebro-geometric properties of the corresponding complete/universal polynomials. 
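As a toy numerical illustration of the asymmetry behind this strategy (and not a statement about the complexity classes themselves): the determinant can be evaluated in polynomial time, e.g. via LU decomposition as numpy does below, while the natural algorithms for the permanent, namely the defining sum over \(n!\) permutations and Ryser's inclusion-exclusion formula, both take exponential time.

```python
import numpy as np
from itertools import permutations, combinations

def permanent_naive(A):
    """The defining sum over all n! permutations."""
    n = len(A)
    return sum(np.prod([A[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

def permanent_ryser(A):
    """Ryser's inclusion-exclusion formula: about 2^n * n^2 operations, still exponential."""
    n = len(A)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            row_sums = A[:, list(cols)].sum(axis=1)
            total += (-1) ** k * np.prod(row_sums)
    return (-1) ** n * total

A = np.random.default_rng(0).integers(0, 5, size=(7, 7)).astype(float)
print(permanent_naive(A), permanent_ryser(A))   # the two exponential-time methods agree
print(np.linalg.det(A))                          # the signed analogue, computed in polynomial time
```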
In two landmark papers [12, 13] Mulmuley and Sohoni suggested distinguishing these polynomials by studying the algebraic varieties arising from the group action corresponding to all the linear transformations. In particular, to distinguish polynomials, one can consider the representation theoretic structure of these varieties ['s coordinate rings] and find some irreducible representations appearing with different multiplicities in the two. Because of the many symmetries of the polynomials and the equivariant action of \(GL_{N}\), usually such multiplicities can be naturally expressed via the fundamental structure constants Kronecker, Littlewood-Richardson, plethysms etc. from SS3, and the methods to study them revolve around the combinatorics of Young Tableaux and generalizations. Since such multiplicities are even harder than the Kronecker and plethysm coefficient, a simpler approach would have been to study just the occurrence of irreducible representations rather than the value of the multiplicity. If an irreducible \(GL\) module appears in the [coordinate ring orbit closure of the] permanent of an \(m\times m\) matrix, but not for the determinant of an \(n\times n\) matrix, that would imply a lower bound, namely the \(\text{per}_{m}\) cannot be equal to \(\det_{n}\) (of affine linear forms). If this happens for \(n>poly(m)\) (i.e. bigger than any polynomial in \(m\)), then \(\mathsf{VP}\neq\mathsf{VNP}\). Such irreducible representations are called occurrence obstructions, and unfortunately do not exist [1] for this model. Thus we have to compare the actual multiplicities or explore other models besides permanent versus determinant. Understanding their growth starts with finding bounds and later asymptotics for Kronecker and plethysm coefficients, see [10] for further discussions. ### Paper structure In Section 2 we will define the basic objects in Algebraic Combinatorics and Representation Theory and recall important facts on SYTs and symmetric functions. In Section 3 define the culprit structure constants Kronecker and plethysm coefficients and recall some of the major open problems. In Section 4 we will discuss Computational Complexity Theory from the point of view of a mathematician. In Section 5 we will discuss how Computational Complexity can be applied in Algebraic Combinatorics, stating various hardness and completeness results and conjectures on Kostka, LR, Kronecker coefficients and the characters of the symmetric group. In Section 6 we will discuss Geometric Complexity Theory in more detail, explain the connection with Algebraic Combinatorics and some of the recent advances in the area. The text will aim to be as self-contained as possible. For other open problems on structure constants, in particular positivity and asymptotics, see [10]. **Disclaimer.** The current paper is a detailed transcript of the author's Current Developments in Mathematics talks (April 2023). This work does not attempt to be a broad survey on the topic, and is naturally concentrated on the author's viewpoint and work. **Acknowledgments.** The author is grateful to Christian Ikenmeyer and Igor Pak for the years of fruitful collaborations on the subject addressed here. Many thanks also to Sara Billey, Allen Knutson, Alex Yong for many useful questions and discussions on these topics. The author has been partially supported by the NSF. ## 2. Algebraic Combinatorics Here we will describe the basic objects and facts from Algebraic Combinatorics which will be used later. 
For further details on the combinatorial sides see [14, 15] and for the representation theoretic aspects see [13, 12].

### Partitions and Tableaux

_Integer partitions_ \(\lambda\) of \(n\), denoted \(\lambda\vdash n\), are sequences of non-negative integers \(\lambda=(\lambda_{1},\lambda_{2},\ldots,\lambda_{k})\), such that \(\lambda_{1}\geq\lambda_{2}\geq\cdots\geq 0\) and \(\lambda_{1}+\cdots+\lambda_{k}=n\). We denote by \(\ell(\lambda)=\max\{i:\lambda_{i}>0\}\) the _length_ of the partition, which is the number of nonzero parts, and by \(|\lambda|=\lambda_{1}+\lambda_{2}+\cdots+\lambda_{\ell}\) its _size_. A partition can be represented as a _Young diagram_, which is a left-justified array of squares, such that row \(i\) (indexing top to bottom) has exactly \(\lambda_{i}\) squares. For example \(\lambda=(4,3,1)\vdash 8\) has \(\ell(\lambda)=3\) and its Young diagram is \(\yng(4,3,1)\). Here we will denote by \([\lambda]\) the Young diagram and think of it as a set of squares with coordinates \((i,j)\), with \((1,1)\) being the topmost leftmost box, so the box at \((2,3)\) is the third box in the second row. We denote by \((1^{k})=(\underbrace{1,\ldots,1}_{k})\) the single column partition of \(k\) boxes, call _one-row_ partitions the partitions with only one nonzero part, _two-row_ partitions those of the form \((n-k,k)\), and _hooks_ the partitions of the kind \((n-k,1^{k})\). We denote by \((a^{k})=(\underbrace{a,\ldots,a}_{k})\) the _rectangular_ partition whose Young diagram is a \(k\times a\) rectangle. The transpose or _conjugate_ partition of \(\lambda\) is denoted \(\lambda^{\prime}\) and is the one whose shape is obtained by transposing \([\lambda]\) along the main diagonal, e.g. for \(\lambda=(4,3,1)\) we have \(\lambda^{\prime}=(3,2,2,1)\). The _skew_ partition \(\lambda/\mu\) is obtained by removing the squares occupied by \(\mu\) from \(\lambda\); for example, the Young diagram of \((5,4,3,1)/(2,1)\) consists of the boxes of \((5,4,3,1)\) lying outside of \((2,1)\). The set of partitions of \(n\) will be denoted by \(\mathcal{P}(n)\) and its cardinality by \(p(n)\). While there is no closed form formula for \(p(n)\), there is a nice generating function \[\sum_{n=0}^{\infty}p(n)t^{n}=\prod_{i=1}^{\infty}\frac{1}{1-t^{i}}.\] A _standard Young tableau_ (SYT) of _shape_ \(\lambda\vdash n\) is a bijection \(T:[\lambda]\xrightarrow{\sim}\{1,\ldots,n\}\), such that \(T(i,j)<T(i+1,j)\) and \(T(i,j)<T(i,j+1)\). For example, the SYTs of shape \((2,2,1)\) are \[\young(12,34,5)\quad\young(12,35,4)\quad\young(13,24,5)\quad\young(13,25,4)\quad\young(14,25,3)\,.\] The _hook_ of a box \((i,j)\) in \([\lambda]\) is the collection of squares \(\{(i,j),(i+1,j),\ldots,(\lambda^{\prime}_{j},j),(i,j+1),\ldots,(i,\lambda_{i})\}\), i.e. the box \((i,j)\) itself together with the boxes below it in the same column and to its right in the same row. For example, for the box \((2,2)\) in \((5,4,3,3)\) the hook consists of the boxes \((2,2),(2,3),(2,4),(3,2),(4,2)\). The hook-length \(h_{i,j}\) is the number of boxes in the hook of \((i,j)\), so in this example \(h_{2,2}=5\). Let \(f^{\lambda}\) be the number of standard Young tableaux of shape \(\lambda\vdash n\). Then the _hook-length formula_ (HLF) of Frame-Robinson-Thrall [10] gives \[f^{\lambda}=\frac{n!}{\prod_{(i,j)\in[\lambda]}h_{i,j}}. \tag{2.1}\] The following remarkable identity \[\sum_{\lambda}(f^{\lambda})^{2}=n! \tag{2.2}\] gives rise to an even more remarkable bijection between pairs of same shape SYTs and permutations, known as RSK for Robinson-Schensted-Knuth; it sends a permutation \(w\in S_{n}\) to a pair \((P,Q)\) of SYTs of the same shape via an insertion procedure.
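The hook-length formula (2.1) and the identity (2.2) are easy to check by computer for small \(n\). The following is a minimal illustrative sketch (the function names are ours and purely for illustration); it recovers \(f^{(2,2,1)}=5\), matching the five tableaux listed above.

```python
from math import factorial

def hook_length_count(shape):
    """f^lambda via the hook-length formula (2.1)."""
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]  # conjugate partition
    prod = 1
    for i, row in enumerate(shape):
        for j in range(row):
            arm = row - (j + 1)        # boxes strictly to the right of (i, j)
            leg = conj[j] - (i + 1)    # boxes strictly below (i, j)
            prod *= arm + leg + 1      # hook length h_{i, j}
    return factorial(sum(shape)) // prod

def partitions(n, max_part=None):
    """All partitions of n as weakly decreasing tuples."""
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

print(hook_length_count((2, 2, 1)))                            # 5
print(sum(hook_length_count(p) ** 2 for p in partitions(5)))   # 120 = 5!, identity (2.2)
```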
The _semi-standard Young tableaux_ (SSYT) of shape \(\lambda\) and content \(\alpha\) are maps \(T:[\lambda]\to\mathbb{N}\), such that \(|T^{-1}(i)|=\alpha_{i}\), i.e. \(\alpha_{i}\) many entries are equal to \(i\), and the entries increase weakly along rows and strictly down columns, i.e. \(T(i,j)\leq T(i,j+1)\) and \(T(i,j)<T(i+1,j)\). We denote the set of such tableaux by \(\operatorname{SSYT}(\lambda;\alpha)\). For example, the SSYTs of shape \(\lambda=(3,3,1)\) and type \(\alpha=(2,2,2,1)\) are \[\young(112,233,4)\quad\young(112,234,3)\quad\young(113,224,3)\,.\] Completely analogously, we can define the _skew SSYT_ of shape \(\lambda/\mu\) as the fillings of \([\lambda/\mu]\) with integers weakly increasing along rows and strictly down columns; e.g. the filling of \((4,4,2)/(2,1)\) whose rows are \((\,\cdot\,,\,\cdot\,,1,2)\), \((\,\cdot\,,2,3,3)\), \((4,4)\), with the dots marking the removed boxes of \(\mu\), is a skew SSYT of type \((1,2,2,2)\).

### Symmetric functions

Let \(\Lambda[\boldsymbol{x}]\) be the ring of _symmetric functions_ \(f(x_{1},x_{2},\ldots)\), where the symmetry means that \(f(\boldsymbol{x})=f(\boldsymbol{x}_{\sigma})\) for any permutation \(\sigma\) of the variables, and \(f\) is a formal power series. When all but finitely many variables are \(0\) then \(f\) becomes a symmetric polynomial. The ring \(\Lambda\) is graded by the total degree, and its component of degree \(n\) has dimension \(p(n)\) as a \(\mathbb{C}\)-vector space. There are several useful bases for \(\Lambda\) - the monomial, the elementary, power sum, (complete) homogeneous, and Schur functions. The _monomial symmetric functions_ \(m_{\lambda}(x_{1},\ldots,x_{k})\) are defined as the sum of all distinct monomials of the form \(x_{\sigma(1)}^{\lambda_{1}}\cdots x_{\sigma(k)}^{\lambda_{k}}\), where \(\sigma\in S_{k}\). For example, \(m_{311}(x_{1},\ldots,x_{n})=x_{1}^{3}x_{2}x_{3}+x_{1}^{3}x_{2}x_{4}+\ldots\), and each monomial appears with coefficient \(0\) or \(1\). Let \(p_{k}:=m_{(k)}=x_{1}^{k}+x_{2}^{k}+\ldots\). The _power sum symmetric functions_ are then defined as \(p_{\lambda}:=p_{\lambda_{1}}\,p_{\lambda_{2}}\cdots\). The _elementary symmetric functions_ \(\{e_{\lambda}\}\) are defined as follows \[e_{k}:=\sum_{i_{1}<i_{2}<\cdots<i_{k}}x_{i_{1}}x_{i_{2}}\cdots x_{i_{k}},\qquad \text{ and }\qquad e_{\lambda}:=e_{\lambda_{1}}e_{\lambda_{2}}\cdots.\] For example \[e_{2,1}(x_{1},x_{2},x_{3})=(x_{1}x_{2}+x_{1}x_{3}+x_{2}x_{3})(x_{1}+x_{2}+x_{3} )=m_{2,1}(x_{1},x_{2},x_{3})+3m_{1,1,1}(x_{1},x_{2},x_{3}).\] The _homogeneous symmetric functions_ \(h_{\lambda}\) are given by \[h_{k}:=\sum_{i_{1}\leq i_{2}\leq\cdots\leq i_{k}}x_{i_{1}}x_{i_{2}}\cdots x_{i _{k}},\qquad\text{ and }\qquad h_{\lambda}:=h_{\lambda_{1}}h_{\lambda_{2}}\cdots.\] The _Schur functions_ can be defined as the generating functions of SSYTs, where \(\boldsymbol{x}^{\alpha}:=x_{1}^{\alpha_{1}}\cdots x_{k}^{\alpha_{k}}\), namely \[s_{\lambda}=\sum_{\alpha}\sum_{T\in\operatorname{SSYT}(\lambda,\alpha)} \boldsymbol{x}^{\alpha},\] where \(\alpha\) goes over all weak compositions of \(n\). For example, \(s_{1^{k}}=e_{k}\), \(s_{k}=h_{k}\) and \[s_{(2,1)}(x_{1},x_{2},x_{3})=x_{1}^{2}x_{2}+x_{1}x_{2}^{2}+x_{1}^{2}x_{3}+x_{1 }x_{3}^{2}+x_{2}^{2}x_{3}+x_{2}x_{3}^{2}+2x_{1}x_{2}x_{3}.\] We can also define the _skew Schur functions_ \(s_{\lambda/\mu}\) as the analogous generating function for \(\operatorname{SSYT}(\lambda/\mu)\).
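The generating-function definition of the Schur functions can be checked directly by brute force in small cases. Below is a small sketch (our own illustration; the helper names are hypothetical) that enumerates SSYT with bounded entries and assembles the Schur polynomial, reproducing the expansion of \(s_{(2,1)}(x_{1},x_{2},x_{3})\) displayed above.

```python
from collections import Counter

def ssyt(shape, max_entry):
    """Enumerate all SSYT of the given shape with entries in {1, ..., max_entry}:
    weakly increasing along rows, strictly increasing down columns."""
    cells = [(i, j) for i, row in enumerate(shape) for j in range(row)]
    def fill(idx, tab):
        if idx == len(cells):
            yield dict(tab)
            return
        i, j = cells[idx]
        lo = 1
        if j > 0:
            lo = max(lo, tab[(i, j - 1)])        # weak increase along the row
        if i > 0 and shape[i - 1] > j:
            lo = max(lo, tab[(i - 1, j)] + 1)    # strict increase down the column
        for v in range(lo, max_entry + 1):
            tab[(i, j)] = v
            yield from fill(idx + 1, tab)
            del tab[(i, j)]
    yield from fill(0, {})

def schur_polynomial(shape, num_vars):
    """s_shape(x_1, ..., x_num_vars) as a Counter {exponent vector: coefficient},
    built from the SSYT generating function."""
    poly = Counter()
    for tab in ssyt(shape, num_vars):
        expo = [0] * num_vars
        for v in tab.values():
            expo[v - 1] += 1
        poly[tuple(expo)] += 1
    return poly

# s_{(2,1)}(x1,x2,x3): seven monomials, with x1*x2*x3 appearing with coefficient 2.
print(schur_polynomial((2, 1), 3))
```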
They can also be defined and computed via the _Weyl determinantal formula_ \[s_{\lambda}(x_{1},\ldots,x_{k})\,:=\,\det\Bigl{(}x_{i}^{\lambda_{j}+k-j}\Bigr{)} _{i,j=1}^{k}\prod_{1\leq i<j\leq n}\frac{1}{x_{i}-x_{j}}\] or the _Jacobi-Trudi identity_ \[s_{\lambda/\mu}=\det[h_{\lambda_{i}-i-\mu_{j}+j}]_{i,j=1}^{\ell(\lambda)}.\] The ring \(\Lambda\) has an inner product \(\langle\cdot,\cdot\rangle\), where the Schur functions form an orthonormal basis and the power sums are orthogonal. Namely \[\langle s_{\lambda},s_{\mu}\rangle=\delta_{\lambda,\mu}\qquad\langle p_{ \lambda},p_{\mu}\rangle=z_{\lambda}\delta_{\lambda,\mu}.\] Additionally \(\langle h_{\lambda},m_{\mu}\rangle=\delta_{\lambda,\mu}\), where \(\delta_{\lambda,\mu}=0\) if \(\lambda\neq\mu\) and \(1\) if \(\lambda=\mu\) and \(z_{\lambda}=\frac{n!}{\prod_{i}{}^{m_{i}}m_{i}!}\) when \(\lambda=(1^{m_{1}}2^{m_{2}}\cdots)\), i.e. there are \(m_{i}\) parts equal to \(i\). The involution \(\omega\) is defined as \(\omega(e_{\lambda})=h_{\lambda}\), \(\omega^{2}=id\) and we have that \(\omega(s_{\lambda})=s_{\lambda^{\prime}}\). The Schur functions posses beautiful combinatorial properties, for example they satisfy the _Cauchy identity_ \[\sum_{\lambda}s_{\lambda}(\boldsymbol{x})s_{\lambda}(\boldsymbol{y})=\prod_{i,j}\frac{1}{1-x_{i}y_{j}},\] which can also be proven via RSK. ### Representations of \(S_{n}\) and \(Gl_{n}\) A _group representation_\(\rho\) of a group \(G\) is a group homomorphism \(\rho:G\to GL(V)\), which can also be interpreted as an action of the group on a vector space \(V\). We often refer to the vector space \(V\) as the representation. An irreducible representation is such a vector space \(V\) which has no nontrivial invariant subspaces. If \(G\) is finite or reductive and the underlying field is \(\mathbb{C}\) then every representation can be uniquely decomposed as direct sum of irreducible representations. Such decompositions can be easily studied via the characters, \(\chi^{\rho}(g):=trace(\rho(g))\), which are central functions, i.e. constant on conjugacy classes. The irreducible representations of the _symmetric group_\(S_{n}\) are the _Specht modules_\(\mathbb{S}_{\lambda}\) and are indexed by partitions \(\lambda\vdash n\). Using row and column symmetrizers in the group algebra \(\mathbb{C}[S_{n}]\) one can construct the irreducible modules as certain formal sums over tableaux of shape \(\lambda\). Each such element has a unique minimal tableau which is an SYT, and so a basis for \(\mathbb{S}_{\lambda}\) can be given by the SYTs. In particular \[\dim\mathbb{S}_{\lambda}=f^{\lambda}.\] We have that \(\mathbb{S}_{(n)}\) is the trivial representation assigning to every \(w\) the value \(1\) and \(\mathbb{S}_{1^{n}}\) is the sign representation. The _character_\(\chi^{\lambda}(w)\) of \(\mathbb{S}_{\lambda}\) can be computed via the _Murnaghan-Nakayama_ rule. Let \(w\) have type \(\alpha\), i.e. it decomposes into cycles of lengths \(\alpha_{1},\alpha_{2},\ldots,\alpha_{k}\). Then \[\chi^{\lambda}(w)=\chi^{\lambda}(\alpha)=\sum_{T\in MN(\lambda;\alpha)}(-1)^{ ht(T)},\] where \(MN\) is the set of rim-hook tableaux of shape \(\lambda\) and type \(\alpha\), so that the entries are weakly increasing along rows and down columns, and all entries equal to \(i\) form a rim-hook shape (i.e. connected, no \(2\times 2\) boxes) of length \(\alpha_{i}\). The height of each rim-hook is one less than the number of rows it spans, and \(ht(T)\) is the sum of all these heights. 
For example, \[\young(112333,12234,22334)\] is a Murnaghan-Nakayama tableau of shape \((6,5,5)\), type \((3,5,6,2)\), and has height \(ht(T)=1+2+2+1=6\). As we shall see, the characters are also the transition matrices between the \(\{s_{\lambda}\}\) and \(\{p_{\lambda}\}\) bases of \(\Lambda\). The irreducible polynomial representations of \(GL_{N}(\mathbb{C})\) are the _Weyl modules_ \(V_{\lambda}\) and are indexed by all partitions with \(\ell(\lambda)\leq N\). Their characters are exactly the Schur functions \(s_{\lambda}(x_{1},\ldots,x_{N})\), where \(x_{1},\ldots,x_{N}\) are the eigenvalues of \(g\in GL_{N}(\mathbb{C})\). The dimension is just \(s_{\lambda}(1^{N})\) and can be evaluated as a product via the _hook-content formula_ \[s_{\lambda}(1^{N})=\prod_{(i,j)\in[\lambda]}\frac{N+j-i}{h_{i,j}}.\]

## 3. Multiplicities and structure constants

### Transition coefficients

As the various symmetric function families form bases in \(\Lambda\), it is natural to describe the transition matrices between them. The coefficients involved have significance beyond that. We have that \[h_{\lambda}=\sum_{\mu}CT(\lambda,\mu)m_{\mu},\] where \(CT(\lambda,\mu)\) is the number of _contingency arrays_ \(A\) with marginals \(\lambda,\mu\), namely \(A\in\mathbb{N}_{0}^{\ell(\lambda)\times\ell(\mu)}\) and \(\sum_{j}A_{ij}=\lambda_{i}\), \(\sum_{i}A_{ij}=\mu_{j}\). Similarly, \[e_{\lambda}=\sum_{\mu}CT_{0}(\lambda,\mu)m_{\mu},\] where \(CT_{0}(\lambda,\mu)\) is the number of \(0-1\) contingency arrays \(A\) with marginals \(\lambda,\mu\), i.e. \(A\in\{0,1\}^{\ell(\lambda)\times\ell(\mu)}\). We have that \[p_{\lambda}=\sum_{\mu}P(\lambda,\mu)m_{\mu},\] where for any two integer vectors \(\mathbf{a}=(a_{1},\ldots,a_{m})\) and \(\mathbf{b}=(b_{1},\ldots,b_{k})\), we let \[P(\mathbf{a},\mathbf{b}):=\#\{(B_{1},B_{2},\ldots,B_{k}):B_{1}\sqcup B_{2} \sqcup\ldots\sqcup B_{k}=[m],\ \sum_{i\in B_{j}}a_{i}=b_{j}\text{ for all }j=1,\ldots,k\}\] be the number of _ordered set partitions_ of the items \(\mathbf{a}\) into bins of sizes \(\mathbf{b}\). The _Kostka numbers_ \(K_{\lambda\mu}\), \(\lambda,\mu\vdash n\), are defined by \[h_{\mu}\,=\,\sum_{\lambda\vdash n}\,K_{\lambda\mu}\,s_{\lambda}\] and, by orthogonality, equivalently as \[s_{\lambda}\,=\,\sum_{\mu\vdash n}\,K_{\lambda\mu}\,m_{\mu}\,.\] By definition we have that \(K_{\lambda\mu}=|\mathrm{SSYT}(\lambda,\mu)|\), i.e. the number of SSYTs of shape \(\lambda\) and type \(\mu\). Finally, the symmetric group characters appear as \[p_{\alpha}=\sum_{\lambda}\chi^{\lambda}(\alpha)s_{\lambda}\] or, equivalently, as \[s_{\lambda}=\sum_{\alpha}\chi^{\lambda}(\alpha)z_{\alpha}^{-1}p_{\alpha}.\]
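As a quick illustration of the Kostka numbers just defined, they can be computed by peeling off the cells containing the largest entry of an SSYT, which necessarily form a horizontal strip. The following sketch (our own illustration, with hypothetical function names) implements this standard recursion; note that \(K_{\lambda,(1^{n})}=f^{\lambda}\).

```python
from functools import lru_cache

def horizontal_strip_removals(lam, size):
    """All partitions nu obtained from lam by removing a horizontal strip of the
    given size, i.e. nu_i <= lam_i and nu_i >= lam_{i+1} (interlacing)."""
    out = []
    def rec(i, remaining, nu):
        if i == len(lam):
            if remaining == 0:
                out.append(tuple(p for p in nu if p > 0))
            return
        lo = lam[i + 1] if i + 1 < len(lam) else 0
        for v in range(lo, lam[i] + 1):
            if lam[i] - v <= remaining:
                rec(i + 1, remaining - (lam[i] - v), nu + [v])
    rec(0, size, [])
    return out

@lru_cache(maxsize=None)
def kostka(lam, mu):
    """K_{lam, mu} = number of SSYT of shape lam and type mu, computed by removing
    the cells containing the largest entry (a horizontal strip) and recursing."""
    if not mu:
        return 1 if not lam else 0
    if sum(lam) != sum(mu):
        return 0
    *rest, last = mu
    return sum(kostka(nu, tuple(rest)) for nu in horizontal_strip_removals(lam, last))

print(kostka((3, 3, 1), (2, 2, 2, 1)))     # 3, matching the three SSYTs listed in Section 2.1
print(kostka((2, 2, 1), (1, 1, 1, 1, 1)))  # 5 = f^{(2,2,1)}
```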
### Tensor products and structure constants

Once the irreducible representations have been sufficiently understood, it is natural to consider what other representations can be formed by them and how such representations decompose into irreducibles. Such problems are often studied in quantum mechanics under the name _Clebsch-Gordan coefficients_. In the case of \(GL_{N}(\mathbb{C})\) these coefficients are the _Littlewood-Richardson coefficients_ (LR) \(c_{\mu\nu}^{\lambda}\), defined as the multiplicity of \(V_{\lambda}\) in \(V_{\mu}\otimes V_{\nu}\), so \[V_{\mu}\otimes V_{\nu}=\bigoplus_{\lambda}V_{\lambda}^{\oplus c_{\mu\nu}^{ \lambda}}.\] Via their characters, they can be equivalently defined as \[s_{\mu}s_{\nu}=\sum_{\lambda}c_{\mu\nu}^{\lambda}s_{\lambda},\quad\text{ and }\quad s_{\lambda/\mu}=\sum_{\nu}c_{\mu\nu}^{\lambda}s_{\nu}.\] While no nice product formula exists for their computation, they have a _combinatorial interpretation_, the so called _Littlewood-Richardson rule_. This rule was first stated by Littlewood and Richardson in 1934 [14], survived through several incomplete proofs, and was formally proven in the 1970s, using the tableau technology described here, by Schützenberger and, separately, Thomas.

**Theorem 3.1** (Littlewood-Richardson rule).: _The Littlewood-Richardson coefficient \(c_{\mu\nu}^{\lambda}\) is equal to the number of skew SSYT \(T\) of shape \(\lambda/\mu\) and type \(\nu\), whose reading word is a ballot sequence._

The _reading word_ of a tableau \(T\) is formed by reading its entries row by row from top to bottom, such that each row is read from the back (right to left). A word is a _ballot sequence_ (or lattice word) if every prefix contains at least as many entries equal to \(i\) as entries equal to \(i+1\), for every \(i\). For example, the skew SSYT of shape \((3,2)/(1)\) and type \((3,1)\) with rows \((\,\cdot\,,1,1)\) and \((1,2)\) has reading word \(1\,1\,2\,1\), which is a ballot sequence. It is easy to see the similarity with the Kostka numbers, and indeed, Kostka is a special case of LR via the following \[K_{\lambda,\mu}=c^{\theta}_{\lambda_{1}^{\ell-1},\tau}, \tag{3.1}\] where \(\ell=\ell(\mu)\), \(\theta=(\lambda_{1}^{\ell-1}+\eta,\lambda)\) with \(\eta=(\mu_{1}(\ell-1),\mu_{1}(\ell-2),\ldots,\mu_{1})\) and \(\tau=\eta+\mu\).

**Example 3.3**.: _Let \(\lambda=(3,2)\) and \(\mu=(2,2,1)\), so that \(\ell=3\), \(\eta=(4,2)\), \(\theta=(7,5,3,2)\) and \(\tau=(6,4,1)\). The LR tableaux suggested by the above formula are the skew SSYT of shape \((7,5,3,2)/(3,3)\) and type \((6,4,1)\) whose first row is filled with 1s, whose second row is filled with 2s, and whose bottom two rows form an SSYT of shape \(\lambda\) and type \(\mu\): there are two of them, with bottom rows \((1,1,2),(2,3)\) and \((1,1,3),(2,2)\). So the regular SSYTs of shape \(\lambda\) and type \(\mu\) emerge in the bottom. The top parts are forced and their reading words have many more 1s than 2s, more 2s than 3s etc., so that they overwhelm the ballot and the ballot condition is trivially satisfied by any SSYT in the bottom part._

The _Kronecker coefficients_ \(g(\lambda,\mu,\nu)\) of the symmetric group are the corresponding structure constants for the ring of \(S_{n}\)-irreducibles. Namely, \(S_{n}\) acts diagonally on the tensor product of two Specht modules and the corresponding module factors into irreducibles with multiplicities \(g(\lambda,\mu,\nu)\): \[\mathbb{S}_{\lambda}\otimes\mathbb{S}_{\mu}=\bigoplus_{\nu}\mathbb{S}_{\nu}^{ \oplus g(\lambda,\mu,\nu)},\text{ i.e. }\ \ \chi^{\lambda}\chi^{\mu}=\sum_{\nu}g(\lambda,\mu,\nu)\chi^{\nu}.\] In terms of characters we can write them as \[g(\lambda,\mu,\nu)=\langle\chi^{\lambda}\chi^{\mu},\chi^{\nu}\rangle=\frac{1}{ n!}\sum_{w\in S_{n}}\chi^{\lambda}(w)\chi^{\mu}(w)\chi^{\nu}(w). \tag{3.2}\] The last formula shows that they are symmetric upon interchanging the underlying partitions, \(g(\lambda,\mu,\nu)=g(\mu,\nu,\lambda)=\cdots\), which motivates us to use such symmetric notation. The _Kronecker product_ \(*\) on \(\Lambda\) is defined on the Schur basis by \[s_{\lambda}*s_{\mu}=\sum_{\nu}g(\lambda,\mu,\nu)s_{\nu},\] and extended by linearity. The Kronecker coefficients were defined by Murnaghan in 1938 [10], who was inspired by the Littlewood-Richardson story.
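Formula (3.2) together with the Murnaghan-Nakayama rule from Section 2 (used below in its equivalent beta-number form, where removing a rim hook of length \(r\) replaces a first-column hook length \(b\) by \(b-r\)) gives a direct, though exponential-time, way to compute small Kronecker coefficients. The following is only an illustrative sketch of the definitions (the function names are ours); it recovers, e.g., \(g((2,1),(2,1),(2,1))=1\).

```python
from functools import lru_cache
from math import factorial
from collections import Counter

def beta_set(lam, parts):
    """First-column hook lengths (beta-numbers) of lam, padded to `parts` parts."""
    lam = list(lam) + [0] * (parts - len(lam))
    return [lam[i] + (parts - 1 - i) for i in range(parts)]

def from_beta(beta):
    beta = sorted(beta, reverse=True)
    k = len(beta)
    return tuple(p for p in (beta[i] - (k - 1 - i) for i in range(k)) if p > 0)

@lru_cache(maxsize=None)
def character(lam, alpha):
    """chi^lam(alpha) via the Murnaghan-Nakayama rule in beta-number form."""
    if not alpha:
        return 1 if not lam else 0
    r, rest = alpha[0], alpha[1:]
    beta = beta_set(lam, max(len(lam), 1))
    bset, total = set(beta), 0
    for i, b in enumerate(beta):
        nb = b - r
        if nb < 0 or nb in bset:
            continue                                   # no rim hook of length r here
        height = sum(1 for c in beta if nb < c < b)    # leg length of the rim hook
        total += (-1) ** height * character(from_beta(beta[:i] + [nb] + beta[i+1:]), rest)
    return total

def partitions(n, max_part=None):
    max_part = n if max_part is None else max_part
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def z(alpha):
    """Order of the centralizer of a permutation of cycle type alpha."""
    res = 1
    for part, mult in Counter(alpha).items():
        res *= part ** mult * factorial(mult)
    return res

def kronecker(lam, mu, nu):
    """g(lam, mu, nu) via formula (3.2), summed over cycle types."""
    n = sum(lam)
    total = sum(character(lam, a) * character(mu, a) * character(nu, a) * (factorial(n) // z(a))
                for a in partitions(n))
    return total // factorial(n)

print(kronecker((2, 1), (2, 1), (2, 1)))  # 1
print(kronecker((2, 2), (2, 2), (2, 2)))  # 1
```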
In fact, Murnaghan showed the following.

**Theorem 3.4** (Murnaghan).: _For every \(\lambda,\mu,\nu\), such that \(|\lambda|=|\mu|+|\nu|\) we have that_ \[c^{\lambda}_{\mu\nu}=g((n-|\lambda|,\lambda),(n-|\mu|,\mu),(n-|\nu|,\nu)),\] _for sufficiently large \(n\)._

In particular, one can see that \(n=2|\lambda|+1\) would work. Note that this also implies that these Kronecker coefficients stabilize as \(n\) increases, which is also true in further generality. Thanks to the Schur-Weyl duality, they can also be interpreted via Schur functions as \[s_{\lambda}[\boldsymbol{x}\boldsymbol{y}]=\sum_{\mu,\nu}g(\lambda,\mu,\nu)s_ {\mu}(\boldsymbol{x})s_{\nu}(\boldsymbol{y}),\] where \(\boldsymbol{x}\boldsymbol{y}=(x_{1}y_{1},x_{1}y_{2},\ldots,x_{2}y_{1},\ldots)\) is the vector of all pairwise products. In terms of \(GL\) representations they give us the dimension of the invariant space \[g(\lambda,\mu,\nu)=\dim(V_{\mu}\otimes V_{\nu}\otimes V_{\lambda}^{*})^{GL_{N }\times GL_{M}},\] where \(V_{\mu}\) is considered a \(GL_{N}\) module, and \(V_{\nu}\) a \(GL_{M}\) module. Using this interpretation for them as dimensions of highest weight spaces, see [12], one can show the following

**Theorem 3.5** (Semigroup property).: _Let \((\alpha^{1},\beta^{1},\gamma^{1})\) and \((\alpha^{2},\beta^{2},\gamma^{2})\) be two partition triples, such that \(|\alpha^{1}|=|\beta^{1}|=|\gamma^{1}|\) and \(|\alpha^{2}|=|\beta^{2}|=|\gamma^{2}|\). Suppose that \(g(\alpha^{1},\beta^{1},\gamma^{1})>0\) and \(g(\alpha^{2},\beta^{2},\gamma^{2})>0\). Then_ \[g(\alpha^{1}+\alpha^{2},\beta^{1}+\beta^{2},\gamma^{1}+\gamma^{2})\geq\max\{g (\alpha^{1},\beta^{1},\gamma^{1}),g(\alpha^{2},\beta^{2},\gamma^{2})\}.\]

Other simple properties we can see using the original \(S_{n}\) characters are that \[g(\lambda,\mu,\nu)=g(\lambda^{\prime},\mu^{\prime},\nu)\] since \(\mathbb{S}_{\lambda}=\mathbb{S}_{\lambda^{\prime}}\otimes\mathbb{S}_{1^{n}}\), where \(\chi^{1^{n}}(w)=\operatorname{sgn}(w)\) is simply the sign representation. Similarly, we have \(g(\lambda,\mu,(n))=\delta_{\lambda,\mu}\) for all \(\lambda\) and \(g(\lambda,\mu,1^{n})=\delta_{\lambda,\mu^{\prime}}\).

**Example 3.6**.: _By the above observation we have that_ \[h_{k}[\textbf{xy}]=s_{k}[\textbf{xy}]=\sum_{\lambda\vdash k}s_{\lambda}( \textbf{x})s_{\lambda}(\textbf{y}).\] _Using the Jacobi-Trudi identity we can write_ \[s_{2,1}[\textbf{xy}]=h_{2}[\textbf{xy}]h_{1}[\textbf{xy}]-h_{3}[ \textbf{xy}]=\left(s_{2}(\textbf{x})s_{2}(\textbf{y})+s_{1,1}(\textbf{x})s_{1, 1}(\textbf{y})\right)s_{1}(\textbf{x})s_{1}(\textbf{y})\] \[-s_{3}(\textbf{x})s_{3}(\textbf{y})-s_{2,1}(\textbf{x})s_{2,1}( \textbf{y})-s_{1,1,1}(\textbf{x})s_{1,1,1}(\textbf{y})\] \[=s_{2,1}(\textbf{x})s_{2,1}(\textbf{y})+s_{2,1}(\textbf{x})s_{3}( \textbf{y})+s_{3}(\textbf{x})s_{2,1}(\textbf{y})+s_{1,1,1}(\textbf{x})s_{2,1}( \textbf{y})+s_{2,1}(\textbf{x})s_{1,1,1}(\textbf{y}).\] _So we see that \(g((2,1),(2,1),(2,1))=1\)._

The _plethysm coefficients_ \(a_{\mu,\nu}^{\lambda}\) are multiplicities of an irreducible GL representation in the composition of two GL representations. Namely, let \(\rho^{\mu}:GL_{N}\to GL_{M}\) be one irreducible representation, and \(\rho^{\nu}:GL_{M}\to GL_{K}\) be another.
Then \(\rho^{\nu}\circ\rho^{\mu}:GL_{N}\to GL_{K}\) is another representation of \(GL_{N}\) which has a character \(s_{\nu}[s_{\mu}]\), which decomposes into irreducibles as \[s_{\nu}[s_{\mu}]=\sum_{\lambda}a_{\mu,\nu}^{\lambda}s_{\lambda}.\] Here the notation \(f[g]\) is the evaluation of \(f\) over the monomials of \(g\) as variables, namely if \(g=\textbf{x}^{\alpha^{1}}+\textbf{x}^{\alpha^{2}}+\cdots\), then \(f[g]=f(\textbf{x}^{\alpha^{1}},\textbf{x}^{\alpha^{2}},\ldots)\). **Example 3.7**.: _We have that_ \[s_{(2)}[s_{(1^{2})}]=h_{3}[e_{2}]=h_{2}(x_{1}x_{2},x_{1}x_{3}, \ldots)=x_{1}^{2}x_{2}^{2}+x_{1}^{2}x_{2}x_{3}+3x_{1}x_{2}x_{3}x_{4}+\cdots\] \[=s_{2,2}(x_{1},x_{2},x_{3},\ldots)+s_{1,1,1,1}(x_{1},x_{2},x_{3}, \ldots),\] _so \(a_{(2),(1,1)}^{(2,2)}=1\) and \(a_{(2),(1,1)}^{(3,1)}=0\)._ We will be particularly interested when \(\mu=(d)\) or \((1^{d})\) which are the \(d\)th symmetric power \(Sym^{d}\) and the \(d\)th wedge power \(\Lambda^{d}\), and \(\nu=(n)\). We denote this plethysm coefficient by \(a_{\lambda}(d[n]):=a_{(d),(n)}^{\lambda}\) and \[h_{d}[h_{n}]=\sum_{\lambda}a_{\lambda}(d[n])s_{\lambda}.\] The following can easily be derived using similar methods, see [10]. **Proposition 3.8**.: _We have that \(g(\lambda,n^{d},n^{d})=a_{\lambda}(d[n])=p_{\lambda_{2}}(n,d)-p_{\lambda_{2}-1 }(n,d)\) for \(\lambda\vdash nd\), such that \(\ell(\lambda)\leq 2\). Here \(p_{r}(a,b)=\#\{\mu\vdash r:\mu_{1}\leq a,\ell(\mu)\leq b\}\) are the partitions of \(r\) which fit inside a rectangle._ In particular, these are the coefficients in the \(q\)-binomials \[\sum_{r}p_{r}(a,b)q^{r}=\binom{a+b}{a}_{q}:=\prod_{i=1}^{a}\frac{(1-q^{i+b})}{ 1-q^{i}}.\] As a curious application, these identities were used in [13, 14] to prove the strict version of Sylvester's unimodality theorem and find bounds on the coefficients of the \(q\)-binomials. Later in [13], using tilted random geometric variables, we found tight asymptotics for the differences of \(p_{r}(a,b)\) and hence obtained tight asymptotics for this family of Kronecker coefficients. ## 4. Computational Complexity Theory Here we will define, in broad and not fully precise terms, the necessary computational complexity classes and models of computation. For background on the subject we refer to [1, 13, 14]. ### Decision and counting problems Computational problems can be classified depending on how much of a given resource (time or memory) is needed to solve it via an algorithm, i.e. produce the answer for any given input of certain size. Depending on the model of computation used (e.g. Turing machines, boolean circuits, quantum computers etc) the answers could vary. Here we will only focus on classical computers and will consider complexity depending on the time an algorithm takes, which is essentially equivalent to the number of elementary steps an algorithm performs. Let \(I\) denote the input of the problem and let \(|I|=n\) be its size as the number of bits it takes to write down in the computer. Depending on the encoding of the problem the size can vary, and then the "speed" of the algorithm as a function of the size will change. There are two main ways to present an input: _binary_ versus _unary_. If the input is an integer \(N\), then in binary it would have size about \(\lceil\log_{2}(N)\rceil\), for example when \(N=2023\), in binary it is \(11111100111\) and the input size is \(11\). In unary, we will write \(N\) as \(\underbrace{111\ldots 1}_{N}\) and in our case take up \(N=2023\) bits. 
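For concreteness, the two encodings of the same integer can be compared directly (a trivial illustration, with our own function names):

```python
def binary_size(n: int) -> int:
    return n.bit_length()   # number of bits in the binary encoding

def unary_size(n: int) -> int:
    return n                # number of symbols in the unary encoding

n = 2023
print(bin(n)[2:], binary_size(n), unary_size(n))  # 11111100111 11 2023
```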
As we shall see soon, the encoding matters significantly on how fast the algorithms are as functions of the input size. From complexity standpoint, encoding in binary or in any other base \(b>1\), does not make a difference as the input size is just rescaled \(\log_{b}N=\log_{b}(2)\log_{2}N\). A _decision problem_, often referred to as _language_, is a problem, whose answer should be Yes/No. For example, in the problem PRIMES we are given input \(N\) and have to output Yes if \(N\) is a prime number. The _complexity class_\(\mathcal{P}\) consists of the decision problems, such that the answer can be obtained in _polynomial time_, that is \(O(n^{k})\) for some fixed \(k\) (fixed for the given problem, but independent of the input). Thanks to the [1] breakthrough result, we now know that \(\mathrm{PRIMES}\in\mathcal{P}\). The _complexity class_\(\mathcal{NP}\) consists of decision problems, such that if the answer is Yes then it can be _verified_ in polynomial time, i.e. they have a _poly-time witness_. The problem is phrased as "given input \(I\), is the set \(C(I)\) nonempty". It is in \(\mathcal{NP}\) iff whenever \(C(I)\neq\emptyset\), then there would be an element \(X(I)\in C(I)\), such that we can check whether \(X(I)\in C(I)\) in \(O(n^{k})\) time for some fixed \(k\). For example, in \(\mathsf{HAMCYCLE}\), the input is a graph \(G=(V,E)\) (encoded as its adjacency matrix, so the input size is \(O(|V|^{2})\)), and the question is "does \(G\) have a Hamiltonian cycle". In this case \(C(G)\) would be the set of all Hamiltonian cycles in \(G\) (encoded as permutations of the vertices), and given one such cycle \(X(G)=v_{1}\ldots v_{m}\) we can check in \(O(m)\) time whether it is indeed a Hamiltonian cycle by checking whether \((v_{i},v_{i+1})\in E\) for all \(i=1,\ldots,m\). We say that a problem is _\(\mathcal{NP}\)-complete_ if it is in \(\mathcal{NP}\) and every other problem from \(\mathcal{NP}\) can be reduced to it in polynomial time. A set of \(\mathcal{NP}\)-complete problems includes \(\mathsf{HAMCYCLE}\), \(\mathsf{3SAT}\), \(\mathsf{BINPACKING}\) etc. A problem is _\(\mathcal{NP}\)-hard_ if every problem in \(\mathcal{NP}\) is reducible to it in poly time. **Example 4.1**.: _Here is an example when input size starts to matter. The problem \(\mathsf{KNAPSACK}\) is as follows:_ _Given an input \(a_{1},\ldots,a_{m},b\) of \(m+1\) integers, determine whether there is a subset \(S\subset\{1,\ldots,m\}\), such that \(\sum_{i\in S}a_{i}=b\). If the input integers \(a_{i}\) are encoded in binary then the problem is \(\mathcal{NP}\)-complete. However, if they are encoded in unary then there is a dynamic programming algorithm that would output the answer in polynomial time. It is said that such problems can be solved _in_ pseudopolynomial time_. However the modern treatment would consider these problems as two different computational problems, one for each input encoding._ We have that \(\mathsf{P}\subset\mathsf{NP}\), but we are nowhere near showing that the containment is strict. **Problem 4.2** (The \(\mathsf{P}\) vs \(\mathsf{NP}\) Millennium problem).: _Show that \(\mathsf{P}\neq\mathsf{NP}\)._ However, most researchers believe (and assume) that \(\mathsf{P}\neq\mathsf{NP}\). The _class_\(\mathsf{coNP}\) consists of the decision problems, such that if the answer is No, then there exists a poly-time witness proving that. \(X\in\mathsf{coNP}\) if and only if \(\overline{X}\in\mathsf{NP}\). 
For example, \(\mathsf{HAMCYCLE}\) would be the problem of deciding whether a graph does NOT have a Hamiltonian cycle. If the answer is no, then the graph has such a cycle and we can check it as above. The _polynomial hierarchy_\(\mathsf{PH}\) is a generalization of \(\mathsf{NP}\) and \(\mathsf{coNP}\) and is, informally, the set of all problems which can be solved by some number of augmentations by an oracle. Specifically, denote by \(B^{A}\) the set of problems which can be solved by an algorithm from \(B\) augmented with an "oracle" (another machine) from \(A\). Then we set \(\Delta_{0}^{\mathsf{P}}:=\Sigma_{0}^{\mathsf{P}}:=\Pi_{0}^{\mathsf{P}}:= \mathsf{P}\) and recursively \(\Delta_{i+1}^{\mathsf{P}}:=\mathsf{P}^{\Sigma_{i}^{\mathsf{P}}}\), \(\Sigma_{i+1}^{\mathsf{P}}:=\mathsf{NP}^{\Sigma_{i}^{\mathsf{P}}}\) and \(\Pi_{i+1}^{\mathsf{P}}:=\mathsf{coNP}^{\Sigma_{i}^{\mathsf{P}}}\). We set \(\mathsf{PH}:=\cup_{i}\left(\Sigma_{i}^{\mathsf{P}}\cup\Pi_{i}^{\mathsf{P}} \cup\Delta_{i}^{\mathsf{P}}\right)\). We have that \(\Sigma_{i}\subset\Delta_{i+1}\subset\Sigma_{i+1}\) and \(\Pi_{i}\subset\Delta_{i+1}\subset\Pi_{i+1}\), and it is yet another big open problem to prove the containments are strict. A widely believed hypothesis is that \(\mathsf{NP}\neq\mathsf{coNP}\) and that \(\mathsf{PH}\) does not collapse to any level (i.e. \(\Sigma_{i}\neq\Sigma_{i+1}\) etc). _Counting problems_ ask for the number of elements in \(C(I)\) given input \(I\). There are two main complexity classes \(\mathsf{FP}\) and \(\mathsf{\#P}\), also believed to be different. \(\mathsf{FP}\) is the class of problems, such that \(|C(I)|\) can be found in \(O(n^{k})\) time for some fixed \(k\). _The class_\(\mathsf{\#P}\) is the counting analogue of \(\mathsf{NP}\) and can be defined as \(\mathsf{\#P}\) is the class of functions \(f:\{0,1\}^{*}\to\mathbb{N}\), such that there exists a polynomial \(p\) and a verifier \(V\) so that for an input \(I\) we have \[f(I)=|\{y\in\{0,1\}^{p(|I|)}:V(I,y)=1\}|=\sum_{y=0}^{2^{p(|I|)}-1}V(I,y). \tag{4.1}\] The verifier should be an algorithm running in polynomial time. That is, \(\mathsf{\#P}\) is the set of functions \(f\) which can be expressed as _exponentially large sums of terms \(V\in\{0,1\}\) which are determined in polynomial time_. **Example 4.3**.: \(\mathsf{\#PERFCTMATCHINGS}\)_, \(\mathsf{\#HAMCYCLES}\), \(\mathsf{\#SETPARTITIONS}\) are all \(\mathsf{\#P}\)-complete problems. In the case of HAMCYCLE, we have the input \(I:=G\) a graph on \(m\) vertices and \(|I|=O(m^{2})\) given by the edge pairs. Then \(f\) counts the number of Hamiltonian cycles, so it can be computed by going over all \(m!=O(2^{m\log m})\) permutations \(y:=v_{\sigma}\) of the vertices \(\{v_{1},\ldots,v_{m}\}\) and the verifier is \(V(G,v_{\sigma})=1\) iff \((v_{\sigma(i)},v_{\sigma(i+1)})\in E(G)\) is an edge for every \(i\)._ The _class_\(\mathsf{GapP}\) is the class of problems which are the difference of two \(\mathsf{\#P}\) functions, namely \(\mathsf{GapP}=\{f-g:f,g\in\mathsf{\#P}\}\). The class \(\mathsf{GapP}_{\geq 0}=\mathsf{GapP}\cap\{f\geq 0\}\) is the class of \(\mathsf{GapP}\) functions whose values are nonnegative. We define \(\mathsf{C}_{=}\bar{\mathsf{P}}=[\mathsf{GapP}=0]\), the class of decision problems on whether two \(\mathsf{\#P}\) functions are equal. The application of Computational Complexity theory in Combinatorics revolves around the following paradigm, see [10] for detailed treatment. 
\begin{tabular}{l l} \hline _Counting and characterizing combinatorial_ & _Solve: is_ \(X\in C(I)\)_?_ _or compute_ \(|C(I)|\) \\ _objects given input data_ \(I\) \\ \hline \multicolumn{2}{l}{"Nice formula"} & The problem is in \(\mathsf{P},\mathsf{FP}\) \\ \hline \multicolumn{2}{l}{Positive combinatorial formula} & The problem is in \(\mathsf{NP}\), \(\mathsf{\#P}\) \\ \hline \multicolumn{2}{l}{No "combinatorial interpretation"} & The problem is not in \(\mathsf{\#P}\) \\ \hline \end{tabular} **Remark 4.4**.: The class \(\#\mathsf{P}\) is quite large. While it contains essentially all positive combinatorial formulas/interpretations we encounter in practice, it may actually be too large and other complexity classes like \(\mathsf{AC}\) could be more appropriate for certain problems. **Remark 4.5**.: The above table does not address how the input is presented, but we can argue that there are _natural_ encodings for the problems we will consider. Namely, we will see that if the inputs are in unary of input size \(n\) then our problems of interest are in GapP and are nonnegative functions. This makes it natural to ask for the positive combinatorial formula to be in \(\#\mathsf{P}\). Moreover, the bounds on the sizes of our answers would be at most exponential in the input size \(n\), i.e. \(O(2^{p(n)})\) for some fixed degree polynomial \(p\), which again suggests that a positive combinatorial formula is exactly of the kind (4.1). Thus, problems like "the number of sets of subsets of a given set" are excluded from this consideration being "too large" for their input size. Besides the classical computational complexity, there is also _quantum complexity_, informally defined by the minimal size of a quantum circuit needed to compute an answer. Here the input would be encoded in \(n\) qubits and lives in the Hilbert space \(\ell^{2}(\{0,1\}^{n})\), and a simple gate in the circuit is a reversible unitary transformation on some of the qubits. The output is a measurement of some qubits. Quantum mechanics does not allow us to perform exact measurements, and so our output is naturally probabilistic. A quantum algorithm solves a decision problem, iff the probability that it outputs the correct answer (Yes/No) is \(\geq\frac{2}{3}\) (this constant can be changed). The quantum analogues of \(\mathsf{P}\) and \(\mathsf{NP}\) are BQP and QMA: BQP is the class of decision problems for which there is a polynomially-sized quantum circuit computing the answer with high probability, and QMA is the class of problems, for which when the answer is Yes, there exists a poly-sized quantum circuit verifying the answer with high-probability. The counting analogue of \(\#\mathsf{P}\) is thus \(\#\mathsf{BQP}\) and can be thought of as counting the number of accepting witnesses to a QMA verifier. ### Algebraic Complexity Theory Arithmetic (algebraic) Complexity theory is the study of computing polynomials \(f(x_{1},\ldots,x_{n})\in\mathbb{F}[x_{1},\ldots,x_{n}]\) in \(n\) formal variables using simple operations \(*,+,-,/\), where the input are the variables \(x_{1},\ldots,x_{n}\) and arbitrary constants from the underlying field. The complexity of the given polynomial \(f\) is then the minimal number of such operations needed to compute the polynomial within the given model. There are three basic models of computations - formulas, algebraic branching programs (ABPs) and circuits. For details on Algebraic Complexity theory see [1, 10]. 
Throughout the polynomials \(f\) will be assumed to have \(O(poly(n))\) bounded total degrees. The algebraic complexity classes \(\mathsf{VP}\) and \(\mathsf{VNP}\) were introduced by Valiant [14, 15], as the algebraic analogues of \(\mathsf{P}\) and \(\mathsf{NP}\) (we refer to [10] for formal definitions and properties). The _class_\(\mathsf{VP}\) is the class of sequences of polynomials for which there is a constant \(k\) and a \(O(n^{k})\)-sized arithmetic circuit computing them. By _arithmetic circuit_ we mean a directed acyclic graph with source nodes containing variables \(x_{1},\ldots,x_{n}\) or constants from the field, and the other vertices contain one simple operation performed with input from the nodes pointing to that vertex. There is only one sink, which should contain the result of all the computations, our polynomial \(f\). Let \(S(f)\) denote the minimal possible size of a circuit computing \(f\). **Example 4.6**.: _This circuit computes the polynomial \(f=x_{2}x_{3}(x_{1}+x_{2})(3+x_{3})\) using 4 input nodes and 6 internal operations._ _The class \(\mathsf{VP}_{\mathrm{ws}}\) is the class of polynomials \(f\), which have \(O(n^{k})\)-sized formulas. A formula is a circuit whose graph is a binary tree, so no output can be used twice. Let \(L(f)\) denote the minimal formula size of \(f\). Then a formula is recursively composed of operations \(f=g*h\) or \(f=g+h\), so we have \(L(f)\leq L(g)+L(h)+1\)._ **Example 4.7**.: _Let \(f=x_{1}^{2}+x_{2}^{2}+x_{3}^{2}+x_{1}x_{3}+x_{1}x_{2}+3x_{2}x_{3}=(x_{1}+x_{2} )*(x_{1}+x_{3})+(x_{2}+x_{3})(x_{2}+x_{3})\), which has formula size \(3+1+3\), so \(L(f)\leq 7\)._ We have that \(S(f)\leq L(f)\) by definition, and according to [VSRR], \(S(f)\leq L(f)^{\log n}\). Finally, _the class \(\mathsf{VBP}\)_ is the class of polynomials \(f\) which can be computed with a poly-sized Algebraic Branching Program. Informally, this is a directed acyclic graph with \(\deg(f)\)-many layers, one source \(s\) and one sink \(t\), each edge \(e\) is labeled by a linear function in the variables \(x_{1},\ldots,x_{n}\) called \(w(e)\) and the output is computed by going over all directed paths \(p\): \[f=\sum_{p:s\to t}\prod_{e\in p}w(e).\] The size of the branching program is defined as the maximal number of nodes in a layer, and by our assumptions is polynomially equivalent to the size of the given graph. Let \(M(f)\) be the minimal size \(\mathrm{ABP}\) needed to compute \(f\), then \(\mathsf{VBP}\) is the class of families of polynomials \(f_{n}\) for which there is a fixed \(k\) with \(M(f_{n})=O(n^{k})\). _The class \(\mathsf{VNP}\)_ is the class of polynomials \(f\), such that there exists a fixed \(k\) and a polynomial \(g\in\mathsf{VP}\) with \(m=O(n^{k})\) variables, such that \[f(x_{1},\ldots,x_{n})=\sum_{b\in\{0,1\}^{m-n}}g(x_{1},\ldots,x_{n},b_{1}, \ldots,b_{m}).\] In particular, every polynomial whose coefficients in the monomial expansion are easy to determine, would be in \(\mathsf{VNP}\). It is clear that \(\mathsf{VP}_{\mathrm{ws}}\subset\mathsf{VBP}\subset\mathsf{VP}\subset\mathsf{ VNP}\), but are these classes different? **Conjecture 4.8** (Valiant).: _We have that \(\mathsf{VBP}\neq\mathsf{VNP}\)._ We also believe that \(\mathsf{VP}\neq\mathsf{VNP}\), but this problem is even harder to approach. As Valiant showed, for every polynomial \(f\in\mathbb{C}[x_{1},\ldots,x_{n}]\) there exists a \(K\) and a \(K\times K\) matrix \(A\), s.t. 
\(A=A_{0}+\sum_{i=1}^{N}A_{i}x_{i}\) with \(A_{j}\in\mathbb{C}^{K\times K}\), such that \[\mathrm{det}A=f.\] The smallest such \(K\) is the _determinantal complexity_\(\mathsf{dc}(f)\) of \(f\) and it is _finite for every \(f\)_. **Example 4.9**.: _Let \(f=x_{1}^{2}+x_{1}x_{2}+x_{2}x_{3}+2x_{1}\), then let_ \[A:=\begin{bmatrix}x_{1}+2&x_{2}\\ -x_{3}+2&x_{1}+x_{2}\end{bmatrix},\] _so \(f=\det A\). Since \(\deg(f)=2\), the smallest such matrix would be \(2\times 2\) and so \(\operatorname{\mathsf{dc}}(f)=2\)._ As Valiant also showed, we have \[\operatorname{\mathsf{dc}}(f)\leq 2L(f),\] and also that \(M(f)\) and \(\operatorname{\mathsf{dc}}(f)\) are polynomially equivalent. Thus from now on we can use _dc as complexity measure_. We say that _determinant is universal for VBP_, in the sense that \(f\in\textsf{VBP}\) iff \(\operatorname{\mathsf{dc}}(f)=poly(n)\). The classical universal VNP-complete polynomial is the _permanent_ \[\operatorname{per}_{m}[X_{ij}]_{i,j=1}^{m}=\sum_{\sigma\in S_{m}}\prod_{i=1}^{ m}X_{i\sigma(i)}\] in the sense that every \(f\in\textsf{VNP}\) of \(\deg(f)\leq n^{c}\) can be written as a \(\operatorname{per}_{m}[A]\) for some matrix \(A\) of affine linear entries, and of polynomial size \(m=O(n^{k})\). It is much more powerful than the determinant. Thus, to show that \(\textsf{VBP}\neq\textsf{VNP}\) we need to show the following **Conjecture 4.10** (Valiant [20]).: _The determinantal complexity \(\operatorname{\mathsf{dc}}(\operatorname{per}_{m})\) grows superpolynomially in \(m\)._ It is known that \(\operatorname{\mathsf{dc}}(\operatorname{per}_{m})\leq 2^{m}-1\)[12], and \(\operatorname{\mathsf{dc}}(\operatorname{per}_{m})\geq\frac{m^{2}}{2}\)[13]. The connection between P vs NP and VP vs VNP is exemplified in the following statement. **Theorem 1** ([14]).: _If one shows that \(\textsf{V}\!P=\textsf{VNP}\) over a finite field then \(\textsf{P}=\textsf{NP}\). If \(\textsf{V}\!P=\textsf{VNP}\) over \(\mathbb{C}\) and the Generalized Riemann Hypothesis holds then \(\textsf{P}=\textsf{NP}\)._ From here on we will only work over the field of constants \(\mathbb{C}\). We believe that separating VBP from VNP is the easier problem as the algebraic structure gives more tools. An approach to show \(\textsf{VBP}\neq\textsf{VNP}\) is the _Geometric Complexity Theory_, which will be discussed in Section 6. ## 5. Applications of Computational Complexity in Algebraic Combinatorics Here we discuss some open problems in Algebraic Combinatorics. These can be phrased more formally using computational complexity theory and potentially answered within its framework. ### Open problems: combinatorial interpretation Semistandard Young Tableaux, the hook-length formula, the RSK correspondence, the Littlewood-Richardson rule are all examples of beautiful combinatorics. Besides aesthetically appealing, such results are also quite useful. Within Representation Theory they provide effective tools to understand the structure of group representations. Within asymptotics and statistical mechanics they give tools to understand behavior of lozenge tilings (dimer covers of the hexagonal grid), longest increasing subsequences of permutations, behavior of random sorting networks, random matrix eigenvalues etc. Following the discovery of the Littlewood-Richardson rule in 1934, Murnaghan [15] defined the Kronecker coefficients of \(S_{N}\) and observed that computing even simple special cases is difficult. 
Interest in specifically nonnegative combinatorial interpretation can be found in [16, 11], and was formulated explicitly by Stanley as Problem 10 in his list "Open Problems in Algebraic Combinatorics" [18]2. Footnote 2: See this for the original list and updates on the problems [https://mathoverflow.net/questions/349406/](https://mathoverflow.net/questions/349406/) **Open Problem 5.1**.: _Find a combinatorial interpretation of \(g(\lambda,\mu,\nu)\), i.e. a family of combinatorial objects \(C(\lambda,\mu,\nu)\), such that \(g(\lambda,\mu,\nu)=|C(\lambda,\mu,\nu)|\)._ Over the years, there has been very little progress on the question. In 1989 Remmel determined \(g(\lambda,\mu,\nu)\) when two of the partitions are hooks [10]. In 1994 Remmel and Whitehead [11] determined \(g(\lambda,\mu,\nu)\) when two of the partitions are two-rows, i.e. \(\ell(\lambda),\ell(\mu)\leq 2\). This case was subsequently studied also in [1]. In 2006 Ballantine and Orellana [1] determined a rule for \(g(\lambda,\mu,\nu)\) when one partition is a two-row, e.g. \(\mu=(n-k,k)\), and the first row of one of the others is large, namely \(\lambda_{1}\geq 2k-1\). The most general rule was determined by Blasiak in 2012 [10] when one partition is a hook, and this was later simplified by Blasiak and Liu [10, 11]; informally it states that \(g(\lambda,\mu,(n-k,1^{k}))\) is equal to the number of tableau in \(\bar{1}<1<\bar{2}<\cdots\) of shape \(\lambda\), type \(\mu\) with restrictions on the ordering and certain entries. Other very special cases have been computed in the works of Bessenrodt-Bowman [1] for multiplicity-free products; when the marginals correspond to pyramids in Ikenmeyer-Mulmuley-Walter [12]; near-rectangular partitions by Tewari [13] etc. **Remark 5.2**.: Most combinatorial interpretations in the area count tableaux or permutations with various restrictions. That, however, should not limit our scope. Consider the following labeled rooted _partition trees_\(T(m,\ell,k)\) whose vertices are labelled by \((a,b,\lambda,j)\), \(j\leq ab\), \(\lambda\vdash b\). The leaves correspond to labels with \(b=1\) and can thus be labeled by only \((a,j)\) with \(0\leq j\leq a\). Let the root be \((m,\ell,\lambda,k)\) for some \(\lambda\vdash\ell\). We impose the following local conditions between vertices and their children. Let a vertex be labeled \((a,b,\lambda,j)\), with \(\lambda=(1^{b_{1}},\ldots,n^{b_{n}})\). Then it has at most \(n\) children and their labels are of the form \((a_{1},b_{1},\lambda^{1},j_{1}),\ldots,(a_{n},b_{n},\lambda^{n},j_{n})\). s.t. * \(a_{i}=i(a+2)-2(\lambda^{\prime}_{1}+\cdots+\lambda^{\prime}_{i})\) for all \(i=1,\ldots,n\). * \(j_{1}+\cdots+j_{n}=j-2\sum_{i}(i-1)\lambda_{i}\). Finally, let the leaves be \(\{(a_{0},i_{0}),\ldots,(a_{t},i_{t})\}\). Then we must have for each \(r<t\): \(i_{r}\geq 2(i_{r+1}+\cdots+i_{t})-(a_{r+1}+\cdots+a_{t})\). **Theorem 5.3** (Pak-Panova, 2014, see [12]).: _The Kronecker coefficient \(g((m^{\ell}),(m^{\ell}),(m\ell-k,k))\) is equal to the number of partition trees \(T(m,\ell,k)\)._ The proof follows from two observations. The first is the fact that \(g((m^{\ell}),(m^{\ell}),(m\ell-k,k))=p_{k}(m,\ell)-p_{k-1}(,m,\ell)\), where \(p_{k}(m,\ell)=\#|\{\alpha\vdash k,\alpha\subset(m^{\ell})\}|\) is the number of partitions of \(k\) fitting inside a \(m\times\ell\) rectangle (see e.g. [10, 11, 12]). 
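This first observation is easy to verify numerically. Here is a brief sketch (our own, with hypothetical function names) computing \(p_{k}(m,\ell)\) and the resulting Kronecker coefficients \(g((m^{\ell}),(m^{\ell}),(m\ell-k,k))\):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p_in_box(k, m, l):
    """Number of partitions of k with at most l parts, each part at most m."""
    if k == 0:
        return 1
    if k < 0 or l == 0:
        return 0
    # choose the largest part a, then recurse with parts <= a and one fewer row
    return sum(p_in_box(k - a, a, l - 1) for a in range(1, min(m, k) + 1))

def g_rect_two_row(m, l, k):
    """g((m^l), (m^l), (ml - k, k)) = p_k(m, l) - p_{k-1}(m, l), as stated above."""
    return p_in_box(k, m, l) - p_in_box(k - 1, m, l)

# coefficients of the Gaussian binomial (m + l choose l)_q for m = l = 2:
print([p_in_box(k, 2, 2) for k in range(5)])        # [1, 1, 2, 1, 1]
print([g_rect_two_row(2, 2, k) for k in range(3)])  # [1, 0, 1]
```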
Alternatively, these are the coefficients in the expansion of the \(q\)-binomials \[\sum_{k}p_{k}(m,\ell)q^{k}=\binom{m+\ell}{m}_{q}.\] The second part is to unwind the recursive proof of the unimodality of those coefficients via Zeilberger's KOH identity [13]. The recursion then gives the tree \(T\). Motivated by other developments, further questions on the Kronecker coefficients have appeared. Following their work in [11] on the square of the Steinberg character for finite groups of Lie type, Saxl conjectured that \(g(\delta_{k},\delta_{k},\mu)>0\) for all \(k\) and \(\mu\vdash\binom{k+1}{2}\) and \(\delta_{k}=(k,k-1,\ldots,1)\) is the staircase partition. This was initially studied in [12], where its generalization was formulated as **Conjecture 5.4** (Tensor square conjecture).: _For every \(n\geq 9\) there exists a symmetric partition \(\lambda\vdash n\), such that \(\mathbb{S}_{\lambda}\otimes\mathbb{S}_{\lambda}\) contains every irreducible \(S_{n}\) module. In other words, \(g(\lambda,\lambda,\mu)>0\) for all \(\mu\vdash n\)._ The above conjecture raises the question on simply determining when \(g(\lambda,\mu,\nu)>0\). Advances on the tensor square conjecture were initially made in [12, 13, 14], see [12] for a list of many more recent works. It is a consequence of representation theory that for \(n>2\) for every \(\mu\vdash n\) there is a \(\lambda\), such that \(g(\lambda,\lambda,\mu)>0\) (see [14, Ex. 7.82]), but even that has no combinatorial proof. Positivity results were proved using a combination of three methods - the semigroup property constructing recursively positive triples from building blocks, explicit heighest weight constructions using the techniques in [10], and an unusual comparison with characters, which was originally stated in by Bessenrodt and Behns [1] for when \(g(\lambda,\lambda,\lambda)>0\), later generalized in [11], and in its final form in [11]. **Proposition 5.5** ([11]).: _Let \(\lambda,\mu\vdash n\) and \(\lambda=\lambda^{\prime}\). Let \(\hat{\lambda}=(2\lambda_{1}-1,2\lambda_{2}-3,3\lambda_{3}-5,\ldots)\) be the principal hooks of \(\lambda\). Then_ \[g(\lambda,\lambda,\mu)\geq|\chi^{\mu}(\hat{\lambda})|.\] In 2020, with Christine Bessenrodt we generalized the conjecture as follows. **Conjecture 5.6** (Bessenrodt-Panova 2020).: _For every \(n\) there exists a \(k(n)\), such that for every \(\lambda\vdash n\) with \(\lambda=\lambda^{\prime}\) and \(d(\lambda)>k_{n}\) which is not the square partition, we have \(g(\lambda,\lambda,\mu)>0\) for all \(\mu\vdash n\)._ Here \(d(\lambda)=\max\{i:\lambda_{i}\geq i\}\) is the Durfee square size of the partition. Partial progress on that conjecture will appear in the work of Chenchen Zhao. Another question motivated by Quantum Information Theory, pertaining to the so called "_quantum marginal problem_", is for which triples of rational vectors \((\alpha,\beta,\gamma)\) is \(g(k\alpha,k\beta,k\gamma)>0\) for some \(k\in\mathbb{N}\), see e.g. [1]. Thanks to the semigroup property these triples form a convex polytope, a special case of the so-called _Moment polytope_ of a compact connected Lie group and a unitary representation. The Kronecker polytope can actually be described in a certain concrete sense, see [12]. The analogous question on positivity of Littlewood-Richardson coefficients is the Horn problem on spectra of matrices \(A,B,C\), such that \(A+B=C\). 
The resolution of the "Saturation conjecture" in [10] established that the inequalities cutting out the polytope of eigenvalue triples coincide with the inequalities defining triples of positive LR coefficients. Similar questions pertain to the plethysm coefficients. The following problem is number 9 in Stanley's list [14]. **Open Problem 5.7** (Stanley).: _Find a combinatorial interpretation for the plethysm coefficients \(a_{\mu,\nu}^{\lambda}\)._ Even the simple case for \(a_{\lambda}(d[n])=\langle s_{\lambda},h_{d}[h_{n}]\rangle\) is not known. A detailed survey on the partial results and methods can be found in [13]. There is no direct connection between the Kronecker and plethysm coefficients. Yet we know that when \(\ell(\lambda)\leq 2\) \[a_{\lambda}(d[n])=g(\lambda,n^{d},n^{d}).\] An inequality between them in their stable limit is given in Theorem 5 and was obtained using their interpretations within GCT. There is one major conjecture on plethysm coefficients. **Conjecture 5.8** (Foulkes).: _Let \(d>n\), then_ \[a_{\lambda}(d[n])\geq a_{\lambda}(n[d])\] _for all \(\lambda\vdash nd\)._ This conjecture is related to Alon-Tarsi conjecture, and appeared in the GCT approaches as well. In [1] we proved it for some families of 3-row partitions, see Section 6.3. ### Complexity problems in Algebraic Combinatorics We will now study the important quantities in Algebraic Combinatorics with respect to their computational complexity leading to a classification by such "hardness". This gives a paradigm to understand these constants and either explain when a nice formula could be found, or show that a combinatorial interpretation is unlikely as it would violate computational complexity assumptions like \(\mathsf{P}\neq\mathsf{NP}\). Such questions have been quite common in other branches of combinatorics, like Graph Theory, and many of the graph theoretic problems are at the heart of Computational Complexity. Investigating computational complexity properties of structure constants was initiated when Algebraic Complexity Theory was developed. It came to prominence when Geometric Complexity Theory put understanding Littlewood-Richardson, Kronecker and plethysm coefficients in the spotlight. Most recently, understanding computational complexity has been developed as a framework to formalize combinatorial properties of its own interest as in [10]. **Example 5.9**.: _Consider the problem #SYT: given the input partition \(\lambda\), compute the number of standard Young tableaux of shape \(\lambda\). The answer would depend on how the input is encoded. Suppose that \(\lambda\) is encoded in unary, i.e. each part \(\lambda_{i}\) takes up \(\lambda_{i}\) bits, and so the input size is \(n=|\lambda|\). Using the HLF formula we can compute the product in \(O(n)\) time and thus the problem is in \(\mathsf{FP}\). If the input is in binary, then the input size is \(|I|=O(\log_{2}(\lambda_{1})\ell(\lambda))\) and \(n=|\lambda|=O(2^{|I|})\). For most such partitions we would have \(f^{\lambda}=o(2^{n})=o(2^{2^{|I|}})\). This answer is too big, as it would require exponential space to even encode. This shows that binary input would not be appropriate for this problem at all._ As the example shows, the number of SYTs of shape \(\lambda\) can be computed in polynomial time (when the input is in unary). We have that \(f^{\lambda}=K_{\lambda,1^{n}}\), so the next natural problem is to compute the number of SSYTs of shape \(\lambda\) and given type \(\alpha\). 
This time, however, there is no product or determinantal formula known.

KostkaPos: Input: \((\lambda_{1},\lambda_{2},\ldots),(\alpha_{1},\alpha_{2},\ldots)\) Output: Is \(K_{\lambda,\alpha}>0\)?

This is the problem of deciding positivity of Kostka numbers. We know that \(K_{\lambda,\mu}>0\) if and only if \(\lambda\succeq\mu\) in the dominance order, which is the set of linear inequalities, for every \(i=1,\ldots,\ell(\lambda)\), \[\lambda_{1}+\cdots+\lambda_{i}\geq\mu_{1}+\cdots+\mu_{i}.\] Thus, given \(\lambda\) and \(\alpha\) in either binary or unary, we can check these inequalities in \(O(\ell(\lambda))\) time, so \(\textsc{KostkaPos}\in\mathsf{P}\). The computational problem, however, is far from trivial.

ComputeKostka: Input: \((\lambda_{1},\lambda_{2},\ldots),(\alpha_{1},\alpha_{2},\ldots)\) Output: Value of \(K_{\lambda,\alpha}\).

**Theorem 5.10** (Narayanan [14]).: _When the input \(\lambda,\alpha\) is in binary, ComputeKostka is #P-complete._

It is not a priori clear why the problem (with binary input) would be in #P, given that the SSYTs themselves have exponentially many entries. Yet \(K_{\lambda,\alpha}\) can be computed as the number of integer points in the Gelfand-Tsetlin polytope, defined by \(O(\ell(\alpha)^{2})\) many linear inequalities, and these inequalities can be verified in polynomial time. The proof of completeness uses a reduction to KNAPSACK, which is well known to be #P-complete in binary, but which can be solved by a pseudopolynomial dynamic programming algorithm, so in unary it is in \(\mathsf{FP}\). When the input is in unary, the reduction from \(\mathsf{KNAPSACK}\) does not give anything for Kostka in the general case. Nonetheless, we conjecture that it is still hard.

**Conjecture 5.11** (Pak-Panova 2020).: _When the input is unary we have that \(\mathsf{ComputeKostka}\) is \(\mathsf{\#P}\)-complete._

Here it is easy to see that the problem is in \(\mathsf{\#P}\), but not that it is hard. Next we turn towards the Littlewood-Richardson coefficients.

\(\mathsf{LRPos}\): Input: \(\lambda,\mu,\nu\) Output: Is \(c^{\lambda}_{\mu\nu}>0\)?

The proof of the Saturation Conjecture by Knutson and Tao [13] showed that an LR coefficient is nonzero if and only if the corresponding hive polytope is nonempty, see [1, 14]. This polytope is a refinement of the Gelfand-Tsetlin polytope, defined by \(O(\ell(\lambda)^{3})\) many inequalities. Showing that the polytope is nonempty is thus a linear programming problem, which can be solved in polynomial time. Thus

**Theorem 5.12**.: _We have that \(\mathsf{LRPos}\in\mathsf{P}\) when the input is binary (and hence also when it is unary)._

\(\mathsf{ComputeLR}\): Input: \(\lambda,\mu,\nu\) Output: Value of \(c^{\lambda}_{\mu\nu}\).

Using the polytopal interpretation, which shows that \(\mathsf{ComputeLR}\in\mathsf{\#P}\) even when the input is in binary, together with the fact that Kostka is a special case of LR, see (3.1), we get the following.

**Theorem 5.13** (Narayanan [14]).: _When the input \(\lambda,\mu,\nu\) is in binary, \(\mathsf{ComputeLR}\) is \(\mathsf{\#P}\)-complete._

Yet again, when the input is in unary, we do not know whether the problem is still that hard.
**Conjecture 5.14** (Pak-Panova 2020).: _When the input is in unary we have that \(\mathsf{ComputeLR}\) is \(\mathsf{\#P}\)-complete._ We have that computing LR coefficients is in \(\mathsf{\#P}\) thanks to the Littlewood-Richardson rule and its polytopal equivalent formulation. If the input is unary, then the LR tableaux are the polynomial verifier, and one can check in \(O(n^{2})\) time if the tableaux satisfies all the conditions. The hard part here again is to show that computing them is still hard, namely that an \(\mathsf{\#P}\)-complete problem like 3SAT would reduce to \(\mathsf{ComputeLR}\). None of the above has been possible for the Kronecker and plethysm coefficients, however, due to the lack of any positive combinatorial formula. \(\mathsf{KronPos}\): \(\mathsf{Input}\colon\lambda,\mu,\nu\) \(\mathsf{Output}\colon\) Is \(g(\lambda,\mu,\nu)>0\)? The Kronecker coefficients have particular significance in GCT, see Section 6. In the early stages Mulmuley conjectured [14] that they would be like the Littlewood-Richardson, so \(\mathsf{KronPos}\in\mathsf{P}\), which was recently disproved. **Theorem 5.15** ([14]).: _When the input \(\lambda,\mu,\nu\) is in unary, \(\mathsf{KronPos}\) is NP-hard._ The proof uses the fact that in certain cases \(g(\lambda,\mu,\nu)\) is equal to the number of pyramids with marginals \(\lambda,\mu,\nu\), see [20], and deciding if there is such a pyramid is NP-complete. However, the problem is not yet in NP, because we do not have polynomially verifiable witnesses showing that \(g(\lambda,\mu,\nu)>0\) when this happens. Needless to say, the problem would be even harder when the input is in binary, and we do not consider that here. Mulmuley also conjectured that computing the Kronecker coefficients would be in \(\#\mathsf{P}\), again mimicking the Littlewood-Richardson coefficients. ComputeKron: Input: \(\lambda,\mu,\nu\) Output: Value of \(g(\lambda,\mu,\nu)\). **Open Problem 5.16** (Pak).: _Show that ComputeKron is not in \(\#\mathsf{P}\) under reasonable complexity theoretic assumptions such as PH not collapsing._ If the above is proven, that would make any solution to Open Problem 5.1 as unlikely as the polynomial hierarchy collapsing. Any reasonable combinatorial interpretation as counting certain objects would show that the problem is in \(\#\mathsf{P}\), as the objects would likely be verifiable in polynomial time. Note that ComputeKron\(\in\) GapP ([1]) as it is easy to write an alternating sum for its computation, for example using contingency arrays, see [14]. This further shows that \(\#\mathsf{P}\) would be a natural class for this problem as it is already in GapP\({}_{\geq 0}\). The author's experience with Kronecker coefficients seems to suggest that some particular families would be as hard as the general problem. **Conjecture 5.17** (Panova).: _We have that ComputeKron is in \(\#\mathsf{P}\) when \(\ell(\lambda)=2\) if and only if ComputeKron is in \(\#\mathsf{P}\) in the general case. Likewise, ComputeKron for \(\mu=\nu=(n^{d})\) and \(\lambda\vdash nd\) as the input is in \(\#\mathsf{P}\) if and only if the general problem is in \(\#\mathsf{P}\)._ The last part concerns the _rectangular Kronecker coefficients_ of special significance in GCT, see Section 6 and [13]. It is worth noting that when the partitions have fixed lengths, we have that ComputeKron is in FP even when the input is in binary, see [1, 14]. 
Moreover, from the asymptotic analysis and symmetric function identities in [14], it follows that **Proposition 5.18**.: _Let \(k\) be fixed and \((\lambda,\mu,\nu)\vdash n\) be partitions with diagonals at most \(k\), i.e. \(d(\lambda),d(\mu),d(\nu)\leq k\). Then \(g(\lambda,\mu,\nu)\) can be computed in time \(O(n^{4k^{3}})\)._ Note that in this case we have \(g(\lambda,\mu,\nu)\leq C_{k}n^{4k^{3}+13k^{2}+31k}\) for an explicit constant \(C_{k}\). This in itself does not guarantee the efficiency of the algorithm computing them, but in this case can be easily derived. On the other hand, when the lengths of the partitions are bounded by \(k\) the efficient algorithms run in time \(O((\log n)^{k^{3}\log k})\). We do not expect that a similar efficient algorithm exists in the more general case of fixed diagonals. PlethPos: Input: \(\lambda,\mu,\nu\) Output: Is \(a_{\mu,\nu}^{\lambda}>0\)? ComputePleth: Input: \(\lambda,\mu,\nu\) Output: Value of \(a_{\mu,\nu}^{\lambda}\). Using symmetric function identities, it is not hard to find an alternating formula for the plethysms and show that they are also in GapP, see [15]. They also show that PlethPos is NP-hard. We suspect that ComputePleth may not be in \(\#\mathsf{P}\) in the general case, but also when \(\mu,\nu\) are single row partitions. The coefficient then \(a_{\lambda}(d[n])\) has special significance in GCT, see Section 6. **Open Problem 5.19**.: _Determine whether PlethPose NP and ComputePleth\(\in\) #P under reasonable complexity theoretic assumptions._ The representation theoretic significance of these structure constants poses the natural question on their computation via quantum circuits. Quantum computing can be powerful on algebraic and number theoretic problems. The structure constants in question are dimensions of various vector spaces, and it is natural to expect that such quantities could be amenable to efficient quantum computation. While this is not straightforward, Beals' quantum Fourier transform over the symmetric group [1] gives the following **Theorem 5.20**.: KronPos _is in_ QMA_. ComputeKron _is in_ #BQP_._ These statements have been claimed in [10]. The first statement and a weaker version of the second were shown in [1]. The full proof of the second statement appears in [11]. As the first group noted, the statements should be true even when the input is in binary. **Open Problem 5.21**.: _Show that when the input \((\lambda,\mu,\nu)\) is in binary, then KronPos is in_ QMA _and_ ComputeKron _is in_ BQP_._ With the input in binary we can no longer use the symmetric group \(S_{n}\) as \(n\) would be too large and we will have to use the \(GL\) interpretation of the Kronecker coefficients. ### Proof of concept: character squares are not in #P Underlying all the representation theoretic multiplicities mentioned above are the characters of the symmetric group. For example, equation (3.2) expresses the Kronecker coefficients via characters, and the other structure constants can also be expressed in similar ways. What then can we say about computing the characters and can this be used in any way to help with the problems in Section 5.2 The characters satisfy some particularly nice identities coming from the orthogonality of the rows and columns of the character table in \(S_{n}\). We have that \[\sum_{\lambda\vdash n}\chi^{\lambda}(w)^{2}=\prod_{i}i^{c_{i}}c_{i}!\,, \tag{5.1}\] where \(c_{i}=\) number of cycles of length \(i\) in \(w\in S_{n}\). 
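As a small worked check of (5.1), take \(n=3\) and \(w\) a transposition, so \(c_{1}=c_{2}=1\):
\[\sum_{\lambda\vdash 3}\chi^{\lambda}(w)^{2}=\chi^{(3)}(w)^{2}+\chi^{(2,1)}(w)^{2}+\chi^{(1,1,1)}(w)^{2}=1^{2}+0^{2}+(-1)^{2}=2=1^{1}\,1!\cdot 2^{1}\,1!\,.\]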
When \(w=\operatorname{id}\), we have that \(\chi^{\lambda}(\operatorname{id})=f^{\lambda}\), the number of SYTs and the identity becomes equation 2.2. That equation, as mentioned in Section 2.1, can be proven via the beautiful RSK bijection. The first step in this proof is to identify \((f^{\lambda})^{2}\) as the number of pairs of SYTs of the same shape. Could anything like that be done for equation (5.1)? The first step would be to understand what objects \(\chi^{\lambda}(w)^{2}\) counts, does it have any positive combinatorial interpretation? We formulate it again using the CC paradigm as ComputeCharSq: Input: \(\lambda,\alpha\vdash n\), unary. Output: the integer \(\chi^{\lambda}(\alpha)^{2}\). **Theorem 5.22** ([11]).: _ComputeCharSq\(\not\in\) #P unless \(PH=\Sigma_{2}^{P}\)._ The last condition says "polynomial hierarchy collapses to the second level", which is almost as unlikely as \(\mathsf{P}=\mathsf{NP}\), and is a widely believed complexity theoretic assumption. The proof uses the intermediate vanishing decision problem CharVanish: Input: \(\lambda,\alpha\vdash n\), unary. Output: Is \(\chi^{\lambda}(\alpha)=0\)? **Theorem 5.23** ([11]).: _We have that CharVanish is \(\mathsf{C}_{=}\mathsf{P}\) -complete under many-to-one reductions._ In order to prove this we use the Jacobi-Trudi identity to write \(\chi^{\lambda}(\alpha)\) as an alternating sum of ordered set partition functions. Let \(\lambda\vdash n\) with \(\ell(\lambda)\leq\ell\), and let \(\alpha\) be a composition of \(n\). Then \[\chi^{\lambda}(\alpha)\,=\,\sum_{\sigma\in S_{\ell(\lambda)}}\operatorname{sgn} (\sigma)\,P(\alpha,\lambda+\sigma-\operatorname{id}).\] Using number theoretic restrictions we limit the entries to just two: **Proposition 5.24**.: _Let \(\mathbf{c}\) and \(\mathbf{d}\) be two sequences of nonnegative integers, such that \(|\mathbf{c}|=|\mathbf{d}|+6\). Then there are partitions \(\lambda\) and \(\alpha\) of size \(O(\ell|\mathbf{c}|)\) determined in linear time, such that_ \[\chi^{\lambda}(\alpha)\,=\,P\big{(}\mathbf{c},\overline{\mathbf{d}}\big{)}\,- \,P\big{(}\mathbf{c},\overline{\mathbf{d}^{\prime}}\big{)},\] _where \(\overline{\mathbf{d}}:=(2,4,d_{1},d_{2},\ldots)\) and \(\overline{\mathbf{d}^{\prime}}:=(1,5,d_{1},d_{2},\ldots)\)._ We then use the fact that matchings can be encoded as set partition problems, by encoding the edges/hyperedges as unique integers in a large basis as in [10]. After some constructions, putting two 3d-matching problem instances on one hypergraph, with hyperedges of 4 vertices we conclude that **Proposition 5.25** ([11]).: _For every two independent 3d matching problem instances \(E\) and \(E^{\prime}\), there exist \(\mathbf{c}\) and \(\mathbf{d}\) as above, such that_ \[\#3DM(E)-\#3DM(E^{\prime})=\frac{1}{\delta}\left(P\big{(}\mathbf{c},\overline {\mathbf{d}}\big{)}\,-\,P\big{(}\mathbf{c},\overline{\mathbf{d}^{\prime}} \big{)}\right)=\frac{1}{\delta}\chi^{\lambda}(\alpha),\] _where \(\delta\) is a fixed multiplicity factor equal to the number of orderings._ Finally, we observe that counting 3d matchings is a \(\#\mathsf{P}\)-complete problem. Thus the last equations shows that \(\chi^{\lambda}(\alpha)=0\) iff \(\#3DM(E)=\#3DM(E^{\prime})\), i.e. vanishing is equivalent to two independent \(\#\mathsf{P}\) functions being equal. This makes it \(\mathsf{C}_{=}\mathsf{P}\)-complete and proves Theorem 5.23. To show the next steps we use classical CC results. 
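Before turning to those, here is the smallest instance of the expansion above, under our reading of the notation: \(P(\alpha,\mathbf{b})\) counts the ways of distributing the (distinguishable) parts of \(\alpha\) into blocks with prescribed sums \(b_{1},b_{2},\ldots\), empty blocks allowed. For \(\lambda=(2,1)\) and \(\alpha=(1,1,1)\) the sum runs over \(S_{2}\) and gives
\[\chi^{(2,1)}(1,1,1)=P\big((1,1,1),(2,1)\big)-P\big((1,1,1),(3,0)\big)=3-1=2,\]
in agreement with \(\chi^{(2,1)}(\operatorname{id})=f^{(2,1)}=2\).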
If \(\chi^{2}\in\#\mathsf{P}\) then \([\chi^{2}>0]\in\mathsf{NP}\), so \([\chi\neq 0]\in\mathsf{NP}\) and hence, by definition, \([\chi=0]\in\mathsf{coNP}\). Thus \(\mathsf{C}_{=}\mathsf{P}\subset\mathsf{coNP}\). By a result of Tarui we have \(\mathsf{PH}\subset\mathsf{NP}^{\mathsf{C}_{=}\mathsf{P}}\), and from the above we get \(\mathsf{PH}\subset\mathsf{NP}^{\mathsf{coNP}}=\Sigma_{2}^{\mathsf{P}}\). So \(\mathsf{PH}=\Sigma_{2}^{\mathsf{P}}\), and the proof follows. In contrast with this result, we note that Beals' quantum Fourier transform over the symmetric group [1] actually gives an efficient quantum algorithm for the characters. ## 6. Applications of Algebraic Combinatorics in Computational Complexity Theory ### Geometric Complexity Theory Towards answering Conjecture 4.10 and showing that \(\mathsf{VBP}\neq\mathsf{VNP}\), Mulmuley and Sohoni [14, 15] proposed an approach based on algebraic geometry and representation theory, for which they coined the name Geometric Complexity Theory (GCT). For an accessible and detailed treatment we refer to [1]. Informally, the idea is to show that an \(m\times m\) permanent of a variable matrix \([X_{i,j}]_{i,j=1}^{m}\) cannot be expressed as an \(n\times n\) determinant of a matrix with affine linear forms as entries for \(n=O(m^{k})\) for any \(k\). Set \(\mathbf{X}=(Z,X_{11},X_{12},\ldots,X_{mm})\) as the vector of variables in the matrix \(X\) plus the variable \(Z\) for the affine terms. Because we are considering all possible linear forms, we are looking at \(\det_{n}[M\mathbf{X}]\) for all matrices \(M\in\mathbb{C}^{n^{2}\times n^{2}}\) and we want to explore when \(\operatorname{per}_{m}[X]=\det_{n}[M\mathbf{X}]\). Replacing these matrices by invertible ones, and then taking the Euclidean closure would give us a, slightly larger, space of polynomials containing \(\{\det_{n}[M\mathbf{X}]:M\in\mathbb{C}^{n^{2}\times n^{2}}\}\subset\overline{ \det_{n}(GL_{n^{2}}\mathbf{X})}\). Here the tools of Algebraic Geometry and Representation theory become available, we can compare the orbit closures of \(\operatorname{per}_{m}\) and \(\det_{n}\) via their irreducible representations to show that containment is not possible for \(n=poly(m)\). More formally, as outlined in [1], the setup is as follows. Denote by \(\mathsf{Sym}^{n}V^{*}\) the space of homogeneous polynomial functions of degree \(n\) on a finite dimensional complex vector space \(V\). The group \(G:=\operatorname{GL}(V)\) acts on \(\mathsf{Sym}^{n}V^{*}\) in the canonical way by linear substitution: \((h\cdot f)(v):=f(h^{-1}v)\) for \(h\in G\), \(f\in\mathsf{Sym}^{n}V^{*}\), \(v\in V\). We denote by \(G\cdot f:=\{hf\mid h\in G\}\) the _orbit_ of \(f\). We assume now \(V:=\mathbb{C}^{n\times n}\), view the determinant \(\det_{n}\) as an element of \(\mathsf{Sym}^{n}V^{*}\), and consider its _orbit closure_: \[\Omega_{n}:=\overline{\operatorname{GL}_{n^{2}}\cdot\det_{n}}\subseteq \mathsf{Sym}^{n}(\mathbb{C}^{n\times n})^{*} \tag{6.1}\] with respect to the Euclidean topology, which is also the same as with respect to the Zariski topology. For \(n>m\) we consider the _padded permanent_ defined as \(X_{11}^{n-m}\mathrm{per}_{m}\in\mathsf{Sym}^{n}(\mathbb{C}^{m\times m})^{*}\) (here we replace the extra variable \(Z\) mentioned in the beginning by \(X_{11}\) directly). Via the standard projection \(\mathbb{C}^{n\times n}\to\mathbb{C}^{m\times m}\), we can view \(X_{11}^{n-m}\mathrm{per}_{m}\) as an element of the bigger space \(\in\mathsf{Sym}^{n}(\mathbb{C}^{n\times n})^{*}\). 
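For orientation, the smallest case \(m=2\) shows what such an expression can look like: a single sign change writes the permanent as a determinant of linear forms,
\[\operatorname{per}\begin{pmatrix}x_{11}&x_{12}\\ x_{21}&x_{22}\end{pmatrix}=x_{11}x_{22}+x_{12}x_{21}=\det\begin{pmatrix}x_{11}&-x_{12}\\ x_{21}&x_{22}\end{pmatrix},\]
so \(\mathsf{dc}(\operatorname{per}_{2})=2\). The content of Conjecture 4.10 is that no analogous polynomial-size rewriting exists as \(m\) grows.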
The following conjecture was stated in [10]. **Conjecture 6.1** (Mulmuley and Sohoni 2001).: _For all \(c\in\mathbb{N}_{\geq 1}\) we have \(X_{11}^{m^{c}-m}\mathrm{per}_{m}\not\in\Omega_{m^{c}}\) for infinitely many \(m\)._ As discussed in the beginning, if \(\mathrm{per}_{m}=\det_{n}[M\mathbf{X}]\) for some \(n\), using the fact that \(\operatorname{GL}_{n^{2}}\) is dense in \(\mathbb{C}^{n^{2}\times n^{2}}\), we have that \(\mathsf{dc}(\mathrm{per}_{m})\leq n\), and \(\mathrm{per}_{m}\in\Omega_{n}\). Thus, Conjecture 6.1 implies Conjecture 4.10. The following strategy towards Conjecture 6.1 was proposed by Mulmuley and Sohoni in [10]. We consider the space \(\Omega_{n}\) as an algebraic variety and study its structure via its coordinate ring. Specifically, the action of the group \(G=\operatorname{GL}(V)\) on \(\mathsf{Sym}^{n}V^{*}\) induces an action on its graded coordinate ring \(\mathbb{C}[\mathsf{Sym}^{n}V^{*}]=\oplus_{d\in\mathbb{N}}\mathsf{Sym}^{d} \mathsf{Sym}^{n}V\). The space \(\mathsf{Sym}^{d}\mathsf{Sym}^{n}V\) decomposes into irreducible \(GL_{n^{2}}\)-modules with multiplicities exactly the _plethysm_ coefficients. The _coordinate ring \(\mathbb{C}[\Omega_{n}]\) of the orbit closure_\(\Omega_{n}\) is obtained as the homomorphic image of \(\mathbb{C}[\mathsf{Sym}^{n}V^{*}]\) via the restriction of regular functions, and the \(G\)-action descends on this. In particular, we obtain the degree \(d\) part \(\mathbb{C}[\Omega_{n}]_{d}\) of \(\mathbb{C}[\Omega_{n}]\) as the homomorphic \(G\)-equivariant image of \(\mathsf{Sym}^{d}\mathsf{Sym}^{n}V\). As a \(G\)-module, the coordinate ring \(\mathbb{C}[\Omega_{n}]\) is a direct sum of its irreducible submodules since \(G\) is reductive. We say that \(\lambda\) occurs in \(\mathbb{C}[\Omega_{n}]\) if it contains an irreducible \(G\)-module of type \(\lambda\) and denote its multiplicity by \(\delta_{\lambda,d,n}\), so we can write \[\mathbb{C}[\Omega_{n}]_{d}=\mathbb{C}[\overline{GL_{n^{2}}\mathrm{det}_{n}}]_ {d}\simeq\bigoplus_{\lambda\vdash nd}V_{\lambda}^{\oplus\delta_{\lambda,d,n}} \tag{6.2}\] On the other side, we repeat the construction for the permanent. Let \(Z_{n,m}\) denote the orbit closure of the padded permanent \((n>m)\): \[Z_{n,m}:=\overline{\operatorname{GL}_{n^{2}}\cdot X_{11}^{n-m}\mathrm{per}_{m }}\subseteq\mathsf{Sym}^{n}(\mathbb{C}^{n\times n})^{*}. \tag{6.3}\] If \(X_{11}^{n-m}\mathrm{per}_{m}=\det_{n}[M\mathbf{X}]\), then it is contained in \(\Omega_{n}\), then \[Z_{n,m}\subseteq\Omega_{n}, \tag{6.4}\] and the restriction defines a surjective \(G\)-equivariant homomorphism \(\mathbb{C}[\Omega_{n}]\to\mathbb{C}[Z_{n,m}]\) of the coordinate rings. We can decompose this ring into irreducibles likewise, \[\mathbb{C}[Z_{n,m}]_{d}=\mathbb{C}[\overline{GL_{n^{2}}\mathrm{per}_{m}^{n}}]_ {d}\simeq\bigoplus_{\lambda\vdash nd}V_{\lambda}^{\oplus\gamma_{\lambda,d,n,m}}. \tag{6.5}\] If \(\mathbb{C}[\Omega_{n}]\to\mathbb{C}[Z_{n,m}]\), then we must have \(\gamma_{\lambda,d,n,m}\leq\delta_{\lambda,d,n}\) by Schur's lemma. A partition \(\lambda\) for which the opposite holds, i.e. \[\gamma_{\lambda,d,n,m}>\delta_{\lambda,d,n} \tag{6.6}\] is called a _multiplicity obstruction_. Its existence shows that the containment (6.4) is not possible and hence the permanent is not an \(n\times n\) determinant of affine linear forms. 
**Lemma 6.2**.: _If there exists an integer \(d\) and a partition \(\lambda\vdash n\), for which (6.6) holds, then \(\mathsf{dc}(\mathrm{per}_{m})>n\)._ The main conjecture in GCT is thus **Conjecture 6.3** (GCT, Mulmuley and Sohoni [14]).: _There exist multiplicity obstructions showing that \(\mathsf{dc}(\mathrm{per}_{m})>m^{c}\) for every constant \(c\). Namely, for every \(n=O(m^{c})\) there exists an integer \(d\) and a partition \(\lambda\vdash dn\), such that \(\gamma_{\lambda,d,n,m}>\delta_{\lambda,d,n}\)._ A partition \(\lambda\) which does not occur in \(\mathbb{C}[\Omega_{n}]\), but occurs in \(\mathbb{C}[Z_{n,m}]\), i.e. \(\gamma_{\lambda}>0,\delta_{\lambda}=0\), is called an _occurrence obstruction_. Its existence thus also proves that \(Z_{n,m}\not\subseteq\Omega_{n}\) and hence \(\mathsf{dc}(\mathrm{per}_{m})>n\). In [14, 14] it was suggested to prove Conjecture 6.1 by exhibiting occurrence obstructions. More specifically, the following conjecture was stated. **Conjecture 6.4** (Mulmuley and Sohoni 2001).: _For all \(c\in\mathbb{N}_{\geq 1}\), for infinitely many \(m\), there exists a partition \(\lambda\) occurring in \(\mathbb{C}[Z_{m^{c},m}]\) but not in \(\mathbb{C}[\Omega_{m^{c}}]\)._ This conjecture implies Conjecture 6.1 by the above reasoning. ### Structure constants in GCT Conjecture 6.3 and the easier Conjecture 6.4 on the existence of occurrence obstructions has stimulated a lot of research and has been the main focus of researchers in geometric complexity theory. Unfortunately, the easier Conjecture 6.4 turned out to be false. **Theorem 2** (Burgisser-Ikenmeyer-Panova [1]).: _Let \(n,d,m\) be positive integers with \(n\geq m^{25}\) and \(\lambda\vdash nd\). If \(\lambda\) occurs in \(\mathbb{C}[Z_{n,m}]\), then \(\lambda\) also occurs in \(\mathbb{C}[\Omega_{n}]\). In particular, Conjecture 6.4 is false._ Before we explain its proof, we will establish the connection with Algebraic Combinatorics. In [14] it was realized that the GCT-coefficients \(\gamma_{\lambda,d,n}\) can be bounded by rectangular Kronecker coefficients, we have \(\gamma_{\lambda,d,n}(\lambda)\leq g(\lambda,n^{d},n^{d})\) for \(\lambda\vdash nd\). In fact, the multiplicity of \(\lambda\) in the coordinate ring of the orbit \(\mathrm{GL}_{n^{2}}\cdot\mathrm{det}_{n}\) equals the so-called symmetric rectangular Kronecker coefficient \(\mathrm{sk}(\lambda,n^{d})\), see [1], which is in general defined as \[\mathrm{sk}(\lambda,\mu):=\mathrm{mult}_{\lambda}\mathsf{Sym}^{2}(\mathbb{S}_ {\mu})\leq g(\lambda,\mu,\mu).\] Note that an occurrence obstruction for \(Z_{n,m}\not\subseteq\Omega_{n}\) could then be a partition \(\lambda\) for which \(g(\lambda,n^{d},n^{d})=0\) and such that \(\lambda\) occurs in \(\mathbb{C}[Z_{n,m}]\). Since hardly anything was known about the actual coefficients \(\gamma_{\lambda,d,n}\), it was proposed in [14] to find \(\lambda\) for which the Kronecker coefficient \(g(\lambda,n^{d},n^{d})\) vanishes and such that \(\lambda\) occurs in \(\mathbb{C}[Z_{n,m}]\). **Conjecture 6.5** ([14]).: _There exist \(\lambda\), s.t. \(g(\lambda,n^{d},n^{d})=0\) and \(\gamma_{\lambda,d,n,m}>0\) for some \(n>poly(m)\)._ This was the first conjecture to be disproved. **Theorem 3** ([13]).: _Let \(n>3m^{4}\), \(\lambda\vdash nd\). If \(g(\lambda,n^{d},n^{d})=0\), then \(\gamma_{\lambda,d,n,m}=0\)._ In order to show this, we need a characterization of \(\gamma_{\lambda,d,n,m}\), which follows from the work of Kadish and Landsberg [11]. 
**Proposition 6.6** ([11]).: _If \(\gamma_{\lambda,d,n,m}>0\) then \(\ell(\lambda)\leq m^{2}\) and \(\lambda_{1}\geq d(n-m)\)._ The rest revolves around showing that for such \(\lambda\) the relevant Kronecker coefficients would actually be positive. **Theorem 4** ([13]).: _If \(\ell(\lambda)\leq m^{2}\), \(\lambda_{1}\geq nd-md\), \(d>3m^{3}\), and \(n>3m^{4}\), then \(g(\lambda,n\times d,n\times d)>0\), except for 6 special cases._ The proof of this Theorem uses two basic tools. One is the result of Bessendrodt-Behns [1], generalized to Proposition 5.5, that \[g(k^{k},k^{k},k^{k})>0\] for all \(k\geq 1\). The other is the semigroup property Proposition 3.5 applied in various combinations and settings together with conjugation. In particular, we also have that if \(\alpha+_{V}\beta:=(\alpha^{\prime}+\beta^{\prime})^{\prime}\), the vertical addition of Young diagrams, we have \(g(\alpha^{1}+\beta^{1},\alpha^{2}+_{V}\beta^{2},\alpha^{3}+_{V}\beta^{3})>0\) whenever both \(g(\alpha^{1},\alpha^{2},\alpha^{3})>0\) and \(g(\beta^{1},\beta^{2},\beta^{3})>0\). To prove the general positivity result, we cut our partitions \(\lambda\) into squares of sizes \(2\times 2,\ldots,m^{2}\times m^{2}\) and some remaining partition \(\rho\) with at most \(m^{3}\) many columns bigger than \(1\), namely \[\lambda=\sum_{k=2}^{m^{2}}n_{k}(k^{k})+\rho.\] The two rectangles can also be cut into such square pieces giving triples \((k^{k},k^{k},k^{k})\) of positive Kronecker coefficients that can be combined together using the semigroup properties. Finally, for the remaining partition \(\rho\), we show inductively that if \(g(\mu,k_{1}^{k_{1}},k_{1}^{k_{1}})>0\) for some \(\mu\vdash k_{1}^{2}\), then for all \(k,\ell,r\) with \(|\mu|+\ell+r=k^{2}\) we have all positive Kronecker coefficients \[g\left((\mu+\ell)+_{V}1^{r},k^{k},k^{k}\right)>0.\] Using the fact that determinantal complexity for all polynomials of fixed degree and number of variables is finitely bounded, the GCT setup and the bounds on the multiplicities we obtain the following unexpected relation between rectangular Kronecker coefficients and plethysms. Note that the range of \(d\) and \(n\) here put these multiplicities in _stable regime_, i.e. their values stabilize when \(n,d\) increase. **Theorem 5** ([17]).: _For every partition \(\rho\), let \(n\geq|\rho|\), \(d\geq 2\), \(\lambda:=(nd-|\rho|,\rho)\). Then_ \[g(\lambda,n^{d},n^{d})\geq a_{\lambda}(d[n]).\] In fact, the proof gives \(\operatorname{sk}(\lambda,n^{d})\geq a_{\lambda}(d[n])\). The ideas in the proof of Theorem 2 are similar in philosophy, but technically different. We have that \(\operatorname{\mathsf{dc}}(X_{1}^{s}+\cdots+X_{k}^{s})\leq ks\), as seen from the formula size relation and Valiant's proof [11]. Then, after homogenization, we have \(z^{n-s}(v_{1}^{s}+\cdots+v_{k}^{s})\in\Omega_{n}\) for \(n\geq ks\) and linear forms \(z,v_{1},\ldots,v_{k}\). Now we can consider \[\mathsf{Pow}_{k}^{n}:=\overline{\{\ell_{1}^{n}+\cdots+\ell_{k}^{n}\mid\ell_{ i}\in V\}}\in\Omega_{kn},\] and essentially prove, see also [17], that using the same setup for coordinate rings replacing the determinant with the power sum polynomial, see Proposition 6.8, that \[\operatorname{mult}_{\lambda}(\mathbb{C}[P_{k}^{n}]_{d})=a_{\lambda}(d[n])\text { for }k\geq d\] (for the partitions \(\lambda\) of relevance). Comparing multiplicities then we get \(\delta_{\lambda,d,n}=\operatorname{mult}_{\lambda}\mathbb{C}[\Omega_{n}]\geq a _{\lambda}(d[n])\). 
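Returning to the first of these two tools, the smallest instance \(k=2\) of the positivity \(g(k^{k},k^{k},k^{k})>0\) can be checked by hand from the character expression for the Kronecker coefficients: with \(\lambda=\mu=\nu=(2,2)\vdash 4\), the cycle types ordered \((1^{4}),(2,1^{2}),(2^{2}),(3,1),(4)\), and \(\chi^{(2,2)}=(2,0,2,-1,0)\) on these classes,
\[g\big((2,2),(2,2),(2,2)\big)=\sum_{\alpha\vdash 4}\frac{\chi^{(2,2)}(\alpha)^{3}}{z_{\alpha}}=\frac{2^{3}}{24}+\frac{0^{3}}{4}+\frac{2^{3}}{8}+\frac{(-1)^{3}}{3}+\frac{0^{3}}{4}=1>0.\]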
We show using explicit tableaux constructions, see [10], that \(a_{\lambda}(d[n])>0\) for the partitions \(\lambda\) such that \(\lambda_{1}\geq d(n-m)\) and \(\ell(\lambda)\leq m^{2}\). **Remark 6.7**.: In [1] we show that occurrence obstructions don't work not just for permanent versus determinant, but for permanent versus power sum polynomial. Power sums are clearly much weaker computationally than the determinant polynomial. The barrier towards occurrence obstructions comes from the padding of the permanent, which results in partitions \(\lambda\) with long first rows. The long first row makes the relevant multiplicities positive, as can be seen with the various applications of semigroup properties. ### Multiplicity obstructions In order to separate \(\mathsf{VP}\) from \(\mathsf{VNP}\) via determinant versus permanent it is now clear that occurrence obstructions would not be enough. To remedy this there are two approaches. We can replace the \(\det_{n}\) by the Iterated Matrix Multiplication tensor \(\operatorname{tr}(A_{1}\cdots A_{m})\), the trace of the product of \(m\) matrices with affine linear entries of size \(n\times n\). This is another \(\mathsf{VBP}\) universal model, and the measure of complexity is \(n\), the size of the matrices. In this case we will not be padding the permanent, and the partitions involved would not have long first rows. The drawback now is that computing the multiplicities is even more complicated. Alternatively, we can look for _multiplicity obstructions_, i.e. partitions \(\lambda\vdash dn\), for which \[\gamma_{\lambda,d,n,m}<\delta_{\lambda,d,m}\text{ for some }n\gg poly(m),\] where by \(poly(m)\) we mean any fixed degree polynomial in \(m\). As a proof of concept, we consider another separation of polynomials, as done in [10]. Consider the space \(\mathbb{A}_{m}^{n}:=\mathbb{C}[x_{1},\ldots,x_{m}]_{n}\) of complex homogeneous polynomials of degree \(n\) in \(m\) variables. Let \(V:=\mathbb{A}_{m}^{1}\) be the space of homogeneous degree \(1\) polynomials. We compare two subvarieties of \(\mathbb{A}_{m}^{n}\). The first is the so-called _Chow variety_ \[\mathsf{Ch}_{m}^{n}:=\{\ell_{1}\cdots\ell_{n}\mid\ell_{i}\in V\}\subseteq \mathbb{A}_{m}^{n},\] which is the set of polynomials that can be written as a product of homogeneous linear forms. In algebraic complexity theory this set is known as the set of polynomials that have homogeneous depth-two algebraic circuits of the form \(\Pi^{n}\Sigma\), i.e., circuits that consists of an \(n\)-ary top product gate of linear combinations of variables. The second variety is called a _higher secant variety of the Veronese variety_ and can be written as \[\mathsf{Pow}_{m,k}^{n}:=\overline{\{\ell_{1}^{n}+\cdots+\ell_{k}^{n}\mid\ell _{i}\in V\}}\subseteq\mathbb{A}_{m}^{n},\] which is the closure of the set of all sums of \(k\) powers of homogeneous linear forms in \(m\) variables, which also showed up in [10] as mentioned in SS6.2. In algebraic complexity theory this set is known as the set of polynomials that can be approximated arbitrarily closely by homogeneous depth-three powering circuits of the form \(\Sigma^{k}\Lambda^{n}\Sigma\), i.e., a \(k\)-ary sum of \(n\)-th powers of linear combinations of variables. We now consider when \(\mathsf{Pow}_{m,k}^{n}\not\subseteq\mathsf{Ch}_{m}^{n}\), or in other words, when is a power sum not factorizable as a product of linear forms. 
While this is easy to see explicitly, here we will show how GCT can work in practice when there are no _occurrence obstructions_, namely, we will find _multiplicity obstructions_. The approach is in complete analogy to the approach described in Section 6.1 to separate group varieties arising from algebraic complexity theory. Here we've replaced per by a power sum polynomial, and let by the product of linear forms. If \(\mathsf{Pow}_{m,k}^{n}\subseteq\mathsf{Ch}_{m}^{n}\), then the restriction of functions gives a canonical \(\operatorname{GL}_{m}\)-equivariant surjection \[\mathbb{C}[\mathsf{Ch}_{m}^{n}]_{d}\twoheadrightarrow\mathbb{C}[\mathsf{Pow} _{m,k}^{n}]_{d}.\] Decomposing the two modules into irreducibles and comparing multiplicities for each \(V_{\lambda}\) we have that \[\operatorname{mult}_{\lambda}(\mathbb{C}[\mathsf{Ch}_{m}^{n}]_{d})\geq \operatorname{mult}_{\lambda}(\mathbb{C}[\mathsf{Pow}_{m,k}^{n}]_{d}). \tag{6.7}\] for all partitions \(\lambda\) with \(\ell(\lambda)\leq m\). Therefore, a partition \(\lambda\) that violates (6.7) proves that \(\mathsf{Pow}_{m,k}^{n}\not\subseteq\mathsf{Ch}_{m}^{n}\) and is called a _multiplicity obstruction_. If additionally \(\operatorname{mult}_{\lambda}(\mathbb{C}[\mathsf{Ch}_{m}^{n}]_{d})=0\), then \(\lambda\) would be an _occurrence obstruction_. Since these are \(GL_{m}\) modules we must have \(\ell(\lambda)\leq m\) and since the total degree is \(dn\) we have \(\lambda\vdash dn\). **Theorem 6** ([19]).: (1) Asymptotic result: _Let \(m\geq 3\), \(n\geq 2\), \(k=d=n+1\), \(\lambda=(n^{2}-2,n,2)\). We have \(\mathrm{mult}_{\lambda}(\mathbb{C}[\mathsf{Ch}_{m}^{n}]_{d})<\mathrm{mult}_{ \lambda}(\mathbb{C}[\mathsf{Pow}_{m,k}^{n}]_{d})\), i.e., \(\lambda\) is a multiplicity obstruction that shows \(\mathsf{Pow}_{m,k}^{n}\not\subseteq\mathsf{Ch}_{m}^{n}\)._ (2) Finite result: _In two finite settings we can show a slightly stronger separation:_ (a) _Let \(k=4\), \(n=6\), \(m=3\), \(d=7\), \(\lambda=(n^{2}-2,n,2)=(34,6,2)\). Then \(\mathrm{mult}_{\lambda}(\mathbb{C}[\mathsf{Ch}_{m}^{n}]_{d})=7<8=\mathrm{ mult}_{\lambda}(\mathbb{C}[\mathsf{Pow}_{m,k}^{n}]_{d})\), i.e., \(\lambda\) is a multiplicity obstruction that shows \(\mathsf{Pow}_{m,k}^{n}\not\subseteq\mathsf{Ch}_{m}^{n}\)._ (b) _Similarly, for \(k=4\), \(n=7\), \(m=4\), \(d=8\), \(\lambda=(n^{2}-2,n,2)=(47,7,2)\) we have \(\mathrm{mult}_{\lambda}(\mathbb{C}[\mathsf{Ch}_{m}^{n}]_{d})<11=\mathrm{mult }_{\lambda}(\mathbb{C}[\mathsf{Pow}_{m,k}^{n}]_{d})\), i.e., \(\lambda\) is a multiplicity obstruction that shows \(\mathsf{Pow}_{m,k}^{n}\not\subseteq\mathsf{Ch}_{m}^{n}\)._ _Both separations_ (a) _and_ (b) _cannot be achieved using occurrence obstructions, even for arbitrary \(k\): for all partitions \(\lambda\) of \(\ell(\lambda)\leq m\) that satisfy \(a_{\lambda}(d[n])>0\) we have \(\mathrm{mult}_{\lambda}(\mathbb{C}[\mathsf{Ch}_{m}^{n}]_{d^{\prime}})>0\) in these settings._ The proof involves two facts which relate the desired multiplicities with plethyss. We have that \(a_{\lambda}(d[n])=\mathrm{mult}_{\lambda}(\mathbb{C}[\mathsf{A}_{m}^{n}]_{d})\) **Proposition 6.8** ([19]).: _Let \(\lambda\vdash dn\) with \(\ell(\lambda)\leq m\). If \(k\geq d\) then \(\mathrm{mult}_{\lambda}\mathbb{C}[\mathsf{Pow}_{m,k}^{n}]_{d}=a_{\lambda}(d[n])\)._ We also have that **Lemma 6.9** ([19]).: _Let \(\lambda\vdash nm\) with \(\ell(\lambda)\leq m\leq n\). 
Then \(\mathrm{mult}_{\lambda}(\mathbb{C}[\mathsf{Ch}_{m}^{n}]_{d})\leq a_{\lambda}(n[d])\)._ Finally, we find explicit values and relations for the plethysm coefficients and prove, in particular, the following. **Theorem 6.10** ([19]).: _Let \(\lambda=(n^{2}-2,n,2)\vdash n(n+1)\) and let \(d=n+1\). Then_ \[a_{\lambda}(d[n])=1+a_{\lambda}(n[d]).\] In particular, this confirms Foulkes' Conjecture 5.8 in this case.

## 7. Discussion

As we have seen, structure constants from Algebraic Combinatorics, chiefly the Kronecker and plethysm coefficients, play a crucial role in Geometric Complexity Theory in the quest to separate algebraic complexity classes, or simply to separate two explicit polynomials. In order to achieve such a separation we need to understand the multiplicities of irreducible components in the coordinate rings of the orbit closures of the given polynomials. As it turned out, considering only whether these multiplicities are \(0\) or not is insufficient in most cases of interest. This means that we need to understand better what these multiplicities are and how large they can be. One aspect of this understanding would be to find a combinatorial interpretation for them. For the Kronecker coefficients this has been an open problem in Algebraic Combinatorics and Representation Theory for more than \(80\) years. The facts that deciding positivity of the Kronecker coefficients is \(\mathsf{NP}\)-hard, and that computing the character squares of the symmetric group is not in \(\mathsf{\#P}\) (unless the polynomial hierarchy collapses to the second level), are evidence that these problems, fundamental as they are, may not be solvable in the way we expect. Computational Complexity theory can help answer these questions and would be especially useful for _negative_ answers, should that turn out to be the situation. Finally, moving beyond positivity and complexity of structure constants, in the absence of exact formulas we turn towards their asymptotic properties and effective bounds. Estimating how large these multiplicities are for certain families is yet another big open problem, see [10]. Such estimates could potentially close the circle back to GCT.
2309.17394
Topological interfaces crossed by defects and textures of continuous and discrete point group symmetries in spin-2 Bose-Einstein condensates
We systematically and analytically construct a set of spinor wave functions representing defects and textures that continuously penetrate interfaces between coexisting, topologically distinct magnetic phases in a spin-2 Bose-Einstein condensate. These include singular and nonsingular vortices carrying mass or spin circulation that connect across interfaces between biaxial- and uniaxial nematic, cyclic and ferromagnetic phases, as well as vortices terminating as monopoles on the interface ("boojums"). The biaxial-nematic and cyclic phases exhibit discrete polytope symmetries featuring non-Abelian vortices and we investigate a pair of non-commuting line defects within the context of a topological interface. By numerical simulations, we characterize the emergence of non-trivial defect core structures, including the formation of composite defects. Our results demonstrate the potential of spin-2 Bose-Einstein condensates as experimentally accessible platforms for exploring interface physics, offering a wealth of combinations of continuous and discrete symmetries.
Giuseppe Baio, Matthew T. Wheeler, David S. Hall, Janne Ruostekoski, Magnus O. Borgh
2023-09-29T16:56:41Z
http://arxiv.org/abs/2309.17394v2
Topological interfaces crossed by defects and textures of continuous and discrete point group symmetries in spin-2 Bose-Einstein condensates ###### Abstract We systematically and analytically construct a set of spinor wave functions representing defects and textures that continuously penetrate interfaces between coexisting, topologically distinct magnetic phases in a spin-2 Bose-Einstein condensate. These include singular and nonsingular vortices carrying mass or spin circulation that connect across interfaces between biaxial- and uniaxial nematic, cyclic and ferromagnetic phases, as well as vortices terminating as monopoles on the interface ("boojums"). The biaxial-nematic and cyclic phases exhibit discrete polytope symmetries featuring non-Abelian vortices and we investigate a pair of non-commuting line defects within the context of a topological interface. By numerical simulations, we characterize the emergence of non-trivial defect core structures, including the formation of composite defects. Our results demonstrate the potential of spin-2 Bose-Einstein condensates as experimentally accessible platforms for exploring interface physics, offering a wealth of combinations of continuous and discrete symmetries. ## I Introduction When topologically distinct phases coexist in a continuous and coherent ordered medium, a topological interface may form at the phase boundary where the different broken symmetries of their order-parameters connect smoothly. Such interfaces were first discussed in the context of domain walls in the early universe [1; 2; 3], where they may form the termination points of cosmic strings [4], and later in brane models in superstring theory [5; 6; 7]. They appear universally across many areas of physics, from the \(A\)-\(B\) phase boundary in superfluid liquid \({}^{3}\)He [8; 9; 10; 11; 12; 13], via atomic Bose-Einstein condensates (BECs) [14; 15; 16; 17; 18; 19], to quantum chromodynamics [20; 21; 22]. The different order-parameter symmetries imply that the bulk medium on either side of the interface supports different families of topological defects and textures, which therefore cannot cross the interface unchanged. Consequently, defects and textures penetrating through the interface must either terminate at the interface or continuously transform into different defects and textures of the corresponding phases. Due to the ubiquitous nature of topological interfaces, their study in controlled experiments becomes of general importance, inspiring the use of laboratory systems as emulators for interface physics in contexts otherwise not amenable to experimental observations, such as the simulation of brane-collision processes [12]. Topological-interface physics becomes especially intriguing when the medium on either or both sides of the interface exhibits discrete polytope point-group order-parameter symmetry [23], leading to defects whose charges depend on the presence of other defects in the system and whose dynamics is highly constrained [24; 25]. Such order-parameter symmetries arise in particular phases of spin-2 and spin-3 BECs [26; 27; 28; 29; 30; 31; 32; 33; 34; 35; 36], which have consequently been proposed, e.g., as candidates for quantum-computation applications using non-Abelian vortex braiding [37]. 
Optically trapped spinor BECs [38; 39], where the internal spin-degrees of freedom are not frozen out by strong magnetic fields [40], provide an ideal testing ground for investigating interface physics with different bulk regions exhibiting different magnetic phases and defects [16; 17; 18]. Topological interfaces can also form at vortex cores in spinor BECs when the singularity of the bulk order parameter with one symmetry is accommodated by filling the defect core with atoms in a different magnetic phase and symmetry [41], as experimentally realized in spin-1 [42; 43] and spin-2 [23] BECs. Here we consider engineering of spatially extended topological interfaces between coexisting bulk regions of distinct spin-2 magnetic phases that are analogous to topological bulk interfaces studied in high-energy physics, superfluid liquid \({}^{3}\)He, and in spin-1 BECs. Both sides of the interface may then harbor defects and textures with continuous and discrete polytope symmetries that terminate at or cross the interface. The phase diagrams of spinor BECs [38] exhibit a rich variety of magnetic phases with different order-parameter symmetries, supporting different families of topological defects [44]. These include singular vortices carrying both integer [45; 46; 47; 48; 49; 30; 41; 42; 43; 41; 43; 44; 45; 46; 47; 48; 49] and fractional [23; 34; 35; 36; 41; 42; 43; 44; 45; 46; 47; 48; 49] charges, as well as nonsingular vortices (2D Skyrmions) [42; 54; 55; 56; 57; 58; 59; 60; 61; 62]--whose corresponding objects in magnetic solid-state systems have attracted recent interest [63; 64]--wall-vortex complexes [65; 66], and monopoles [67; 68; 69; 70; 71; 72; 73; 74; 75]. In addition, spinor BECs support 3D Skyrmions [76; 77; 78; 79; 80; 81; 82; 83]--topologically non-trivial textures first proposed in nuclear physics [84]--and knotted solitons [85; 86] with parallels in classical field theories [87; 88; 89] and even magnetic materials [90]. It has recently been theoretically proposed that these complex topological objects can be generated in atomic systems through optical excitation [91]. Here we analytically construct spinor wave functions representing continuous connections of defects and textures across topological interfaces in spin-2 BECs and numerically simulate their core structure for illustrative examples. Radically different symmetry properties of the magnetic phases mean that their defects and textures exhibit distinct and generally incommensurate topologies that can inhibit connections across the interfaces. We systematically identify and explicitly con struct a set of allowed connections for interfaces between biaxial nematic (BN), uniaxial nematic (UN), cyclic (C), and ferromagnetic (FM) phases. These include singular and non-singular vortices carrying mass circulation as well as spin vortices. Numerical simulation reveals the appearance of complex defect-core structures, including non-axisymmetric cores at the UN-BN interface and composite cores [62] of C-BN spin vortices. Defects may also terminate at the interface, either with a vortex-free state on the opposite side, or as a monopole on the interface, similar to "boojums" on the \(A\)-\(B\) phase boundary in superfluid liquid \({}^{3}\)He [13; 92]. We demonstrate that monopole solutions exist on C-BN, C-FM, and UN-BN interfaces as the termination point of singular vortices, and numerically show the formation of half-quantum Alice rings [69; 75] from singular point defects due to dissipation. 
With techniques for controlled creation of vortices in several phases with different internal symmetries having recently been developed [23], our analytical and numerical results highlight how spinor-BEC systems are poised as immediate candidates for the realization of topological interfaces. Spin-2 topological interfaces offer the potential even for non-Abelian defect physics, which we numerically simulate here by constructing a pair of singular non-commuting fractional BN vortices that terminate at an interface. This article is organized as follows: Section II, provides a brief overview of the mean-field theory and magnetic phases and order-parameter symmetries of spin-2 BECs, as well as establishes defect nomenclature. In Sec. III, we then present continuously interpolating spinors across topological interfaces and construct interface-crossing defects and textures. Numerical results are in Sec. IV before concluding remarks and experimental discussion in Sec. V. ## II Spin-2 BECs mean-field theory In the Gross-Pitaevskii mean-field theory, the spin-2 BEC is described by a five-component wave function \(\Psi=\sqrt{n}(\zeta_{2},\zeta_{1},\zeta_{0},\zeta_{-1},\zeta_{-2})^{\rm T}\), where \(n({\bf r})\) is the condensate density, which together with the normalized spinor \(\zeta({\bf r})\) gives the field in the \(m=+2,+1,0,-1,-2\) magnetic sublevels. The Hamiltonian density reads [38; 93] \[\mathcal{H}=\mathcal{H}_{0}+\frac{c_{0}}{2}n^{2}+\frac{c_{1}}{2}n^{2}|(\hat{ \bf F})|^{2}+\frac{c_{2}}{2}n^{2}|A_{20}|^{2}, \tag{1}\] with the single-particle contribution \[\mathcal{H}_{0}=\frac{\hbar^{2}}{2M}|\nabla\Psi|^{2}+\left(\frac{1}{2}M\omega ^{2}r^{2}-p\langle\hat{F}_{z}\rangle+q\langle\hat{F}_{z}^{2}\rangle\right)n, \tag{2}\] where \(M\) is the atomic mass and \(\omega\) is the angular frequency of the harmonic trap, which we, for simplicity, assume to be isotropic. The term \(p=-g\mu_{B}B\) is a linear Zeeman shift arising from a uniform magnetic field \(B\) oriented along \(z\), where \(g=1/2\) is the Lande factor for \(F=2\) and \(\mu_{B}\) is the Bohr magneton. A quadratic Zeeman shift \(q\) also arises from such a magnetic field, and its exact form is obtained by means of the Breit-Rabi formula [94]. Besides magnetic fields, the values of \(q\) and \(p\) can be experimentally controlled by ac Stark shifts induced by microwaves or lasers [95; 96]. Interaction terms in Eq. (1) arise from the three possible \(s\)-wave scattering channels of colliding spin-2 atoms with scattering lengths \(a_{f}\), corresponding to total angular momentum \(f=0\), 2 and 4, respectively. These combine to form three interactions terms in Eq. (1): First, a contribution of strength \(c_{0}=4\pi\hbar^{2}(3a_{4}+4a_{2})/7M\) that depends only on the atomic density. A second interaction term of strength \(c_{1}=4\pi\hbar^{2}(a_{4}-a_{2})/7M\) also depends on the magnitude of the local condensate spin vector \(\langle\hat{\bf F}\rangle=\zeta^{\dagger}\hat{\bf F}\zeta\), constructed from the vector of spin-2 angular momentum operators \(\hat{\bf F}=\left(\hat{F}_{x},\hat{F}_{y},\hat{F}_{z}\right)\). In addition to the density- and spin-dependent interactions, a third interatomic interaction term arises that is proportional to \[\left|A_{20}\right|^{2}=\frac{1}{5}\left|2\zeta_{2}\zeta_{-2}-2\zeta_{1}\zeta _{-1}+\zeta_{0}^{2}\right|^{2}, \tag{3}\] where \(A_{20}\) is the spin-singlet duo amplitude. The strength of this interaction is \(c_{2}=4\pi\hbar^{2}(4a_{4}-10a_{2}+7a_{0})/7M\). 
We consider magnetic phases as stationary solutions to the Gross-Pitaevskii energy functional (1) when we ignore the harmonic trapping potential. The steady-state spinors are found as optimal points of the mean-field density functional. These satisfy the general condition \(\delta\mathcal{H}/\delta\zeta_{m}^{\star}=0\), i.e., \[\left[-p\hat{F}_{z}+q\hat{F}_{z}^{2}+\tilde{c}_{0}\,\zeta^{\dagger}\zeta+ \tilde{c}_{1}\,\langle\hat{\bf F}\rangle\cdot\hat{\bf F}+\frac{\tilde{c}_{2}} {5}\langle\hat{\mathcal{T}}\zeta\rangle^{\dagger}\,\zeta\hat{\mathcal{T}}- \mu\right]\zeta=0, \tag{4}\] where \(\tilde{c}_{0,1,2}=n\,c_{0,1,2}\), \(\mu\) is the chemical potential, and the time-reversal operator \(\hat{\mathcal{T}}\) is defined by the action \((\hat{\mathcal{T}}\zeta)_{m}=(-1)^{\rm m}\zeta_{-m}^{\star}\) on the spinor components. The definition in Eq. (3) can then be written \(A_{20}=(\hat{\mathcal{T}}\zeta)^{\dagger}\zeta/\sqrt{5}\)[33]. Equation (4) generally results in a homogeneous, nonlinear, algebraic system for the unknowns \(\zeta_{m}\)[97], which may be solved to find the stationary states. Spinor-BEC experiments [23; 43; 61; 86; 87; 26] frequently rely on dynamically stable stationary solutions (i.e., robust with respect to small dynamical perturbations) that are long-lived on the experimental time scale. A subset of the stationary solutions may also be energetically (meta-)stable, corresponding to local energetic minima. In the absence of Zeeman shifts (\(p=q=0\)), the uniform spin-2 BEC exhibits five distinct magnetic phases, of which four also appear as uniform ground states for different values of the interatomic interactions \(c_{0,1,2}\)[26; 38; 97]. They are characterized by the symmetries of the corresponding order parameter, which are illustratively visualized using a spherical-harmonics representation of the spinor (see also Ref. [23] for a detailed discussion): \[Z(\theta,\varphi)=\sum_{n=-2}^{2}\zeta_{m}Y_{F=2}^{m}(\theta,\varphi), \tag{5}\] where \(Y_{F}^{m}(\theta,\varphi)\) denotes the spherical harmonic of degree \(F\) and order \(m\). For each phase we state a representative spinor, from which any other may be reached by application of a gauge transformation together with a spin rotation, defined by three Euler angles, such that \[\begin{split}\zeta\to e^{i\pi}\hat{U}(\alpha,\beta,\gamma)\, \zeta,\\ \hat{U}(\alpha,\beta,\gamma)=\exp\!\left(-i\hat{F}_{z}\alpha \right)\exp\!\left(-i\hat{F}_{y}\beta\right)\exp\!\left(-i\hat{F}_{z}\gamma \right)\!.\end{split} \tag{6}\] The family of states thus generated forms the order-parameter space \(\mathcal{M}\) of each magnetic phase as a subset of the full \(G=\mathrm{U}(1)\times\mathrm{SO}(3)\) symmetry group of the Hamiltonian density at zero level shifts. The spin-2 FM (FM\({}_{2}\)) phase, exemplified by \[\xi^{\mathrm{FM}_{2}}=(1,0,0,0,0)^{\mathrm{T}}, \tag{7}\] is characterized by \(|\langle\hat{\mathbf{F}}\rangle|=2\) and \(|A_{20}|^{2}=0\). The order-parameter space \(\mathcal{M}_{\mathrm{FM}_{2}}=\mathrm{SO}(3)/\mathbb{Z}_{2}\)[23; 98] is characterized by a continuous spin-gauge rotation symmetry [Fig. 1(a)]. A further FM phase with \(|\langle\hat{\mathbf{F}}\rangle|=1\) (FM\({}_{1}\)), exemplified by \[\xi^{\mathrm{FM}_{1}}=(0,1,0,0,0)^{\mathrm{T}} \tag{8}\] also forms a stationary state with an \(\mathcal{M}_{\mathrm{FM}_{1}}=\mathrm{SO}(3)\) order-parameter space [98] similar to the FM phase in spin-1 BECs [54; 55; 42]. 
The C phase, exemplified by \[\zeta^{\mathrm{C}}=\frac{1}{2}\left(1,0,i\sqrt{2},0,1\right)^{\mathrm{T}}, \tag{9}\] by contrast, has a discrete polytope order-parameter symmetry [Fig. 1(e)] where the tetrahedral subgroup of rotations combined with the condensate phase, collectively \(T_{\pm\mathbf{F}}\), gives rise to the manifold \(\mathcal{M}_{\mathrm{C}}=[\mathrm{SO}(3)\times\mathrm{U}(1)]/T_{\pm\mathbf{F}}\)[30; 99], with spinors characterized by \(|\langle\hat{\mathbf{F}}\rangle|=0\) as well as \(|A_{20}|^{2}=0\). In the C phase, trios of atoms combine to form a spin-singlet state. The amplitude of singlet-trio formation reads \[A_{30}=\frac{3\sqrt{6}}{2}\left(\zeta_{1}^{2}\zeta_{-2}+\zeta_{-1}^{2}\zeta_{ 2}\right)+\zeta_{0}\left(\zeta_{0}^{2}-3\zeta_{1}\zeta_{-1}-6\zeta_{2}\zeta_{ -2}\right), \tag{10}\] such that for any C spinor, \(|A_{30}|^{2}=2\)[93; 38]. The remaining two stationary solutions at zero field are nematic phases with \(|\langle\hat{\mathbf{F}}\rangle|=0\) and \(|A_{20}|^{2}=1/5\). In the UN phase, represented by \[\zeta^{\mathrm{UN}}=(0,0,1,0,0)^{\mathrm{T}}, \tag{11}\] the order parameter manifold is \(\mathcal{M}^{\mathrm{UN}}=\mathrm{U}(1)\times(S^{2}/\mathbb{Z}_{2})\), i.e., parametrized by an unoriented vector, the nematic axis \(\hat{\mathbf{d}}\) corresponding to a local symmetry axis, and a condensate phase [99; 100] as illustrated in Fig. 1(c). The BN phase exemplified by \[\zeta^{\mathrm{BN}}=\frac{1}{\sqrt{2}}(1,0,0,0,1)^{\mathrm{T}}, \tag{12}\] by contrast, exhibits a discrete, four-fold dihedral symmetry that combines with \(\pi\)-shifts of the condensate phase [Fig. 1(d)], denoted \((D_{4})_{\pm\mathbf{F}}\). The order-parameter space is thus \(\mathcal{M}^{\mathrm{BN}}=[\mathrm{SO}(3)\times\mathrm{U}(1)]/(D_{4})_{\pm \mathbf{F}}\)[35; 99; 100]. In addition to the different order-parameter symmetry, the two nematic states can also be distinguished by the spin-singlet trio amplitude, where \(|A_{30}|=1\) in the UN phase and \(|A_{30}|=0\) for the BN. The UN and BN phases compete as the likely ground-state phase for spin-2 BECs of \({}^{87}\)Rb [97; 101; 102], \({}^{85}\)Rb [101], and \({}^{23}\)Na [97], though uncertainties overlap with the C phase. The two nematic phases are energetically degenerate at the mean-field level for \(p=q=0\). Beyond mean-field, the degeneracy may be broken by quantum fluctuations through order-by-disorder processes [103; 100], but it can also be lifted already at the mean-field level [35] through a quadratic level shift. When \(p\neq 0\) and \(q\neq 0\) in Eq. (2), the expressions for the magnetic phases as stationary solutions to the Gross-Pitaevskii energy functional (1) become considerably more complex. These solutions and their dynamical and energetic stability have been investigated for both spin-1 and spin-2 BECs [104; 105; 106; 107; 97]. In particular, the stationary solutions of the magnetic phases no longer follow the straightforward classification and the Zeeman shifts can, e.g., cause the condensate to adopt the properties of a FM phase even when nematic or C phases are favoured by the interactions, and vice versa. The magnetic phases are then described by spinors where each component is a function of the Zeeman shifts and interatomic interactions, continuously interpolating between magnetic phases that would exist for \(p=q=0\). The symmetries of the order parameter determine the topological properties of defects and textures [25]. 
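Before turning to the defects these symmetries support, the invariants quoted for the representative spinors (7)-(12) are simple to verify numerically. The following minimal numpy sketch (matrix construction and names are ours, for illustration) evaluates \(|\langle\hat{\mathbf{F}}\rangle|\), \(|A_{20}|^{2}\) and \(|A_{30}|^{2}\) for each phase:

```python
import numpy as np

# Spin-2 matrices in the basis (m = 2, 1, 0, -1, -2).
cp = np.array([2.0, np.sqrt(6), np.sqrt(6), 2.0])   # <m+1|F_+|m> matrix elements
Fp = np.diag(cp, k=1)                                # raising operator
Fm = Fp.conj().T
Fx, Fy, Fz = (Fp + Fm) / 2, (Fp - Fm) / (2j), np.diag([2., 1., 0., -1., -2.])

def invariants(zeta):
    """Return |<F>|, |A_20|^2 and |A_30|^2 [Eqs. (3) and (10)] of a spinor."""
    z = np.asarray(zeta, dtype=complex)
    z = z / np.linalg.norm(z)
    spin = np.array([np.vdot(z, F @ z) for F in (Fx, Fy, Fz)]).real
    a20 = (2 * z[0] * z[4] - 2 * z[1] * z[3] + z[2] ** 2) / np.sqrt(5)
    a30 = (3 * np.sqrt(6) / 2) * (z[1] ** 2 * z[4] + z[3] ** 2 * z[0]) \
          + z[2] * (z[2] ** 2 - 3 * z[1] * z[3] - 6 * z[0] * z[4])
    return np.linalg.norm(spin), abs(a20) ** 2, abs(a30) ** 2

phases = {
    "FM2": (1, 0, 0, 0, 0),
    "FM1": (0, 1, 0, 0, 0),
    "C":   (0.5, 0, 0.5j * np.sqrt(2), 0, 0.5),
    "UN":  (0, 0, 1, 0, 0),
    "BN":  (1 / np.sqrt(2), 0, 0, 0, 1 / np.sqrt(2)),
}
for name, zeta in phases.items():
    print(name, "|<F>| = %.3f, |A20|^2 = %.3f, |A30|^2 = %.3f" % invariants(zeta))
# Expected: FM2 (2, 0, 0); FM1 (1, 0, 0); C (0, 0, 2); UN (0, 0.2, 1); BN (0, 0.2, 0).
```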
The non-trivial elements of the first homotopy group \(\pi_{1}(\mathcal{M})\) represent singular line defects. These include singly and multiply quantized vortices defined by a winding of the condensate phase \(\tau\) in Eq. (6), and also vortices that arise purely from spin rotations and therefore carry spin circulation but no superfluid mass current (spin vortices). Mass and spin circulation may also combine such that a fractional \(2\pi w\) winding of \(\tau\) is compensated by a simultaneous spin rotation through \(2\pi\sigma\) (about some symmetry axis of the order parameter). We may thus denote vortex charges by \((w,\sigma)\) whenever these are uniquely defined [38]. For convenience, vortices with integer \(w\) defined by winding of the condensate phase alone will in the following be referred to as phase vortices. Singular vortices with \(w=1/2\) are referred to as half-quantum vortices (HQVs) and appear in the BN phase [35]. By analogy, spin vortices with \(w=0\) (hence carrying no mass circulation) and \(\sigma=1/2\) are called spin-HQVs and appear in the UN [38], BN [35] and C phases [30; 98]. The C phase also supports \(1/3\)- and \(2/3\)-quantum vortices (i.e., \(|w|=1/3\) or \(2/3\)) [30; 34; 98]. Additionally, spin-2 phases support nonsingular vortices, which are textures that carry mass and/or spin circulation. Such vortices are trivial in \(\pi_{1}(\mathcal{M})\), but may be topologically classified by the second homotopy group \(\pi_{2}(\mathcal{M})\) when the boundary conditions on the texture are fixed. The non-trivial elements of \(\pi_{2}(\mathcal{M})\) also correspond to the topological charges of singular point defects (monopoles) [25]. The UN phase is the only example of a spin-2 order parameter manifold where the second homotopy group is non-trivial [99], and thus supports topologically stable monopoles, in which the nematic axis forms a radial hedgehog texture, i.e., \(\hat{\mathbf{d}}=(\cos\varphi\sin\theta,\sin\varphi\sin\theta,\cos\theta)\), where \((\theta,\varphi)\) are the spherical coordinates centered on the point singularity. Despite \(\pi_{2}(\mathcal{M})=0\) in the remaining magnetic phases, radial hedgehog textures can still exist, albeit with at least one associated line singularity extending away from the monopole. In the FM phases, these are the generalizations of the spin-1 Dirac monopoles [68; 71], where the radial hedgehog appears in the condensates spin (\(\hat{\mathbf{F}}\)), while in the BN and C phases, the monopole can be formed by a chosen order-parameter symmetry axis. ## III Topological interfaces and defect connections in spin-2 Becs In a spinor superfluid, magnetic phases with different order-parameter symmetries can coexist. For example, this situation arises spontaneously due to energy relaxation of defect cores [49; 49; 108; 41; 49], as also observed in detailed experiments [42; 23; 43]. The size of the defect core can then be understood from the healing lengths arising from the contributions to the interaction energy. There are consequently three such length scales in the spin-2 BEC: \[\xi_{d}=\ell\sqrt{\frac{\hbar\omega}{2nc_{0}}},\ \ \xi_{F}=\ell\sqrt{\frac{ \hbar\omega}{2n|c_{1}|}},\ \ \xi_{a}=\ell\sqrt{\frac{\hbar\omega}{2n|c_{2}|}}, \tag{13}\] where \(\ell=(\hbar/M\omega)^{1/2}\) is the harmonic oscillator length. These describe, respectively, the distance over which perturbations of the superfluid density, condensate spin, and singlet duo amplitude heal to the bulk value. 
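Equation (13) is straightforward to evaluate once the interaction energies \(nc_{0,1,2}\) are specified. The short sketch below (with purely illustrative input numbers, not parameters taken from this work) implements the three formulas:

```python
import numpy as np

def healing_lengths(n_c0, n_c1, n_c2, hbar_omega=1.0, ell=1.0):
    """Healing lengths of Eq. (13); n*c_i and hbar*omega must share the same
    (arbitrary) energy units, ell is the oscillator length (hbar/M omega)^1/2."""
    return tuple(ell * np.sqrt(hbar_omega / (2.0 * abs(nc)))
                 for nc in (n_c0, n_c1, n_c2))

# Purely illustrative ratios: a density interaction energy of 100 hbar*omega
# with much weaker spin-dependent and singlet interactions gives
# xi_F, xi_a > xi_d, i.e. defect cores wider than the density healing length.
xi_d, xi_F, xi_a = healing_lengths(100.0, 5.0, 2.0)
print(xi_d, xi_F, xi_a)  # ~0.071, ~0.316, 0.5 (in units of ell)
```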
Typically, we have \(\xi_{F},\xi_{a}>\xi_{d}\) in current experimental realizations, allowing the core of a singular defect to reduce its energy by expanding and filling with a different superfluid phase [69]. The condensate wave function then smoothly interpolates between the coexisting phases in the bulk and the defect core, establishing a topological interface between them. An extended topological interface may also be purposefully engineered to create spatially separate bulk regions with different order-parameter symmetries within the same, continuous superfluid. This can be achieved through spatial variation of interaction strengths in the Hamiltonian (1), such that different regions exhibit the characteristics of different magnetic phases [16; 17]. The \(s\)-wave scattering lengths could be manipulated, e.g., using optical or microwave Feshbach resonances [109; 110]. Alternatively, it is possible to exploit stationary solutions in the presence of non-vanishing Zeeman shifts, in which case the BEC with spatially varying \(p\) or \(q\) can continuously interpolate between magnetic phases that would exist for \(p=q=0\)[18]. Both approaches can result in stable stationary wave functions that interpolate between the bulk phases of different order-parameter symmetries for defects and textures, separated by a coherent topological interface. Here we use these to explicitly construct wave functions that smoothly connect vortices, other defects or nonsingular textures in the limiting phases. From a representative spinor interpolating between chosen bulk phases, spinor vortices can typically be constructed by defining how the complex argument \(\chi_{m}=\mathrm{Arg}(\zeta_{m})=k_{m}\varphi\), with \(k_{m}\in\mathbb{Z}\), of each component winds as a function of the azimuthal angle \(\varphi\) about the vortex line. Moreover, textures and defects that occupy several spin components are constructed by the symmetry-group transformation of Eq. (6), where the Euler angles can be spatially dependent. Provided that a single transformation yields well-defined, single-valued states in both sides of the interface, a continuous connection across an interface is generated. Examples of this are shown in Secs. III.1-III.4. Similar solutions can also describe filled vortex cores [41] and composite defects [62] when the interpolation parameter varies with the radial distance from a singular defect line. ### Uniaxial to biaxial nematic (UN-BN) We first focus on stationary spinors satisfying Eq. (4) and interpolating between UN and BN phases. Since we therefore require \(\langle\hat{\mathbf{F}}\rangle=0\) and constant \(A_{20}\), the five equations in the steady-state system (4) can be treated as two independent \(2\times 2\) linear systems corresponding to \(m=\pm 2\) and \(m=\pm 1\), respectively, and a single equation for \(m=0\)[38]. We choose to work in the three-component limit with \(\zeta_{\pm 1}=0\), resulting in the spinor parametrization \[\zeta^{\mathrm{UN-BN}}=\frac{1}{2}\left(e^{i\chi_{2}}\sqrt{1-\eta},0,e^{i\chi _{2}}\sqrt{2(1+\eta)},0,e^{i\chi_{-2}}\sqrt{1-\eta}\right)^{\mathrm{T}}, \tag{14}\] where \(\chi_{m}\) are arbitrary phase coefficients that can assume fixed values or be spatially wound in defects and textures. Crucially, Figure 1: Spherical-harmonic representations [Eq. (5)] of the magnetic phases of a spin-2 BEC. 
(a) \(\mathrm{FM}_{2}\) and (b) \(\mathrm{FM}_{1}\) whose order-parameter spaces represent spatial rotations (here \(\langle\hat{\mathbf{F}}\rangle/|\langle\hat{\mathbf{F}}\rangle|=\hat{\mathbf{ e}}_{0,0,1}\)). (c) The C phase combines the discrete symmetry of a tetrahedron with the condensate phase. (d) The UN phase, whose order parameters is given by an unoriented, nematic axis \(\hat{\mathbf{d}}=\hat{\mathbf{e}}_{0,0,1}\)), together with the condensate phase. (e) The BN phase combines the discrete symmetry of a square with the phase. \(\eta\in[-1,1]\) now forms an interpolating parameter between UN and BN magnetic phases: for \(\eta=1\), only the \(\zeta_{0}\) component is non-zero, representing the familiar UN phase in Eq. (11), with the nematic axis aligned with the \(z\)-axis. Similarly, for \(\eta=-1\), we retrieve the familiar BN phase in Eq. (12). However, the interpolating region harbours additional complexity, revealed by the variation of the spin-singlet duo and trio amplitudes as a function of \(\eta\) and \(\chi_{m}\). Calculating these using Eq. (14) gives \[|A_{20}|^{2} =\frac{1}{10}\bigg{[}\left(1-\eta^{2}\right)\cos\left(\chi_{2}+ \chi_{-2}-2\chi_{0}\right)+1+\eta^{2}\bigg{]}, \tag{15}\] \[|A_{30}|^{2} =\frac{1+\eta}{4}\bigg{[}3\left(\eta^{2}-1\right)\cos\left(\chi_{ 2}+\chi_{-2}-2\chi_{0}\right)\] \[\qquad\qquad+\eta\left(5\eta-8\right)+5\bigg{]}. \tag{16}\] Hence, we can study the behavior of Eqs. (15) and (16) in the parameter space \((\chi,\eta)\), where \(\chi=\chi_{2}+\chi_{-2}-2\chi_{0}\), as shown in Fig. 2. First consider \(\chi=0\). We then notice from the variation of \(|A_{30}|^{2}\) in Fig. 2b that instead of a monotonic growth from the minimum \(|A_{30}|^{2}=0\) (BN), to \(|A_{30}|^{2}=1\) (UN), the BN phase reappears around \(\eta=0.5\). If instead \(\chi=\pi\), the C phase arises in the vicinity of \(\eta=0\), where \(|A_{20}|^{2}=0,|A_{30}|^{2}=2\) and the spinor (14) coincides with Eq. (9). For all values of \(\chi\), the UN and BN limits at \(\eta=\pm 1\) remain unchanged. The mean-field energy, calculated for a uniform spin-2 BEC in the state (14), reads \((\tilde{c}_{0,1,2}=nc_{0,1,2})\) \[\mathcal{E}^{\text{UN-BN}}=\mathcal{H}\left[\Psi^{\text{UN-BN}}\right]-\frac{ \tilde{c}_{0}}{2}=2q(1-\eta)+\frac{\tilde{c}_{2}}{2}|A_{20}|^{2}, \tag{17}\] where \(|A_{20}|^{2}\) is given in Eq. (15). The convexity of \(|A_{20}|^{2}\) as a function of \((\eta,\chi)\) means that for \(q=0\), the energy (17) is minimized along the \(\chi=0,2\pi\) and \(\eta=\pm 1\) edges in Fig. 2a when \(\tilde{c}_{2}<0\) (the case \(\tilde{c}_{2}>0\) will be discussed in Sec. III.2). This corresponds to the UN-BN degeneracy at zero level shifts. In the presence of an external field such that \(q\neq 0\), the \(q\)-dependent contribution (linear in \(\eta\)) in Eq. (17) shifts the symmetry axis such that the value of \(\eta\) that minimizes Eq. (17) now depends on \(q\). \(\mathcal{E}^{\text{UN-BN}}\) is minimized in the BN limit (\(\eta=-1\)) for \(q\leq 0\), and in the UN limit (\(\eta=1\)) for \(q\geq 0\). The state (14) thus represents a stationary solution such that one nematic phase is energetically favored over the other depending on the sign of the quadratic level shift, while also providing a smooth interpolation between these limits. Now letting the interpolating parameter \(\eta=\eta(z)\) vary between separated bulk regions along the \(z\) axis, the solution provides a smooth UN-BN topological interface. 
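As a quick consistency check at the midpoint of the interpolation, set \(\eta=0\), \(\chi_{\pm 2}=0\) and \(\chi_{0}=\pi/2\) in Eq. (14), so that \(\chi=-\pi\) (equivalent to \(\pi\)): the spinor becomes
\[\zeta^{\mathrm{UN-BN}}=\frac{1}{2}\left(1,0,i\sqrt{2},0,1\right)^{\mathrm{T}}=\zeta^{\mathrm{C}},\]
and Eqs. (15) and (16) correctly give \(|A_{20}|^{2}=\frac{1}{10}(\cos\pi+1)=0\) and \(|A_{30}|^{2}=\frac{1}{4}(-3\cos\pi+5)=2\), the C-phase values.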
Such an interpolating solution may be stabilized by varying \(q=q(z)\), such that the BN and UN phases are favored on either side of the interface. Numerical examples of such interface engineering are provided in Sec. IV.1. We are now ready to construct vortex states that cross a UN-BN interface, generated by different azimuthal phase windings \(\chi_{m}\). These are summarized in Tab. 1.

_Phase vortices penetrating the interface_--We first consider a phase vortex in both UN and BN limits, obtained from Eqs. (11) and (12), with a \(k\)-fold phase winding \[\zeta_{\text{pv}}^{\text{UN}}=\exp(ik\varphi)\zeta^{\text{UN}},\quad\zeta_{\text{pv}}^{\text{BN}}=\exp(ik\varphi)\zeta^{\text{BN}}. \tag{18}\] These two solutions continuously connect across the interface according to the interpolating spinor in Eq. (14) with \(\chi_{0,\pm 2}=k\varphi\) (Footnote 1), \[\zeta_{\text{pv}}^{\text{UN-BN}}=\frac{e^{ik\varphi}}{2}\left(D_{-},0,\sqrt{2}D_{+},0,D_{-}\right)^{\text{T}}, \tag{19}\] representing a singular, interface-penetrating, \(k\)-quantized phase vortex carrying mass circulation. Footnote 1: This is equivalent to choosing \(\tau=k\varphi\) and constant Euler angles in Eq. (6), acting on a uniform spinor \(\zeta^{\text{UN-BN}}\) of the form of Eq. (14) with \(\chi_{m}=0\). Here we have also introduced the shorthand \[D_{\pm}\equiv\sqrt{1\pm\eta}. \tag{20}\]

_Phase vortices terminating at the interface_--Phase vortices may also terminate at the UN-BN interface. Such states are constructed by choosing \(\chi_{m}\) in Eq. (14) to introduce circulation in one limit of the interface only. For example, the choice \(\chi_{\pm 2}=k\varphi\), \(\chi_{0}=0\) yields phase vortices in the BN limit terminating at a vortex-free state in the UN limit, while the spinor superfluid remains continuous and coherent everywhere: \[\zeta_{\text{vf-pv}}^{\text{UN-BN}}=\frac{1}{2}\left(e^{ik\varphi}D_{-},0,\sqrt{2}D_{+},0,e^{ik\varphi}D_{-}\right)^{\text{T}}. \tag{21}\] Conversely, azimuthal winding of the phase \(\chi_{0}\) only, i.e., \(\chi_{0}=k\varphi\), \(\chi_{\pm 2}=0\), results instead in a phase vortex in the UN limit and a vortex-free state in the BN limit: \[\zeta_{\text{pv-vf}}^{\text{UN-BN}}=\frac{1}{2}\left(D_{-},0,e^{ik\varphi}\sqrt{2}D_{+},0,D_{-}\right)^{\text{T}}. \tag{22}\] In addition to describing terminating vortices, Eqs. (21) and (22) can also parametrize the superfluid UN core of a BN phase vortex and vice versa, where the interface forms part of the vortex core structures [23]. The terminating vortices break axisymmetry at the interface. This follows immediately from the \(\chi\) dependence of \(|A_{30}|^{2}\) at \(\eta=0\) shown in Fig. 2.

Figure 2: Spin-singlet duo and trio amplitudes \(|A_{20}|^{2}\) (a) and \(|A_{30}|^{2}\) (b), obtained in Eqs. (15) and (16) as functions of the interpolating parameter \(\eta\) and the relative phase difference \(\chi\). Along \(\chi\approx 0\), \(\eta\) interpolates between the UN and BN phases. Crossover to the C phase is realized within the regions corresponding to \(\pi/2\lesssim\chi\lesssim 3\pi/2\) and \(-0.5\lesssim\eta\lesssim 0.9\).

For Eqs. (21) and (22) with \(k=1\) we have \(\chi=\pm 2\varphi\), respectively, so that \(\chi\) takes all values from \(0\) to \(\pm 4\pi\) on any closed loop around the vortex. Thus, the termination of a phase vortex implies the appearance of C-phase regions at the interface, where \(D_{\pm}\approx 1\) in Eq. (19), as illustrated in Fig. 3.
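To make the interface-penetrating vortex concrete, the spinor of Eq. (19) can be written down on a numerical grid; the sketch below is an illustration for this text (the tanh profile \(\eta(z)\), the grid size, and the interface width are assumptions, and this is not the authors' production code). It reduces to the BN form for \(z\ll 0\) and to the UN form for \(z\gg 0\), and checks that the interpolating spinor stays normalized everywhere.

```python
import numpy as np

# Small grid in dimensionless units; xi_a sets the assumed interface width.
x = np.linspace(-8.0, 8.0, 48)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
phi = np.arctan2(Y, X)                 # azimuthal angle about the vortex line
xi_a, k = 1.0, 1

eta = np.tanh(Z / xi_a)                # eta -> -1 (BN) for z << 0, +1 (UN) for z >> 0
D_plus, D_minus = np.sqrt(1.0 + eta), np.sqrt(1.0 - eta)

# Interface-penetrating, k-quantized phase vortex of Eq. (19);
# rows are the spinor components m = 2, 1, 0, -1, -2.
zeta_pv = 0.5 * np.exp(1j * k * phi) * np.stack(
    [D_minus, np.zeros_like(eta), np.sqrt(2.0) * D_plus,
     np.zeros_like(eta), D_minus])

# The interpolating spinor remains normalized at every grid point.
assert np.allclose(np.sum(np.abs(zeta_pv)**2, axis=0), 1.0)
```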
_Connections involving half-quantum vortices_--HQVs form in the BN phase by combining a \(\pi\) winding of the condensate phase with a compensating spin rotation to keep the wave function single valued. We can immediately infer from the order-parameter symmetry shown in Fig. 1d that an HQV can be formed either through a \(\pi/2\) spin rotation about the \(\hat{\mathbf{e}}_{(0,0,1)}\) symmetry axis to yield a \((1/2,1/4)\) vortex, or through a \(\pi\) spin rotation about \(\hat{\mathbf{e}}_{(1,1,0)}\) resulting in a \((1/2,1/2)\) vortex. More generally, vortices with any half-integer quanta of mass circulation can be constructed in a similar way. \((1/2,1/4)\) vortices connecting smoothly to a vortex-free UN limit are obtained from Eq. (14) by choosing \(\chi_{2}=\chi_{0}=0,\chi_{-2}=\varphi\) [equivalent to \(\gamma=\varphi/4\), \(\tau=2\gamma\) in Eq. (6)], to yield \[\xi_{\text{vf-hqv}}^{\text{UN-BN}}=\frac{1}{2}\left(D_{-},0,\,\sqrt{2}D_{+},0,e^{i\varphi}D_{-}\right)^{\text{T}}. \tag{23}\] The same BN \((1/2,1/4)\) vortex can also smoothly connect to a singly quantized UN phase vortex by instead choosing \(\chi_{0}=\pm\varphi\) as the complex argument of the \(\zeta_{0}\) component in Eq. (23). _Connections involving spin vortices_--Spinor BECs additionally support the non-dissipative flow of spin, which can lead to vortices carrying spin circulation only: spin vortices. An example of a singular case is given by the combination \(\chi_{\pm 2}=\mp k\varphi,\,\chi_{0}=0\) in Eq. (14), where a BN spin vortex terminates at the interface, connecting to a vortex-free UN state. This can equivalently be constructed through the action in Eq. (6), e.g., by choosing the winding \(\alpha=k\varphi/2\). Both constructions result in the spinor \[\xi_{\text{vf-sv}}^{\text{UN-BN}}=\frac{1}{2}\left(e^{-ik\varphi}D_{-},0,\, \sqrt{2}D_{+},0,e^{ik\varphi}D_{-}\right)^{\text{T}}. \tag{24}\] Note that for \(k=\pm 1\), the BN limit of Eq. (24) describes a vortex where the order parameter rotates by \(\pi\) around the \(\hat{\mathbf{e}}_{(0,0,1)}\) axis as depicted in Fig. 1d. The vortex is a spin-HQV, analogous to \(\pi\)-disclinations in nematic liquid crystals. Spin vortices may also penetrate the interface, connecting to a corresponding spin vortex in the other phase. In the presence of a UN-BN interface, these can be obtained by applying Eq. (6) with \(\alpha=k\varphi/2\), and constant \(\beta=\pi/2\) to the interpolating spinor in Eq. (14). The resulting state reads \[\xi_{\text{vf-sv}}^{\text{UN-BN}}=\frac{1}{4}\begin{pmatrix}e^{-ik\varphi}(D _{-}+\sqrt{3}D_{+})\\ 0\\ \sqrt{6}D_{-}-\sqrt{2}D_{+}\\ 0\\ e^{ik\varphi}(D_{-}+\sqrt{3}D_{+})\end{pmatrix}. \tag{25}\] When \(k=1\), Eq. (25) contains a spin-HQV in the UN limit, where the nematic director exhibits a radial disgyration in the \(x,y\)-plane. In the BN limit, the spin-HQV is instead formed by a \(\pi\) rotation of the order parameter around the \(\hat{\mathbf{e}}_{(1,0,0)}\) axis (see Fig. 1d) along any closed loop around the vortex line. _Nonsingular textures and monopoles_--So far we have considered only singular line defects. However, the nematic spin-2 phases can also form nonsingular spin vortices. We construct a fountain-like texture of the nematic axis \(\hat{\mathbf{d}}\) in the bulk UN phase (cf. Fig. 1c). Starting from Eq. 
(11), we apply the transformation (6), where the Euler angles are chosen such that the nematic axis, \(\hat{\mathbf{d}}=(\cos\alpha\sin\beta,\sin\alpha\sin\beta,\cos\beta)\), bends away from the vortex line with increasing radial distance \(\rho\), i.e., with \(\alpha=\varphi\). The resulting spinor reads \[\zeta^{\text{UN}}=\sqrt{\frac{3}{8}}\begin{pmatrix}e^{-2i\varphi}\sin^{2} \beta\\ -e^{-i\varphi}\sin 2\beta\\ \frac{1}{\sqrt{6}}(1+3\cos 2\beta)\\ e^{i\varphi}\sin 2\beta\\ e^{2i\varphi}\sin^{2}\beta\end{pmatrix}, \tag{26}\] and the nonsingular texture is obtained by taking \(\beta=\beta(\rho)\) to be a monotonically increasing function of the radial distance \(\rho\) from the vortex line, with \(\beta(0)=0\). Instead applying the same transformation to the BN spinor (12), however, results in a singular spin vortex, albeit together with a fountain texture formed by the \(\hat{\mathbf{e}}_{(0,0,1)}\) symmetry axis (cf. Fig. 1d). The vortex is described by \[\zeta^{\text{BN}}=\frac{1}{\sqrt{8}}\begin{pmatrix}e^{-2i\varphi}(\cos^{2} \beta+1)\\ e^{-i\varphi}\sin 2\beta\\ \sqrt{6}\sin^{2}\beta\\ -e^{i\varphi}\sin 2\beta\\ e^{2i\varphi}(\cos^{2}\beta+1)\end{pmatrix}. \tag{27}\] Since Eqs. (11) and (12) are exactly the UN and BN limits, respectively, of the UN-BN interpolating spinor (14) with all \(\chi_{m}=0\), it follows immediately that the nonsingular UN spin vortex in Eq. (26) can connect smoothly across the interface to the singular spin vortex in Eq. (27) on the BN side. The corresponding wavefunction is given by \[\zeta^{\text{UN-BN}}=\frac{1}{\sqrt{2}}\left(D_{+}\zeta^{\text{UN}}+D_{-} \zeta^{\text{BN}}\right), \tag{28}\] where \(D_{\pm}(z)\), defined in Eq. (20), parametrize the spatial interpolation between the bulk regions. On a topological interface, monopoles may form as termination points of singular vortex lines, similar to "boojums" in \begin{table} \begin{tabular}{l c c c c} \hline \hline UN limit & BN limit & \(\chi_{2}/\varphi\) & \(\chi_{0}/\varphi\) & \(\chi_{-2}/\varphi\) \\ \hline Phase vortex & Phase vortex & \(k\) & \(k\) & \(k\) \\ Vortex-free & Phase vortex & \(k\) & \(0\) & \(k\) \\ Phase vortex & Vortex-free & \(0\) & \(k\) & \(k\) \\ Vortex-free & Spin vortex & \(-k\) & \(k\) & \(k\) \\ Vortex-free & Half-quantum vortex & \(0\) & \(0\) & \(1\) \\ Phase vortex & Half-quantum vortex & \(0\) & \(\pm 1\) & \(1\) \\ \hline \hline \end{tabular} \end{table} Table 1: Singular vortex connections across a UN-BN interface, characterized by the phase windings \(\chi_{m}\) in Eq. (14). Generalizations to multiple quantization are given by \(k\in\mathbb{Z}\). superfluid liquid \({}^{3}\)He [13; 92]. The construction is analogous to that connecting the BN singular spin vortex to the UN non-singular spin vortex. The spinor is again given by Eqs. (26)-(28), only now taking \(\beta=\theta\), independent of radial distance to form the required monopole configuration in the UN limit. In the BN limit, the spinor still represents a singular spin vortex. As the spinor interpolates across the interface, however, this vortex line now terminates on the UN point defect. It is also possible for a singular UN vortex to terminate as a BN monopole, constructed such that the vortex coincides with the associated line singularity [which always exists since \(\pi_{2}(\mathcal{M}^{\text{BN}})=0\)]. As in Ref. [35], we construct the monopole by applying Eq. 
(6) with \(\alpha=-\gamma=\varphi\) and \(\beta=\theta\) to the BN spinor \(\zeta^{\text{BN}}=(1,0,\sqrt{6},0,1)^{\text{T}}/\sqrt{2}\) [itself obtained applying a \(\beta=\pi/2\) rotation to Eq. (12)]. The line singularity is then aligned with the negative \(z\)-axis. On a topological interface, a BN monopole may then be oriented such that the line defect is "hidden" on the opposite side. For example, constructing an interpolating spinor on the form (28), where the monopole forms the BN limit for \(z>0\) and interpolating to the UN phase for \(z<0\), we find the limits \[\zeta^{\text{LN}}=\sqrt{\frac{3}{8}}\begin{pmatrix}e^{-2i\varphi}(\cos\theta \cos\varphi+i\sin\varphi)^{2}\\ 2e^{-i\varphi}\sin\theta\cos\varphi\left(\cos\theta\cos\varphi+i\sin\varphi \right)\\ \frac{1}{2}\sqrt{6}\left(\sin^{2}\theta\cos 2\varphi-3\cos 2\theta-1\right)\\ -2e^{i\varphi}\sin\theta\cos\varphi\left(\cos\theta\cos\varphi-i\sin\varphi \right)\\ e^{2i\varphi}(\cos\theta\cos\varphi-i\sin\varphi)^{2}\end{pmatrix}, \tag{29}\] \[\zeta^{\text{BN}}=\frac{1}{4\sqrt{2}}\begin{pmatrix}2e^{-4i\varphi}\sin^{4} \frac{\theta}{2}+3e^{-2i\varphi}\sin^{2}\theta+2\cos^{4}\frac{\theta}{2}\\ e^{-3i\varphi}\sin\theta\left[e^{4i\varphi}(\cos\theta\!-\!1)\!-\!6e^{2i \varphi}\cos\theta\!+\!\cos\theta\!+\!1\right]\\ \sqrt{\frac{3}{2}}\left(2\sin^{2}\theta\cos 2\varphi+3\cos 2\theta+1\right)\\ 2e^{-i\varphi}\sin\theta\left[\cos\theta\left(\cos 2\varphi-3\right)+i\sin 2 \varphi\right]\\ 2e^{4i\varphi}\sin^{4}\frac{\theta}{2}+3e^{2i\varphi}\sin^{2}\theta+2\cos^{4} \frac{\theta}{2}\end{pmatrix}. \tag{30}\] The spinor interpolating between these limits then represents a UN singular spin vortex, which terminates as a BN monopole on the interface. Both examples of nematic spin vortices terminating as monopoles on the UN-BN interface are simulated numerically in Sec. IV.1. ### Cyclic to nematic (C-UN/BN) As shown in Sec. III.1, the steady-state family of phase-mixing spinors in Eq. (14) includes a crossover from both nematic phases to the C phase if we ensure a constant \(\pi/2\) phase difference between the components \(\zeta_{0}\) and \(\zeta_{+2}\), equivalent to restricting ourselves to the subset of solutions given by \[\zeta^{\text{C-UN/BN}}=\frac{1}{2}\left(e^{i\chi_{2}}\sqrt{1\!-\!\eta},0,ie^{ i\chi_{0}}\sqrt{2(1\!+\!\eta)},0,e^{i\chi_{-2}}\sqrt{1\!-\!\eta}\right)^{\text{T}}, \tag{31}\] where \(\chi_{2}+\chi_{-2}-2\chi_{0}=0\). Thus, the C spinor in Eq. (9), with tetrahedral symmetry combined with the condensate phase, is obtained for \(\eta=0\). Conversely, when \(\eta=\pm 1\), we once again retrieve UN or BN states, respectively. The uniform mean-field energy of \(\zeta^{\text{C-UN/BN}}\) in Eq. (31) reads \[\mathcal{E}^{\text{C-UN/BN}}=2q(1-\eta)+\frac{\tilde{\chi}_{2}}{10}\eta^{2}, \tag{32}\] and is minimized by the value \(\eta=10q/\tilde{\epsilon}_{2}\) when \(\tilde{\epsilon}_{2}>0\), i.e., we now have a \(q\)-dependent interpolating parameter. We can then construct interface solutions interpolating between C and UN/BN magnetic phases for \(|q|\leq\tilde{\epsilon}_{2}/10\), as numerically demonstrated in Sec. IV.3. We focus here on a C-BN interface only, where the Zeeman shift \(q\) varies spatially from \(q=0\) (C) to \(q=-\tilde{\epsilon}_{2}/10\) (BN). _Phase and spin vortices penetrating the interface_--The defect connections involving phase and spin vortices discussed in Sec. III.1 for the UN-BN case are retrieved here with the simple replacement rule \(D_{+}\to iD_{+}\) in Eqs. 
(19), (24) and (25), where the C limit is given by \(D_{\pm}=1\). For example, spin vortices in both C and BN phases connecting across the interface are given by the wave function \[\zeta^{\text{C-BN}}_{\text{sv-sv}}=\frac{1}{2}\left(e^{-ik\varphi}D_{-},0,i \sqrt{2}D_{+},0,e^{ik\varphi}D_{-}\right)^{\text{T}}, \tag{33}\] obtained from the interpolating spinor in Eq. (31) by choosing \(\chi_{\pm 2}=\mp k\varphi,\chi_{0}=0\). For \(k=1\), both limits represent spin-HQVs. The vortex connections across a C-BN interface constructed from the phase windings \(\chi_{m}\) are represented in Tab. 3. Note that, despite the similar forms of Eqs. (14) and (31), solutions with \(\chi_{2}+\chi_{-2}-2\chi_{0}\neq 0\) in Eq. (31) do not produce well-defined defect states in the C limit. The symmetries of the C and BN order parameters also allow a further connection between quantized spin vortices, distinct from Eq. (33). We now construct the vortex state from spin rotations about \begin{table} \begin{tabular}{c c c c c} \hline \hline UN-BN: Vortices, textures and monopoles from Euler angles & & & \\ \hline \hline UN limit & BN limit & \(\alpha/\varphi\) & \(\gamma/\varphi\) & \(\beta\) \\ \hline Spin half-quantum vortex & Spin half-quantum vortex & 1 & 0 & \(\pi/2\) \\ Nonsingular spin vortex & Spin vortex & 1 & 0 & \(\beta(\rho)\) \\ Monopole & Spin vortex & 1 & \(0\) & \(\theta\) \\ Spin vortex & Monopole & 1 & \(-1\) & \(\theta\) \\ \hline \hline \end{tabular} \end{table} Table 2: Singular and nonsingular spin vortices and monopoles connecting across a UN-BN interface, constructed by azimuthal dependence of Euler angles \(\alpha\), \(\gamma\) (given as multiples of the azimuthal angle \(\varphi\)), and \(\beta\) (given as a multiple of the polar angle \(\theta\) or, for nonsingular vortices, as a monotonically increasing function of the transverse radius \(\rho\)), with \(\tau=0\). \begin{table} \begin{tabular}{c c c c c} \hline \hline C-UN/BN: Vortices from spinor-component phase winding & & & \\ \hline C/BN limits & UN limit & \(\chi_{2}/\varphi\) & \(\chi_{0}/\varphi\) & \(\chi_{-2}/\varphi\) \\ \hline Phase vortex & Phase vortex & \(k\) & \(k\) & \(k\) \\ Spin vortex & Vortex-free & \(-k\) & 0 & \(k\) \\ \hline \hline \end{tabular} \end{table} Table 3: Vortex connections across a C-UN/BN interface, characterized by the phase windings \(\chi_{m}\) in Eq. (31). in Fig. 1c and e, given by the operator2 Footnote 2: The same spin rotation can also be written on the form of Eq. (6) with Euler angles \(\alpha=-\gamma=\pi/4\), \(\beta=\varphi\). This is, however, less intuitively instructive here than the axis-angle representation. \[\hat{U}\left(\hat{\mathbf{e}}_{(1,1,0)},\delta\right)=\exp\biggl{\{}-i\frac{ \hat{F}_{x}+\hat{F}_{y}}{\sqrt{2}}\delta\biggr{\}}, \tag{34}\] in the axis-angle representation. Whenever \(\delta\) winds by an integer multiple of \(2\pi\) on a closed loop around the vortex line, this defines a spin vortex with integer quantization in both C and BN phases. Applying Eq. (34) with \(\delta=\varphi\) to the interpolating spinor \(\zeta^{\text{C-BN}}\) [Eq. (31) with \(\chi_{m}=0\)] leads to a spinor of the form of Eq. (28) with \(D_{+}\to iD_{+}\) and \[\chi_{\text{sv}}^{\text{UN}}=\sqrt{\frac{3}{8}}\begin{cases}-i\sin^{2}\varphi \\ -e^{-\frac{\pi}{4}}\sin 2\varphi\\ \frac{1}{\sqrt{6}}\left(1+3\cos 2\varphi\right)\\ e^{\frac{\pi}{4}}\sin 2\varphi\\ i\sin^{2}\varphi\end{cases}\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! 
\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \!\! Eq. (42) (2/3-vortex), when expressed using Eq. (6) [30; 34; 111]. Since mass circulation is determined by \(\tau\), the C limit of Eqs. (41) and (42) represent vortices with 1/3 and 2/3 quanta of circulation, respectively. Due to the spin-gauge symmetry of the FM phases, these states connect to phase vortices or vortex-free states in the FM limit, resulting in different mass circulation between the magnetic phases. Specifically, Eq. (41) represents an interpolating solution from a singly quantized phase vortex (FM\({}_{2}\)) to a vortex-free state (FM\({}_{1}\)) via a (1/3, -1/3) vortex (C), and Eq. (42) from a vortex-free state (FM\({}_{2}\)) to a singly quantized phase vortex (FM\({}_{1}\)) via a (2/3, 1/3) vortex (C). The C-FM vortex connections are summarized in Tab. 5. _Nonsingular textures and monopoles_--By considering Eq. (6), we obtain the interpolating solutions determined by the symmetry group \(G\) acting on Eq. (38) \[\zeta^{\text{C-FM}}=\frac{1}{\sqrt{3}}\left(D_{2}\zeta^{\text{FM} _{2}}+D_{-1}\zeta^{\text{FM}_{1}}\right), \tag{43}\] where \(\zeta^{\text{FM}_{1,2}}\) represent the parametrizations of the states with \(F_{z}=\pm 1,2\) \[\zeta^{\text{FM}_{1}}=e^{i(\tau+\tau)\varphi}\left(\begin{array}{ c c c}e^{-2i\alpha}\cos^{4}\frac{\beta}{2}\\ 2e^{-i\alpha}\cos^{3}\frac{\beta}{2}\sin\frac{\beta}{2}\\ \sqrt{6}\cos^{2}\frac{\beta}{2}\sin^{2}\frac{\beta}{2}\\ 2e^{i\alpha}\cos\frac{\beta}{2}\sin^{3}\frac{\beta}{2}\\ e^{2i\alpha}\sin^{4}\frac{\beta}{2}\end{array}\right), \tag{44}\] \[\zeta^{\text{FM}_{1}}=e^{i(\tau+\tau)\varphi}\left(\begin{array}{c c c}-2e^{-2i \alpha}\cos\frac{\beta}{2}\sin^{3}\frac{\beta}{2}\\ e^{-i\alpha}\sin^{2}\frac{\beta}{2}\left(3\cos^{2}\frac{\beta}{2}-\sin^{2} \frac{\beta}{2}\right)\\ \sqrt{6}\left(\cos\frac{\beta}{2}\sin^{3}\frac{\beta}{2}-\cos^{3}\frac{\beta} {2}\sin\frac{\beta}{2}\right)\\ e^{i\alpha}\cos^{2}\frac{\beta}{2}\left(\cos^{2}\frac{\beta}{2}-3\sin^{2} \frac{\beta}{2}\right)\\ 2e^{2i\alpha}\cos^{3}\frac{\beta}{2}\sin\frac{\beta}{2}\end{array}\right). \tag{45}\] In the FM phases, angular momentum can be carried by nonsingular (coreless) vortices. The best known examples exhibit a fountain-like spin texture where the spin density aligns with \(\hat{\mathbf{z}}\) at \(\rho=0\), and tilts away as \(\rho\) increases, corresponding to a monotonically increasing Euler angle \(\beta=\beta(\rho)\). In the FM\({}_{2}\) case, the order parameter is kept nonsingular everywhere by any choice of phase and Euler angles such that \[\tau-2\gamma=\pm 2\alpha=\pm 2\varphi, \tag{46}\] whereas the FM\({}_{1}\) case requires \[\tau+\gamma=\pm\alpha=\pm\varphi. \tag{47}\] Substituting these into Eqs. (44) and (45) results in nonsingular FM vortices connecting across the interface to singular C vortices. 
The latter include phase (\(\gamma=0\)), spin (\(\tau=0\)), and fractional (\(\tau=\varphi/3\) and \(2\varphi/3\)) vortices. Moreover, analogously to spin-1 BECs [16; 68], the FM\({}_{1,2}\) generalizations of the Dirac monopoles can be defined by taking \(\beta=\theta\) in Eqs. (44) and (45). The condensate phase and the remaining Euler angles are chosen according to Eq. (46) for FM\({}_{2}\) and Eq. (47) for FM\({}_{1}\). This yields the characteristic hedgehog texture of \(\langle\hat{\mathbf{F}}\rangle\), embedding a singular vortex line terminating on the monopole. Similarly to nonsingular vortices, FM\({}_{2}\) Dirac monopoles can connect continuously to phase, spin and fractional vortices in the C limit. All interpolating spinors constructed in this way from Eqs. (43)-(45) are summarized in Table 6 for the C-FM interfaces, with a discussion of the analogous constructions for a FM-BN interface to follow in Sec. III.4. ### Ferromagnetic to biaxial nematic (FM-BN) We now return to the steady-state spinors with zero transverse magnetization (and arbitrary longitudinal magnetization) that satisfy Eq. (37). These can form solutions that inter \begin{table} \begin{tabular}{c c c c c} \hline \hline C limit & FM\({}_{2}\) limit & \(\tau/\varphi\) & \(\alpha/\varphi\) & \(\gamma/\varphi\) & \(\beta\) \\ \hline Phase vortex & \(\text{Nonsingular vortex}\) & 2 & 1 & 0 & \(\beta(\rho)\) \\ Spin vortex & \(\text{Nonsingular vortex}\) & 0 & 1 & \(\pm 1\) & \(\beta(\rho)\) \\ \(2/3\)-vortex & \(\text{Nonsingular vortex}\) & 2/3 & 1 & -2/3, 4/3 & \(\beta(\rho)\) \\ Phase vortex & \(\text{Dirac monopole}\) & 2 & 1 & 0 & \(\theta\) \\ Spin vortex & \(\text{Dirac monopole}\) & 0 & 1 & \(\pm 1\) & \(\theta\) \\ \(2/3\)-vortex & \(\text{Dirac monopole}\) & 2/3 & 1 & -2/3, 4/3 & \(\theta\) \\ \hline \hline C limit & FM\({}_{1}\) limit & \(\tau/\varphi\) & \(\alpha/\varphi\) & \(\gamma/\varphi\) & \(\beta\) \\ \hline Phase vortex & \(\text{Nonsingular vortex}\) & 1 & 1 & 0 & \(\beta(\rho)\) \\ Spin vortex & \(\text{Nonsingular vortex}\) & 0 & 1 & \(\pm 1\) & \(\beta(\rho)\) \\ \(1/3\)-vortex & \(\text{Nonsingular vortex}\) & 1/3 & 1 & -4/3, 2/3 & \(\beta(\rho)\) \\ Phase vortex & \(\text{Dirac monopole}\) & 1 & 1 & 0 & \(\theta\) \\ Spin vortex & \(\text{Dirac monopole}\) & 0 & 1 & \(\pm 1\) & \(\theta\) \\ \(1/3\)-vortex & \(\text{Dirac monopole}\) & 1/3 & 1 & -4/3, 2/3 & \(\theta\) \\ \hline \hline \end{tabular} \end{table} Table 6: Singular and nonsingular vortices and monopoles connecting across a C-FM interface constructed through spatial dependent \(\tau\), \(\alpha\), and \(\gamma\) (given as multiples of the azimuthal angle \(\varphi\)), and \(\beta\) (given as a multiple of the polar angle \(\theta\) or, for nonsingular vortices, as a monotonically increasing function of the transverse radius \(\rho\)). 
\begin{table} \begin{tabular}{c c c c c} \hline \hline C-FM: Vortices & textures and monopoles from Euler angles \\ \hline C limit & FM\({}_{2}\) limit & \(\tau/\varphi\) & \(\alpha/\varphi\) & \(\gamma/\varphi\) & \(\beta\) \\ \hline Phase vortex & \(\text{Nonsingular vortex}\) & 2 & 1 & 0 & \(\beta(\rho)\) \\ Spin vortex & \(\text{Nonsingular vortex}\) & 0 & 1 & \(\pm 1\) & \(\beta(\rho)\) \\ \(2/3\)-vortex & \(\text{Nonsingular vortex}\) & 2/3 & 1 & -2/3, 4/3 & \(\beta(\rho)\) \\ Phase vortex & \(\text{Dirac monopole}\) & 2 & 1 & 0 & \(\theta\) \\ Spin vortex & \(\text{Dirac monopole}\) & 0 & 1 & \(\pm 1\) & \(\theta\) \\ \(2/3\)-vortex & \(\text{Dirac monopole}\) & 2/3 & 1 & -2/3, 4/3 & \(\theta\) \\ \hline \hline C limit & FM\({}_{1}\) limit & \(\tau/\varphi\) & \(\alpha/\varphi\) & \(\gamma/\varphi\) & \(\beta\) \\ \hline Phase vortex & \(\text{Nonsingular vortex}\) & 1 & 1 & 0 & \(\beta(\rho)\) \\ Spin vortex & \(\text{Nonsingular vortex}\) & 0 & 1 & \(\pm 1\) & \(\beta(\rho)\) \\ \(1/3\)-vortex & \(\text{Nonsingular vortex}\) & 1/3 & 1 & -4/3, 2/3 & \(\beta(\rho)\) \\ Phase vortex & \(\text{Dirac monopole}\) & 1 & 1 & 0 & \(\theta\) \\ Spin vortex & \(\text{Dirac monopole}\) & 0 & 1 & \(\pm 1\) & \(\theta\) \\ \(1/3\)-vortex & \(\text{Dirac monopole}\) & 1/3 & 1 & -4/3, 2/3 & \(\theta\) \\ \hline \hline \end{tabular} \end{table} Table 5: Singular vortex connections across a C-FM interface, characterized by the phase windings \(\chi_{m}\) in Eq. (38) polate between the FM\({}_{2}\) and BN phases: \[\zeta^{\text{FM${}_{2}$-BN}}=\frac{1}{\sqrt{2}}\left(e^{i\chi_{2}}\sqrt{1+\eta},0,0,0,e^{i\chi_{-2}}\sqrt{1-\eta}\right)^{\text{T}}, \tag{48}\] where \(\eta=\langle\hat{F}_{z}\rangle/2\), providing the BN phase at \(\eta=0\), and FM\({}_{2}\) at \(\eta=\pm 1\). The uniform mean-field energy of Eq. (48) reads \[\mathcal{E}^{\text{FM${}_{2}$-BN}}=4q-2p\eta+\left(2\tilde{c}_{1}-\frac{\tilde {c}_{2}}{10}\right)\eta^{2}+\frac{\tilde{c}_{2}}{10}, \tag{49}\] and is minimized by \(\eta=p/(2\tilde{c}_{1}-\tilde{c}_{2}/10)\), provided that the spin-dependent and spin-singlet interactions satisfy \(\tilde{c}_{1}\geq\tilde{c}_{2}/20\) at fixed \(p,q\). We construct a topological interface between the FM\({}_{2}\) and BN phases in Eq. (48) with a linear Zeeman shift varying spatially between \(p=\pm(2\tilde{c}_{1}-\tilde{c}_{2}/10)\) (FM\({}_{2}\)), and \(p=0\) (BN). Additionally, in the absence of Zeeman shifts, Eq. (49) shows that the BN phase is energetically favoured for \(\tilde{c}_{1}>\tilde{c}_{2}/20\), and the FM\({}_{2}\) limit otherwise. Thus, FM-BN interfaces are also obtained with spatially varying \(\tilde{c}_{1}\). These are numerically illustrated in Sec. IV.5. _Connections among phase, spin and half-quantum vortices_--The spin-gauge symmetry of the FM\({}_{2}\) limits in Eq. (48) allows further varieties of vortex connections, identified by combinations of winding of the phase coefficients \(\chi_{\pm 2}\). The cases of interest here read \[\zeta^{\text{FM${}_{2}$-BN}}_{\text{pv-sv}} =\frac{1}{\sqrt{2}}\left(e^{-4i\varphi}D_{+},0,0,0,e^{i\varphi}D_{ -}\right)^{\text{T}}, \tag{50}\] \[\zeta^{\text{FM${}_{2}$-BN}}_{\text{pv-buv}} =\frac{1}{\sqrt{2}}\left(D_{+},0,0,0,e^{i\varphi}D_{-}\right)^{ \text{T}}, \tag{51}\] corresponding to \(\chi_{\pm 2}=\mp k\varphi\), and \(\chi_{2}=0,\chi_{-2}=\varphi\), where \(D_{\pm}\) is defined in Eq. (20). Eq. 
(50) yields phase vortices in the FM\({}_{2}\) limits (\(D_{+}=0\) or \(D_{-}=0\)), connecting to a spin vortex in the BN (\(D_{\pm}=1\)), identified by \(\gamma=\varphi/2\) only. In Eq. (51), fractional \((1/2,1/4)\) BN vortex connects to a vortex-free state or phase-vortex in the FM\({}_{2}\). The vortex connections identified through winding of \(\chi_{\pm 2}\) are shown in Tab. 7. _Nonsingular textures and monopoles_--Applying Eq. (6) to Eq. (48) yields \[\zeta^{\text{FM${}_{2}$-BN}}=\frac{1}{\sqrt{2}}\left(D_{+}\zeta^{\text{FM${} _{2}^{+}$}}+D_{-}\zeta^{\text{FM${}_{2}^{+}$}}\right), \tag{52}\] where \(\zeta^{\text{FM${}_{2}^{+}$}}\) is defined in Eq. (44), and \(\zeta^{\text{FM${}_{2}^{+}$}}\) is similarly obtained by applying Eq. (6) to \(\zeta=(0,0,0,0,1)^{\text{T}}\). Following the procedure outlined in Sec. III.3, we can construct spinors representing FM nonsingular vortices connecting to singular BN vortices, and Dirac monopoles that form the termination point of vortices at the FM-BN interface. By choosing the condensate phase and Euler angles as in Eq. (46), we obtain states of the form (52) with the FM\({}_{2}\) limits \[\zeta^{\text{FM${}_{2}^{+}$}}=\begin{pmatrix}e^{-4i\varphi}\cos^{4}\frac{\beta }{2}\\ 2e^{-3i\varphi}\cos^{3}\frac{\beta}{2}\sin\frac{\beta}{2}\\ \sqrt{6}e^{-2i\varphi}\cos^{2}\frac{\beta}{2}\sin^{2}\frac{\beta}{2}\\ 2e^{-i\varphi}\sin^{3}\frac{\beta}{2}\cos\frac{\beta}{2}\\ \sin^{4}\frac{\beta}{2}\end{pmatrix} \tag{53}\] \[\zeta^{\text{FM${}_{2}^{+}$}}=\begin{pmatrix}e^{-4i\varphi}\sin^{4}\frac{\beta }{2}\\ -2e^{-3i\varphi}\sin^{3}\frac{\beta}{2}\cos\frac{\beta}{2}\\ \sqrt{6}e^{-2i\varphi}\cos^{2}\frac{\beta}{2}\sin^{2}\frac{\beta}{2}\\ -2e^{-i\varphi}\cos^{3}\frac{\beta}{2}\sin\frac{\beta}{2}\\ \cos^{4}\frac{\beta}{2}\end{pmatrix} \tag{54}\] In the BN limit, \(D_{\pm}=1\) in Eq. (52), the spinor yields phase-, spin- and HQVs, depending on the choice of winding \(\tau=2,0,1/2\) respectively. For example, when \(\beta=\theta\), \(\zeta^{\text{FM${}_{2}^{+}$}}\) and \(\zeta^{\text{FM${}_{2}^{+}$}}\) represent a FM\({}_{2}\) monopole with an associated line singularity along \(z>0\) and \(z<0\), respectively. The interpolating spinor then connects the monopole to a singular vortex in the BN limit. We summarize the interpolating states obtained in this manner in Table 8. _Non-Abelian vortex pair at the interface_--The BN and C magnetic phases support non-Abelian vortices. The BN \((1/2,1/4)\) and \((1/2,1/2)\) vortices belong to different conjugacy classes and do not commute [35]. A spinor representing parallel \((1/2,1/4)\) and \((1/2,1/2)\) vortices terminating on the FM\({}_{2}\)-BN interface can be constructed starting from Eq. (51), where the azimuthal angle \(\varphi_{1}\) determines the \((1/2,1/4)\) vortex line. When the \((1/2,1/2)\) vortex is added, the transformations of the BN order parameter that correspond to the spin-rotation charges of the vortex lines combine nontrivially. For a pure BN condensate, this vortex combination was constructed in Ref. [35] (with technical details in its Supplemental Material). Applying the same construction to Eq. 
(51) results in an inter \begin{table} \begin{tabular}{c c c c c} \hline \hline FM\({}_{2}\) limit & BN limit & \(\tau/\varphi\) & \(\alpha/\varphi\) & \(\gamma/\varphi\) & \(\beta\) \\ \hline Nonsingular vortex & Phase vortex & 2 & 1 & 0 & \(\beta(\rho)\) \\ Nonsingular spin vortex & Spin vortex & 0 & 1 & \(\pm 1\) & \(\beta(\rho)\) \\ Nonsingular vortex & Half-quantum vortex & 1/2 & 1 & -3/4 & \(\beta(\rho)\) \\ Dirac monopole & Phase vortex & 2 & 1 & 0 & \(\theta\) \\ Dirac monopole & Spin vortex & 0 & 1 & \(\pm 1\) & \(\theta\) \\ Dirac monopole & Half-quantum vortex & 1/2 & 1 & -3/4 & \(\theta\) \\ \hline \hline \end{tabular} \end{table} Table 8: Singular and nonsingular vortices and monopoles connecting across a FM\({}_{2}\)-BN interface, constructed through spatial dependent \(\tau\), \(\alpha\), and \(\gamma\) (given as multiples of the azimuthal angle \(\varphi\)), and \(\beta\) (given as a multiple of the polar angle \(\theta\) or, for nonsingular vortices, as a monotonically increasing function of the transverse radius \(\rho\)). \begin{table} \begin{tabular}{c c c c c} \hline \hline FM\({}_{2}\) limit & BN limit & \(\chi_{2}/\varphi\) & \(\chi_{-2}/\varphi\) \\ \hline Phase vortex & Phase vortex & \(k\) & \(k\) \\ Phase vortex & Spin vortex & \(-k\) & \(k\) \\ Phase vortex & Half-quantum vortex & 1 & 0 \\ Vortex-free & Half-quantum vortex & 0 & 1 \\ \hline \hline \end{tabular} \end{table} Table 7: Singular vortex connections across a FM\({}_{2}\)-BN interface, characterized by the phase windings \(\chi_{\text{m}}\) in Eq. (48). face spinor on the form (52), with the FM\({}_{2}^{\pm}\) limits given by \[\xi^{\text{FM}_{2}^{\pm}}=e^{i\frac{\pi}{2}}\begin{pmatrix}\cos^{4} \frac{\varphi_{2}}{4}\\ \frac{1}{2}e^{i\frac{\pi}{4}}e^{i\frac{\pi}{4}}\sin\frac{\varphi_{2}}{2}\left(1 +\cos\frac{\pi_{2}}{2}\right)\\ i\sqrt{\frac{3}{8}}e^{i\frac{\pi_{2}}{4}}\sin^{2}\frac{\varphi_{2}}{2}\\ \frac{1}{2}e^{i\frac{\pi}{4}}e^{i\frac{\pi_{2}}{4}}\sin\frac{\varphi_{2}}{2} \left(1-\cos\frac{\pi_{2}}{2}\right)\\ -e^{i\varphi_{1}}\sin^{4}\frac{\varphi_{2}}{4}\end{pmatrix} \tag{55}\] \[\zeta^{\text{FM}_{2}^{-}}=e^{i\frac{\pi}{2}}\begin{pmatrix}-\sin^{4 }\frac{\varphi_{2}}{4}\\ \frac{1}{2}e^{i\frac{\pi}{4}}\sin\frac{\varphi_{2}}{2}\left(1-\cos\frac{\pi_{ 2}}{2}\right)\\ -i\sqrt{8}e^{i\frac{\varphi_{2}}{4}}\sin^{2}\frac{\varphi_{2}}{2}\\ \frac{1}{2}e^{i\frac{\pi}{4}}e^{i\frac{\pi_{2}}{4}}\sin\frac{\varphi_{2}}{2} \left(1+\cos\frac{\pi_{2}}{2}\right)\\ e^{i\varphi_{1}}\cos^{4}\frac{\varphi_{2}}{4}\end{pmatrix} \tag{56}\] where \(\varphi_{2}\) is the azimuthal angle determining the \((1/2,1/2)\) vortex line. Note, however, that this construction yields a well-defined defect state only in the BN limit. The addition of the \((1/2,1/2)\) vortex results in a discontinuous semiplane at \(\varphi_{2}=0\), where the FM\({}_{2}\) order parameter jumps from \(F_{z}=2\) to \(F_{z}=-2\). Despite this, Eqs. (55) and (56) approximate the desired defect combination and the discontinuity corresponds only to a rapidly relaxing excitation, as illustrated by numerical simulation in Sec. IV.5. ## IV Numerical simulations of core structures We study the dynamics and energy relaxation for illustrative examples of interface-crossing defect states, as constructed in Sec. III, by numerically propagating the coupled Gross-Pitaevskii equations derived from the Hamiltonian density (1) using a split-step method [112]. 
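For orientation, the split-step idea alternates short kinetic-energy steps applied in Fourier space with potential and interaction steps applied in real space. The sketch below is a deliberately stripped-down, single-component illustration of that structure (the harmonic trap, interaction strength, grid, and time step are assumed values in dimensionless units, not the parameters of this work); the actual simulations propagate the full five-component spin-2 spinor, whose spin-dependent and spin-singlet interaction terms enter the real-space step.

```python
import numpy as np

# Minimal single-component split-step illustration (hbar = m = 1).
n, L = 32, 20.0
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
dx = x[1] - x[0]
k1d = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
KX, KY, KZ = np.meshgrid(k1d, k1d, k1d, indexing="ij")
K2 = KX**2 + KY**2 + KZ**2

V = 0.5 * (X**2 + Y**2 + Z**2)       # assumed harmonic trap
g = 100.0                            # assumed contact-interaction strength
nu = 1e-2                            # weak phenomenological damping
dt = 1e-3 * (1.0 - 1j * nu)          # complex time step, t -> (1 - i*nu) t

psi = np.exp(-(X**2 + Y**2 + Z**2) / 4.0).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx**3)

for _ in range(50):
    psi *= np.exp(-1j * 0.5 * dt * (V + g * np.abs(psi)**2))            # half real-space step
    psi = np.fft.ifftn(np.exp(-1j * dt * 0.5 * K2) * np.fft.fftn(psi))  # kinetic step in k-space
    psi *= np.exp(-1j * 0.5 * dt * (V + g * np.abs(psi)**2))            # half real-space step
    psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx**3)                      # restore the norm removed by damping
```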
Simulations are performed on a \(128^{\circ}\)-point grid, and we choose \({}^{87}\)Rb interaction parameters [101] with \(Nc_{0}=1.32\times 10^{4}\,\hbar\omega t^{3}\) (corresponding to \(N=2\times 10^{5}\) atoms in an \(\omega=2\pi\times 130\) Hz trap). The UN-BN and C-BN topological interfaces are stabilized through spatial variations of level shifts, while the C-FM and the FM\({}_{2}\)-BN interfaces by spatially varying the interaction strength \(c_{1}\) along the \(z\)-direction. A weak phenomenological damping, \(t\rightarrow(1-i\nu)\,t\) with \(\nu\approx 10^{-2}\), accounts for dissipation in time-evolution simulations. Energy relaxation is determined using imaginary-time propagation. ### Phase vortex crossing the UN-BN interface As our first example of evolution of interface-crossing defects, we consider a singly quantized phase vortex that perforates the UN-BN topological interface. The continuously interpolating initial state is given by Eq. (19), with \(k=1\) and \(\eta=\eta(z)\) reaching \(D_{-}=0\) for \(z>\xi_{a}\), and \(D_{+}=0\) for \(z<-\xi_{a}\). The interface is stabilized by choosing the quadratic level shift such that \(q=\pm|q_{\text{max}}|\), with \(q_{\text{max}}=0.1\,\hbar\omega\), away from the interface at \(z=0\), around which \(q\) interpolates smoothly over the region smaller than the singlet healing length \(\xi_{a}\). Figure 3a shows the resulting vortex state after time propagation to \(t=100\,\omega^{-1}\). An azimuthal instability at the interface results in a local separation of the vortex lines, which terminate at displaced points on the interface within an extended core region whose size is determined by \(\xi_{a}\) and where the UN, BN and C phases mix (Fig. 3b). On the line singularity itself, the vortex on the BN side of the interface fills with the UN phase, while on the UN side, the singularity is accommodated by excitation to the BN superfluid. ### Singular vortices terminating as monopoles on the UN-BN interface In Sec. III.1, we constructed spinors representing singular vortices terminating as monopoles on the UN-BN interface. Here we numerically simulate their energy relaxation at zero Zeeman shifts to reveal the dissipative deformation of the defect core. In the analogous spin-1 case, an isolated nematic point-defect core is energetically unstable against deformation into a singular HQV-ring [69, 75], called an Alice ring in analogy with similar objects in high-energy physics. We first consider a \((0,1)\) spin vortex in the BN phase along the negative \(z\)-axis terminating as a UN monopole on the interface at the origin. The initial state is given by Eqs. (26)-(28) with \(\beta=\theta\) and the parameter \(D_{+}(z)\) chosen such that \(D_{-}=0\) for \(z>\xi_{a}\), and \(D_{+}=0\) for \(z<-\xi_{a}\). The spin vortex rapidly develops a UN superfluid core, reducing its energy. However, the point defect cannot do the same without deforming to a vortex ring due to the "hairy ball" theorem [69]. This happens Figure 3: Complex-time evolution of singly quantized phase vortices connecting across the UN-BN interface. (a) Longitudinal cut of \(|A_{30}|^{2}\) and spherical harmonics in the \(z\), \(x\)-plane, showing the vortex core structure of two vortices terminating at the interface that originate from an initial line at \(\rho=0\). The locations of the two cores are indicated. 
(b) Transverse cuts of \(|A_{30}|^{2}\) and spherical harmonics at both sides of the interface, where the filled core structures lead to a mixture of phases including UN (\(|A_{30}|^{2}=1\)), C (\(|A_{30}|^{2}=2\)) and BN (\(|A_{30}|^{2}=0\)). The C regions penetrate the interface as illustrated by red isosurfaces at \(|A_{30}|^{2}=2\). within one trap period of imaginary-time evolution, shown in Fig 4a and b. The ring appears parallel to the interface and encircles the BN spin vortex around its termination point. It is readily identified as a \((0,1/2)\) BN spin-HQV (a "spin-Alice ring" [35]). The BEC away from the defect core retains the topological asymptotics of the point defect. As outlined in Sec. III.1, we may also construct a spinor where the roles of the BN and UN regions are switched, such that a UN spin vortex terminates as a BN monopole on the interface, forming a different boojum. The initial state has the form (28), with the single-phase limits Eqs. (29) and (30). The numerics in this case leads to a more complex geometry of an Alice-ring threaded by a pair of spin-HQVs, shown in Fig. 4c. Energy relaxation causes the core of the UN spin vortex (here appearing in \(z<0\)), initially connected to the monopole, to develop a composite-defect structure with a BN outer core, shown in Fig. 4d. ### Spin vortices crossing the C-BN interface We can also form interfaces using interpolating solutions for energetically excited states that decay due to energy relaxation when they exhibit sufficient dynamical stability. A topological interface between C and BN phases for the parameters of \({}^{87}\)Rb clearly represents such a case. Similarly to recent experimental observations [23], the C-BN interface relaxes into a uniform BN state over timescales sufficiently slower than the relevant vortex core dynamics, allowing for characterizations of vortex stability properties. We consider the C-BN interpolating spinor in Eq. (31) when \(c_{2}<0\). In the numerics we take \(q(z)\) such that \(q=-0.1\,\hbar\omega\) for \(z<-\xi_{a}\), and \(q=0\) for \(z>\xi_{a}\), continuously interpolating between the two values. The BN and C order parameters allow for different spin vortices to connect across the topological interface, as shown in Sec. III.2 and summarised in Tabs. 3 and 4. We examine a spin-HQV of Eq. (33) that penetrates the interface, represented by the spatially dependent \(D_{\pm}(z)\), such that \(D_{+}=0\) for \(z<-\xi_{a}\), and \(D_{\pm}=1\) for \(z>\xi_{a}\). The dynamics is shown in Fig. 5a, where the spherical-harmonics representation after two trap periods highlights the characteristic spin winding around \(z\) on both sides of the interface. For singly quantized spin vortices given by Eq. (35), dissipation rapidly develops a composite vortex core, characterized by a FM cylindrical structure across the interface and illustrated by an isosurface at \(|\langle\hat{\mathbf{F}}\rangle|=1\) in Fig. 5b. This configuration is unstable and eventually decays into vortices with FM\({}_{2}\) cores. Interestingly, in a pure BN phase, the same vortex instead develops a FM\({}_{1}\) core, highlighting how the presence of the interface can strongly influence core dissipation. 
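The ingredients of these runs can be summarized compactly: a quadratic level shift that interpolates from \(q=-0.1\,\hbar\omega\) on the BN side to \(q=0\) on the C side, and the interface-penetrating spin vortex of Eq. (33). The sketch below uses an assumed tanh smoothing of the profiles over a width of order \(\xi_{a}\) (the profile shape and grid are illustrative choices, not the simulation code) and verifies the normalization of the initial spinor.

```python
import numpy as np

z = np.linspace(-10.0, 10.0, 201)   # dimensionless axial coordinate
xi_a = 1.0                          # assumed interface width

# eta -> -1 (BN, D_+ = 0) for z << 0 and eta -> 0 (C, D_± = 1) for z >> 0,
# with q interpolating from -0.1 (in units of hbar*omega) to 0 accordingly.
step = 0.5 * (1.0 - np.tanh(z / xi_a))
eta = -step
q = -0.1 * step

D_plus, D_minus = np.sqrt(1.0 + eta), np.sqrt(1.0 - eta)

def spin_vortex(phi, k=1):
    """Interface-penetrating spin vortex of Eq. (33), evaluated along z
    at a fixed azimuthal angle phi (components m = 2, 1, 0, -1, -2)."""
    zero = np.zeros_like(z)
    return 0.5 * np.stack([np.exp(-1j * k * phi) * D_minus, zero,
                           1j * np.sqrt(2.0) * D_plus, zero,
                           np.exp(1j * k * phi) * D_minus])

# The spinor is normalized everywhere along the interface profile.
assert np.allclose(np.sum(np.abs(spin_vortex(0.3))**2, axis=0), 1.0)
```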
### Fractional and nonsingular vortices at the C-FM interface The interface between C and FM\({}_{2}\) phases allows the smooth connection of fractionally quantized vortices with singular and nonsingular vortices as well as vortex-free states on the Figure 5: Complex-time evolution of spin-vortices penetrating a C-BN interface. (a) Spin-HQV crossing the interface. Transverse cuts of \(|A_{30}|^{2}\) on both sides of the interface together with the spherical-harmonic representation of the order parameter show a UN core, highlighted by the isosurface \(|A_{30}|^{2}=1\). (b) Singly quantized spin vortices connecting across the interface developing a composite-defect structure with FM outer core, shown by \(|\langle\hat{\mathbf{F}}\rangle|\) on transverse cuts on both C and BN sides and isosurface \(|\langle\hat{\mathbf{F}}\rangle|=1\). The spherical-harmonics representation shows the spin rotation of the order parameter, defined in Eq. (34), about the \(\hat{\mathbf{e}}_{(1,0,0)}\) axis. Figure 4: Energy relaxation of spin vortices terminating as monopoles on the UN-BN interface. (a, b) A BN spin vortex terminating as a UN monopole. Cuts of \(|A_{30}|^{2}\) and spherical harmonics show the UN monopole for \(z>0\) attached to a vortex for \(z<0\) in the BN phase. (c, d) Analogous representation for an initial singular UN spin vortex in \(z<0\) terminating as a BN monopole in \(z>0\). Energy relaxation leads to Alice rings at the interface, illustrated by isosurfaces at \(|A_{30}|^{2}=1/2\) and longitudinal cuts of \(|A_{30}|^{2}\). In both (b) and (d) the spherical harmonics show the nematic hedgehog and the continuous winding of the order parameter around the spin-HQV rings. FM\({}_{2}\) side (Sec. III.3 and Tabs. 6 and 6). We simulate the dynamics of two example defect states: a C \((1/3,-1/3)\) vortex connecting to a singly quantized FM\({}_{2}\) vortex, and a C \((2/3,1/3)\) vortex connecting to a vortex-free FM\({}_{2}\) state. The initial spinor wave functions are given by Eq. (41) in the former example, and (42) in the latter, with spatially dependent \(D_{nm}(z)\) interpolating between \(D_{2}=1,D_{-1}=\sqrt{2}\) for \(z<-\xi_{F}\), and \(D_{-1}=0\) for \(z>\xi_{F}\). For our numerical simulation, we stabilize the interface by introducing a sign changing spin-dependent interaction \(c_{1}\), such that \(c_{1}>0\) on the C side, assumed for \(z<-\xi_{F}\), and \(c_{1}<0\) on the FM side for \(z>\xi_{F}\). Figure 6 shows the core structures emerging after approximately ten trap periods of complex-time evolution. The spin magnitude as the overlaid spherical-harmonics representation of the order parameter show that the core of the \((1/3,-1/3)\) vortex (Fig. 6a) fills with the FM\({}_{1}\) phase, extending also throughout the FM\({}_{2}\) region to form the outer core of a composite defect. By contrast, the \((2/3,1/3)\) vortex forms a FM\({}_{2}\) core that smoothly connects to a vortex-free state on the FM\({}_{2}\) side of the interface (Fig. 6b). As an additional illustrative example, we also consider a singular, doubly quantized C phase vortex connecting to a nonsingular vortex in the FM\({}_{2}\) limit. This state was constructed in Eq. (43), choosing \(\beta=\beta(\rho)\) to be a function of the radial distance such that \(\beta(0)=0\) on the vortex line and \(\beta(\rho)=\pi/2\) far away, forming a Mermin-Ho texture [113] on the FM side. Figure 7 shows the rapidly forming defect core (half a trap period of energy relaxation). 
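Before turning to the relaxed core structures in more detail, it is useful to spell out the initial-state profiles used in these C-FM\({}_{2}\) runs: a spin-dependent interaction \(c_{1}(z)\) that changes sign across the interface, and interpolation amplitudes \(D_{2}(z)\), \(D_{-1}(z)\) running from the C values \((1,\sqrt{2})\) to \((\sqrt{3},0)\) in the FM\({}_{2}\) limit (the value \(D_{2}=\sqrt{3}\) is inferred from keeping \(|D_{2}|^{2}+|D_{-1}|^{2}=3\), so that the spinor of Eq. (43) remains normalized). The sketch below uses assumed tanh profiles and illustrative interaction magnitudes; it is not the production code.

```python
import numpy as np

z = np.linspace(-10.0, 10.0, 201)   # dimensionless axial coordinate
xi_F = 1.0                          # assumed interface width

# Sign-changing spin-dependent interaction: c1 > 0 on the C side (z << 0),
# c1 < 0 on the FM side (z >> 0); the magnitudes here are illustrative only.
c1_C, c1_FM = +1.0, -1.0
c1 = c1_C + (c1_FM - c1_C) * 0.5 * (1.0 + np.tanh(z / xi_F))

# Interpolation amplitudes of Eq. (43): (D_2, D_-1) = (1, sqrt(2)) in the C
# limit and (sqrt(3), 0) in the FM_2 limit, keeping |D_2|^2 + |D_-1|^2 = 3.
D_m1 = np.sqrt(2.0) * 0.5 * (1.0 - np.tanh(z / xi_F))
D_2 = np.sqrt(3.0 - D_m1**2)

assert np.isclose(D_2[0], 1.0, atol=1e-3) and np.isclose(D_m1[0], np.sqrt(2.0), atol=1e-3)
assert np.isclose(D_2[-1], np.sqrt(3.0), atol=1e-3) and np.isclose(D_m1[-1], 0.0, atol=1e-3)
```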
The FM\({}_{2}\) phase quickly intrudes on the C side to fill the core of the singular C vortex. The spherical-harmonics representation shows the \(4\pi\)-winding of the condensate phase about the C vortex, coupled to a spin rotation. The doubly quantized vortex line is, however, not stable and quickly decays via an azimuthal instability under further energy relaxation. ### Non-Abelian vortex pair terminating on a FM-BN interface Intriguingly, the order-parameters with discrete polytope symmetries in spin-2 BECs exhibit non-Abelian vortices [30; 34; 35]. As our final example, we examine a pair of non-commuting BN \((1/2,1/4)\) and \((1/2,1/2)\) vortices, a distance \(2\ell\) apart, terminating on a BN-FM\({}_{2}\) interface. The initial spinor was constructed in Eqs. (55) and (56) to approximate Eq. (52), with the discontinuity in the FM\({}_{2}\) limit rapidly disappearing after short energy relaxation. Similarly to the C-FM case (Sec. IV.4), the interface is stabilized in the numerical simulation by spatially varying the \(c_{1}\) interaction strength along the \(z\) axis, such that \(c_{1}>0\) on the BN side at \(z<-\xi_{F}\) and \(c_{1}>0\) on the FM side at \(z>\xi_{F}\), at fixed \(c_{2}\). Figure 8 shows the defect core relaxation after one trap period of imaginary-time propagation. Both non-commuting BN HQVs terminate to a nonsingular FM\({}_{2}\) spin texture. For the chosen interaction strengths \(c_{1,2}\), the HQV cores surprisingly rapidly relax to the C phase (consistent with the core structure of a single BN \((1/2,1/4)\) vortex [35]), resulting in a spontaneous C-FM\({}_{2}\) interface between the vortex cores and the FM\({}_{2}\) phase, as illustrated in Fig. 8b. ## V Concluding remarks and experimental prospects To summarise, we have demonstrated the potential of spin-2 BECs as rich laboratories for topological-interface physics by systematically constructing spinor wave functions for defects and textures connecting across the interfaces between UN, BN, C and FM\({}_{2}\) phases. These wave functions are derived from continuously interpolating steady-state solutions to the spin-2 Gross-Pitaevskii equations, and include connecting Figure 7: Energy relaxation of a doubly quantized phase vortex in the C phase connecting to a FM\({}_{2}\) nonsingular vortex. (a) Transverse cuts of \(|\langle\hat{\mathbf{F}}\rangle|\) on both sides of the interface together with the spherical-harmonic representation of the order parameter. The C vortex develops a FM\({}_{2}\) core that forms an interface-penetrating continuation of the nonsingular FM\({}_{2}\) vortex, as shown by the isosurface at \(|\langle\hat{\mathbf{F}}\rangle|=1\). (b) Longitudinal cut of \(|\langle\hat{\mathbf{F}}\rangle|\) and spherical-harmonic representation, showing the fountain-like spin texture on the FM\({}_{2}\) side. Figure 6: Defect-core structures after complex-time evolution of initial configurations containing a fractional vortex on the C side of a C-FM\({}_{2}\) interface. Transverse cuts of \(|\langle\hat{\mathbf{F}}\rangle|\) on both sides of the interface are shown together with the spherical-harmonic representation of the order parameter. (a) \((1/3,-1/3)\) vortex in the C phase connecting to a singly quantized FM\({}_{2}\) phase vortex, forming a FM\({}_{1}\) core that penetrates the interface, as highlighted by the isosurface \(|\langle\hat{\mathbf{F}}\rangle|=1\). (b) \((2/3,1/3)\) vortex in the C phase terminating at the interface. 
The FM\({}_{2}\) phase penetrates the interface, forming the core of the C vortex. singular vortices carrying mass and/or spin circulation as well as vortices that terminate at the interface to a vortex-free state or as monopoles. For a selection of examples, we have simulated their time evolution and energy relaxation, demonstrating the appearance of non-trivial core structures that include the formation of composite defects as well as the deformation of point defects into Alice-ring structures at the interface. The discrete, polytope order-parameter symmetries of the spin-2 BN and C phases also open the possibility of using spin-2 BECs to explore non-Abelian interface physics both theoretically and experimentally. We have demonstrated an example by numerically simulating a pair of non-commuting HQVs that terminates on the FM\({}_{2}\)-BN interface. The experimental creation of magnetic-phase interfaces in BECs has thus far been restricted to vortex cores [42, 23, 43]. Creating extended interfaces and populating them with the desired topological excitations poses significant additional technical challenges. Nevertheless, we can sketch possible creation methods by combining relatively straightforward extensions of existing technologies, which have already been shown to adjust ground state magnetic phases and controllably generate vortex excitations. Topological interfaces can be created by engineering spatial variation in interaction strengths, as discussed earlier. This could in principle be achieved by controlling microwave Feshbach [110] resonance conditions with optical ac Stark shifts, or by using optical Feshbach resonances [114]. However, here we have focused on interfaces that are obtained through spatially varying Zeeman shifts \(p,q\) that could be generated by ac Stark shifts of lasers or microwaves [95]. For example, the value of \(q\) in the system Hamiltonian (1) can be modified by applying a linearly polarized microwave field detuned from the ground state hyperfine transition [95], as realized for both \({}^{87}\)Rb [115] and \({}^{23}\)Na [53] in the spin-1 manifold. The spin-2 case is conceptually straightforward where the presence of the additional levels provides additional useful degrees of freedom that are conventionally expressed in terms of dynamical scalar, vector, and tensor polarizabilities [116], extending the description beyond the single parameter \(q\). Another intriguing possibility is to mimic microwave-induced level shifts with stimulated Raman dressing between the two ground-state hyperfine levels. Such transitions are highly state-selective and, with proper choice of polarization, intensity, and frequency, can differentially dress selected Zeeman sublevels in a spatially dependent fashion. For example, UN-BN interface could be achieved by illuminating half of the condensate with light that depresses the energy of the \(m=0\) state with respect to the others, and the other half with light that depresses the energy of the \(m=\pm 2\) states. It remains to populate these two regions with atoms in the appropriate magnetic phases, which could be achieved by magnetic phase exchange [43] in a spatially selective fashion. Ideally, the resulting condensate would change smoothly from its ground state UN phase to the ground state BN phase at the boundary between the beams. Vortices can currently be introduced through a variety of methods, including direct phase imprinting through optical [117, 118] or magnetic [119] means, as well as by stirring [120, 121]. 
One possible approach is to use a combination of magnetic phase imprinting [42, 119] followed by magnetic phase conversion [43, 23], using optical fields instead of microwaves to achieve the requisite spatial selectivity. For example, a phase vortex crossing a UN-BN interface could be generated from a phase vortex in a spin-1 polar condensate (\(F=1\), \(m=0\)) followed by optical pulse sequences to convert the spin-1 polar phase into spin-2 BN and UN phases [23] in the two regions. Similarly, a vortex-free to phase vortex interface could be created by using magnetic phase imprinting techniques on a spin-1 mixed polar and FM condensate (\(F=1\), \(m=0\) and \(m=1\)), where the phase imprinting yields two vortices in the \(m=-1\) spinor component and no vortices in the \(m=0\) spinor component. Initially displaced from one another spatially by a magnetic field gradient, one of these components could subsequently be optically converted to the UN phase (\(F=2\), \(m=0\)) and the other to the BN phase (\(F=2\), \(m=\pm 2\)), leading to the desired topological configuration in the milieu of ground states described previously. Approaches such as these can be envisioned for creating spin vortices and HQVs. ###### Acknowledgements. G.B. and M.O.B. as well as J.R. acknowledge financial support from the EPSRC, Grant refs. EP/V03832X/1 and EP/S002952/1, respectively. D.S.H. acknowledges support from NSF grant PHY-2207631. The numerical results presented in this paper were carried out on the High Performance Computing Cluster supported by the Research and Specialist Computing Support service at the University of East Anglia. Figure 8: Energy relaxation of a pair of non-commuting BN (\(1/2,1/4\)) and (\(1/2,1/2\)) HQVs terminating on the C-FM\({}_{2}\) interface. (a) Transverse cuts of \(|A_{20}|^{2}\) at the interface and both sides, together with the spherical harmonics representation of the order parameter. Both initial HQVs develop C cores, as shown by the isosurface at \(|A_{20}|^{2}=1/10\) and order-parameter representation. (b) Longitudinal cut of \(|A_{20}|^{2}\) together with spherical harmonics, showing the continuous FM\({}_{2}\)-C transition occurring at the termination point of the (\(1/2,1/4\)) vortex (similar transition in the (\(1/2,1/2\)) vortex core not shown).
2309.05790
SeBaSi system-level Integrated Access and Backhaul simulator for self-backhauling
Millimeter wave (mmWave) and sub-terahertz (THz) communications have the potential to increase mobile network throughput drastically. However, the challenging propagation conditions experienced at mmWave and beyond frequencies can potentially limit the range of the wireless link down to a few meters, compared to up to kilometers for sub-6 GHz links. Thus, increasing the density of base station deployments is required to achieve sufficient coverage in the Radio Access Network (RAN). To this end, the 3rd Generation Partnership Project (3GPP) introduced wireless backhauled base stations with Integrated Access and Backhaul (IAB), a key technology to achieve dense networks while preventing the need for costly fiber deployments. In this paper, we introduce SeBaSi, a system-level simulator for IAB networks, and demonstrate its functionality by simulating IAB deployments in Manhattan, New York City and Padova. Finally, we show how SeBaSi can represent a useful tool for the performance evaluation of self-backhauled cellular networks, thanks to its high level of network abstraction, coupled with its open and customizable design, which allows users to extend it to support novel technologies such as Reconfigurable Intelligent Surfaces (RISs).
Amir Ashtari Gargari, Matteo Pagin, Andrea Ortiz, Nairy Moghadas Gholian, Michele Polese, Michele Zorzi
2023-09-11T19:49:15Z
http://arxiv.org/abs/2309.05790v1
# Demo:[SeBaSi] system-level Integrated Access and Backhaul simulator for self-backhauling ###### Abstract millimeter wave (mmWave) and sub-terahertz (THz) communications have the potential of increasing mobile network throughput drastically. However, the challenging propagation conditions experienced at mmWave and beyond frequencies can potentially limit the range of the wireless link down to a few meters, compared to up to kilometers for sub-6GHz links. Thus, increasing the density of base station deployments is required to achieve sufficient coverage in the Radio Access Network (RAN). To such end, 3rd Generation Partnership Project (3GPP) introduced wireless backhauled base stations with Integrated Access and Backhaul (IAB), a key technology to achieve dense networks while preventing the need for costly fiber deployments. In this paper, we introduce SeBaSi, a system-level simulator for IAB networks, and demonstrate its functionality by simulating IAB deployments in Manhattan, New York City and Padova. Finally, we show how SeBaSi can represent a useful tool for the performance evaluation of self-backhauled cellular networks, thanks to its high level of network abstraction, coupled with its open and customizable design, which allows users to extend it to support novel technologies such as Reconfigurable Intelligent Surfaces (RISs). mmWave, IAB, self-backhauling, wireless backhaul, sioma ## I Introduction 5G mobile networks introduced the support for mmWave communications, with a further expansion towards sub-THz envisioned for 6th generation (6G) networks. This progressive shift from sub-6 GHz frequencies towards the upper portion of the spectrum represents the main technology enabler towards achieving multi-Gbps mobile throughput. Nevertheless, mmWave and sub-THz frequencies are affected by high propagation and penetration losses, as well as by a marked susceptibility to blockage, which degrade the reliability and capacity of wireless networks operating in this portion of the spectrum [1, 2]. To mitigate these unfavorable propagation characteristics, it is paramount to maximize the Line-of-Sight (LOS) coverage, and to increase the density of base station deployments with respect to traditional sub-6 GHz cellular networks. In turn, to make ultra-dense deployments a viable option from both a logistic and an economic standpoint, the 3GPP has standardized an extension of 5th generation (5G) NR known as IAB [3, 4]. The latter leverages the Distributed Unit (DU)/Central Unit (CU) split to introduce Next Generation Node Bases (gNBs) with wireless backhauling capabilities, i.e., IAB nodes, thus reducing the need for fiber drops. The IAB nodes eventually terminate the various 5G NR interfaces at a gNB connected by fiber to the Core Network (CN) and the Internet, i.e., the IAB donor. In this context, the research community has been studying how to optimize radio resource allocation, scheduling, route selection, topology construction, and deployment planning [5, 6]. Given the lack of access to actual 5G (and beyond) network deployments, previous research efforts relied heavily on homegrown physical layer simulators to evaluation performance [7]. However, these simulators usually feature a heavily simplified model of actual IAB deployments, since they introduce strong assumptions in the upper layers of the protocol stack. Therefore, they are incapable of capturing the real network dynamics. Similarly, existing system-level simulators are outdated, and thus also fail to properly model a Rel. 
17 IAB network [8]. Moreover, an experimental evaluation is prohibitive since researchers usually do not have access to commercial deployments at scale. To fill this gap, we introduce SeBaSi, an IAB simulator which accurately models large-scale wireless backhauled deployments. In this demo, we describe SeBaSi, and showcase examples of different IAB cellular networks, for which we report system-level Key Performance Indicators (KPIs). This paper is organized as follows. In Sec. II, we describe the system model. Then, we introduce the proposed system-level simulator in Sec. III, and describe the contents of the demo in Sec. IV. Finally, we discuss possible future extensions of our simulator in Sec. V. ## II System Model In this work, we consider a Time Division Multiple Access (TDMA) cellular system where both self-backhauled and wired base stations exchange data with the User Equipments (UEs). In accordance with 3GPP terminology, we refer to the former base stations as IAB-nodes (BS-nodes) and to the latter as IAB-donors (BS-donors). The IAB-nodes exchange data with the CN and the Internet via a wireless multi-hop connection to an IAB-donor. We assume that the IAB-nodes incorporate two Radio Frequency (RF) chains. One RF chain is reserved for communication with cellular users (access network), while the other is utilized for self-backhauling. In line with the 3GPP standard [4], we assume half-duplex and in-band self-backhauling, such that in each time slot the IAB-nodes can either transmit, receive, or remain idle. Without loss of generality, in this analysis, we focus on uplink traffic only. ## III SeBaSi SeBaSi is a Python system-level simulator, built on top of the open-source 5G and 6G physical layer simulator Sionna™ [9], which models 3GPP Rel. 17 IAB cellular networks. To introduce self-backhauling functionalities in Sionna, we have implemented a number of system-level features. These extensions, which we describe in more detail in [10], comprise a Medium Access Control (MAC)-level scheduler, layer-2 buffers, and backhaul path selection algorithms. Moreover, to better align Sionna's physical layer to that of 5G-NR, we also implemented 5G-NR procedures such as codebook-based beamforming and Signal to Interference plus Noise Ratio (SINR) computation. Additionally, in [11] we further extend our simulator to support sub-THz links in the backhaul, with the goal of providing a performance evaluation of the potential of sub-THz frequencies for 6G IAB. Fig. 1 depicts the general structure of SeBaSi. For the mmWave channel, we rely on the 3GPP TR 38.901 channel model provided by Sionna. Additionally, we model sub-THz channels using Network Simulator 3 (ns-3) Terasim [12], and produce traces which are imported into our simulator. For the upper layers, we introduce the Backhaul Adaptation Protocol (BAP) layer, to handle routing within the wireless backhaul network [13], a MAC-level scheduler that operates in a TDMA fashion, and hop-by-hop Radio Link Control (RLC) channels for modeling layer-2 buffering and data transmission. SeBaSi, which we make publicly available1, allows users to configure most simulation parameters, such as the simulation's runtime and mode, the packet size, and the source rate (either per UE or system-wise). The considered simulation modes are _run_ mode and _debug_ mode, with the latter providing additional control signals and related information.
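To give a concrete feel for this kind of configuration, the sketch below groups the parameters listed above into a single Python object; this is a minimal illustration with hypothetical field names, not SeBaSi's actual interface.

```python
# Minimal illustration only: the field names below are hypothetical and do not
# reproduce SeBaSi's actual API; they simply mirror the options described in the text.
from dataclasses import dataclass

@dataclass
class SimulationConfig:
    runtime_s: float = 1.0          # simulated time
    mode: str = "run"               # "run" or "debug" (debug exposes extra control signals)
    packet_size_bytes: int = 1500   # application packet size
    source_rate_mbps: float = 50.0  # offered traffic
    per_ue_rate: bool = True        # True: rate per UE, False: system-wide rate

cfg = SimulationConfig(mode="debug", source_rate_mbps=100.0, per_ue_rate=False)
print(cfg)
```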
Moreover, users can customize the scenario by choosing the number and location of UEs and base stations, and the IAB topology, i.e., the wireless backhaul links among gNBs. For the backhaul scheduler, i.e., the entity which dictates which backhaul links to schedule during each time slot, users can either define custom policies, or choose among: SCAROS [7], MLR [14], Safehaul [10], and SINR-based [11]. In addition, the links can be configured to operate either at mmWave, sub-THz, or a combination of the two frequencies. Footnote 1: [https://github.com/TUDA-wise/safehaul_infocom2023](https://github.com/TUDA-wise/safehaul_infocom2023) The simulator outputs an extensive set of system-level KPIs, such as end-to-end latency, throughput, and packet drop rate. Each of these metrics can be displayed per IAB node, or for the entire network. In addition, we also make available internal and/or lower layer metrics such as the generation and arrival time of each packet, destination UE, and its backhaul path. Furthermore, we report the load on each IAB node at each time step and for both the access and backhaul interfaces.

Fig. 1: Overall design of SeBaSi. The red blocks represent our additions to the baseline simulator, i.e., Sionna [9].

## IV Demo Description In this demonstration, we simulate in SeBaSi two cellular deployments whose topologies mimic those of the cities of New York and Padova, respectively. To this end, we gathered 4th generation (4G) evolved Node B (eNB) locations from the actual deployments of three mobile network operators, and considered them as either 5G IAB nodes or donors. The overall objective of the demo is to provide examples of how to interact with SeBaSi, with different IAB deployments, and how to tune simulation parameters such as the system source rate and the frequency spectrum, by either defining new scenarios or by using the built-in examples of SeBaSi. Moreover, we provide examples of SeBaSi's output traces by displaying in real-time KPI metrics and network routing information, such as the one depicted in Fig. 2. Specifically, we first demonstrate how to work with SeBaSi by instantiating the Manhattan, New York example. Then, we instantiate the Padova, Italy topology, and we demonstrate the performance of the backhaul scheduler Safehaul [10] in both scenarios. Safehaul is a risk-averse Reinforcement Learning (RL) solution to ensure reliability in IAB mmWave networks. We conclude the demonstration by discussing future extensions and test-case scenarios. ## V Future work The deployment of IAB nodes is becoming increasingly important for seamless connectivity at mmWave frequencies as the number of UEs and their anticipated Quality of Service (QoS) grows. The operational cost of adding more IAB nodes to meet the network's increased demand is high, so deploying more of them to mitigate this issue is not always possible. A more cost-effective solution is to use RISs. RISs are energy-efficient smart surfaces that can change the direction of the impinging signal towards desired locations. The propagation characteristics may be enhanced in significant ways thanks to RISs [15]. This is because mmWave frequencies suffer from high propagation and penetration losses, which increase with blockage, especially in urban scenarios. Using RISs to their full extent can thus largely mitigate this problem. Yet, the coexistence of IABs and RISs presents particular challenges.
The coordination between the RIS phase shifters and the configuration of the IAB nodes would require solving complex joint optimization problems. Also, the placement of RISs alongside IAB nodes in a network would require more information on the UEs' locations. Moreover, RISs can interfere with other components of a cellular network; therefore, it is essential to carefully manage them such that interference is avoided. Because of the aforementioned factors, system-level simulation is required to verify the potential of RISs in realistic scenarios. Both IAB and RIS serve as relays to increase the range of communication; IAB is active, whereas RIS is passive. Both of them function as relay elements from a software perspective, but channel modeling and beamforming in RIS are difficult [16]. We will attempt to integrate RIS into SeBaSi using the new ray-tracing feature of Sionna. In summary, extending SeBaSi to RIS-assisted scenarios will open a new direction in the future of mmWave networks; however, several obstacles have to be addressed in order to fully exploit its potential. ## VI Conclusion We have described and showcased SeBaSi, a system-level IAB simulator, and demonstrated an example of self-backhauling in Manhattan, New York City. In the demonstration, we have used the Safehaul [10] self-backhauling scheduler in SeBaSi to evaluate the IAB network based on system-level KPIs. Thanks to the open-source development of the tool, researchers can define new scenarios or extend the simulator to support novel technologies such as RIS. ## Acknowledgment This paper is partially supported by EU H2020 MSCA ITN project MINTS (grant no. 861222), the Collaborative Research Center 1053 MAKI and the BMBF project Open6GHub (Nr. 16KISK014).
2309.15583
Impact of surface anisotropy on the spin-wave dynamics in thin ferromagnetic film
The spin-wave dynamics in the thin CoFeB film in Damon-Eshbach geometry are studied in three cases of boundary conditions -- free boundary conditions, symmetrical surface anisotropy, and one-sided surface anisotropy. The analytical model created by Wolfram and De Wames was extended to include perpendicular surface anisotropy in the boundary conditions. Its comparison with numerical simulations demonstrates perfect agreement between the approaches. The analysis of the dispersion relation indicates that the presence of surface anisotropy increases the avoided crossing size between the Damon-Eshbach mode and perpendicular standing modes. Additionally, asymmetrical one-sided surface anisotropy induces nonreciprocity in the dispersion relation. In-depth analysis of the avoided crossing size is conducted for systems with different boundary conditions, different thicknesses, surface anisotropy constant values, and external magnetic fields. It shows the significant role of the strength of surface localization of the Damon-Eshbach mode and the symmetry of the perpendicular standing modes in the avoided crossing broadening. Interestingly, for a specific set of parameters the interaction between the particular modes can be suppressed, resulting in a mode crossing. Such a crossing, which occurs only on one side of the dispersion relation in a one-sided surface anisotropy system, can be utilized in nonreciprocal devices.
Krzysztof Szulc, Julia Kharlan, Pavlo Bondarenko, Elena V. Tartakovskaya, Maciej Krawczyk
2023-09-27T11:32:46Z
http://arxiv.org/abs/2309.15583v1
# Impact of surface anisotropy on the spin-wave dynamics in thin ferromagnetic film ###### Abstract The spin-wave dynamics in the thin CoFeB film in Damon-Eshbach geometry are studied in three cases of boundary conditions--free boundary conditions, symmetrical surface anisotropy, and one-sided surface anisotropy. The analytical model created by Wolfram and De Wames was extended to include perpendicular surface anisotropy in boundary conditions. Its comparison with numerical simulations demonstrate perfect agreement between the approaches. The analysis of the dispersion relation indicates that the presence of surface anisotropy increases the avoided crossing size between Damon-Eshbach mode and perpendicular standing modes. Additionally, asymmetrical one-sided surface anisotropy induces nonreciprocity in the dispersion relation. In-depth analysis of the avoided crossing size is conducted for systems with different boundary conditions, different thicknesses, surface anisotropy constant values, and external magnetic fields. It shows the significant role of the strength of surface localization of Damon-Eshbach mode and the symmetry of perpendicular standing modes in the avoided crossing broadening. Interestingly, for specific set of parameters the interaction between the particular modes can be suppressed, resulting in a mode crossing. Such a crossing, which occurs only on one side of the dispersion relation in a one-sided surface anisotropy system, can be utilized in nonreciprocal devices. ## I Introduction In recent years, spin waves (SWs), which are collective, harmonic oscillations of spins that propagate within magnetic materials, have received increased attention due to their potential to transport and process information with the reduction of Joule heating and energy dissipation [1]. One of the interesting properties of propagating SWs in thin magnetic films in Damon-Eshbach (DE) geometry [2] is the hybridization between the fundamental SW mode and perpendicular standing SW (PSSW) modes [3; 4; 5; 6; 7; 8; 9]. This may result in the formation of avoided crossings (ACs), which can be a crucial physical characteristic for the development of magnonic devices such as filters and phase shifters. However, the control of the dynamic magnetic properties is a fundamental problem for the implementation of these devices. It has been demonstrated that surface anisotropy significantly impacts the dispersion relation and the AC size between propagating SW mode and PSSW modes [10]. Another studies have shown that surface anisotropy can be controlled by the voltage applied across the ferromagnetic-metal/insulator heterostructures due to the charge accumulation at the interface [11; 12; 13] or across insulator/ferromagnet/insulator multilayer due to the dielectric polarization influence on the interface [14]. Therefore, it can be concluded that hybridization between fundamental SW mode and higher-order PSSW modes could be controlled by electric field. However, there has been no systematic study on the influence of surface anisotropy on the hybridization between SW modes in the ferromagnetic film. In general, there are two alternative approaches which can be used for the analytical evaluation of dipole-exchange SW spectrum including interaction between fundamental SW mode and PSSW modes. One approach, proposed by Wolfram and De Wames [15; 16], involves solving a sixth-order differential equation derived from Maxwell's equations along with equations of the magnetization motion. 
The extension of Damon and Eshbach's theory for pure dipolar SWs by including exchange interactions provides evidence that, as a result of exchange, the surface and bulk modes mix. This theoretical approach was used for explanation of the first experiments on magnon branch repulsion in thin ferromagnetic films with in-plane magnetization [17; 18] and in thin single-crystal disks of yttrium-iron garnet [19]. Much later, researchers applied the same method to characterize SWs in infinitely long cylindrical wires with magnetization along the wire [20; 21]. However, it turned out that the Wolfram and De Wames approach is not suitable for a broad range of sample geometries and magnetic moment directions. In fact, its effectiveness is limited to cases of unbroken symmetry in infinite films, as well as in infinite wires with a magnetic moment along the wire axis, as previously noted. For more general cases, Kalinikos and Slavin proposed an alternative approach for mixed exchange boundary conditions in thin films and the arbitrary direction of external magnetic field and magnetic moment relative to the film plane [22; 23]. The first step of this method is to solve Maxwell's equations separately in the magnetostatic approximation [24]. Then, the dynami cal scalar potential obtained in the form of the tensorial magnetostatic Green's functions [25] is inserted into the equations of motion for the magnetic moment (linearized Landau-Lifshitz equations), and the resulting integro-differential equation is solved through perturbation theory. This method has resolved most theoretical issues of spin dynamics in laterally confined magnetic elements under different magnetic field configurations. It has been previously applied to describe SW dynamics in isolated magnetic stripes [26] as well as rectangular [27; 28], cylindrical [29], and triangular [30] magnetic dots. A notable benefit of the Kalinikos and Slavin method is that it utilizes a simple analytical formula to achieve good agreement between theory and experiment for thin, circular nanoelements with perpendicularly-magnetized states, such as rings [31] and dots [32]. In more complex cases with broken cylindrical symmetry, it is necessary to consider a greater number of perturbation theory terms (i.e., the interaction of SW modes) [33; 34]. However, the applicability of this theory to any case of nanostructures and geometry of applied fields is not in question. The method of Wolfram and De Wames turned out to be somewhat forgotten, which forced Harms and Duine [35] to "rediscover" this ansatz since in some cases it provides a more direct path to the result. A comprehensive review of the two mentioned approaches with an analysis of their applicability for various cases of the direction of the external field and magnetization in a ferromagnetic film is given by Arias [36]. The potential drawbacks of the Kalinikos-Slavin method were identified, including possible inaccuracies of the results obtained in the region of hybridization of SW modes, as well as the complexity of describing the interaction of surface and bulk modes. The theoretical approach proposed in [36] is based on the method developed by Wolfram and De Wames and provides strict solutions to the problem. It is important to note that the hybridization of SWs was only examined in the case of mixed symmetrical boundary conditions. In this paper, we conduct a systematical analysis of the impact of surface anisotropy on the SW hybridization, which was presented in [10]. 
We are confronted with a choice between the two methods described above for calculating the dynamics of SWs. Following the conclusions of Arias [36], the Wolfram and De Wames method not only leads to the goal more efficiently in this case, despite the asymmetry of the boundary conditions, but also provides a rigorous solution. This is in contrast to the Kalinikos-Slavin perturbation theory which requires a significant number of iterations and provides only an approximate solution. Therefore, we compared the dispersion relations of SWs in the DE geometry using symmetrical and asymmetrical boundary conditions via the extended Wolfram and De Wames approach. The results of analytical calculations perfectly matched the numerical simulations on the example of a CoFeB thin film. We provide an in-depth analysis of the dispersion relations, SW mode profiles, and the effect of material parameters on the SW coupling in the frame of the AC size. ## II Methods ### Investigated system The system under investigation is presented in Fig. 1(a). It is a thin CoFeB film of thickness \(L\) magnetized in-plane in the \(y\)-direction by the external magnetic field \(H_{0}\). We consider the DE geometry, i.e., the SWs propagating along the \(x\)-direction, perpendicular to the external field \(H_{0}\). The \(z\)-axis corresponds to the direction perpendicular to the film plane, where the surfaces of the film are located at \(z=\pm L/2\). The following parameters were used for CoFeB: saturation magnetization \(M_{\rm S}=1335\,\)kA/m, exchange stiffness \(A_{\rm ex}=15\,\)pJ/m, and gyromagnetic ratio \(\gamma=30\,\)GHz/T. In this study, we consider three cases of boundary conditions: free boundary conditions (FBC) where the surface anisotropy is absent in the system [Fig. 1(b)]; symmetrical surface anisotropy (SSA), i.e., the surface anisotropy of equal strength is present on both boundaries of the film [Fig. 1(c)]; one-sided surface anisotropy (OSA) where the bottom surface has non-zero surface anisotropy while the top surface is described with FBC [Fig. 1(d)].

Figure 1: (a) A general schematic of the system and coordinate system. (b-d) Schematics of the boundary conditions investigated in the manuscript: (b) free boundary conditions, (c) symmetrical surface anisotropy, and (d) one-sided surface anisotropy.

### Analytical model We use the approach proposed by Wolfram and De Wames [15; 16] to calculate the dispersion relation in the DE geometry in the dipole-exchange regime and extend it to include the perpendicular surface anisotropy introduced by Rado and Weertman [37]. The magnetic free energy of the system can be presented as \[F=\int\left(-\mu_{0}\mathbf{H}_{0}\cdot\mathbf{M}+\frac{A_{\mathrm{ex}}}{M_{\mathrm{S}}^{2}}\left(\nabla\mathbf{M}\right)^{2}-\frac{1}{2}\mu_{0}\mathbf{H}_{\mathrm{d}}\cdot\mathbf{M}\right)\mathrm{d}V, \tag{1}\] where there are three terms in the integral--the first term represents the Zeeman energy, the second term represents the exchange energy, and the third term represents the magnetostatic energy; \(\mathbf{M}\) is the magnetization vector, \(\mu_{0}\) is the vacuum permeability, and \(\mathbf{H}_{\mathrm{d}}\) is the demagnetizing field. The dynamics of the magnetic system are described with the Landau-Lifshitz equation \[\frac{\partial\mathbf{M}}{\partial t}=-|\gamma|\mu_{0}\mathbf{M}\times\mathbf{H}_{\mathrm{eff}}, \tag{2}\] where \(\mathbf{H}_{\mathrm{eff}}=-\frac{1}{\mu_{0}}\frac{\delta F}{\delta\mathbf{M}}\) is the effective magnetic field.
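For completeness, evaluating the variational derivative of Eq. (1) gives the explicit form of the effective field entering Eq. (2), a step left implicit above: \[\mathbf{H}_{\mathrm{eff}}=-\frac{1}{\mu_{0}}\frac{\delta F}{\delta\mathbf{M}}=\mathbf{H}_{0}+\frac{2A_{\mathrm{ex}}}{\mu_{0}M_{\mathrm{S}}^{2}}\Delta\mathbf{M}+\mathbf{H}_{\mathrm{d}},\] where the three terms are the Zeeman, exchange, and demagnetizing contributions; the exchange term is the origin of the \(\frac{2A_{\mathrm{ex}}}{\mu_{0}M_{\mathrm{S}}}\Delta\) operator appearing in the linearized equations below.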
The demagnetizing field \(\mathbf{H}_{\mathrm{d}}\) is derived from the Maxwell equations in magnetostatic approximation: \[\nabla\times\mathbf{H}_{\mathrm{d}}=0,\ \ \nabla\cdot\mathbf{B}=0, \tag{3}\] where \(\mathbf{B}=\mu_{0}(\mathbf{H}_{\mathrm{d}}+\mathbf{M})\) is the magnetic induction. Equation (3) enables the introduction of magnetic scalar potential \(\varphi\), which satisfies the formula \(\mathbf{H}_{\mathrm{d}}=-\nabla\varphi\). As a result, the magnetostatic Maxwell equations are replaced with a single equation for the magnetic scalar potential \[\Delta\varphi=\nabla\cdot\mathbf{M}. \tag{4}\] Thanks to the uniform magnetization, the Landau-Lifshitz equation can be easily linearized. We assume that the static \(y\)-component of the magnetization remains constant and is equal to the saturation magnetization \(M_{\mathrm{S}}\), while the dynamic component \(\mathbf{m}=(m_{x},m_{z})\), which is much smaller than the static component \(M_{y}\) (\(|\mathbf{m}|\ll M_{\mathrm{S}}\)), precesses in the \(xz\)-plane. Therefore, \(\mathbf{M}(x,y,z,t)=M_{\mathrm{S}}\hat{y}+\mathbf{m}(x,z)e^{i\omega t}\), where \(\omega=2\pi f\) is the angular frequency and \(f\) is the frequency. After linearization, the SW dynamics are described with a set of three coupled equations: \[i\omega m_{x}=\gamma\mu_{0}\left(H_{0}-\frac{2A_{\mathrm{ex}}}{\mu_{0}M_{ \mathrm{S}}}\Delta\right)m_{z}+M_{\mathrm{S}}\partial_{z}\varphi, \tag{5}\] \[-i\omega m_{z}=\gamma\mu_{0}\left(H_{0}-\frac{2A_{\mathrm{ex}}}{\mu_{0}M_{ \mathrm{S}}}\Delta\right)m_{x}+M_{\mathrm{S}}\partial_{x}\varphi, \tag{6}\] \[\Delta\varphi-\partial_{x}m_{x}-\partial_{z}m_{z}=0. \tag{7}\] The solutions to Eqs. (5)-(7) take the form of plane waves. Two wave vectors can be defined due to the system's symmetry: in-plane wave vector \(k\) (in the \(x\)-direction) and out-of-plane wave vector \(q\) (in the \(z\)-direction), as shown in Fig. 1(a). As a result, we have \((m_{x},m_{z},\varphi)\propto(m_{x0},m_{z0},\varphi_{0})e^{ikx}e^{iqz}\). The system in the \(x\)-direction is infinite, therefore the wave vector \(k\) can only have real values for the solution to be physical. On the other hand, the wave vector \(q\) may take on complex values. For simplicity, we introduce the following dimensionless parameters: \(\Omega=\frac{\omega}{\gamma\mu_{0}M_{\mathrm{S}}}\), \(\theta=\Omega_{H}+\lambda^{2}(k^{2}+q^{2})\), \(\Omega_{H}=\frac{H_{0}}{M_{\mathrm{S}}}\), and \(\lambda^{2}=\frac{2A_{\mathrm{ex}}}{\mu_{0}M_{\mathrm{S}}^{2}}\). After substituting the plane-wave solution into Eqs. (5)-(7) and expressing them in the matrix form, we obtain \[\begin{pmatrix}i\Omega&\theta&iq\\ \theta&-i\Omega&ik\\ ik&iq&k^{2}+q^{2}\end{pmatrix}\begin{pmatrix}m_{x0}\\ m_{z0}\\ \varphi_{0}\end{pmatrix}=0. \tag{8}\] The condition that the determinant of the 3x3 matrix in Eq. (8) is equal to zero leads to the following formula: \[(k^{2}+q^{2})(\Omega^{2}-\theta^{2}-\theta)=0. \tag{9}\] As \(\theta=\theta(q^{2})\), Eq. (9) is a third-degree function with respect to \(q^{2}\). Two roots, \(q=\pm ik\), are obtained by setting the first bracket to zero whereas four roots, \(q=\pm q_{1}\) and \(q=\pm iq_{2}\) where \(q_{1},q_{2}\in\mathbb{R}\), are obtained by setting the second bracket to zero. From the zeroing of the second bracket in Eq. (9), we can also derive the formula for the dimensionless frequency \[\Omega=\sqrt{\theta(\theta+1)}. \tag{10}\] Let \(\theta(q=q_{1})=\theta_{1}\) and \(\theta(q=q_{2})=\theta_{2}\). 
Since \(q_{1}\) and \(q_{2}\) correspond to the same frequency, \(\Omega=\sqrt{\theta_{1}(\theta_{1}+1)}=\sqrt{\theta_{2}(\theta_{2}+1)}\), and therefore, \(\theta_{2}=-(\theta_{1}+1)\). From this formula we can obtain the connection between wave vectors \(k\), \(q_{1}\), and \(q_{2}\), which is the following: \[q_{2}=\pm\sqrt{2k^{2}+q_{1}^{2}+\frac{2\Omega_{H}+1}{\lambda^{2}}}. \tag{11}\] We can interpret the solutions obtained for the out-of-plane wave vector \(q\) as follows. Since our solution is a plane wave, wave vector \(q_{1}\) will give a volume contribution of the sinusoidal character to the mode profile while wave vectors \(k\) and \(q_{2}\) denote exponentially-decaying modes localized on the surfaces. Since the wave vector \(k\) represents also the propagating in-plane wave vector, this solution has a character of a DE mode. Next, knowing that \(\Omega_{H}\geq 0\), we can derive from Eq. (11) that \(|q_{2}|\geq 1/\lambda\), indicating that \(q_{2}\) has a character of a surface exchange mode. The solution of Eq. (8) can be represented by a vector \[\begin{pmatrix}m_{x0}\\ m_{z0}\\ \varphi_{0}\end{pmatrix}=\begin{pmatrix}ik\theta-q\Omega\\ iq\theta+k\Omega\\ \Omega^{2}-\theta^{2}\end{pmatrix}C, \tag{12}\] where \(C\) is an arbitrary constant. The general solution for the full vector \((m_{x},m_{z},\varphi)\) is a superposition of six terms, one for each solution of the wave vector \(q\) \[\begin{pmatrix}m_{x}\\ m_{z}\\ \varphi\end{pmatrix}=\left[\begin{pmatrix}X_{1}\\ Z_{1}\\ F_{1}\end{pmatrix}C_{1}e^{iq_{1}z}+\begin{pmatrix}X_{2}\\ Z_{2}\\ F_{2}\end{pmatrix}C_{2}e^{-iq_{1}z}+\begin{pmatrix}X_{3}\\ Z_{3}\\ F_{3}\end{pmatrix}C_{3}e^{kz}+\begin{pmatrix}X_{4}\\ Z_{4}\\ F_{4}\end{pmatrix}C_{4}e^{-kz}+\begin{pmatrix}X_{5}\\ Z_{5}\\ F_{5}\end{pmatrix}C_{5}e^{q_{2}z}+\begin{pmatrix}X_{6}\\ Z_{6}\\ F_{6}\end{pmatrix}C_{6}e^{-q_{2}z}\right]e^{ikx} \tag{13}\] where \(X_{1}=ik\theta_{1}-q_{1}\Omega\), \(X_{2}=ik\theta_{1}+q_{1}\Omega\), \(X_{3}=X_{4}=ik\), \(X_{5}=ik\theta_{2}-iq_{2}\Omega\), \(X_{6}=ik\theta_{2}+iq_{2}\Omega\), \(Z_{1}=k\Omega+iq_{1}\theta_{1}\), \(Z_{2}=k\Omega-iq_{1}\theta_{1}\), \(Z_{3}=-k\), \(Z_{4}=k\), \(Z_{5}=k\Omega-q_{2}\theta_{2}\), \(Z_{6}=k\Omega+q_{2}\theta_{2}\), \(F_{1}=F_{2}=\Omega^{2}-\theta_{1}^{2}\), \(F_{3}=-(\Omega+\Omega_{H})\), \(F_{4}=\Omega-\Omega_{H}\), \(F_{5}=F_{6}=\Omega^{2}-\theta_{2}^{2}\), as it follows from Eq. (12). As the system under consideration is an infinite film, boundary conditions must be applied on top and bottom surfaces. Our goal was to extend the model derived by Wolfram and De Wames to include the presence of the perpendicular surface anisotropy. It requires the extension of exchange boundary condition by adding the term depending on the surface anisotropy [37] \[\begin{cases}\partial_{z}m_{x}=0|_{z=\pm L/2}\\ \partial_{z}m_{z}\mp\sigma_{\rm t(b)}m_{z}=0|_{z=\pm L/2}\end{cases} \tag{14}\] where \(\sigma_{\rm t(b)}=K_{\rm s}^{\rm t(b)}/A_{\rm ex}\) and \(K_{\rm s}^{\rm t(b)}\) is surface anisotropy constant for the top (bottom) surface. Since the equation for the magnetic scalar potential [Eq. 
(7)] outside of the film gives \(\Delta\varphi_{\rm out}=0\) and, subsequently, \(-\varphi_{0}(k^{2}+q^{2})e^{ikx}e^{iqz}=0\), the asymptotic solutions outside the film for the magnetic scalar potential are given by the expression \[\varphi_{\rm out}=\begin{cases}C_{7}e^{ikx}e^{-|k|z}&\text{for $z\geq L/2$,}\\ C_{8}e^{ikx}e^{|k|z}&\text{for $z\leq-L/2$.}\end{cases} \tag{15}\] As the tangential components of the demagnetizing field \(\mathbf{H}_{\rm d}\) are continuous across the surfaces of the film, the magnetic scalar potential must also be continuous. Additionally, the normal component of \(\mathbf{B}\) must also be continuous. Therefore, this results in the effective magnetostatic boundary conditions: \[\varphi=\varphi_{\rm out}, \tag{16}\] \[B_{z}=B_{z}^{\rm out}, \tag{17}\] where \(\varphi\) and \(B_{z}\) are the magnetic scalar potential and magnetic induction inside the magnetic material, and \(\varphi_{\rm out}\) and \(B_{z}^{\rm out}\) are those outside of it, respectively. Then, Eq. (17) can be rewritten in terms of the scalar potential as \[\partial_{z}\varphi-m_{z}=\partial_{z}\varphi_{\rm out}. \tag{18}\] The complete set of boundary conditions in Eqs. (14), (16), and (18) evaluated for the SW modes in Eq. (13) leads to the following degeneracy matrix \(\mathbf{A}\), written here as two blocks containing its first three and last three columns, respectively: \[\mathbf{A}=\begin{pmatrix}iq_{1}X_{1}e^{iq_{1}\frac{L}{2}}&-iq_{1}X_{2}e^{-iq_{1}\frac{L}{2}}&kX_{3}e^{k\frac{L}{2}}\\ iq_{1}X_{1}e^{-iq_{1}\frac{L}{2}}&-iq_{1}X_{2}e^{iq_{1}\frac{L}{2}}&kX_{3}e^{-k\frac{L}{2}}\\ (iq_{1}-\sigma_{\rm t})Z_{1}e^{iq_{1}\frac{L}{2}}&(-iq_{1}-\sigma_{\rm t})Z_{2}e^{-iq_{1}\frac{L}{2}}&(k-\sigma_{\rm t})Z_{3}e^{k\frac{L}{2}}\\ (iq_{1}+\sigma_{\rm b})Z_{1}e^{-iq_{1}\frac{L}{2}}&(-iq_{1}+\sigma_{\rm b})Z_{2}e^{iq_{1}\frac{L}{2}}&(k+\sigma_{\rm b})Z_{3}e^{-k\frac{L}{2}}\\ [(iq_{1}+|k|)F_{1}-Z_{1}]e^{iq_{1}\frac{L}{2}}&[(-iq_{1}+|k|)F_{2}-Z_{2}]e^{-iq_{1}\frac{L}{2}}&[(k+|k|)F_{3}-Z_{3}]e^{k\frac{L}{2}}\\ [(iq_{1}-|k|)F_{1}-Z_{1}]e^{-iq_{1}\frac{L}{2}}&[(-iq_{1}-|k|)F_{2}-Z_{2}]e^{iq_{1}\frac{L}{2}}&[(k-|k|)F_{3}-Z_{3}]e^{-k\frac{L}{2}}\end{pmatrix} \tag{19}\] \[\begin{pmatrix}-kX_{4}e^{-k\frac{L}{2}}&q_{2}X_{5}e^{q_{2}\frac{L}{2}}&-q_{2}X_{6}e^{-q_{2}\frac{L}{2}}\\ -kX_{4}e^{k\frac{L}{2}}&q_{2}X_{5}e^{-q_{2}\frac{L}{2}}&-q_{2}X_{6}e^{q_{2}\frac{L}{2}}\\ (-k-\sigma_{\rm t})Z_{4}e^{-k\frac{L}{2}}&(q_{2}-\sigma_{\rm t})Z_{5}e^{q_{2}\frac{L}{2}}&(-q_{2}-\sigma_{\rm t})Z_{6}e^{-q_{2}\frac{L}{2}}\\ (-k+\sigma_{\rm b})Z_{4}e^{k\frac{L}{2}}&(q_{2}+\sigma_{\rm b})Z_{5}e^{-q_{2}\frac{L}{2}}&(-q_{2}+\sigma_{\rm b})Z_{6}e^{q_{2}\frac{L}{2}}\\ [(-k+|k|)F_{4}-Z_{4}]e^{-k\frac{L}{2}}&[(q_{2}+|k|)F_{5}-Z_{5}]e^{q_{2}\frac{L}{2}}&[(-q_{2}+|k|)F_{6}-Z_{6}]e^{-q_{2}\frac{L}{2}}\\ [(-k-|k|)F_{4}-Z_{4}]e^{k\frac{L}{2}}&[(q_{2}-|k|)F_{5}-Z_{5}]e^{-q_{2}\frac{L}{2}}&[(-q_{2}-|k|)F_{6}-Z_{6}]e^{q_{2}\frac{L}{2}}\end{pmatrix}.\] The condition \(\det\mathbf{A}=0\) allows obtaining the solutions of wave vector \(q\) and, subsequently, the resonance frequencies as a function of wave vector \(k\). The eigenvectors of matrix \(\mathbf{A}\) provide the coefficients \(C_{i}\) in Eq. (13). Compared to the approach suggested by Kalinikos et al. [23], the solution mentioned above is precise within the examined geometry. Calculating multiple integrals for components of a demagnetizing tensor and expanding dynamical magnetization components into a series is not required to obtain coupled modes, which simplifies analytical calculations and significantly reduces computation time.
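As a minimal numerical sketch of how the first step of this procedure can be evaluated (assuming the CoFeB parameters of Sec. II and interpreting the quoted gyromagnetic ratio as \(\gamma/2\pi\); the function and variable names are our own), the following Python snippet computes \(\theta_{1}\), \(q_{1}\), and \(q_{2}\) from Eqs. (10) and (11) for a given frequency and in-plane wave vector. These quantities then enter the matrix of Eq. (19), whose determinant can be scanned for zeros as a function of frequency to trace the dispersion relation.

```python
import numpy as np

# CoFeB parameters from Sec. II; gamma_2pi is an assumption that the quoted
# 30 GHz/T denotes gamma/(2*pi).
mu0 = 4e-7 * np.pi          # vacuum permeability [T*m/A]
Ms = 1335e3                 # saturation magnetization [A/m]
Aex = 15e-12                # exchange stiffness [J/m]
gamma_2pi = 30e9            # gyromagnetic ratio / (2*pi) [Hz/T]
lam2 = 2 * Aex / (mu0 * Ms**2)   # lambda^2 = 2*Aex/(mu0*Ms^2) [m^2]

def out_of_plane_wavevectors(f, k, mu0_H0):
    """Return (theta1, q1, q2) for frequency f [Hz], in-plane wave vector k [rad/m],
    and external field mu0*H0 [T], following Eqs. (10) and (11)."""
    Omega_H = mu0_H0 / (mu0 * Ms)                          # dimensionless field
    Omega = f / (gamma_2pi * mu0 * Ms)                     # dimensionless frequency
    theta1 = 0.5 * (-1.0 + np.sqrt(1.0 + 4.0 * Omega**2))  # inverts Omega = sqrt(theta(theta+1))
    q1_sq = (theta1 - Omega_H) / lam2 - k**2               # from theta1 = Omega_H + lam2*(k^2 + q1^2)
    q1 = np.sqrt(q1_sq + 0j)                               # real for bulk-like solutions
    q2 = np.sqrt(2.0 * k**2 + q1_sq + (2.0 * Omega_H + 1.0) / lam2)  # Eq. (11)
    return theta1, q1, q2

# Example: a 15 GHz mode at k = 5e6 rad/m in a 50 mT external field.
print(out_of_plane_wavevectors(15e9, 5e6, 0.05))
```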
### Numerical simulations The Landau-Lifshitz equation in the linear approximation [Eqs. (5),(6)] and the magnetostatic Maxwell equation-based formula for the magnetic scalar potential [Eq. (7)] along with the boundary conditions for perpendicular surface anisotropy [Eq. (14)] and magnetostatic potential [Eq. (15)] were solved numerically using finite-element method simulations in COMSOL Multiphysics [10]. The problem was solved in 1D geometry with reduced \(x\)- and \(y\)-directions. Eqs. (5)-(7) were modified accordingly to introduce the terms coming from the implementation of plane-wave solution representing the propagation of SWs in \(x\)-direction \((m_{x},m_{z},\varphi)=(m_{x0},m_{z0},\varphi_{0})e^{ikx}\). The dispersion relations were calculated using eigenfrequency study. ## III Results and discussion ### Dispersion relation analysis First, we study the effect of the surface anisotropy on the dispersion relation. We chose the thickness of the CoFeB film \(L=100\,\mathrm{nm}\) and external magnetic field \(\mu_{0}H_{0}=50\,\mathrm{mT}\). We show the dispersion relation of the six lowest modes for three cases--free boundary conditions (FBC), i.e., \(K_{\mathrm{s}}^{\mathrm{t}}=K_{\mathrm{s}}^{\mathrm{b}}=0\) [Fig. 2(a)]; symmetrical surface anisotropy (SSA) with \(K_{\mathrm{s}}^{\mathrm{t}}=K_{\mathrm{s}}^{\mathrm{b}}=-700\,\mathrm{\SIUnitSymbolMicro J }/\mathrm{m}^{2}\) [Fig. 2(b)]; one-sided surface anisotropy (OSA) with \(K_{\mathrm{s}}^{\mathrm{t}}=0\) and \(K_{\mathrm{s}}^{\mathrm{b}}=-1500\,\mathrm{\SIUnitSymbolMicro J}/\mathrm{m}^{2}\) separately for negative [Fig. 2(c)] and positive [Fig. 2(d)] wave vector \(k\). Values of surface anisotropy are comparable to the values presented in literature [38]. The dispersion relation calculated with the analytical model is shown as a dashed orange line while the numerical simulation results are shown with dashed blue line. Figs. 2(a-d) demonstrate the perfect agreement between these two methods, yielding identical outcomes. The nature of dispersions is characteristic of the system in DE geometry. Each plot consists of one branch with a significant slope in the center of the investigated range of wave vector \(k\), displaying a DE surface mode character, and the remaining five are flat branches representing PSSW modes. All the modes start to increase significantly in frequency at about \(10^{7}\) rad/m as a result of the increasing contribution of the exchange interaction to the SW energy. Positions of PSSW modes at \(k\approx 0\) are determined by the wave vector \(q_{1}\approx n\pi/L\) (\(n=1,2,3...\) is the PSSW mode number). In the presence of negative surface anisotropy, the value of \(q_{1}>n\pi/L\) for corresponding PSSW modes (the reverse happens for positive surface anisotropy). The increase of the frequency of the DE mode correlates with the increase of its wave vector \(q_{1}\) with the increase of \(k\). However, \(q_{1}\) begins to decrease at some point, leading to \(q_{1}\approx n\pi/L\) for very large wave vectors \(k\). Similarly as for the case of \(k\approx 0\), for very large \(k\) in the presence of negative surface anisotropy, \(q_{1}>n\pi/L\) (the reverse happens for positive surface anisotropy). Detailed explanation of the correlation between wave vectors \(k\) and \(q_{1}\) is provided in Appendix A. The DE mode increases in frequency and intersects with the three lowest PSSW modes, leading to the emergence of ACs. These ACs are labeled in Figs. 
2(a-d) with the abbreviation AC and a number indicating their sequence, beginning with the lowest. The discussion of ACs requires a precise definition of where AC occurs. Neglecting the atomic distance limit, the theory provides infinite number of SW modes. Though it is hypothetically possible for AC to be present between all modes, it is apparent that the number of ACs is not infinite for the finite-thickness film. To denote the presence of AC, we establish two distinct criteria. The first is the local minimum criterion. If the function that represents the frequency difference between the neighboring modes \[\Delta f_{mn}=f_{m}(k)-f_{n}(k) \tag{20}\] (where \(m,n\) is a mode number) has a local minimum \(\Delta f_{\mathrm{AC}n}\), this minimum represents an AC (or simply crossing if \(\Delta f_{\mathrm{AC}}=0\)). In this way, we can define an AC for any boundary conditions and it allows multiple ACs if multiple local minima exist. The second is a frequency limit criterion. It could be clearly defined only for FBC. It says that an AC is present between the DE mode and \(n\)-th PSSW mode if \(f_{k\rightarrow\infty}^{\mathrm{DE}}>f_{k=0}^{n}\) in case where \(f_{k\rightarrow\infty}^{\mathrm{DE}}\) is calculated for \(A_{\mathrm{ex}}=0\)[2], i.e. \[f_{k\rightarrow\infty}^{\mathrm{DE}}=\frac{\mu_{0}\gamma}{2\pi}\left(H_{0}+ \frac{M_{\mathrm{S}}}{2}\right) \tag{21}\] and [39] \[f_{k=0}^{n} = \frac{\mu_{0}\gamma}{2\pi}\left(\left(H_{0}+\frac{2A_{\mathrm{ex} }}{\mu_{0}M_{\mathrm{S}}}\left(\frac{n\pi}{L}\right)^{2}\right)\right.\times \tag{22}\] \[\times\left(H_{0}+M_{\mathrm{S}}+\frac{2A_{\mathrm{ex}}}{\mu_{0}M_ {\mathrm{S}}}\left(\frac{n\pi}{L}\right)^{2}\right)\right)^{1/2}\] This criterion is valid under the assumption that the contribution of the exchange interaction to the \(k\) dependence of the frequency of DE mode and PSSW modes is identical. The AC position is determined by the minimum of Eq. (20). It means that the choice of criterion does not influence the value of the AC size. In this paper, we present the results based on the local minimum criterion because of its broader definition. However, we will also mention the frequency limit criterion and its impact on the results. To address AC occurrence accurately, the frequency difference between neighboring modes is presented as a function of wave vector \(k\) in Fig. 2(e) for the case of FBC, for which the dispersion relation is shown in Fig. 2(a). In the range of small and large wave vectors, the distance between the modes is almost constant. The discrepancy between these ranges is due to the fact that in the limit of small wave vectors, the dispersion relation of the modes can be described by Eq. (22) [39], while in the large wave vector limit with the function \[f_{n}=\frac{\mu_{0}\gamma}{2\pi}\left(H_{0}+M_{\mathrm{S}}+\frac{2A_{\mathrm{ ex}}}{\mu_{0}M_{\mathrm{S}}}k^{2}+\frac{2A_{\mathrm{ex}}}{\mu_{0}M_{\mathrm{S}}} \left(\frac{n\pi}{L}\right)^{2}\right). \tag{23}\] In the mid-range, each curve shown in Fig. 2(e) has a local minimum corresponding to the AC, which is labeled and marked with an arrow. The first three ACs are relatively small, not exceeding a size of 200 MHz. The AC4, represented by a deep minimum, has a size of 1.14 GHz. On the other hand, AC5 has a very shallow minimum with a size of 6.02 GHz. Interestingly, it is not the global minimum, as according to Eq. 
(23) the distance between the modes can reach 5.99 GHz, a value in agreement with the analytical model. However, according to the local minimum criterion, it is considered to be an AC. In the case of the frequency limit criterion, only the first three minima can be identified as ACs. The AC4 does not meet this criterion as \(f_{k=0}^{n=4}=27.55\,\mathrm{GHz}\) exceeds \(f_{k\rightarrow\infty}^{\mathrm{DE}}=26.66\,\mathrm{GHz}\) slightly.

Figure 2: (a-d) Dispersion relations of the six lowest modes of a 100 nm-thick CoFeB film with (a) FBC, (b) SSA with \(K_{\mathrm{s}}^{\mathrm{t}}=K_{\mathrm{s}}^{\mathrm{b}}=-700\,\mu\mathrm{J/m^{2}}\), and (c,d) OSA with \(K_{\mathrm{s}}^{\mathrm{t}}=0\) and \(K_{\mathrm{s}}^{\mathrm{b}}=-1500\,\mu\mathrm{J/m^{2}}\) for (c) negative and (d) positive wave vectors in the external magnetic field \(\mu_{0}H_{0}=50\,\mathrm{mT}\). The plots present the comparison between the analytical model (orange lines) and numerical simulations (blue lines). Avoided crossings (ACs) are marked with labels. (e) The frequency difference between neighboring modes in the FBC system. In plots (a-e) the wave vector \(k\) on the \(x\)-axis is presented in the logarithmic scale. (f-i) Close-up on the ACs: (f) AC1, (g) AC2, (h) AC3, and (i) AC4. The plot axes show the wave vector and frequency values relative to the AC position calculated from Eqs. (20) and (24), respectively. Plots present numerical simulation results only, which are in agreement with the analytical results.

After presenting the similarities between the systems, it is time to highlight the differences. Firstly, the symmetry of the system, specifically the boundary conditions, leads to the symmetry of the dispersion relation with respect to wave vector. Therefore, the FBC and SSA systems have symmetrical dispersions since \(K_{s}^{\mathrm{t}}=K_{s}^{\mathrm{b}}\). In contrast, the OSA system has different surface anisotropy constants on the top and bottom surfaces, resulting in a frequency difference between negative and positive wave vectors. Additionally, the presence of the negative surface anisotropy causes a slight increase in the frequency of all modes. Comparing the results in Figs. 2(a) and (b), for \(K_{\rm s}=-700\,\mu\mathrm{J/m^{2}}\) the increase does not exceed 1 GHz. Conversely, for a positive surface anisotropy, a decrease in frequency would be noted. The most significant difference between the systems lies in the size of the ACs. Close-up plots are shown in Fig. 2(f-i) for AC1-AC4, respectively. They show the dispersion relation for the values of wave vector and frequency relative to the AC location (\(k_{\mathrm{AC}}\), \(f_{\mathrm{AC}}\)), which is defined separately for each AC in the following way -- \(k_{\mathrm{AC}_{n}}\) represents the wave vector of the local minimum of Eq. (20), while \(f_{\mathrm{AC}_{n}}\) is defined as \[f_{\mathrm{AC}_{n}}=\frac{f_{n}(k_{\mathrm{AC}_{n}})+f_{n+1}(k_{\mathrm{AC}_{n}})}{2}. \tag{24}\] The values of the AC size for each AC type can be found in Table 1. AC1 [Fig. 2(f)] exhibits a negligible size for the FBC and SSA systems, but a more significant size of 162.8 MHz for negative and 122.8 MHz for positive wave vectors in the OSA system. As the dispersion relations for FBC and SSA are symmetrical, the AC sizes are always equal for both negative and positive wave vectors. The size of AC2 [Fig.
2(g)] remains small only in the FBC system, whereas it opens up in the SSA and OSA systems, reaching sizes larger than AC1. In the OSA system, there is a slight asymmetry between the negative and positive wave vector ranges. In the case of AC3 [Fig. 2(h)], it opens up for all the considered cases. The most interesting case is present for the OSA system. In the range of negative wave vectors, this AC is large, whereas in the range of positive wave vectors, it is very small, measuring only 24.67 MHz. AC4 is much larger than the lower-order ACs, having a size above 1 GHz; however, the size is very similar in all of the systems [Fig. 2(i)]. AC5 presents a similar case, with its size being even larger, measuring above 5 GHz.

\begin{table} \begin{tabular}{c c c c c c} System & AC1 (MHz) & AC2 (MHz) & AC3 (MHz) & AC4 (MHz) & AC5 (MHz) \\ \hline FBC & 11.14 & 6.04 & 158.8 & 1137.2 & 6022.4 \\ SSA & 21.03 & 231.5 & 280.6 & 1327.7 & 5781.5 \\ OSA (\(k-\)) & 162.8 & 254.8 & 565.6 & 1389.2 & 5474.4 \\ OSA (\(k+\)) & 122.8 & 175.7 & 24.67 & 1396.0 & 6119.0 \\ \end{tabular} \end{table} Table 1: AC size of AC1-AC5 for the FBC, SSA, and OSA systems, whose dispersion relations are shown in Figs. 2(a-d).

Figure 3: Distribution across the film thickness of the dynamic magnetization components \(m_{x}\) (solid lines) and \(m_{z}\) (dashed lines) for (a) the first mode for wave vector \(k=0\), (b) the first mode for wave vector \(k=-5\times 10^{8}\,\mathrm{rad/m}\), (c) the third mode for wave vector \(k=-2.5\times 10^{6}\,\mathrm{rad/m}\), and (d) the third mode for wave vector \(k=2.5\times 10^{6}\,\mathrm{rad/m}\). Mode profiles are presented for the systems with FBC (blue lines), SSA (orange lines), and OSA (green lines). Plots present numerical simulation results only, which are in agreement with the analytical results.

### Mode profiles The surface anisotropy has a significant impact on the dynamic magnetization distribution of SW modes, with mode profiles shown in Fig. 3. Firstly, we present the profile of the lowest frequency mode at \(k=0\) in Fig. 3(a). Due to the low external field, the spin precession is strongly elliptical with the domination of the in-plane \(m_{x}\) component. In the case of FBC (blue lines), the mode is uniform throughout the thickness. The negative surface anisotropy leads to the reduction of the SW amplitude close to the film boundary. The mode is symmetrical for SSA, while for OSA it becomes asymmetrical. Interestingly, although the surface anisotropy affects directly only the out-of-plane \(m_{z}\) component, the in-plane \(m_{x}\) component is also impacted. However, in the dipole-dominated low-wave vector regime, the effect of surface anisotropy is generally small. The impact on the PSSW modes (not shown here) is even smaller. However, the anisotropy has a substantial effect on the mode profiles in the exchange-dominated large-wave vector region, as evidenced in Fig. 3(b) for the lowest frequency mode at \(k=-5\times 10^{8}\,\mathrm{rad/m}\). In both the SSA and OSA cases, the mode amplitude is significantly lower near the boundary with surface anisotropy in comparison to the FBC case. Interestingly, in this case, the \(m_{z}\) component exceeds the \(m_{x}\) component, and the precession is close to circular. In Figs. 3(c,d), profiles of the third lowest mode are shown at \(k\) between AC2 and AC3 for the negative [\(k=-2.5\times 10^{6}\,\mathrm{rad/m}\), Fig. 3(c)] and positive [\(k=2.5\times 10^{6}\,\mathrm{rad/m}\), Fig. 3(d)] wave vectors.
The mode has a character of a DE mode, although the first and second terms of Eq. (13), connected with wave vector \(q_{1}\), also have a significant impact on the mode shape, which results in the sinusoidal character of these profiles. Their contribution is enhanced when the surface anisotropy is present. The \(m_{x}\) component is larger than the \(m_{z}\) component, but the precession is less elliptical than at \(k=0\). For both FBC and SSA cases, where the boundary conditions are identical on both surfaces, the mode profiles for opposite wave vectors are their mirror images. However, this is not true for OSA as the mode profiles differ between negative and positive wave vectors. For negative wave vectors [Fig. 3(c)], the contributions from the first and second terms in Eq. (13) are significantly stronger for both the \(m_{x}\) and \(m_{z}\) components. ### Analysis of thickness dependence In the next step, we present a detailed analysis of the impact of the surface anisotropy on AC formation. Firstly, we study the effect of the film thickness \(L\) on the AC size \(\Delta f_{\mathrm{AC}}\) in four cases--FBC [Fig. 4(a)], SSA [Fig. 4(b)], and OSA for both negative [Fig. 4(c)] and positive [Fig. 4(d)] wave vector \(k\). In general, the increase of film thickness results in an increase in the number of ACs. This phenomenon is well-explained by the frequency limit criterion. The thickness has no impact on the maximum DE frequency \(f_{k\rightarrow\infty}^{\mathrm{DE}}\) [Eq. (21)]. In contrast, the formula for the PSSW frequency \(f_{k=0}^{n}\) [Eq. (22)] includes thickness in the denominator; thus, an increase of thickness results in a decrease of frequency. This allows for a higher number of PSSW modes to satisfy the frequency limit criterion, resulting in more ACs.

Figure 4: The AC size \(\Delta f_{\mathrm{AC}}\) as a function of film thickness \(L\) for the system with (a) FBC, (b) SSA, and OSA for (c) negative and (d) positive wave vector \(k\). Odd-numbered ACs are shown with solid lines, even-numbered ACs with dashed lines. The \(y\)-axis is in the logarithmic scale. Plots present results of numerical simulations.

Another relevant effect is that the AC size decreases with an increase of thickness. Figure 4 shows that the rate of the AC size decrease depends on the boundary conditions and the parity of the AC number. In the FBC system [Fig. 4(a)], the AC size decreases rapidly, but much faster for even-numbered ACs than for odd-numbered ACs. In the case of SSA [Fig. 4(b)], the rate of decrease for odd-numbered ACs is slightly smaller, but for even-numbered ACs the change is significant; in this case, the decrease is much smaller compared to the odd-numbered ACs. In the OSA system [Figs. 4(c,d)], the rate of decrease is similar across all ACs and comparable to the even-numbered ACs in the SSA system. This effect, which depends on parity, originates from the symmetry of modes and boundary conditions. Due to the dominant contribution of the \(k\)-dependent term in the magnetization profile shape, the DE mode has a symmetry closer to the odd-numbered PSSW modes, which are connected with the odd-numbered ACs. In the case of FBC, there is no additional source of symmetry breaking and, therefore, odd-numbered ACs are larger. On the other hand, SSA causes a symmetric disturbance of all modes, primarily affecting the dynamic magnetization amplitude in close proximity to the surface.
Odd-numbered PSSW modes have opposite amplitude on the opposite boundaries, therefore, the effect of the anisotropy on the mode symmetry cancels out. On the other hand, both DE mode and even-numbered PSSW modes exhibit the same amplitude on the opposite surfaces, so the anisotropy breaks the symmetry of these modes and, as an effect, these modes induce larger ACs. In the case of OSA, the asymmetry of the anisotropy generates the asymmetry in the mode profiles, leading to large ACs in all cases. An explanation based on a simplified model of mode profiles is provided in Appendix B. The final effect is present only in the OSA system in the positive wave vector \(k\) range [Fig. 4(d)]. It is the presence of a local minimum of AC size with a change of thickness. Interestingly, this effect only occurs for odd-numbered ACs. Upon analyzing this effect, one may question whether this local minimum reaches zero, or in other words, whether exists such a critical film thickness for which AC does not occur, i.e., the mode crossing is present. Obviously, the numerical study of the AC size can not provide a definite answer while we were not able to derive it from the analytical model. Nevertheless, the mode profiles analysis can resolve this issue. The close-up to the local minimum of the AC1 [Fig. 4(d), solid blue line] is shown in Fig. 5(a). In this case, the step in simulation was 0.2 nm. The minimum value of \(\Delta f_{\mathrm{AC1}}=1.33\) MHz was obtained for a thickness of 42.4 nm. We can take a look on the magnetization profiles for the first and second modes at wave vector \(k_{\mathrm{AC}}\) [the inset plots in Fig. 5(a)] for the thickness smaller (39 nm, left plot) and larger (46 nm, right plot) than the thickness of the AC1 size minimum. In both cases, the mode profiles are very similar, indicating the superposition of the DE and first PSSW mode. However, the most important fact is to notice that the modes are interchanged. For \(L=39\) nm, the lower frequency mode (orange line) has higher amplitude at the bottom of the film, while for \(L=46\) nm, higher amplitude is at the top of the film. The higher frequency mode (green line) demonstrates the opposite trend. The detailed analysis of the profiles indicates that this behavior is connected with each local minimum of \(\Delta f_{\mathrm{AC}}\), i.e. the mode profiles at \(k_{\mathrm{AC}}\) interchange. According to our analysis, it indicates the existence of a critical thickness value \(L_{\mathrm{C}}\) where a crossing occurs instead of AC, indicating the absence of a gap between the first and second mode. This observation suggests the possible occurrence of an accidental degeneracy in the system [40; 41], meaning that there are two solutions with the same values of wave vector and frequency. Another observation concerns the system with the lowest value of \(\Delta f_{\mathrm{AC1}}\) found for the thickness of 42.4 nm. Its dispersion relation is shown in Fig. 5(b). The AC1 is not visible in the full dispersion. Interestingly, the AC1 is still too small to be visible even after a close-up of the AC1 vicinity (inset plot in the lower right corner). To study the mode profiles in the vicinity of AC1, we chose three wave vectors: \(k_{\mathrm{AC}}=5.033\times 10^{6}\) rad/m, \(k_{\mathrm{L}}=4.853\times 10^{6}\) rad/m, and \(k_{\mathrm{R}}=5.2\times 10^{6}\) rad/m. The mode profiles are shown in the inset plot at the top part of Fig. 5(b). For \(k_{\mathrm{AC}}\) (middle plot), the profiles are similar to the case of \(L=39\) nm. 
It suggests that the critical value of the thickness is \(L_{\mathrm{C}}>42.4\) nm. In the case of \(k_{\mathrm{L}}\) (left plot), the profile of the first mode (orange line) has the character of the DE mode with a small amplitude reduction at the bottom due to the surface anisotropy. The second mode (green line) has the character of the first PSSW mode. This mode has a slightly larger amplitude at the bottom than at the top. The modes at \(k_{\rm R}\) (right plot) have the same character as the modes at \(k_{\rm L}\), but their order is reversed. It clearly shows that far from the AC (where \(f_{2}-f_{1}\gg\Delta f_{\rm AC1}\)), the modes have the same character on both sides of the AC, as if the interaction between them is negligible. It is worth noting that this interchange is not so clear in the case where \(\Delta f_{\rm AC}\) is relatively large. In this case, the intermixing of the effects of the wave vector dependence and the short distance between the ACs relative to their size leads to a significant change in the mode profiles.

Figure 5: (a) The AC1 size \(\Delta f_{\mathrm{AC1}}\) as a function of film thickness \(L\) for the system with OSA for positive wave vector \(k\) -- a close-up of Fig. 4(d) around the AC1 local minimum in the small-thickness range. Inset plots show the magnetization profiles of the first (orange line) and second (green line) modes at \(k_{\mathrm{AC1}}\) for the thickness of 39 nm (on the left) and 46 nm (on the right). (b) Dispersion relation of the three lowest modes for the system with OSA for positive wave vector \(k\) for the thickness of 42.4 nm. Inset in the bottom-right corner: a close-up of the AC1 with three marked wave vectors -- \(k_{\mathrm{L}}=4.853\times 10^{6}\) rad/m, \(k_{\mathrm{AC}}=5.033\times 10^{6}\) rad/m, and \(k_{\mathrm{R}}=5.2\times 10^{6}\) rad/m. Inset at top: magnetization profiles of the first (orange line) and second (green line) modes at \(k_{\mathrm{L}}\) (left), \(k_{\mathrm{AC}}\) (center), and \(k_{\mathrm{R}}\) (right). Plots present results of numerical simulations.

### Analysis of surface anisotropy constant dependence The analysis presented above was done for the case where the surface anisotropy constant \(K_{\rm s}\) has a negative value, resulting in the partial pinning condition for the out-of-plane dynamic component of the magnetization. Now we can look at the case where \(K_{\rm s}\) is positive, so the magnetization amplitude close to the surface is enhanced. The dispersion relation for the system with SSA for \(K_{\rm s}^{\rm t}=K_{\rm s}^{\rm b}=2500\,\mu\mathrm{J/m^{2}}\) is shown in Fig. 6(a). The small \(k\) range is comparable to the case of negative \(K_{\rm s}\). However, at about \(k=10^{7}\) rad/m, the DE mode reaches a local maximum at about 23 GHz and acquires a negative group velocity. This effect is analogous to the effect of the volume perpendicular magnetic anisotropy [42]. On its way, the DE mode produces additional ACs, which did not occur in the case of FBC and negative \(K_{\rm s}\). The frequency difference between the adjacent modes [Fig. 6(b)] shows that additional ACs are present for the first, second, and third PSSW modes. These ACs are marked with the letter 'x'. Also, an AC5 is present. However, it is not related to the AC5 occurring for negative anisotropy, therefore, it is also marked with 'x'. Next, we study the AC size as a function of the surface anisotropy constant \(K_{\rm s}\) for the case of SSA [Fig. 6(c)] and OSA for negative [Fig. 6(d)] and positive [Fig.
6(e)] wave vector \(k\). We calculated it numerically in the range from \(-3000\,\mu\mathrm{J/m^{2}}\) to \(3000\,\mu\mathrm{J/m^{2}}\) with a step of \(100\,\mu\mathrm{J/m^{2}}\). Almost all curves have a minimum similar to the one present in Fig. 4(d). A detailed analysis of the mode profiles agrees with the previous observation--in each case, the mode profiles at \(k_{\rm AC}\) interchange, so we expect that for a critical value of \(K_{\rm s}\) a crossing between modes should occur. The position of the minimum depends on the AC parity. AC2 has the smallest size at \(K_{\rm s}=0\) (however, we expect the critical value to be very low, i.e., \(|K_{\rm s}^{\rm critical}|<50\,\mu\mathrm{J/m^{2}}\)). The odd-numbered ACs (AC1 and AC3) have the smallest size for positive \(K_{\rm s}\) in the system with SSA and OSA at negative \(k\), while for the system with OSA at positive \(k\), the smallest value occurs for negative \(K_{\rm s}\). Interestingly, for the system with OSA, \(K_{\rm s}^{\rm critical}\) of the same AC is different for the positive and negative wave vector ranges, which means that we can get a situation where the AC is present only on one side of the dispersion relation, while on the opposite side a crossing will be present. In general, the AC tends to have a larger size for positive surface anisotropy than for negative surface anisotropy of the same value. In addition, we can see that for a wide range of positive surface anisotropy, additional ACs (marked with the letter 'x') occur in all systems. Their source lies in the negative slope range of the dispersion relation, as discussed above. Their minima also follow the rule of the mode profile interchange, so we should expect these minima to go to zero as well.

Figure 6: (a) Dispersion relation of the lowest six modes of the system with SSA for \(K_{\rm s}=2500\,\mu\mathrm{J/m^{2}}\). (b) Frequency difference between the neighboring modes of the system with SSA presented in (a). The \(x\)-axis is in the logarithmic scale. (c-e) The AC size \(\Delta f_{\rm AC}\) as a function of the surface anisotropy \(K_{\rm s}\) for the system with (c) SSA and OSA for (d) negative and (e) positive wave vector \(k\). Odd-numbered ACs are shown with solid lines, even-numbered ACs with dashed lines, and second-order ACs with dotted lines. The \(y\)-axis is in the logarithmic scale. Plots present results of numerical simulations.

### Analysis of external magnetic field dependence Finally, the effect of the external magnetic field \(B_{0}\) on the AC size is shown in Fig. 7. The ACs have been calculated in the field range between 10 and 500 mT with a step of 10 mT. Almost all ACs are increasing with the increase of the external field. This observation correlates with the fact that \(k_{\rm AC}\) also increases with the increase of the external field. Then, the \(k\)-dependent terms in the DE mode profile [Eq. (13)] give a stronger contribution, and the profile asymmetry is increased, resulting in a stronger interaction with PSSW modes and a larger AC size. The most remarkable example is AC1 in the FBC system, which increases by a factor of 4.06 in the investigated field range. On the other hand, AC5 in the SSA system increases by only 3% in the same range. Interestingly, in the OSA system, the local minimum for the AC3 occurs in the positive wave vector range. The lowest detected value is 0.94 MHz at 280 mT.
This minimum also has the source in the mode profile interchange at \(k_{\rm AC}\), indicating the closing of the AC gap. In the direction of lower fields, the local maximum is present for 100 mT with the AC size of 27.4 MHz, while for higher fields it increases up to 120 MHz in the upper limit of the study of 500 mT. The results show that the external field provides a simple way to control the AC size, which is the easiest source of control from the experimental point of view. ## IV Conclusions In this article, we provide a comprehensive investigation of the SW dynamics in the ferromagnetic film in the DE geometry in the presence of surface anisotropy with the use of analytical model and numerical simulations. We compare three different cases: free boundary conditions, symmetrical surface anisotropy, and one-sided surface anisotropy. We show that the surface anisotropy significantly increases the size of the AC between DE and PSSW modes. In the case of OSA, the mirror symmetry breaking leads to the asymmetrical dispersion relation with respect to the wave vector \(k\), which particularly affects the AC size. The surface anisotropy also has a strong influence on the shape of the mode profiles. We have studied in detail the impact of various parameters (i.e., film thickness, surface anisotropy constant, and external magnetic field) on the AC size. In general, the ACs shrink with the increase of film thickness or the decrease of the external magnetic field. Also, the parity of the AC has a strong influence on the AC size. For large positive surface anisotropy constant, the mode of DE character has a non-monotonic dispersion relation, which leads to the appearance of additional ACs for large wave vectors. In most cases, the increase in anisotropy leads to the increase in AC size. Interestingly, we found that under certain conditions, the AC can close and turn into a crossing. This phenomenon, known as accidental degeneracy, occurs for some particular ACs when the Figure 7: The AC size \(\Delta f_{\rm AC}\) as a function of the external magnetic field \(B_{0}\) for the system with (a) FBC, (b) SSA, and OSA for (c) negative and (d) positive wave vector \(k\). Odd-numbered ACs are shown with solid lines, even-numbered ACs with dashed lines. The \(y\)-axis is in the logarithmic scale. Plots present results of numerical simulations. value of the surface anisotropy constant, the layer thickness, or the external magnetic field is changed. In the system with SSA, it occurs for both negative and positive wave vector, while in the system with OSA only on one side of the dispersion relation for a given set of parameters. The transition through the accidental degeneracy point in any parameter space is always associated with the exchange of the order of the mode profiles in the AC region. It is worth to note that the results shown in the paper are calculated for typical material parameters of CoFeB but the presented effects are universal and should also occur for different materials. The presence of surface anisotropy in magnetic thin films is ubiquitous. It is often considered a detrimental feature, but it can also be an essential property. The ability to control the anisotropy by voltage as well as to control its effects by an external magnetic field gives an additional advantage. Moreover, surface anisotropy of different strength on opposite surfaces provides a simple way to induce the nonreciprocity in the structure. 
We believe that surface anisotropy can be exploited in magnonic devices where asymmetrical transmission or the possibility to control the propagation of the SW is a fundamental property. ###### Acknowledgements. K.S. and M.K. acknowledge the financial support from National Science Centre, Poland, grants no. UMO-2020/39/I/ST3/02413 and UMO-2021/41/N/ST3/04478. The research leading to these results has received funding from the Norwegian Financial Mechanism 2014-2021 project no. UMO-2020/37/K/ST3/02450. K.S. acknowledges the financial support from the Foundation for Polish Science (FNP). The dataset for this manuscript is available on [https://doi.org/10.5281/zenodo.8382924](https://doi.org/10.5281/zenodo.8382924). ## Appendix A Relation between wave vectors \(k\) and \(q_{1}\) Figure 8 shows the wave vector \(\tilde{q}_{1}=q_{1}L\) as a function of wave vector \(k\) for three cases studied in the manuscript--FBC [Fig. 8(a)], SSA with \(K_{\mathrm{s}}^{\mathrm{t}}=K_{\mathrm{s}}^{\mathrm{b}}=-700\,\mathrm{\SIUnitSymbolMicro 1 }\mathrm{/}\mathrm{m}^{2}\) [Fig. 8(b)], and OSA with \(K_{\mathrm{s}}^{\mathrm{t}}=0\) and \(K_{\mathrm{s}}^{\mathrm{b}}=-1500\,\mathrm{\SIUnitSymbolMicro 1 }\mathrm{/}\mathrm{m}^{2}\) for negative [Fig. 8(c)] and positive [Fig. 8(d)] wave vector \(k\). In the low wave vector range (up to about \(10^{7}\) rad/m), the plots are very similar to the dispersion relations shown in Figs. 2(a-d), including the presence of the gaps between the modes. For the case of FBC, the PSSW modes are placed exactly at \(q_{1}=n\pi/L\), while for DE mode the value of \(q_{1}\) is increasing and produces ACs with PSSW modes exactly as in the dispersion relation. Nevertheless, the large value of \(q_{1}\) for DE mode is not decisive for the shape of the mode profile since the coefficients in Eq. (13) associated with \(q_{1}\) give smaller contribution that those associated with \(k\) (however, this contribution is not negligible). For the case of SSA and OSA, values of \(q_{1}\) of PSSW modes are larger than \(n\pi/L\). It is clear that a larger value of \(q_{1}\) results in larger frequency of PSSW modes according to Eq. (10). In the large \(k\) range for the case of FBC, the values of \(q_{1}\) go back to \(n\pi/L\), but this time for \(n\) starting from \(0\). To achieve this feat, all modes in the range of the DE mode are decreasing in the value of \(q_{1}\) in the dipole-exchange regime of the wave vector \(k\). In the case of SSA and OSA, the values of \(q_{1}\) are also larger than \(n\pi/L\) but the difference is much larger than in the small \(k\) range. ## Appendix B Toy model of interaction between modes The interaction between the modes can be explained using of a simplified model of mode profiles. In the case of FBC at \(k=0\), the modes form a basis of cosine functions \[m_{n}=A_{n}\cos\bigg{(}n\pi\left(z-\frac{L}{2}\right)\bigg{)}, \tag{21}\] where \(m_{0}\) represents the DE mode and \(m_{n>0}\) represents \(n\)-th order PSSW modes. \(A_{n}\) is the normalization constant which assure that \(\int_{-L/2}^{L/2}m_{n}^{2}\mathrm{d}z=1\). Assume that in the regime of small \(k\), the PSSW modes remain unchanged with the change of the wave vector \(k\), so their profiles are represented by Eq. (21). On the other hand, the DE mode is described by the function \[m_{0}(k)=A_{0}e^{kz}. \tag{10}\] In the presence of negative surface anisotropy, the PSSW modes are "squeezed" to satisfy the boundary conditions. 
Due to this effect, their profiles are modified such that \(q_{1}=(n+p_{n})\pi/L\), where \(p_{n}\) is the relative shift of wave vector. In the case of SSA, the mode profile is modified in the following way: \[m_{n}=A_{n}\cos\bigg{(}(n+p_{n})\pi\left(z-\frac{n}{n+p_{n}}\frac{L}{2}\right) \bigg{)}. \tag{11}\] In the case of OSA, the mode profile of PSSW modes is represented by the function: \[m_{n}=A_{n}\cos\bigg{(}(n+p_{n})\pi\left(z-\frac{L}{2}\right)\bigg{)}. \tag{12}\] We assume that the change in DE mode due to surface anisotropy is negligible. Basing on the results in Fig. 8, we assume that \(p_{n}\) has a constant value of 0.03 for all PSSW modes and all thicknesses. Strength of the interaction between the modes is described by the overlapping integral \[I_{ij}=\int_{-L/2}^{L/2}m_{i}m_{j}\mathrm{d}z. \tag{13}\] Figure 9 shows the overlapping integral between the DE mode and the \(n\)-th order PSSW mode at \(k_{\mathrm{AC}_{n}}\) as a function of layer thickness \(L\). The model qualitatively reproduces the behavior shown in Fig. 4 showing that the overlapping integral is connected with the AC size. Firstly, there is an identical dependence on the PSSW mode parity. In the case of FBC [Fig. 9(a)], the overlapping integral has a larger value for the function representing odd-numbered PSSW modes than for even-numbered PSSW modes. In the case of SSA [Fig. 9(b)], the overlapping integral for even-numbered PSSW modes grows over the integral for odd-numbered PSSW modes which only increases slightly compared to the FBC case. In the case of OSA for negative wave vectors \(k\) [Fig. 9(c)], the value of the overlapping integral is similar for all modes. In the case of positive wave vectors \(k\) [Fig. 9(d)], we have successfully reproduced the presence of the minima for odd-numbered PSSW modes shown in Fig. 4(d). The positions of the minima--at 48, 104, 154, and 204 nm for the first, third, fifth, and seventh PSSW modes, respectively, is in good agreement with the positions of the minima in Fig. 4(d) (42, 98, 148, and 200 nm, respectively).
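For readers who want to reproduce the toy model, the overlapping integral \(I_{ij}\) can be evaluated numerically in a few lines. The Python sketch below is ours (not part of the paper): it builds the exponential DE-like profile and the "squeezed" PSSW cosine profile for the SSA case, with the relative shift fixed at \(p_{n}=0.03\) as assumed above; the factor \(1/L\) is written explicitly in the cosine argument (consistent with \(q_{1}=(n+p_{n})\pi/L\)), and the value of \(k\) at the anti-crossing is a placeholder chosen only for illustration.

```python
import numpy as np

def overlap_de_pssw(L, k_ac, n, p=0.03, num=4001):
    """Toy-model overlap integral I_0n between the DE-like profile
    m_0 ~ exp(k*z) and the n-th 'squeezed' PSSW cosine profile (SSA case),
    both normalized so that the integral of m^2 over [-L/2, L/2] is 1."""
    z = np.linspace(-L / 2, L / 2, num)
    m0 = np.exp(k_ac * z)                        # DE-like mode profile
    m0 /= np.sqrt(np.trapz(m0**2, z))            # normalization
    q1 = (n + p) * np.pi / L                     # q_1 = (n + p_n) * pi / L
    mn = np.cos(q1 * (z - n / (n + p) * L / 2))  # squeezed PSSW profile (SSA)
    mn /= np.sqrt(np.trapz(mn**2, z))
    return np.trapz(m0 * mn, z)                  # overlap integral I_0n

# Example: overlap with the first PSSW mode versus film thickness.
# k_ac = 5e6 rad/m is only a placeholder; in the paper each anti-crossing
# has its own k_AC taken from the dispersion relation.
for L_nm in (40, 80, 120, 160, 200):
    print(L_nm, "nm:", overlap_de_pssw(L_nm * 1e-9, k_ac=5e6, n=1))
```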
2305.00515
Multi-directional Sobel operator kernel on GPUs
Sobel is one of the most popular edge detection operators used in image processing. To date, most users utilize the two-directional 3x3 Sobel operator as their detector because of its low computational cost and reasonable performance. At the same time, many studies have adopted larger multi-directional Sobel operators for their higher stability, but at the expense of speed. This paper proposes a fast graphics processing unit (GPU) kernel for the four-directional 5x5 Sobel operator. To improve kernel performance, we implement the kernel based on warp-level primitives, which significantly reduces the number of memory accesses. In addition, we introduce a prefetching mechanism and an operator transformation into the kernel to significantly reduce the computational complexity and data transmission latency. Compared with the OpenCV-GPU library, our kernel achieves a 6.7x speedup on a Jetson AGX Xavier GPU and a 13x speedup on a GTX 1650Ti GPU.
Qiong Chang, Xin Li, Yun Li, Jun Miyazaki
2023-04-30T15:57:23Z
http://arxiv.org/abs/2305.00515v1
# Multi-directional Sobel operator kernel on GPUs ###### Abstract Sobel is one of the most popular edge detection operators used in image processing. To date, most users utilize the two-directional \(3\times 3\) Sobel operator as detectors because of its low computational cost and reasonable performance. Simultaneously, many studies have been conducted on using large multi-directional Sobel operators to satisfy their needs considering the high stability, but at an expense of speed. This paper proposes a fast graphics processing unit (GPU) kernel for the four-directional 5x5 Sobel operator. To improve kernel performance, we implement the kernel based on warp-level primitives, which can significantly reduce the number of memory accesses. In addition, we introduce the prefetching mechanism and operator transformation into the kernel to significantly reduce the computational complexity and data transmission latency. Compared with the OpenCV-GPU library, our kernel shows high performances of 6.7x speedup on a Jetson AGX Xavier GPU and 13x on a GTX 1650Ti GPU. ## 1 Introduction The Sobel operator is a classical first-order edge detection operator that performs a 2D spatial gradient measurement on images and is generally used to find the approximate absolute gradient magnitude at each pixel. It is typically used to emphasize regions of high spatial frequency that correspond to edges. Compared with other edge detection operators, such as the Canny and Roberts cross, the Sobel operator has a low calculation amount, simple structure, and high precision. Therefore, it has a wide range of applications in fields such as remote sensing [1], medical image processing [2], and industrial detection [3]. To date, most applications using Sobel have chosen the two-directional \(3\times 3\) operator as their detectors because of its low computational cost and reasonable performance. Nevertheless, some applications still require further size expansions and increases in the direction of the operator to satisfy their unique requirements. In the medical field, Sheik et al. [4] proposed a Sobel operator for the edge detection of the knee-joint space of osteoarthritis. They improved the operator by adding \(315^{\circ}\) and \(360^{\circ}\) directions on the bias of horizontal and vertical directions, which can perform the detection better than the original two. Remya et al. [5] applied the Sobel operator to the edge detection of brain tumors in MRI images. Their operator was improved to handle eight directions to clearly detect extremely irregularly shapes of tumors and showed a higher detection accuracy than other methods. In the industrial field, Min et al. [6] utilized the Sobel operator to detect the edges of screw threads. Considering the type of screw thread angles are mainly \(30^{\circ}\), \(55^{\circ}\) and \(60^{\circ}\), they assigned spatial weights and added \(67.5^{\circ}\) and \(112.5^{\circ}\) directions for the operator, which can efficiently extract more precise edges and achieve better continuity than conventional methods. In addition to the direction, Siyu et al. [7] expanded the operator size from \(3\times 3\) to \(7\times 7\) using the average gradient vectors of two neighboring pixels to quantify aggregate angularity. Compared with conventional methods using one pixel, the improved method helps calculate a more stable angularity index value. 
All these applications demonstrated that, in some cases, a large multi-directional Sobel operator has higher robustness than the traditional operator and can better adapt to actual requirements. However, as mentioned in [4], it always requires more computing time. In particular, as the image size increases, the amount of computation increases exponentially, which burdens applications using edge detection as a preprocessing step. This paper proposes a fast graphic processing unit (GPU) kernel for a four-directional \(5\times 5\) Sobel operator, because GPUs usually show an excellent performance on real-time image processing problems [8][9]. In our experience, a \(5\times 5\) Sobel operator has similar edge detection robustness to a \(7\times 7\) operator but higher than \(3\times 3\). Meanwhile, the processing speed of a multi-directional \(5\times 5\) operator is significantly slower than a \(3\times 3\) operator [10]. To improve kernel performance, we made innovations in the following aspects: * we implement the kernel based on warp-level primitives, which can significantly reduce the number of memory accesses; * we provide an efficient procedure with the prefetching mechanism, which significantly reduces the computational complexity and data transmission latency, and * we further accelerate the operations in diagonal directions using a two-step optimization approach, which helps to increase the data reuse rate. The remainder of this paper is organized as follows. Section 2 introduces the acceleration strategies and results of the current Sobel operator. Section 3 provides the principle of the four-directional \(5\times 5\) Sobel operator. In Section 4, we introduce the implementation and optimization details of our GPU kernel. Then, we evaluate the kernel performance in Section 5 and finally conclude this paper in Section 6. ## 2 Related Work Recently, several studies have been conducted on accelerating the Sobel edge detection using GPUs. Jo et al. [11] optimized their GPU-based Sobel kernel using shared memory. They assigned the entire image to several blocks and used the corresponding streaming multiprocessors (SMs) to detect edges in parallel with their respective local information. This approach is simple and versatile but has limited performance improvement because the overuse of shared memory typically affects the number of active blocks and reduces the parallelism of the kernel. Nevertheless, the shared memory approach is 1.65x faster than that of global memory. Chouchene et al. [12] realized a fast grayscale and Sobel edge detection on GPUs, which was approximately 50x faster than running on a CPU. Similar to [11], they split the edge detection task to each SM and stored the image fragments in the corresponding shared memory. The difference is that they enabled different sizes of CUDA blocks to accomplish the detection task, which proved more efficient in their application than the block-size consistent method. Xiao et al. [13] proposed an eight-directional Sobel operator on GPU using the open computing language (OpenCL) framework. In their implementation, each work item handles the convolution calculations for four pixels instead of one, which can significantly improve memory access efficiency and reduce computational complexity. Compared with approaches implemented by CPU, OpenMP, and CUDA, they are 9.55, 2.23, and 1.17x faster, respectively. Zuo. et al. [14] implemented a fast Sobel operator on GPUs. 
They optimized their GPU kernel as follows: 1) using the texture memory to store image data to accelerate memory access; 2) using a single thread to process the calculations of multiple pixels, which can significantly increase the overall throughput; and 3) fully exploiting the symmetry of Sobel operators to reuse intermediate results and reduce the overall computational complexity. They achieved a 122x acceleration ratio for a 4096 \(\times\) 4096 image compared with the CPU-based implementation. However, owing to the limitation of texture memory size, this approach is not suitable for common practical situations that require processing multiple images. The purpose of all these methods is to accelerate Sobel-based edge detection as much as possible, while ensuring the correctness of results to satisfy real-time processing requirements. However, owing to the limitations of their parallel algorithms under GPU architectures, much room remains for improving their acceleration methods. Next, we introduce an in-depth acceleration method for the Sobel operator. ## 3 Four-Directional \(5\times 5\) Sobel Operator ### Operator Definition The Sobel operator is a classical edge detection operator proposed by Irwin Sobel and Gary Feldman [15]. It is a discrete differential operator that detects the edge features of images by computing the pixel gradients. The original Sobel operator is an isotropic gradient operator using two \(3\times 3\) filters to convolve with an image and obtain derivative approximations: one each for horizontal and vertical changes. Equation 1 shows the basic computation of the Sobel operator: \[G_{x}=\begin{bmatrix}-1&\mathbf{0}&1\\ -2&\mathbf{0}&2\\ -1&\mathbf{0}&1\end{bmatrix}*I,G_{y}=\begin{bmatrix}-1&-2&-1\\ \mathbf{0}&\mathbf{0}&\mathbf{0}\\ 1&2&1\end{bmatrix}*I, \tag{1}\] where \(I\) represents the input image, and \(G_{x}\) and \(G_{y}\) are the two images containing the horizontal and vertical derivative approximations, respectively. \(*\) denotes the basic convolution calculation between the input image and the two filters. In these two filters, because the weights of the central axis in both directions are 0, and the two sides in both directions are opposite to each other, the convolution results are equivalent to calculating the differences between the two sides, which means calculating the gradients in both directions. Then, the final result can be aggregated by calculating the root sum of squares (RSS) of \(G_{x}\) and \(G_{y}\) as follows: \[G=\sqrt{G_{x}^{2}+G_{y}^{2}}. \tag{2}\] Figure 1(a) shows the original images, and Fig. 1(b) shows the local edge images (the yellow boxes in Fig. 1(a)) detected using the original two-directional \(3\times 3\) Sobel operator. Although some texture information is lost, the overall contours of the petals, houses, and figures are clearly preserved. Figure 1: Edge detection results. (a) Original image. (b) Two-directional \(3\times 3\). (c) Four-directional \(3\times 3\). (d) Four-directional \(5\times 5\). The original Sobel operator considers only the horizontal (\(0^{\circ}\)) and vertical (\(90^{\circ}\)) directions. To further enhance its effect, we introduce a \(5\times 5\) Sobel operator by adding two diagonal directions (\(45^{\circ}\) and \(135^{\circ}\)).
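Before moving to the larger operator, the baseline two-directional computation of Eqs. (1) and (2) can be written in a few lines. The Python/NumPy sketch below is ours (not from the paper); scipy.ndimage.correlate applies the masks exactly as written (no kernel flip), and since the two responses are squared and summed, this choice does not affect the magnitude in Eq. (2).

```python
import numpy as np
from scipy.ndimage import correlate

# 3x3 masks of Eq. (1)
KX3 = np.array([[-1, 0, 1],
                [-2, 0, 2],
                [-1, 0, 1]], dtype=float)
KY3 = np.array([[-1, -2, -1],
                [ 0,  0,  0],
                [ 1,  2,  1]], dtype=float)

def sobel_2dir(img):
    """Two-directional 3x3 Sobel magnitude, Eqs. (1)-(2)."""
    img = img.astype(float)
    gx = correlate(img, KX3, mode='nearest')   # horizontal derivative approximation
    gy = correlate(img, KY3, mode='nearest')   # vertical derivative approximation
    return np.sqrt(gx**2 + gy**2)              # root sum of squares (RSS), Eq. (2)

# usage: edges = sobel_2dir(gray_image)  # gray_image: 2D grayscale array
```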
The filters of the four-directional \(5\times 5\) Sobel operator can be defined as follows: \[G_{x} =\begin{bmatrix}-1&-2&\mathbf{0}&2&1\\ -4&-8&\mathbf{0}&8&4\\ -6&-12&\mathbf{0}&12&6\\ -4&-8&\mathbf{0}&8&4\\ -1&-2&\mathbf{0}&2&1\end{bmatrix}*I, \tag{3}\] \[G_{y} =\begin{bmatrix}-1&-4&-6&-4&-1\\ -2&-8&-12&-8&-2\\ \mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\ 2&8&12&8&2\\ 1&4&6&4&1\end{bmatrix}*I,\] \[G_{d} =\begin{bmatrix}-6&-4&-1&-2&\mathbf{0}\\ -4&-12&-8&\mathbf{0}&2\\ -1&-8&\mathbf{0}&8&1\\ -2&\mathbf{0}&8&12&4\\ \mathbf{0}&2&1&4&6\end{bmatrix}*I,\] \[G_{dt} =\begin{bmatrix}\mathbf{0}&-2&-1&-4&-6\\ 2&\mathbf{0}&-8&-12&-4\\ 1&8&\mathbf{0}&-8&-1\\ 4&12&8&\mathbf{0}&-2\\ 6&4&1&2&\mathbf{0}\end{bmatrix}*I,\] and the final results can be aggregated as follows: \[G=\sqrt{G_{x}^{2}+G_{y}^{2}+G_{d}^{2}+G_{dt}^{2}}, \tag{4}\] where \(G_{d}\) and \(G_{dt}\) represent the two images containing the diagonal derivative approximations. They can be obtained by rotating \(G_{x}\) and \(G_{y}\) by \(45^{\circ}\). In general, users can define the filter weights according to their needs and must only combine the Gaussian smoothing and differentiation. In Eq. 3, the weight values are generated using the OpenCV Sobel library and used to perform the edge detection shown in Fig. 1(d). To better distinguish the effect from the \(3\times 3\) operator, the detection results obtained by the four-directional \(3\times 3\) operator are listed in Fig. 1(c). Compared with the two-directional operator, the four-directional operator provides more abundant textures, such as petals and buildings. Furthermore, \(5\times 5\) operator is more insensitive to surrounding changes and less affected by noise, which helps provide clearer edge features and higher robustness than using \(3\times 3\). However, a four-directional \(5\times 5\) operator without optimization is approximately eight times more computationally intensive than the two-directional \(3\times 3\), which significantly slows the processing speed. Therefore, developing an efficient acceleration method for the four-directional \(5\times 5\) operator is essential. ### Filter Weight Generalization To avoid limiting our method to the constant weight values in Eq. 3, we generalize our Sobel operator as follows: \[K_{x} =a\cdot\begin{bmatrix}1\\ n\\ m\\ n\\ 1\end{bmatrix}\times\begin{bmatrix}-1&-b&\mathbf{0}&b&1\end{bmatrix} \tag{5}\] \[=a\cdot\begin{bmatrix}-1&-b&\mathbf{0}&b&1\\ -n&-nb&\mathbf{0}&nb&n\\ -m&-mb&\mathbf{0}&mb&m\\ -n&-nb&\mathbf{0}&nb&n\\ -1&-b&\mathbf{0}&b&1\end{bmatrix}=(k_{ij})_{5\times 5},\] \[K_{y} =a\cdot\begin{bmatrix}-1\\ -b\\ \mathbf{0}\\ b\\ 1\end{bmatrix}\times\begin{bmatrix}1&n&m&n&1\end{bmatrix}\] \[=a\cdot\begin{bmatrix}-1&-n&-m&-n&-1\\ -b&-nb&-mb&-nb&-b\\ \mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\ b&nb&mb&nb&b\\ 1&n&m&n&1\end{bmatrix}=(k_{ij})_{5\times 5},\] \[K_{d} =a\cdot\begin{bmatrix}-m&-n&-1&-b&\mathbf{0}\\ -n&-mb&-nb&\mathbf{0}&b\\ -1&-nb&\mathbf{0}&nb&1\\ -b&\mathbf{0}&nb&mb&n\\ \mathbf{0}&b&1&n&m\end{bmatrix}=(k_{ij})_{5\times 5},\] \[K_{dt} =a\cdot\begin{bmatrix}0&-b&-1&-n&-m\\ b&\mathbf{0}&-nb&-mb&-n\\ 1&nb&\mathbf{0}&-nb&-1\\ n&mb&nb&\mathbf{0}&-b\\ m&n&1&b&\mathbf{0}\end{bmatrix}=(k_{ij})_{5\times 5},\] \[a\in\mathbb{Z}^{+},\quad b,m,n\in\mathbb{R}^{+},\quad and \quad\forall k_{ij}\in\mathbb{Z}.\] In these filters, \(a,b,m\), and \(n\) are all positive numbers, and all the items \(k_{ij}\) are integers. 
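The separable structure of Eq. (5) is easy to check numerically. The Python sketch below is ours (not from the paper): it builds \(K_{x}\) and \(K_{y}\) as outer products, which with \(a=1\), \(b=2\), \(m=6\), \(n=4\) reproduce the weights of Eq. (3); it enters \(K_{d}\) and \(K_{dt}\) directly from Eq. (5), since they cannot be decomposed in the same simple way (their optimization is discussed in Section 4.3.5); and it aggregates the four responses according to Eq. (4).

```python
import numpy as np
from scipy.ndimage import correlate

def sobel5_filters(a=1, b=2, m=6, n=4):
    """Generalized 5x5 masks of Eq. (5); the defaults reproduce Eq. (3)."""
    col = np.array([1, n, m, n, 1], dtype=float)
    row = np.array([-1, -b, 0, b, 1], dtype=float)
    Kx = a * np.outer(col, row)          # smoothing column x differentiating row
    Ky = a * np.outer(row, col)          # differentiating column x smoothing row
    Kd = a * np.array([[-m, -n,   -1,  -b,    0],
                       [-n, -m*b, -n*b,  0,   b],
                       [-1, -n*b,  0,    n*b, 1],
                       [-b,  0,    n*b,  m*b, n],
                       [ 0,  b,    1,    n,   m]], dtype=float)
    Kdt = a * np.array([[ 0, -b,   -1,   -n,   -m],
                        [ b,  0,   -n*b, -m*b, -n],
                        [ 1,  n*b,  0,   -n*b, -1],
                        [ n,  m*b,  n*b,  0,   -b],
                        [ m,  n,    1,    b,    0]], dtype=float)
    return Kx, Ky, Kd, Kdt

def sobel5_4dir(img, **params):
    """Four-directional 5x5 Sobel magnitude, Eq. (4)."""
    img = img.astype(float)
    return np.sqrt(sum(correlate(img, K, mode='nearest')**2
                       for K in sobel5_filters(**params)))
```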
The generalized operator ensures that the absolute values of the weights remain symmetrical in the horizontal and vertical directions, and the positive and negative relationship is unchanged. Here, a constraint is added to the weight values: expressing \(K_{x}\) and \(K_{y}\) as a constant \(a\) multiplied by two vectors containing 1, which means that the filter weight values are proportional in both horizontal and vertical directions. This constraint is expected to help us reuse the intermediate results and improve computational efficiency without affecting the Sobel edge detection. \(K_{d}\) and \(K_{dt}\) do not satisfy this rule, requiring further optimization in Section 4.3.5. ## 4 GPU Implementation In this section, we introduce our acceleration strategies from three aspects: 1) the kernel implementation method based on warp-level primitives, 2) the procedure for the entire image using the prefetching mechanism, and 3) the operator transformation for diagonal directions. However, we first introduce two key GPU techniques used in our optimization method. ### GPU Warp-level Primitives Nvidia GPUs and the CUDA programming model use the single instruction, multiple threads (SIMT) execution model to maximize the computing capability of GPUs [16]. GPUs execute warps of 32 parallel threads using SIMT, enabling each thread to access its registers, load and store from divergent addresses, and follow divergent control flow paths. In addition, the CUDA compiler and GPUs work together to ensure the threads of a warp execute the identical instruction sequences together as fast as possible. Current CUDA programs can achieve high performances using explicit warp-level primitives, such as warp shuffles. Warp shuffles are a fast mechanism for shifting and exchanging register data between threads in the same warp, such as \(\_shfl\_down\_sync\) and \(\_shfl\_xor\_sync\). These instructions can efficiently complete the reduction and scan operations for the data stored in vector registers without using other types of memory. Using the prefetching mechanism can save data latency and increase practical thread utilization, significantly improving program performance. This study fully uses this mechanism to implement our GPU kernel. ### Prefetching Mechanism The prefetching mechanism is a standard technology used in CPUs to hide the latency of memory operation. The processor caches data and instruction blocks before they are executed. While that data travels to the execution units, other instructions can be executed simultaneously. GPUs also support the prefetching mechanism, which has higher costs than CPUs. Although GPUs typically use excess threads to hide memory latency, using the prefetching mechanism is an excellent decision through explicit instructions, which require frequent access to the global memory and loading part of data each time [17]. ### GPU Kernel Design We now introduce the details of our GPU kernel. The preparations for the input image, including grayscale, boundary padding, and transmission, are treated the same as in [18]. #### 4.3.1 Task Assignment As shown in Fig. 2(a), the input image is evenly distributed to different blocks for parallel processing by the GPU. Each block is assigned multiple rows and columns of image data. Because the Sobel filter is a surrounding window centered on the target pixel, two adjacent blocks must have overlaps. When the radius of the filter is \(r\), the overlap between any two blocks is \(2r\). 
#### 4.3.2 Data Flow For a large input image, assigning a thread to each pixel is expensive. Our strategy involves allocating sufficient threads in the horizontal direction while processing sequentially in the vertical direction. Moreover, because the Sobel operator does not require frequent data sharing between pixels, the shared memory is not used in our kernel. This avoids reducing block parallelism caused by excessive allocation of shared memory and reduces latency caused by memory accesses. Figure 2(b) shows the data flow of one block for filter \(K_{x}\). At the beginning, \(2r+1\) rows (5 rows for a 5 \(\times\) 5 operator) of the input image are loaded into the kernel sequentially and processed to achieve the detection result of the \(r\)th row (ROW 0). Then, for each incremental row (ROW 1,2) in the output image, only the incremental parts of the input image (row 5,6) must be updated at a time. This is primarily because the filter \(K_{x}\) can be decomposed into the product of the two vectors shown in Eq. 5, indicating that the calculations of the horizontal and vertical directions can be performed separately. Therefore, the intermediate results of the overlapping rows (rows 2 - 4) can be held in the kernel, and only incremental rows must be calculated each time. As mentioned, because each block has overlapping regions, the output image size is smaller than the input, with \(2r\) fewer columns and rows in the horizontal and vertical directions, respectively. #### 4.3.3 Process Detail Figure 2(c) shows the process detail of one warp for filter \(K_{x}\). The actions can be divided into three steps as follows. * Step 1: each thread loads the corresponding pixel data in one row from the global memory to the register, and then shares it with other threads within the same warp using the \(\_shfl\_down\_sync\) primitive. Here, we define the \(p_{i}^{l}\) to denote the obtained pixel data, where \(i\) denotes the thread ID and \(j\) denotes the pixel index. Because for a 5 \(\times\) 5 filter, the upper bound of \(j\) is \(i+4\), and data sharing between threads cannot cross the warp, the last four (\(2r\)) threads will be idle, which is the reason the overlap between blocks is \(2r\) columns. * Step 2: after obtaining the necessary pixel data, each thread performs the basic convolution operations as follows: \[F_{i}^{u}=-1\cdot p_{i}^{0}+(-b)\cdot p_{i}^{1}+b\cdot p_{i}^{3}+p_{i}^{4},\] (6) and stores result \(F_{i}^{u}\) to the corresponding register \(R_{i}^{u}\), where \(u\) denotes the row index of the input image. Then, for the initial calculation, Steps 1 and 2 are repeated for \(2r+1\) times until all rows are calculated. Otherwise, only the oldest register data needs to be updated. Note that because our method is based on separable convolution, instead of expanding the calculation around a target pixel, thread \(i\) actually calculates the result of pixel \(i+r\) in each row. * Step 3: after completing horizontal calculations, each thread begins to perform the vertical convolution operations \(G_{i}\), whose equation can be expressed as follows: \[\begin{split}& G_{i}^{v}=a\cdot F_{i}^{f(v-2)}+an\cdot F_{i}^{f(v-1)}+ am\cdot F_{i}^{f(v)}\\ &\quad+an\cdot F_{i}^{f(v+1)}+a\cdot F_{i}^{f(v+2)},\end{split}\] (7) where \[f(x)=x\text{ mod }5,\quad x\geq 0.\] (8) \(f(x)\) is used to represent the row index because \(F\) obtained by the Eq. 6 is dynamically updated. The \(v\) in Eq. 7 denotes the index of the center row, always maintaining the variable \(x\) greater than 0. 
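The three steps can be emulated on a CPU to check the arithmetic of Eqs. (6)-(8): each incoming row is reduced with the horizontal weights, the per-row results are kept in a circular buffer of \(2r+1\) entries indexed by \(f(x)=x\bmod 5\), and the vertical weights are then applied across the buffer. The Python sketch below is ours; it mirrors only the data flow, not the warp-shuffle data sharing of the CUDA kernel, and uses \(a=1\), \(b=2\), \(m=6\), \(n=4\) as in Eq. (3).

```python
import numpy as np

def gx_streaming(img, a=1, b=2, m=6, n=4):
    """Streaming evaluation of G_x for the separable 5x5 filter K_x:
    horizontal pass per row (Eq. 6), then vertical combination over a
    5-entry circular buffer of row results (Eqs. 7-8)."""
    H, W = img.shape
    r = 2                                              # filter radius
    F = np.zeros((2 * r + 1, W - 2 * r))               # circular buffer of F_i^u
    G = np.zeros((H - 2 * r, W - 2 * r))               # valid-region output
    wv = np.array([a, a * n, a * m, a * n, a], float)  # vertical weights [a, an, am, an, a]

    def horiz(row):                                    # Eq. (6): -p0 - b*p1 + b*p3 + p4
        p = [row[j:j + W - 2 * r] for j in range(5)]
        return -p[0] - b * p[1] + b * p[3] + p[4]

    for u in range(H):
        F[u % (2 * r + 1)] = horiz(img[u].astype(float))   # f(x) = x mod 5
        if u >= 2 * r:                                     # five rows are buffered
            v = u - r                                      # center row, Eq. (7)
            G[v - r] = sum(wv[i] * F[(v - 2 + i) % (2 * r + 1)] for i in range(5))
    return G

# G matches a direct 2D correlation with K_x on the valid (interior) region.
```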
Referring to the three steps, edge detection in the horizontal direction can be effectively implemented using the \(5\times 5\) Sobel filter \(K_{x}\). Similarly, detection using filter \(K_{y}\) in the vertical direction can be implemented in the same manner, using different coefficients. #### 4.3.4 Optimization for Data Loading Typically, the data loading of the increments and calculations can be sequentially alternated as shown in Fig. 3(a). Its benefit is in avoiding occupying too many on-chip registers, thereby reducing the thread parallelism. However, it also directly leads to frequent accesses to the global memory, which can generate a considerable latency. To solve this, we explicitly load the incremental image data from the global memory while calculating the on-chip data using the prefetching mechanism mentioned in Section 4.2. As shown in Fig. 3(b), after loading the fourth row, we continue with loading the fifth row without directly completing the calculations of \(G\). Instead, these calculations will end while waiting for the loading to complete, which can help us achieve parallelism in time and significantly improve the efficiency of kernel execution. Here, because the prefetching trades more registers for time parallelism, to avoid additional burden to the processor, only one row of image data is fetched at a time. Thus, the row index function \(f(x)\) in Eq. 8 is changed to \[f(x)=x\bmod 6,\hskip 14.226378ptx\geq 0, \tag{9}\] when the prefetching mechanism is active. #### 4.3.5 Optimization for Diagonal Direction According to Eq. 5, elements \(k_{ij}\) of \(K_{d}\) and \(K_{dt}\) are neither symmetric nor proportional, which implies that they cannot be directly decomposed in the same manner as \(K_{x}\) and \(K_{y}\). This also implies that the convolution results \(F_{d}\) and \(F_{dt}\) in horizontal cannot be reused and must be recalculated for each \(G_{d}\) and \(G_{dt}\). To solve this, we propose a new idea to Figure 2: Implementation of 5\(\times\)5 Sobel filter \(K_{x}\). (a) Task assignment. (b) Data flow. (c) Process detail. generate two matrices \(K_{d+}\) and \(K_{d-}\) as follows. \[K_{d+} =K_{d}+K_{dt} \tag{10}\] \[=a\cdot\begin{bmatrix}-m&-n-b&-2&-n-b&-m\\ b-n&-mb&-2nb&-mb&b-n\\ \mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\ n-b&mb&2nb&mb&n-b\\ m&n+b&2&n+b&m\end{bmatrix},\] \[K_{d-} =K_{d}-K_{dt}\] \[=a\cdot\begin{bmatrix}-m&b-n&\mathbf{0}&n-b&m\\ -n-b&-mb&\mathbf{0}&mb&n+b\\ -2&-2nb&\mathbf{0}&2nb&2\\ -n-b&-mb&\mathbf{0}&mb&n+b\\ -m&b-n&\mathbf{0}&n-b&m\end{bmatrix},\] which satisfy the symmetry requirements by calculating the sum and difference of \(K_{d}\) and \(K_{dt}\). 
If we efficiently use the two filters \(K_{d+}\) and \(K_{d-}\), then \(G_{d}\) and \(G_{dt}\) are easily obtained as follows: \[G_{d} =K_{d}*I=\frac{K_{d}^{+}+K_{d}^{-}}{2}*I=\frac{G_{d}^{+}+G_{d}^{- }}{2}, \tag{11}\] \[G_{dt} =K_{dt}*I=\frac{K_{d}^{+}-K_{d}^{-}}{2}*I=\frac{G_{d}^{+}-G_{d}^{ -}}{2}.\] For each row \(u\), the convolution results \(F_{k}^{u}\) in the horizontal direction using filter \(K_{d+}\) can be obtained as follows: \[F_{k0}^{u} =-am\cdot p^{0}+a(-n-b)\cdot p^{1}+(-2a)\cdot p^{2} \tag{12}\] \[+a(-n-b)\cdot p^{3}+(-am)\cdot p^{4},\] \[F_{k1}^{u} =a(b-n)\cdot p^{0}+(-amb)\cdot p^{1}+(-2amb)\cdot p^{2}\] \[+(-amb)\cdot p^{3}+a(b-n)\cdot p^{4},\] \[F_{k2}^{u} =0,\] \[F_{k3}^{u} =a(n-b)\cdot p^{0}+(amb)\cdot p^{1}+(2amb)\cdot p^{2}\] \[+(amb)\cdot p^{3}+a(n-b)\cdot p^{4},\] \[F_{k4}^{u} =am\cdot p^{0}+a(n+b)\cdot p^{1}+(2a)\cdot p^{2}\] \[+a(n+b)\cdot p^{3}+(am)\cdot p^{4}.\] In addition, for a center row \(v\), the results \(G_{d+}^{v}\) can be obtained by aggregating the four \(F_{ki}\) from adjacent rows as follows: \[G_{d+}^{v}=F_{k0}^{v-2}+F_{k1}^{v-1}+F_{k3}^{v+1}+F_{k4}^{v+2},v\geq 2. \tag{13}\] Here, for the convenience of understanding, we use \(ki\) to represent the vector index in filter \(K_{d+}\), instead of using the thread index \(i\) in Eq. 6. Because the absolute values of the weights are symmetrical, for each row \(u\), \(F_{k3}^{u}\) and \(F_{k4}^{u}\) are easily obtained using \(F_{k1}^{u}\) and \(F_{k0}^{u}\): \[F_{k3}^{u} =F_{k1}^{u}=-F_{k1}^{u}, \tag{14}\] \[F_{k4}^{u} =F_{-k0}^{u}=-F_{k0}^{u},\] and \[G_{d+}^{v}=F_{k0}^{v-2}+F_{k1}^{v-1}-F_{k1}^{v+1}-F_{k0}^{v+2},v\geq 2. \tag{15}\] Thus, we effectively reuse part of the intermediate results, as shown in Fig. 4, without repeating convolution operations for each row. Figure 4(a) shows the procedure of \(G_{d+}\) and Fig. 4(b) presents synchronous changes of on-chip register data. In Step 1, we use \(kI\) to convolve the second row instead of \(k2\), because \(k2\) is a zero vector and does not affect the convolution result. This prepares the reused data required to ensure operation consistency in each step. Furthermore, we regard \(F_{k3}^{3}\) and \(F_{k4}^{4}\) as \(F_{-k1}^{3}\) and \(F_{-k0}^{4}\), respectively, to be able to discover the pattern of data reuse. Then, \(G_{d+}^{2}\) can be obtained according to Eq. 15 while loading the fifth row of the image. After it succeeds, the sixth row begins to be loaded. Simultaneously, the vectors from \(k0\) to -\(k0\) are strided down to convolve the rows centered on row 3. In Step 2, in addition to convolving the fifth row with -\(k0\), only \(F_{k0}^{1}\) and \(F_{-k1}^{4}\) (green blocks) need to be recalculated because \(F_{k1}^{2}\) can be reused. Compared with the original operations in Step 1, Step 2 significantly saves a quarter of the computation, ensuring that the filtering of \(K_{d+}\) is efficiently performed. Note that after Step 3, the second vectors \(kI\) used in each step are always the opposite of the previous step. However, this calculation can be reflected in the calculation of \(G_{d+}\) without updating the register data. This method can be repeatedly applied to the calculation of incremental rows, while the register index of the latest row is dynamically updated according to Eq. 9. Figure 3: Optimization for data loading. (a) Sequential execution. (b) Prefetching. 
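The identity in Eq. (11) follows from the linearity of convolution and can be verified directly: filtering with \(K_{d}+K_{dt}\) and \(K_{d}-K_{dt}\) and then taking the half-sum and half-difference must reproduce \(G_{d}\) and \(G_{dt}\). The short Python check below is ours; it uses the Eq. (3) weights and plain 2D filtering, whereas the CUDA kernel exploits the same identity together with the reuse scheme of Fig. 4.

```python
import numpy as np
from scipy.ndimage import correlate

# Diagonal 5x5 masks from Eq. (3)
Kd = np.array([[-6, -4, -1, -2,  0],
               [-4,-12, -8,  0,  2],
               [-1, -8,  0,  8,  1],
               [-2,  0,  8, 12,  4],
               [ 0,  2,  1,  4,  6]], dtype=float)
Kdt = np.array([[ 0, -2, -1, -4, -6],
                [ 2,  0, -8,-12, -4],
                [ 1,  8,  0, -8, -1],
                [ 4, 12,  8,  0, -2],
                [ 6,  4,  1,  2,  0]], dtype=float)

img = np.random.rand(64, 64)
Gd_direct  = correlate(img, Kd,  mode='nearest')
Gdt_direct = correlate(img, Kdt, mode='nearest')

# Symmetrized filters of Eq. (10) and the recovery step of Eq. (11)
Gd_plus  = correlate(img, Kd + Kdt, mode='nearest')
Gd_minus = correlate(img, Kd - Kdt, mode='nearest')
assert np.allclose((Gd_plus + Gd_minus) / 2, Gd_direct)
assert np.allclose((Gd_plus - Gd_minus) / 2, Gdt_direct)
```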
For filter \(K_{d-}\), the \(F_{kl}^{u}\) in the horizontal direction can be obtained as follows: \[\begin{split} F_{k0}^{u}&=-am\cdot p^{0}+a(b-n)\cdot p ^{1}\\ &+a(n-b)\cdot p^{3}+(am)\cdot p^{4},\\ F_{k1}^{u}&=a(-n-b)\cdot p^{0}+(-amb)\cdot p^{1}\\ &+(amb)\cdot p^{3}+a(n+b)\cdot p^{4},\\ F_{k2}^{u}&=(-2a)\cdot p^{0}+(-2amb)\cdot p^{1}+2amb \cdot p^{3}+2a\cdot p^{4}\\ F_{k3}^{u}&=F_{k1}^{u}\\ F_{k4}^{u}&=F_{k0}^{u}.\end{split} \tag{16}\] In addition, the \(G_{d-}^{v}\) of a center row \(v\) can be obtained using Eq. 17: \[G_{d-}^{v}=F_{k0}^{v-2}+F_{k1}^{v-1}+F_{k2}^{v}+F_{k1}^{v+1}+F_{k0}^{v+2},v \geq 2. \tag{17}\] Figure 5(a) shows the procedure of \(G_{d-}\), and Fig. 5(b) presents the changes of on-chip register data synchronously, which is the same method as \(G_{d+}\). According to this figure, all convolutions for each row must be recalculated in each step because the \(K_{d-}\) filter no longer has a zero vector in the horizontal direction, missing buffers that can be reused. Inspired by Eq. 5, we decompose \(K_{d-}\) into the sum of two products of two vectors as follows: \[\begin{split} K_{d-}&=a\cdot\begin{bmatrix}-m&b-n& \mathbf{0}&n-b&m\\ -n-b&-mb&\mathbf{0}&mb&n+b\\ -2&-2nb&\mathbf{0}&2nb&2\\ -n-b&-mb&\mathbf{0}&mb&n+b\\ -m&b-n&\mathbf{0}&n-b&m\end{bmatrix}\\ =& a\cdot\begin{bmatrix}m\\ n+b\\ 2\\ n+b\\ m\end{bmatrix}\times\begin{bmatrix}-1&-b&\mathbf{0}&b&1\end{bmatrix}\\ &-\begin{bmatrix}mb+b-n\\ nb+b^{2}-mb\\ 2b-2nb\\ nb+b^{2}-mb\\ mb-n+b\end{bmatrix}\times\begin{bmatrix}0&-1&\mathbf{0}&1&0\end{bmatrix}\end{split}. \tag{18}\] Equation 18 demonstrates that the first 1\(\times\)5 horizontal vector is the same as \(K_{x}\), which means that its intermediate results can be reused without recalculation. In addition, the second horizontal vector means we only need to calculate the difference between columns 2 and 4. Thus, the \(G_{d-}^{v}\) of a center row \(v\) is easily obtained as follows: \[\begin{split} G_{d-}^{v}&=am\cdot F^{f(v-2)}+a(n+b)\cdot F^{f(v- 1)}+2a\cdot F^{f(v)}\\ &+a(n+b)\cdot F^{f(v+1)}+am\cdot F^{f(v+2)}\\ &-a(mb+b-n)\cdot D^{f(v-2)}-a(nb+b^{2}-mb)\cdot D^{f(v-1)}\\ &-a(2b-2nb)\cdot D^{f(v)}-a(nb+b^{2}-mb)\cdot D^{f(v+1)}\\ &-a(mb-n+b)\cdot D^{f(v+2)},\end{split} \tag{19}\] and \[\begin{split} f(x)=x\bmod 6,\hskip 14.226378ptx\geq 0,\end{split}\] Figure 4: Calculation for \(G_{d+}\). (a) procedure. (b) Registers status. where \(D\) denotes the convolution results under the second horizontal vector. Therefore, although the on-chip registers must still be updated every time, we only need to perform simple multiply-accumulate operations instead of multiple convolutions, significantly reducing the overall calculation amount and improving the processing speed. Thus far, the efficient calculation methods in all four directions for a \(5\times 5\) Sobel operator have been introduced, and the final edge detection result can be obtained by integrating the respective results in these four directions according to Eq. 4. ## 5 Evaluation and Discussion ### Evaluation Platforms We implemented our multi-directional 5\(\times\)5 Sobel operator kernel on an embedded Jetson AGX Navier GPU and Nvidia GTX 1650Ti mobile GPU, because as widely used mobile GPUs, they have recently been used in some studies to handle systems that combine Sobel operators with other upper-layer applications [19][20][21]. At the same time, these two kinds of GPUs have different architectures, which determine that the same CUDA kernel often reflects utterly different performance. 
Jetson AGX Xavier is a powerful platform, built on an Nvidia Volta GPU with 512 cores and shares physical memory with the center processor. Users can \begin{table} \begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c} \hline \hline \multirow{2}{*}{**Hardware**} & \multirow{2}{*}{**Sobel operator**} & \multirow{2}{*}{**Image size**} & \multirow{2}{*}{**GM**} & \multirow{2}{*}{**SM**} & \multirow{2}{*}{**SM-P**} & \multirow{2}{*}{**RG**} & \multirow{2}{*}{**RG**-v1} & \multirow{2}{*}{**RG**-v2} & \multirow{2}{*}{**IO**} & \multirow{2}{*}{**HToD**} & \multirow{2}{*}{**DToH**} & \multirow{2}{*}{**Speedup**} \\ \hline \multirow{4}{*}{GTX1650Ti} & \multirow{4}{*}{\(3\times 3\)} & \(512\times 512\) & 10.285 & 14.626 & 14.246 & **6.766** & - & - & 80.475 & 6.035 & 6.233 & 1.52 \\ & & \(1024\times 1024\) & 43.792 & 55.639 & 54.455 & **30.791** & - & - & 316.52 & 6.115 & 6.287 & 1.42 \\ & & \(2048\times 2048\) & 165.86 & 206.81 & 198.71 & **105.22** & - & - & 1263.6 & 6.151 & 6.254 & 1.576 \\ \cline{2-13} & \multirow{4}{*}{\(5\times 5\)} & \(512\times 512\) & 29.712 & 30.913 & 28.606 & 24.905 & 20.884 & **18.747** & 80.475 & 6.035 & 6.223 & 1.585 \\ & & \(1024\times 1024\) & 124.00 & 109.59 & 107.78 & 95.940 & 77.918 & **66.225** & 316.52 & 6.115 & 6.287 & 1.872 \\ & & \(2048\times 2048\) & 424.37 & 418.50 & 411.91 & 350.36 & 286.01 & **249.22** & 1263.9 & 6.151 & 6.254 & 1.702 \\ \hline \multirow{4}{*}{Jetson AGX} & \multirow{4}{*}{\(3\times 3\)} & \(512\times 512\) & 17.426 & 17.693 & 17.139 & **13.881** & - & - & 17.049 & 26.512 & 34.488 & 1.255 \\ & & \(1024\times 1024\) & 60.748 & 59.483 & 60.975 & **37.816** & - & - & 65.737 & 28.802 & 32.440 & 1.606 \\ & & \(2048\times 2048\) & 250.65 & 230.66 & 227.94 & **160.81** & - & - & 914.23 & 10.504 & 7.501 & 1.559 \\ \cline{1-1} \cline{2-13} & \multirow{4}{*}{\(5\times 5\)} & \(512\times 512\) & 48.625 & 34.739 & 34.379 & 31.346 & 28.118 & **26.620** & 17.049 & 26.512 & 34.488 & 1.827 \\ \cline{1-1} & & \(1024\times 1024\) & 164.41 & 141.66 & 120.68 & 116.14 & 115.06 & **93.354** & 65.737 & 28.802 & 32.440 & 1.761 \\ \cline{1-1} & & \(2048\times 2048\) & 694.99 & 572.95 & 549.38 & 499.15 & 454.63 & **368.24** & 914.23 & 10.504 & 7.501 & 1.887 \\ \hline \hline \end{tabular} \end{table} Table 1: Speed performance of our four-directional Sobel operators. Figure 5: Calculation for \(G_{d^{-}}\). (a) procedure. (b) Registers status. allow both the host and device to access the shared data by applying for managed memory, reducing the impact of data transmission. In contrast, the GTX 1650Ti is built on a Turing architecture with 1024 cores and has dedicated memory. Although it has a higher calculation capability than Jetson AGX Xavier, the data it processes must be first transferred from the host memory to the device memory through the PCIe bus, which increases the burden of IO. Evaluating our kernel on both platforms helps us fully understand its performance, and users can choose different solutions according to their requirements. ### Evaluation of Kernel Performance We evaluated the performance using three different sizes of images: 512\(\times\)512, 1024\(\times\)1024, and 2048\(\times\)2048. To evaluate kernel performance more comprehensively, we used global and shared memory as storage mediums other than the register. Additionally, we compared the 3\(\times\)3 operator. Table 1 lists the speed performance of our four-directional kernels. _GM_, _SM_, and _RG_ represent the original methods using global memory, shared memory, and registers, respectively. 
_SM-P_ represents the method that covers transmission latency by adding the prefetching mechanism to _SM_. _RG-v1_ indicates that we transformed the original diagonal filters \(K_{d}\) and \(K_{dt}\) into \(K_{d+}\) and \(K_{d-}\); and _RG-v2_ the method that fur Figure 6: Speed comparison of four-directional Sobel operators for a \(1024\times 1024\) image in different resource configurations. ther decomposes \(K_{d}\)- based on _RG-v1_. All _RG_ series kernels are equipped with the prefetching mechanism. Because the calculation of the 3\(\times\)3 operator in the diagonal directions is not complicated, we only perform _RG-v1_ and _RG-v2_ to the 5\(\times\)5 operator. _Speedup_ denotes the ratio of _GM_ to _RG_ at runtime. For each kernel, we used the _NVprof_ profiling tool to measure the kernel execution time 100 times and took the average value as the final execution time. Also, we calculated the standard deviation of the execution time of each kernel, ranging from 0.06 to 5.05, which fully proves the robustness of our measurement results. Regardless of the platform, our kernel achieved a 1.3x speedup, and the maximum even reached over 1.8x. For the degraded case, the _SM_ series kernels on a GTX 1650Ti are slower than those of _GM_. This is because using the shared memory without optimization only increases data transmission costs and reduces kernel performance. Although using the prefetching mechanism can hide latency and reduce the execution time by 19\(\upmu\)s on average compared with _GM_, the performance of _SM-P_ for a 3\(\times\)3 operator on GTX 1650Ti is still lower than that of _GM_. This implies that, for a 3\(\times\)3 operator, using shared memory as the storage medium is not a good choice. The execution time of the kernels increases linearly with the image size on both platforms: approximately 4x on both platforms. This steady change indirectly indicates that our method can fully utilize hardware resources. Additionally, in almost all cases, the introductions of _SM-P_, _RG-v1_ and _RG-v2_ gradually introduce a reduction in execution time, indicating that our proposed methods are effective and have high robustness. 
Particularly for the 5\(\times\)5 operator, the speed of our accelerated kernel _RG-v2_ (66.225\(\upmu\)s) is only 33% slower than the original 3\(\times\)3 kernel _GM_ (43.792\(\upmu\)s), enabling us to use the 5\(\times\)5 So \begin{table} \begin{tabular}{l|c|c|c|c|c|c} \hline \hline **Method** & **Operator size** & **Image size** & **Runtime\({}^{*}\) (ms)** & **Hardware** & **MPS** & **MPS/C** \\ \hline \multirow{4}{*}{**SobelGPU-Jetson**} & \multirow{4}{*}{\(5\times 5\)} & \(1024\times 1024\) & **0.074** & & **1.41E10** & **2.77E7** \\ & & \(2048\times 2048\) & **0.519** & & **8.07E9** & **1.58E7** \\ & \(5\times 5\) & \(1024\times 1024\) & **0.085** & & **1.24E10** & **2.42E7** \\ & & \(2048\times 2048\) & **0.552** & & **7.59E9** & **1.48E7** \\ \hline \multirow{4}{*}{**SobelGPU-GTX**} & \multirow{4}{*}{\(5\times 5\)} & \(1024\times 1024\) & 0.190 & & 5.51E9 & 5.38E6 \\ & & \(2048\times 2048\) & 0.740 & & 5.66E9 & 5.53E6 \\ & & \(1024\times 1024\) & 0.199 & & 5.27E9 & 5.14E6 \\ & & \(2048\times 2048\) & 0.763 & & 5.49E9 & 5.36E6 \\ \hline \multirow{4}{*}{OpenCV-GPU 1} & \multirow{4}{*}{\(5\times 5\)} & \(1024\times 1024\) & 0.512 & & 2.05E9 & 4E6 \\ & & \(2048\times 2048\) & 1.778 & & 2.36E9 & 4.6E6 \\ & & \(1024\times 1024\) & 0.566 & & 1.85E9 & 3.62E6 \\ & & \(2048\times 2048\) & 1.832 & & 2.29E9 & 4.47E6 \\ \hline \multirow{4}{*}{OpenCV-GPU 2} & \multirow{4}{*}{\(5\times 5\)} & \(1024\times 1024\) & 2.43 & & 4.31E8 & 4.21E5 \\ & & \(2048\times 2048\) & 9.82 & & 4.27E8 & 4.18E5 \\ & & \(1024\times 1024\) & 2.53 & & 4.14E8 & 4.05E5 \\ & & \(2048\times 2048\) & 9.90 & & 4.24E8 & 4.14E5 \\ \hline Xiao [13] & \multirow{2}{*}{\(3\times 3\)} & \(1024\times 1024\) & \(5.48^{\dagger}\) & \multirow{2}{*}{GTX 1070} & 1.91E8 & 9.97E4 \\ & & \(2048\times 2048\) & 18.95\({}^{\dagger}\) & & 2.21E8 & 1.15E5 \\ \hline Zahra [22] & \multirow{2}{*}{\(3\times 3\)} & \(512\times 512\) & 3.62 & & \multirow{2}{*}{GTX 550Ti} & 7.24E7 & 3.77E5 \\ & & \(1024\times 1024\) & 14.74 & & 7.11E7 & 3.7E5 \\ \hline \multirow{4}{*}{Theodora [23]} & \multirow{2}{*}{\(3\times 3\)} & \(1024\times 1024\) & 0.601 & & 1.74E9 & 1.36E6 \\ & & \(2048\times 2048\) & 0.926 & & 4.52E9 & 3.53E6 \\ & & \(1024\times 1024\) & 0.837 & & 1.25E9 & 9.79E5 \\ & & \(2048\times 2048\) & 1.174 & & 3.57E9 & 2.79E6 \\ \hline Dore [24] & \multirow{2}{*}{\(3\times 3\)} & \(1024\times 1024\) & \(11.01^{\dagger}\) & \multirow{2}{*}{GTX 470} & 9.5E7 & 2.1E5 \\ & & \(2048\times 2048\) & 84.023\({}^{\dagger}\) & & 5E7 & 1.1E5 \\ \hline You [25] & \multirow{2}{*}{\(3\times 3\)} & \(1024\times 768\) & 5 & \multirow{2}{*}{DE1-SoC} & 1.57E8 & - \\ & & \(1920\times 1080\) & 15 & & 1.38E8 & - \\ \hline Sato [26] & \multirow{2}{*}{\(3\times 3\)} & \(512\times 512\) & 1.1 & Cyclone II & 2.38E8 & - \\ & & \(1024\times 1024\) & 4.37 & & 2.39E8 & - \\ \hline Tim [27] & \multirow{2}{*}{\(3\times 3\)} & \(1280\times 1024\) & 31 & 2YNQ 7030 & 4.22E7 & - \\ \hline \hline \end{tabular} *: Runtime includes the kernel execution time and data loading time. \({}^{\dagger}\): Only the kernel execution time is included. \end{table} Table 2: Speed comparisons of two-directional Sobel operators with other methods. bel operator with higher detection precision instead of 3x3 in the future. _IO_ denotes the data transmission time required according to the hardware architecture. As mentioned, because Jetson AGX Xavier GPU shares the physical memory between the GPU and CPU, it does not cost too much on IO. 
By contrast, the IO costs required on GTX 1650Ti are much higher than the kernel execution. The _Throughput_ metrics between the host and the device in both directions also reflect the same issue. The throughputs we achieved on Jetson AGX Xavier are much higher than those of GTX 1650Ti, but still far from theoretical values. This means that our kernel is memory limited and could be further ameliorated by processing larger images or video streams. Because our kernel is implemented in CUDA, the block size configuration is closely related to its performance. Therefore, we provide different combinations of block configurations to the kernels and perform the 3x3 and 5x5 four-directional Sobel operators on the 1024x1024 image shown in Fig. 6. Each row of graphs represents different storage mediums used in our kernels. The graphs in columns 1 and 2 show the processing results of the 3x3 operator under different block configurations and platforms, and columns 3 and 4 show those for 5x5. In each sub-graph, the x-axis represents the number of _grid.y_, which is determined by the number of image sizes and threads allocated in the y direction within each block; the larger the number, the fewer rows each block processes in the y-axis direction. The y-axis represents the execution time of these kernels. The methods performed here are the same as those tested in Table 1. The difference is that we specified _block1: (128,1)_ and _block2: (256,1)_ configurations for each method. According to the results, _block1_ and _block2_ do not affect much under the same method in most cases. Moreover, the speed relationship between these methods is the same as that shown in Table 1. Therefore, we predict that the individual difference is caused by changes in thread parallelism resulting from different register usage rates. For each method, a large _grid.y_ typically shows a high performance because a higher number of blocks indicates a more significant number of active blocks in parallel. Thus, the kernel can use hardware resources better, resulting in better performance. Table 2 shows the speed comparison of our kernel with other methods in two directions. The comparison objects are fast Sobel operators published in recent years, and each study provides their operator sizes, image sizes, execution time, and required hardware platforms. Here, _Runtime_ includes the kernel execution time and data loading time from the host memory to the device. The hardware used by these studies is primarily divided into two categories: GPUs and field programmable gate arrays (FPGAs). Both are the most widely used algorithm accelerators today. To compare the computing capability of these operators based on different hardware, we list the mega-pixel per second (MPS) values of all these studies. Additionally, we use the mega-pixel per second per core (MPS/C) parameter, which represents the number of pixels processed per second by each core, to normalize their processing capabilities on different GPUs. According to the _Runtime_, our kernels based on AGX are faster than those based on 1650Ti, contrary to the results shown in Table 2. This is because of the considerable time required by the IO, resulting in a decrease in overall throughput. Compared with other studies, our operators are much faster in each case. Particularly for OpenCV-GPU, the most commonly used method in image edge detection, the processing speed is approximately 3.3x to 13.3x slower than our kernels. 
This is because the OpenCV-GPU treats the Sobel operator as a 2D convolution filter by default, whereas ours is further optimized on the basis of two 1D separable kernels. Besides, the OpenCV-GPU does not provide functions for the diagonal directions, where our kernel has a considerable advantage. Xiao [13], Zahra [22], and Dore [24] implemented their fast 3x3 Sobel operators on GPUs, and You [25], Sato [26], and Tim [27] implemented theirs on FPGAs. They all achieved real-time processing on large-scale images, but remain at the millisecond level, leaving little processing time for upper-layer applications. Theodora [23] presented a more complete evaluation, covering combinations of two Sobel operator sizes and two image sizes. Their execution times are approximately 1.3x to 4.2x longer than ours, even using a GTX 1060 GPU superior to our GTX 1650Ti. According to the _MPS_ and _MPS/C_, our numbers exceed those of other studies, demonstrating that our kernels have an overwhelming advantage. To confirm the correctness, we list the edge detection results of four images in Fig. 7. For each image, we perform edge detection using four different GPU kernels: the two-directional OpenCV-GPU kernel (Fig. 7(b)); our two-directional RG kernel (Fig. 7(c)); our four-directional GM kernel (Fig. 7(d)); and our four-directional RG-v2 kernel (Fig. 7(e)). All kernels are 5x5. Here, because kernels (b) and (d) are implemented by the most primitive method, we take them as references and calculate the Structural Similarity Index Measure (SSIM) values of (c) and (e) relative to them, respectively. Figure 7: Confirmation of edge detection results using 5x5 Sobel operators. (a) Original image. (b) Two-directional: OpenCV-GPU kernel. (c) Two-directional: Our RG kernel. (d) Four-directional: Our GM kernel. (e) Four-directional: Our RG-v2 kernel. SSIM is an indicator that measures the similarity of two images and is calculated from three image features: luminance, contrast, and structure. It can be calculated as follows: \[SSIM(x,y)=\frac{(2\mu_{x}\mu_{y}+C_{1})(2\sigma_{xy}+C_{2})}{(\mu_{x}^{2}+\mu_{y}^{2}+C_{1})(\sigma_{x}^{2}+\sigma_{y}^{2}+C_{2})}. \tag{20}\] Here, \(\mu\) and \(\sigma^{2}\) represent the mean value and variance of the image, respectively, and \(\sigma_{xy}\) represents the covariance between images \(x\) and \(y\). Finally, \(C_{1}\) and \(C_{2}\) are two constants used to avoid instability when \(\mu_{x}^{2}+\mu_{y}^{2}\) or \(\sigma_{x}^{2}+\sigma_{y}^{2}\) are close to zero. The closer the SSIM value is to 1, the higher the similarity between the two images. According to the high SSIM values of 0.99 shown in Fig. 7, the proposed acceleration method guarantees the correctness of the Sobel operators in both the two- and four-directional cases. ## 6 Conclusion This paper proposed a fast GPU kernel for a four-directional \(5\times 5\) Sobel operator that relies entirely on register resources. We improved the Sobel operator from two perspectives: computer architecture and mathematics. On the architecture side, we focused on fully using registers with the help of warp-level primitives, without relying on global or shared memory. Simultaneously, we introduced the prefetching mechanism to hide the system latency caused by data transmission.
Concerning mathematics, we proposed a two-step optimization method for the Sobel operator with complex patterns in diagonal directions, enabling us to fully reuse intermediate results and significantly improve the execution efficiency of the kernel. Extensive experiments prove that our kernel has high robustness, with significant improvements in detection speed for images of different sizes. Furthermore, our kernel achieves 6.7x and 13x improvements in processing speed compared with the OpenCV-GPU library on two different GPUs. To the best of our knowledge, the proposed kernel is currently the fastest kernel based on GPUs. To further facilitate our kernel's application, we plan to combine it with high-level applications such as object detection. In addition, for the bottleneck problem of IO, we predict that stream processing can efficiently reduce the latency caused by data transmission. In addition, the burden of on-chip computation not being too heavy must be ensured. These concerns will be addressed in future work. ## CRediT Author Statement **Qiong Chang**: algorithm design, basic code writing, experiment design, draft manuscript writing. **Xiang Li**: code optimization, experiment, manuscript co-writing. **Yun Li**: supervision, experiment environment preparation, writing-reviewing. **Jun Miyazaki**: supervision, writing-reviewing and editing. All authors contributed to discussions. Qiong Chang and Xiang Li contributed equally to this work. ## Conflict of interest The authors declare that they have no conflict of interest.
2309.09182
Optimal Scene Graph Planning with Large Language Model Guidance
Recent advances in metric, semantic, and topological mapping have equipped autonomous robots with semantic concept grounding capabilities to interpret natural language tasks. This work aims to leverage these new capabilities with an efficient task planning algorithm for hierarchical metric-semantic models. We consider a scene graph representation of the environment and utilize a large language model (LLM) to convert a natural language task into a linear temporal logic (LTL) automaton. Our main contribution is to enable optimal hierarchical LTL planning with LLM guidance over scene graphs. To achieve efficiency, we construct a hierarchical planning domain that captures the attributes and connectivity of the scene graph and the task automaton, and provide semantic guidance via an LLM heuristic function. To guarantee optimality, we design an LTL heuristic function that is provably consistent and supplements the potentially inadmissible LLM guidance in multi-heuristic planning. We demonstrate efficient planning of complex natural language tasks in scene graphs of virtualized real environments.
Zhirui Dai, Arash Asgharivaskasi, Thai Duong, Shusen Lin, Maria-Elizabeth Tzes, George Pappas, Nikolay Atanasov
2023-09-17T07:09:46Z
http://arxiv.org/abs/2309.09182v2
# Optimal Scene Graph Planning with Large Language Model Guidance ###### Abstract Recent advances in metric, semantic, and topological mapping have equipped autonomous robots with semantic concept grounding capabilities to interpret natural language tasks. This work aims to leverage these new capabilities with an efficient task planning algorithm for hierarchical metric-semantic models. We consider a scene graph representation of the environment and utilize a large language model (LLM) to convert a natural language task into a linear temporal logic (LTL) automaton. Our main contribution is to enable optimal hierarchical LTL planning with LLM guidance over scene graphs. To achieve efficiency, we construct a hierarchical planning domain that captures the attributes and connectivity of the scene graph and the task automaton, and provide semantic guidance via an LLM heuristic function. To guarantee optimality, we design an LTL heuristic function that is provably consistent and supplements the potentially inadmissible LLM guidance in multi-heuristic planning. We demonstrate efficient planning of complex natural language tasks in scene graphs of virtualized real environments. ## I Introduction Advances in robot perception and computer vision have enabled metric-semantic mapping [1, 2, 3, 4, 5, 6, 7, 8, 9], offering rich information in support of robot autonomy. Beyond single-level maps, hierarchical models encode topological relations among local maps and semantic elements [10, 11]. A scene graph [11] is a prominent example that models buildings, floors, rooms, objects, and occupancy in a unified hierarchical representation. Scene graph construction can be done from streaming sensor data [5, 12, 13]. The metric, semantic, and topological elements of such models offer the building blocks for robots to execute semantic tasks [14]. The objective of this work is to approach this challenge by generalizing goal-directed motion planning in flat geometric maps to natural language task planning in scene graphs. Connecting the concepts in a natural language task to the real-world objects they refer to is a challenging problem, known as symbol grounding [15]. Large language models (LLMs), such as GPT-3 [16], BERT [17], and LLaMA [18], offer a possible resolution with their ability to relate entities in an environment model to concepts in natural language. Chen et al. [19, 20] use LLMs for scene graph labeling, showing their capability of high-level understanding of indoor scenes. Shah et al. [21] use GPT-3 to parse text instructions to landmarks and the contrastive language image pre-training (CLIP) model [22] to infer a joint landmark-image distribution for visual navigation. Seminal papers [23, 24] in the early 2000s established formal logics and automata as powerful representations of robot tasks. In work closely related to ours, Chen et al. [25] show that natural language tasks in 2D maps encoded as sets of landmarks can be converted to signal temporal logic (STL) [26] via LLM re-prompting and automatic syntax correction, enabling the use of existing temporal logic planners [27, 28, 29, 30, 31]. Beyond temporal logics, other expressive robot task representations include the planning domain definition language [32], Petri nets [33, 34], and process algebra [35]. However, few of the existing works have considered task specification in hierarchical models. We use an LLM to translate a natural language task defined by the attributes of a scene graph to a linear temporal logic (LTL) formula. 
We describe the scene graph to the LLM via an attribute hierarchy and perform co-safe syntax checking to ensure generation of correct and finite-path verifiable LTL. Further, we develop an approach to obtain task execution guidance from the LLM, which is used to accelerate the downstream task planning algorithm. Given a symbolically grounded task representation, the next challenge is to plan its execution in a hierarchical model efficiently. A key component for achieving efficiency in traditional goal-oriented planning in single-level environments is the use of guidance from a heuristic function. Heuristics play a critical role in accelerating both search-based [38, 39] and sampling-based [40, 41] planners. More complex tasks than goal reaching involve sequencing, branching, and recurrence, making heuristic guidance even more important for efficiency. Our work is inspired by Fu et al. [29] who develop an admissible heuristic for LTL planning in probabilistic landmark maps. We extend the approach by (a) generalizing to hierarchical scene graphs via multi-resolution planning, (b) designing a consistent LTL heuristic allowing acceleration over admissible-only heuristic planning, and (c) introducing an LLM heuristic allowing acceleration from LLM semantic guidance while retaining _optimality guarantees_ via multi-heuristic planning. Our approach is enabled by the anytime multi-resolution multi-heuristic A* (AMRA*) [37], which combines the advantages of multi-resolution A* [42] and multi-heuristic A* [39]. Our key contribution is to define the nodes, edges, and costs of a hierarchical planning domain from a scene graph and to introduce guidance from a consistent LTL heuristic and a semantically informed LLM heuristic. Related works consider object search and semantic goal navigation in unknown environments, represented as semantic occupancy maps [43], topological maps [44], or scene graphs [45, 46]. Shah et al. [43] develop an exploration approach that uses an LLM to score subgoal candidates and provide an exploration heuristic. Kostavelis et al. [44] perform place recognition using spatial and temporal proximity to obtain a topological map and encode the connectivity of its attributes in a navigation graph to enable Dijkstra planning to semantically specified goals. Amiri et al. [45] employ a scene graph generation network to construct a scene graph from images and a partially observable Markov decision process to obtain an object-search policy. Ravichandran et al. [46] embed partial scene graph observations in feature space using a graph neural network (GNN) and train value and policy GNNs via proximal policy optimization. In contrast with these works, we consider significantly more general missions but perform planning in a known scene graph. In summary, this paper makes the following _contributions_. * We use an LLM to translate natural language to LTL tasks over the attributes of a scene graph. * We construct a hierarchical planning domain capturing the structure of the scene graph and LTL task. * We design new LTL and LLM heuristic functions for planning, and prove that the LTL heuristic is consistent. * We employ hierarchical multi-heuristic planning to guarantee efficiency (due to LLM semantic guidance) and optimality (due to LTL consistent guidance), despite potential inadmissibility of the LLM heuristic. ## II Problem Statement Consider an agent planning a navigation mission specified in terms of semantic concepts, such as objects and regions, in a known environment. 
We assume that the environment is represented as a scene graph [11]. **Definition 1**.: A _scene graph_ is a graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\{\mathcal{A}_{k}\}_{k=1}^{K})\) with node set \(\mathcal{V}\), edge set \(\mathcal{E}\subseteq\mathcal{V}\times\mathcal{V}\), and \(K\) attribute sets \(\mathcal{A}_{k}\) for \(k=1,\ldots,K\). Each attribute \(a\in\mathcal{A}_{k}\) is associated with a subset \(\mathcal{V}_{a}\) of the nodes \(\mathcal{V}\). A scene graph provides a hierarchical semantic description of an environment, such as a building, in terms of floors, rooms, objects, and occupancy (see Fig. 1). For example, the graph nodes \(\mathcal{V}\) may represent free space, while the objects may be encoded as an attribute set \(\mathcal{A}_{1}\) such that an object \(o\in\mathcal{A}_{1}\) is associated with a free region \(\mathcal{V}_{o}\subset\mathcal{V}\) around it. Similarly, rooms may be represented as a set \(\mathcal{A}_{2}\) such that a room \(r\in\mathcal{A}_{2}\) is associated with a free-space region \(\mathcal{V}_{r}\subset\mathcal{V}\). Given the initial agent location \(s\in\mathcal{V}\), and a cost function \(c:\mathcal{E}\mapsto\mathbb{R}_{>0}\), our objective is to plan a sequence of scene graph nodes (path) that achieves a mission \(\mu\) with minimal cost. We assume \(\mu\) is provided in natural language using terms from the attribute sets \(\mathcal{A}_{k}\) of the scene graph. An example scene graph mission is provided in Fig. 1. To interpret a natural language mission, we define atomic propositions whose combinations can capture the mission requirements. An _atomic proposition_\(p_{a}:\mathcal{V}\mapsto\{\mathsf{false},\mathsf{true}\}\) associated with attribute \(a\in\mathcal{A}_{k}\) of the scene graph \(\mathcal{G}\) evaluates true at node \(s\in\mathcal{V}\) if \(s\) belongs to the nodes \(\mathcal{V}_{a}\) that satisfy attribute \(a\). We denote this by \(p_{a}(s):s\in\mathcal{V}_{a}\). The set of all atomic propositions at \(s\in\mathcal{V}\) is denoted by: \[\mathcal{AP}(s)\coloneqq\{p_{a}(s)\mid a\in\mathcal{A}_{k},k=1,\ldots,K\}\,. \tag{1}\] Intuitively, \(p_{a}(s)\) being true means that the agent is at node \(s\) that satisfies attribute \(a\), e.g., reaching an object in \(\mathcal{A}_{1}\) or being in a room in \(\mathcal{A}_{2}\). Avoiding an object or leaving a room can be specified via the negation of such propositions. To determine which atomic propositions are satisfied at a particular node, we define a labeling function. **Definition 2**.: Consider a scene graph \(\mathcal{G}\) with atomic propositions \(\mathcal{AP}=\cup_{s\in\mathcal{V}}\mathcal{AP}(s)\). A _labeling function_\(\ell:\mathcal{V}\mapsto 2^{\mathcal{AP}}\) maps a node \(s\in\mathcal{V}\) to a set \(\ell(s)\subseteq\mathcal{AP}\) of atomic propositions that evaluate true at \(s\). The labels along a path \(s_{1:T}\) are obtained as \(\ell(s_{1:T})=\ell(s_{1})\ell(s_{2})\ldots\ell(s_{T})\) and are called a _word_. A word contains information about the objects, rooms, and floors that an agent visits along its path in a scene graph. We can decide whether the agent path satisfies a mission \(\mu\) by interpreting its word. We denote a word \(\ell(s_{1:T})\) that satisfies a mission \(\mu\) by \(\ell(s_{1:T})\models\mu\), and define the precise semantics of this notation Fig. 1: Planning a natural language mission, \(\mu:\) “Reach the oven in the kitchen”, in a scene graph \(\mathcal{G}\) of the Gibson environment Benevolence [36] with object, room, and floor attributes. 
The terms “oven” and “kitchen” in \(\mu\) belong to the object and room attributes of the scene graph, respectively. The scene graph \(\mathcal{G}\) is described to the LLM using the connectivity of its attributes (attribute hierarchy) and the LLM is used to translate \(\mu\) to LTL formula \(\phi_{\mu}\) and associated Automaton \(\mathcal{M}_{\phi}\). We construct a hierarchical planning domain from the scene graph, and use multi-resolution multi-heuristic planning [37] to plan the mission execution. In addition to mission translation, the LLM is used to provide heuristic guidance to accelerate the planning, while an LTL heuristic is used to guarantees optimality. in Sec. III. With this, we are ready to state the problem of natural language mission planning in scene graphs. **Problem**.: Given a scene graph \(\mathcal{G}=(\mathcal{V},\mathcal{E},\{\mathcal{A}_{k}\}_{k=1}^{K})\), natural language mission \(\mu\) over the attributes of \(\mathcal{G}\), cost function \(c:\mathcal{E}\mapsto\mathbb{R}_{>0}\), and initial node \(s_{1}\in\mathcal{V}\), plan a path \(s_{1:T}\) that satisfies \(\mu\) with minimal cost: \[\begin{split}\min_{T\in\mathbb{N},s_{1:T}\in\mathcal{V}^{T}}& \sum_{t=1}^{T-1}c(s_{t},s_{t+1})\\ \text{s.t.}&(s_{t},s_{t+1})\in\mathcal{E},\quad t=1, \ldots,T-1,\\ &\ell(s_{1:T})\models\mu.\end{split} \tag{2}\] ## III Natural Language to Temporal Logic We use an LLM to translate natural language missions to logic formulas over the scene graph propositions \(\mathcal{AP}\). The key challenge is to describe the structure of a scene graph \(\mathcal{G}\) to an LLM and ask it to translate a mission \(\mu\) to a logic formula \(\phi_{\mu}\). We focus on linear temporal logic (LTL) [47] with syntax in Table. I due to its popularity and sufficient expressiveness to capture temporal ordering of propositions. We require a syntactically co-safe formula [48] to allow checking its satisfaction over finite agent paths. A co-safe LTL formula can be satisfied by a word \(\ell(s_{1:T})\) that consists of a finite prefix followed by a (potentially infinite) continuation that does not affect the formula's truth value. LTL formulas that contain only \(\mathbf{X}\) and \(\mathbf{U}\) temporal operators when written in negated normal form (\(\neg\) appears only in front of atomic propositions) are syntactically co-safe [48]. To use an LLM for scene understanding, it is necessary to design a prompt that describes the scene graph \(\mathcal{G}\) succinctly. Otherwise, the input might exceed the model token limit or confuse the model about the relationship between the sentences. For this aim, we simplify the scene graph \(\mathcal{G}\) into an _attribute hierarchy_\(\mathcal{G}\) that compactly represents scene entities in a YAML format. In our setup, the top level of \(\mathcal{G}\) contains floors \(f\in\mathcal{A}_{3}\). The rooms \(r_{f}\) on floor \(f\) are defined as \(\{r\in\mathcal{A}_{2}|\mathcal{V}_{r}\subseteq\mathcal{V}_{f}\}\), and nested as children of floor \(f\). Additionally, each room \(r\) stores connections to other rooms on the same floor. Similarly, the objects in room \(r\), \(\{o\in\mathcal{A}_{1}|\mathcal{V}_{o}\subseteq\mathcal{V}_{r}\}\), are stored as children of room \(r\). Each entity in \(\mathcal{G}\) is tagged with a unique ID to differentiate rooms and objects with the same name. See Fig. 1(a) for an example attribute hierarchy. 
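For concreteness, a minimal Python sketch of such an attribute hierarchy serialized to YAML is shown below. The floor/room/object names, IDs, and field names are illustrative assumptions (borrowing the oven/kitchen example used elsewhere in the paper); the authors' exact YAML schema is not reproduced here.

```python
import yaml  # pip install pyyaml

# Hypothetical three-level attribute hierarchy: floors -> rooms -> objects.
# Names, IDs, and field names are illustrative, not the paper's exact schema.
attribute_hierarchy = {
    "floor_0": {
        "id": 0,
        "rooms": {
            "kitchen_3": {
                "id": 3,
                "connected_rooms": [2],              # e.g., reachable from bedroom_2
                "objects": {"oven_11": {"id": 11}},
            },
            "bedroom_2": {
                "id": 2,
                "connected_rooms": [3],
                "objects": {"tv_9": {"id": 9}},
            },
        },
    }
}

# The YAML dump is the kind of compact scene description placed in the LLM prompt.
print(yaml.safe_dump(attribute_hierarchy, sort_keys=False))
```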
In our examples, we define attributes for floors, rooms, and objects and the attribute hierarchy contains \(3\) levels but this can be extended to more generalized attribute sets (e.g., level for object affordances). The attribute hierarchy removes the dense node and edge descriptions in \(\mathcal{G}\) which are redundant for mission translation, leading to a significant reduction in prompt size. Given the natural language mission \(\mu\) and the attribute hierarchy \(\mathcal{\tilde{G}}\), the first call to the LLM is only responsible to extract unique IDs from the context of \(\mu\), outputting \(\mu_{\text{unique}}\). This step facilitates LTL translations by separating high-level scene understanding from accurate LTL generation. Specifically, this step links contextually rich specifications, which can be potentially difficult to be parsed, to unequivocal mission descriptions that are void of confusion for the LLM when it comes to LTL translation (Fig. 1(b)). The list of entities involved in the mission are extracted from \(\mu_{\text{unique}}\) using regular expression, and stored as \(\mu_{\text{regex}}\). This allows to inform the LLM about the essential parts of the mission \(\mu_{\text{unique}}\), without providing \(\mathcal{\tilde{G}}\). The savings in prompt size are used to augment the prompt with natural language to LTL translation examples expressed in _pre-fix_ notation. For instance, \(\phi\wedge\varphi\) is expressed as \(\wedge\phi\varphi\) in pre-fix format. This circumvents the issue of balancing parenthesis over formula \(\phi_{\mu}\). Ultimately, the LTL translation prompt includes \(\mu_{\text{unique}}\), \(\mu_{\text{regex}}\), and the translation examples (Fig. 1(c)). The translated LTL formula \(\phi_{\mu}\) is verified for syntactic correctness and co-safety using an LTL syntax checker. Further calls of the LLM are made to correct \(\phi_{\mu}\) if it does not pass the checks, up to a maximum number of allowed verification steps, after which human feedback is used to rephrase the natural language specification \(\mu\) (Fig. 1(d)). Once the mission \(\mu\) is successfully translated into a co-safe LTL formula \(\phi_{\mu}\), we can evaluate whether an agent path \(s_{1:T}\) and its corresponding word \(\ell(s_{1:T})\) satisfy \(\phi_{\mu}\) by constructing an automaton representation of \(\phi_{\mu}\) (Fig. 1(e)). **Definition 3**.: A deterministic automaton over atomic propositions \(\mathcal{AP}\) is a tuple \(\mathcal{M}=(\mathcal{Q},2^{\mathcal{AP}},T,\mathcal{F},q_{1})\), where \(\mathcal{Q}\) is a set of states, \(2^{\mathcal{AP}}\) is the power set of \(\mathcal{AP}\) called alphabet, \(T:\mathcal{Q}\times 2^{\mathcal{AP}}\mapsto\mathcal{Q}\) is a transition function that specifies the next state \(T(q,l)\) from state \(q\in\mathcal{Q}\) under label \(l\in 2^{\mathcal{AP}}\), \(\mathcal{F}\subseteq\mathcal{Q}\) is a set of final states, and \(q_{1}\in\mathcal{Q}\) is an initial state. A co-safe LTL formula \(\phi\) can be translated into a deterministic automaton \(\mathcal{M}_{\phi_{\mu}}\) using model checking tools such as PRISM [49, 50] or Spot [51]. The automaton checks whether a word satisfies \(\phi_{\mu}\). A word \(\ell_{1:T}\) is accepted by \(\mathcal{M}_{\phi_{\mu}}\), i.e., \(\ell_{1:T}\models\phi_{\mu}\), if and only if the state \(q_{T+1}\) obtained after transitions \(q_{t+1}=T(q_{t},\ell_{t})\) for \(t=1,\ldots,T\) is contained in the final states \(\mathcal{F}\). 
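For intuition, the acceptance check described above can be sketched in a few lines of Python on a toy deterministic automaton. The automaton below (for a simple reach-a-goal formula) is a hand-coded stand-in for the \(\mathcal{M}_{\phi_{\mu}}\) produced by Spot, not the tool's actual data structure.

```python
from typing import FrozenSet, Sequence

# Toy deterministic automaton for the co-safe formula F(p_goal):
# state 0 = "goal not yet seen", state 1 = accepting sink.
INITIAL, ACCEPTING = 0, {1}

def step(state: int, label: FrozenSet[str]) -> int:
    """Transition function T(q, l) of the toy automaton."""
    if state == 1:
        return 1
    return 1 if "p_goal" in label else 0

def accepts(word: Sequence[FrozenSet[str]]) -> bool:
    """A finite word is accepted iff some prefix drives the automaton into F."""
    state = INITIAL
    if state in ACCEPTING:
        return True
    for label in word:
        state = step(state, label)
        if state in ACCEPTING:   # co-safe: a good prefix suffices
            return True
    return False

word = [frozenset(), frozenset({"p_room"}), frozenset({"p_room", "p_goal"})]
print(accepts(word))       # True
print(accepts(word[:2]))   # False
```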
Hence, a path \(s_{1:T}\) satisfies a co-safe LTL formula \(\phi_{\mu}\) if and only if its word \(\ell(s_{1:T})\) contains a prefix \(\ell(s_{1:t})\) that is accepted by \(\mathcal{M}_{\phi_{\mu}}\). ## IV Optimal Scene Graph Planning We use the structure of the scene graph \(\mathcal{G}\) with guidance from the automaton \(\mathcal{M}_{\phi_{\mu}}\) and the LLM's mission semantics understanding to achieve efficient and optimal planning. ### _AMRA* Planning_ We perform mission planning using AMRA* [37]. The key challenge is to define a hierarchical planning domain and heuristic functions that describe the scene graph and mission. AMRA* requires several node sets \(\mathcal{X}_{r}\) and action sets \(\mathcal{U}_{r}\) for different planning resolution levels, \(r=1,2,\ldots\). Each level \((\mathcal{X}_{r},\mathcal{U}_{r})\) has an associated cost function \(c_{r}:\mathcal{X}_{r}\times\mathcal{X}_{r}\mapsto\mathbb{R}_{>0}\). The algorithm defines an anchor level \((\mathcal{X}_{0},\mathcal{U}_{0})\) \begin{table} \begin{tabular}{c c c c c} \hline \hline \multirow{2}{*}{\(p_{\mu}\)} & \multicolumn{4}{c}{Syntax (Semantics)} \\ \(\neg\phi\) & (Aromatic Proposition) & \(\phi\vee\varphi\) & (Or) & \(\phi\cup\varphi\) & (Until) \\ & (Negation) & \(\phi\vee\varphi\) & (Imply) & \(\mathbf{F}\phi\) & (Eventually) \\ \(\phi\wedge\varphi\) & (And) & \(\mathbf{X}\phi\) & (Next) & \(\mathbf{G}\phi\) & (Always) \\ \hline \hline \end{tabular} \end{table} TABLE I: Grammar for LTL formulas \(\phi\) and \(\varphi\). as \(\mathcal{X}_{0}:=\cup_{r>0}\mathcal{X}_{r}\), \(\mathcal{U}_{0}:=\cup_{r>0}\mathcal{U}_{r}\), and requires an initial state \(x\in\mathcal{X}_{0}\) and a goal region \(\mathcal{R}\subseteq\mathcal{X}_{0}\). AMRA* allows multiple heuristics for each level but requires the anchor-level heuristics to be consistent to guarantee optimality. A heuristic \(h\) is consistent with respect to cost \(c\) if \(h(x)\leq c(x,x^{\prime})+h(x^{\prime})\). We construct the levels \(\{\mathcal{X}_{r},\mathcal{U}_{r},c_{r}\}\), initial state \(x\), and goal region \(\mathcal{R}\) required for running AMRA* from the scene graph \(\mathcal{G}\) and the automaton \(\mathcal{M}_{\phi_{\mu}}\) in Sec. IV-B. We define two heuristics, \(h_{\mathrm{LTL}}\), which is used in the anchor level and other levels, and \(h_{\mathrm{LLM}}\) for other levels, in Sec. IV-C and Sec. IV-D, respectively, and prove that \(h_{\mathrm{LTL}}\) is consistent. ### _Hierarchical Planning Domain Description_ Given scene graph \(\mathcal{G}\) with initial node \(s_{1}\in\mathcal{V}\) and automaton \(\mathcal{M}_{\phi_{\mu}}\), we construct a hierarchical planning domain with \(K\) levels corresponding to each scene graph attribute \(\mathcal{A}_{k}\). Given an attribute set \(\mathcal{A}_{k}\), we define \(\mathcal{V}_{k}:=\cup_{a\in\mathcal{A}_{k}}\mathcal{V}_{a}\), where \(\mathcal{V}_{a}\) is the node set associated with attribute \(a\in\mathcal{A}_{k}\). Then, the node set corresponding to level \(k\) of the planning domain is defined as \(\mathcal{X}_{k}:=\mathcal{V}_{k}\times\mathcal{Q}\). We define the actions \(\mathcal{U}_{k}\) as transitions from \(x_{i}\) to \(x_{j}\) in \(\mathcal{X}_{k}\) with associated cost \(c_{k}(x_{i},x_{j})\). A transition from \(x_{i}=(s_{i},q_{i})\) and \(x_{j}=(s_{j},q_{j})\) exists if the following conditions are satisfied: 1. 
the transition is from \(\mathcal{V}_{a}\) of attribute \(a\) to the boundary \(\partial\mathcal{V}_{b}\) of attribute \(b\) such that \(s_{i}\in\mathcal{V}_{a}\), \(s_{j}\in\partial\mathcal{V}_{b}\) for \(a\neq b\) with \(a,b\in\mathcal{A}_{k}\) and \(\text{int}\mathcal{V}_{a}\cap\text{int}\mathcal{V}_{b}=\emptyset\), 2. the automaton transitions are respected: \(q_{j}=T(q_{i},\ell(s_{i}))\), where \(\ell(s_{i})\in 2^{\mathcal{AP}}\) is the label at \(s_{i}\), 3. the transition is along the shortest path, \(s_{j}\in\arg\min_{s}d(s_{i},s)\), where \(d:\mathcal{V}\times\mathcal{V}\rightarrow\mathbb{R}\) is the shortest feasible path between two scene graph nodes. The transition cost is defined as \(c_{k}(x_{i},x_{j})=d(s_{i},s_{j})\). Since \(\mathcal{X}_{k}\subseteq\mathcal{V}\times\mathcal{Q}\), we can define the AMRA* anchor level as \(\mathcal{X}_{0}=\mathcal{V}\times\mathcal{Q}\) with actions \(\mathcal{U}_{0}\) derived from the scene graph edges \(\mathcal{E}\), automaton transitions, and \(\cup_{k>0}\mathcal{U}_{k}\). The initial state and goal region are defined as \(x=(s_{1},q_{1})\) and \(\mathcal{R}=\mathcal{V}\times\mathcal{F}\). The hierarchical planning domain is illustrated in Fig. 3. Four levels, occupancy (\(\mathcal{V}\)), objects (\(\mathcal{A}_{1}\)), rooms (\(\mathcal{A}_{2}\)) and floors (\(\mathcal{A}_{3}\)), are used in our experiments. For example, in the object level, the agent can take an action to move directly from the couch to the TV with transition cost computed as the shortest path in the occupancy level. ### _LTL Heuristic_ To ensure optimal AMRA* planning, a consistent heuristic is required at the anchor level. Inspired by the admissible but inconsistent heuristic in [29], we design a consistent LTL heuristic function \(h_{\mathrm{LTL}}\) that approximates the scene graph distance to \(\mathcal{R}=\mathcal{V}\times\mathcal{F}\) using information from the automaton \(\mathcal{M}_{\phi_{\mu}}\) and the scene graph attributes. We require that the scene graph cost \(c\) satisfies the triangle inequality, i.e., \(c(s,s^{\prime})\leq c(s,s^{\prime\prime})+c(s^{\prime\prime},s^{\prime})\) for any \(s,s^{\prime},s^{\prime\prime}\in\mathcal{V}\). Then, we define the cost between two labels \(l_{1},l_{2}\in 2^{\mathcal{AP}}\) as \[c_{\ell}(l_{1},l_{2})=\min_{s_{1},s_{2}:\ell(s_{1})=l_{1},\ell(s_{2})=l_{2}}c(s_ {1},s_{2}), \tag{3}\] which is a lower bound on the transition cost from \(l_{1}\) to \(l_{2}\) in the metric space of cost function \(c\). Next, we define a lower bound on the transition cost from automaton state \(q\in\mathcal{Q}\) with label \(l\in 2^{\mathcal{AP}}\) to an accepting state \(q_{f}\in\mathcal{F}\) as: \[g(l,q)=\min_{l^{\prime}\in 2^{\mathcal{AP}}}c_{\ell}(l,l^{\prime})+g(l^{\prime},T(q, l^{\prime})). \tag{4}\] The function \(g:2^{\mathcal{AP}}\times\mathcal{Q}\mapsto\mathbb{R}_{\geq 0}\) can be pre-computed via Dijkstra's algorithm on the automaton \(\mathcal{M}_{\phi_{\mu}}\). We also define a next labeling function \(\ell_{n}:2^{\mathcal{AP}}\times\mathcal{Q}\mapsto 2^{\mathcal{AP}}\) that tracks the least-cost label sequence returned by Dijkstra's algorithm with \(g(l,q)=0\), \(\ell_{n}(l,q)=\text{true}\), \(\forall q\in\mathcal{F}\). 
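For concreteness, the pre-computation of \(g(l,q)\) in Eq. (4) can be sketched as a multi-source Dijkstra search seeded at the accepting automaton states and run over the reverse graph of (label, state) pairs. The dictionary-based encoding of the label costs and transition function below is an illustrative assumption, not the authors' implementation.

```python
import heapq

def precompute_g(labels, states, c_ell, T, accepting):
    """Pre-compute g(l, q): a lower bound on the cost of driving the automaton
    from state q (read with current label l) into an accepting state.

    c_ell[(l, l2)] -- lower-bound label-to-label cost from Eq. (3)
    T[(q, l2)]     -- deterministic automaton transition
    """
    # Search nodes are (label, state) pairs; seed every node whose state is accepting.
    dist, heap = {}, []
    for l in labels:
        for q in accepting:
            dist[(l, q)] = 0.0
            heapq.heappush(heap, (0.0, (l, q)))

    # Forward edge: (l, q) -> (l2, T(q, l2)) with weight c_ell[(l, l2)];
    # store the reverse adjacency so we can relax predecessors from the goals.
    reverse = {}
    for l in labels:
        for q in states:
            for l2 in labels:
                succ = (l2, T[(q, l2)])
                reverse.setdefault(succ, []).append(((l, q), c_ell[(l, l2)]))

    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue
        for pred, w in reverse.get(node, []):
            if d + w < dist.get(pred, float("inf")):
                dist[pred] = d + w
                heapq.heappush(heap, (d + w, pred))
    return dist  # dist[(l, q)] == g(l, q); unreachable pairs stay missing/infinite

# Tiny toy instance: acceptance once a "goal"-labeled node is reached.
labels, states = ["free", "goal"], [0, 1]
T = {(0, "free"): 0, (0, "goal"): 1, (1, "free"): 1, (1, "goal"): 1}
c_ell = {(a, b): (0.0 if a == b else 1.0) for a in labels for b in labels}
print(precompute_g(labels, states, c_ell, T, accepting={1})[("free", 0)])  # 1.0
```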
**Proposition 1**.: _The heuristic function \(h_{\mathrm{LTL}}:\mathcal{V}\times\mathcal{Q}\rightarrow\mathbb{R}\) defined below is consistent:_ \[h_{\mathrm{LTL}}(s,q) =\min_{t\in\mathcal{V}}\left[c(s,t)+g(l(t),T(q,l(t)))\right], \tag{5}\] \[h_{\mathrm{LTL}}(s,q) =0,\quad\forall q\in\mathcal{F}.\] Proof.: Consider a state \(x=(s_{x},q_{x})\) with label \(l_{x}=l(s_{x})\). For any \(y=(s_{y},q_{y})\) that \(s_{y}\) is reachable from \(s_{x}\) in one step, Fig. 3: Four-level hierarchical planning domain for _Benevolence_. Fig. 2: Natural language to LTL translation. (a) Attribute hierarchy \(\mathcal{G}\). The unique IDs and the room connections are shown in parenthesis and inside red brackets, respectively. (b) Unique ID extraction from natural language mission \(\mu\). (c) LTL formula generation from natural language specification. (d) Syntax and co-safety check over the generated LTL formula \(\phi_{\mu}\). (e) Automaton construction. we have two cases to handle. When \(T(q_{x},l(s_{y}))=q_{y}=q_{x}\): \[h_{\text{LTL}} (s_{x},q_{x})\leq\min_{t\in\mathcal{V}}\left[c(s_{x},s_{y})+c(s_{y},t)+g(l(t),T(q_{x},l(t)))\right]\] \[=c(s_{x},s_{y})+\min_{t\in\mathcal{V}}\left[c(s_{y},t)+g(l(t),T(q_ {y},l(t)))\right]\] \[=c(s_{x},s_{y})+h_{\text{LTL}}(s_{y},q_{y}).\] When \(T(q_{x},l_{y})=q_{y}\neq q_{x}\), with \(l_{y}=l(s_{y})\), we have: \[c(s_{x},s_{y})+h_{\text{LTL}}(s_{y},q_{y})\] \[=c(s_{x},s_{y})+\min_{t\in\mathcal{V}}\left[c(s_{y},t)+g(l(t),T(q_ {y},l(t)))\right]\] \[\geq\min_{t\in\mathcal{V},(l(t)=l_{y})}c(s_{x},t)+\min_{t\in \mathcal{V}}\left[c(s_{y},t)+g(l(t),T(q_{y},l(t)))\right]\] \[\geq\min_{t\in\mathcal{V},(l(t)=l_{y})}\left[c(s_{x},t)+\min_{l^ {\prime}\in\mathcal{Z}^{\text{op}}}\left[c_{L}(l_{y},l^{\prime})+g(l^{\prime },T(q_{y},l^{\prime}))\right]\right.\] \[=\min_{t\in\mathcal{V},(t)=l_{y}}\left[c(s_{x},t)\right]+g(l_{y},q_{y})\] \[\geq\min_{t\in\mathcal{V}}\left[c(s_{x},t)+g(l(t),T(q_{x},l(t))) \right]=h_{\text{LTL}}(s_{x},q_{x}).\qed\] ### _LLM Heuristic_ In this section, we seek to assist AMRA* by developing an LLM heuristic \(h_{\text{LLM}}:\mathcal{V}\times\mathcal{Q}\rightarrow\mathbb{R}\) that captures the hierarchical semantic information of the scene graph. The LLM heuristic uses all attributes at a node \(s\in\mathcal{V}\), the current automaton state \(q\in\mathcal{Q}\), and the attribute hierarchy \(\mathcal{\tilde{G}}\), and returns an attribute-based guide that helps the AMRA\({}^{*}\) to search in the right direction for an optimal path. We design the prompt to ask for LLM attribute-based guidance with \(4\) components as follows: * environment description from its attribute hierarchy \(\mathcal{\tilde{G}}\), * list of motions \(M=\{m_{i}(\cdot,\cdot)\}\), where \(m_{i}(a_{j},a_{k})\), \(a_{j}\in\mathcal{A}_{j}\), \(a_{k}\in\mathcal{A}_{k}\) describes movements on \(\mathcal{\tilde{G}}\) from attribute \(a_{i}\) to \(a_{j}\) that the LLM model uses to generate its guides, * an example of the mission \(\mu_{unique}\) and how to response, * description of the mission \(\mu_{unique}\), current attributes, remaining task given the automaton state \(q\in\mathcal{Q}\), and request for guidance on how to finish the task. The LLM model returns a sequence of function calls \(\{f_{i}(a_{j},a_{k})\}_{i=0}^{N}\), \(f_{i}\in M\), \(a_{j}\in\mathcal{A}_{j}\), \(a_{k}\in\mathcal{A}_{k}\) in XML format, easing response parsing [52]. 
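A small sketch of how such a response could be parsed and turned into a heuristic value follows. The XML tag and attribute names are illustrative assumptions (the text only states that the response is a sequence of function calls in XML), and the per-call cost is supplied by the planner as described next.

```python
import xml.etree.ElementTree as ET

# Hypothetical LLM response; the real tag/attribute names are not specified here.
response = """
<plan>
  <call fn="move" from="1" to="3"/>
  <call fn="reach" from="3" to="11"/>
</plan>
"""

def llm_heuristic(xml_response: str, call_cost) -> float:
    """Parse the XML list of suggested calls and sum their user-defined costs.

    call_cost(fn, src_id, dst_id) -> float is supplied by the planner, e.g. a
    distance between the two referenced attributes.
    """
    root = ET.fromstring(xml_response)
    total = 0.0
    for call in root.iter("call"):
        total += call_cost(call.get("fn"), int(call.get("from")), int(call.get("to")))
    return total

# Toy cost: pretend every suggested step costs 2.5 meters.
print(llm_heuristic(response, lambda fn, a, b: 2.5))   # 5.0
```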
Each function call returns a user-defined cost, e.g., Euclidean distance between attributes: \(f_{i}(a_{j},a_{k})=c(s_{j},s_{k})\), where \(s_{j},s_{k}\) are the center of \(\mathcal{V}_{a_{j}}\) and \(\mathcal{V}_{a_{k}}\), respectively. The total cost of the LLM functions is used as an LLM heuristic \(h_{LLM}(s,q)=\sum_{i=0}^{N}f_{i}(a_{j},a_{k})\). Due to the LLM query delay and its limited query rates, the sequence of function calls suggested by the LLM model is obtained offline stored and used to calculate the heuristic \(h_{LLM}\) online in AMRA* based on the user-defined cost. Fig. 4 illustrates a sample prompt and response. The prompt first describes how the attributes are connected based on the attribute hierarchy \(\mathcal{\tilde{G}}\). Each attribute is mentioned with a unique ID to avoid confusing the LLM, as shown in the first paragraph of the prompt in Fig. 4. The second part of the prompt provides a list of possible functions used to guide the agent, such as \(move(1,2)\) to move from room \(1\) to room \(2\), or \(reach(1,3)\) to reach an object \(3\) in room \(1\). The third component provides an example of a mission and how the LLM should response. The last component describes the current attributes and the remaining mission, generated based on the current automaton state \(q\), and requests LLM to generate a high-level plan using the provided functions. The automaton state \(q\) represents how far we have achieved the mission. Thus, to describe the remaining mission, we run Dijkstra's algorithm on the automaton \(\mathcal{T}\) to find the shortest path from \(q\) to an accepting state in \(\mathcal{F}\). We obtain a set of atomic prepositions evaluated _true_ along the path, and concatenate their descriptions to describe the remaining mission in the prompt. For example, the desired mission is to _"go to the bedroom 2, then visit the kitchen 3, reach the oven 11, while always avoid the TV 9"_. Let the atomic prepositions be defined as \(p_{2}\), \(p_{3}\), \(p_{11}\), and \(p_{9}\), where the indices correspond to the ID of the room or object. The task can be described using an LTL as follows: \(\phi=\mathbf{F}(p_{2}\wedge\mathbf{F}(p_{3}\wedge\mathbf{F}p_{11}))\wedge \neg p_{9}\), whose automaton graph \(T\) generated from Spot [51] is shown in Fig. 5 with the initial state \(q_{1}=4\) and the final states \(\mathcal{F}=\{0\}\). The agent is currently in room \(1\) and have been already visited room \(2\), i.e. \(q=2\) on \(T\). The shortest path from \(q\) to the accepting state \(0\) is marked by red arrows in Fig. 5. Along this path, \(p_{3}\) and \(p_{11}\) turn true, causing the remaining mission to be to_"visit the kitchen 3 and reach the oven 11"_ (Fig. 4). The atomic preposition \(p_{4}\) leads to a sink state \(5\) if it evaluates true, and never appears in the next mission, leading to an optimistic LLM heuristic. ## V Evaluation To evaluate our method, we use _Allensville_ (1-floor), _Benevolence_ (3-floor) and _Colliverville_ (3-floor) from the 3D Scene Graph dataset [11]. For each scene, we designed 5 missions (some are shown in Table II). For each mission, we used 5 initial positions across different floors and rooms. We use GPT-4 [53] to translate missions to LTL formulas, and Spot [51] for LTL formulas to automata as described in Fig. 4: ChatGPT prompt requesting a scene graph path. Fig. 
5: The automaton graph \(T\) for the mission _”go to the bedroom 2, then visit the kitchen 3, reach the oven 11, and always avoid the TV 9"_ with an initial node \(q_{1}=4\) and an accepting node \(0\). Sec. III. Following Sec. IV-D, we use GPT-4 [53] to generate the LLM heuristic function \(h_{\text{LLM}}\). Given a scene graph \(\mathcal{G}\), the mission described by the automaton \(\mathcal{M}_{\phi}\), the LTL heuristic \(h_{\text{LLM}}\), and the LLM heuristic \(h_{\text{LLM}}\), we construct the hierarchical planning domain and run AMRA*. Fig. 6 shows a path in _Benevolence_ for the mission in Table II. Starting from the empty room on floor 0, the agent goes to the bathroom entrance without approaching the sink, and then proceeds upstairs to floor 1, finally reaching a chair in the dining room without entering the living room. Since planning domains with different hierarchy levels and different heuristics per level can be constructed, we compare different setups to investigate the benefit of using our LLM heuristic. With all hierarchy levels having an LTL heuristic, we design 7 setups: occupancy level only without LLM heuristic (A*), all levels available but without LLM heuristics (NO-LLM), all levels with LLM heuristics (ALL), and one of the levels with LLM heuristic (OCC, OBJ, ROOM, FLR). Fig. 7 shows that the ALL setup manages to return a feasible path much faster than others thanks to the LLM guidance across all hierarchical levels, while it also approaches the optimal solution within similar time spans. As an anytime algorithm, AMRA* starts searching with large weight on the heuristics, then reuses the results with smaller heuristic weights. As the planning iterations increase, the path gets improved. When the heuristic weight decays to 1, we obtain an optimal path. To further investigate the benefits of using LLM heuristics, we compare the number of node expansions per planning iteration. As shown in Fig. 8, applying our LLM heuristic to any hierarchy level reduces the node expansions significantly, which indicates that our LLM heuristic produces insightful guidance to accelerate AMRA*. An exciting observation is that the more hierarchy levels we use our LLM heuristic in, the more efficient the algorithm is. This encourages future research to exploit scene semantic information further to accelerate planning. AMRA* allows a robot to start executing the first feasible path, while the path optimization proceeds in the background. Tables III and IV shows the time required to compute the first path, the path cost relative to the optimal path, and the time required to find an optimal path. The ALL configuration outperforms other setups in speed of finding the first path and the optimal path when the scene gets more complicated. ## VI Conclusion We demonstrated that an LLM can provide symbolic grounding, LTL translation, and semantic guidance from natural language missions in scene graphs. This information allowed us to construct a hierarchical planning domain and achieve efficient planning with LLM heuristic guidance. We managed to ensure optimality via multi-heuristic planning, including a consistent LTL heuristic. Our experiments show that the LLM guidance is useful at all levels of the hierarchy for accelerating feasible path generation. \begin{table} \begin{tabular}{c|l} \hline _Allensive_ & Clean all vases in the dining room. Eventually water \\ & the potted plants in the bathrooms one after another. 
\\ \hline \multirow{3}{*}{_Benevolence_} & Visit the bathroom on floor 0 and avoid the sink, then go to the dining room and sit on a chair. Always \\ & avoid the living room and the staircase next to it. \\ \hline _Colliverille_ & Clean all the corridors, except the one on floor 0. \\ \hline \end{tabular} \end{table} TABLE II: Example mission descriptions for each scene. Fig. 8: Number of node expansions v.s. the planning iteration. Each sub-figure presents a hierarchy level. \begin{table} \begin{tabular}{c|l|l|l} & 1st iter. time (sec.) & 1st iter. \(cost/cost_{opt}\) & opt. time (sec.) \\ \hline ALL & **0.0007** & 1.3763 & 8.9062 \\ OCC & 0.0244 & 1.0827 & **8.3144** \\ OBJ & 3.7460 & 1.0387 & 24.936 \\ ROOM & 3.6878 & **1.0366** & 13.1352 \\ FLR & 3.8106 & 1.0369 & 13.3287 \\ NO-LLM & 3.3260 & 1.0318 & 24.2516 \\ A* & 1.1202 & 1.0997 & 11.7594 \\ \hline \end{tabular} \end{table} TABLE III: First feasible path computation time, relative cost between first and optimal path, and optimal path computation time averaged over \(5\) initial conditions for mission 1 in _Benevolence_. Fig. 6: Optimal path for the _Benevolence_ mission shown in Table II. Fig. 7: Path cost vs planning time for different AMRA* variants.
2306.00045
Lottery Tickets in Evolutionary Optimization: On Sparse Backpropagation-Free Trainability
Is the lottery ticket phenomenon an idiosyncrasy of gradient-based training or does it generalize to evolutionary optimization? In this paper we establish the existence of highly sparse trainable initializations for evolution strategies (ES) and characterize qualitative differences compared to gradient descent (GD)-based sparse training. We introduce a novel signal-to-noise iterative pruning procedure, which incorporates loss curvature information into the network pruning step. This can enable the discovery of even sparser trainable network initializations when using black-box evolution as compared to GD-based optimization. Furthermore, we find that these initializations encode an inductive bias, which transfers across different ES, related tasks and even to GD-based training. Finally, we compare the local optima resulting from the different optimization paradigms and sparsity levels. In contrast to GD, ES explore diverse and flat local optima and do not preserve linear mode connectivity across sparsity levels and independent runs. The results highlight qualitative differences between evolution and gradient-based learning dynamics, which can be uncovered by the study of iterative pruning procedures.
Robert Tjarko Lange, Henning Sprekeler
2023-05-31T15:58:54Z
http://arxiv.org/abs/2306.00045v1
# Lottery Tickets in Evolutionary Optimization: ###### Abstract Is the lottery ticket phenomenon an idiosyncrasy of gradient-based training or does it generalize to evolutionary optimization? In this paper we establish the existence of highly sparse trainable initializations for evolution strategies (ES) and characterize qualitative differences compared to gradient descent (GD)-based sparse training. We introduce a novel signal-to-noise iterative pruning procedure, which incorporates loss curvature information into the network pruning step. This can enable the discovery of even sparser trainable network initializations when using black-box evolution as compared to GD-based optimization. Furthermore, we find that these initializations encode an inductive bias, which transfers across different ES, related tasks and even to GD-based training. Finally, we compare the local optima resulting from the different optimization paradigms and sparsity levels. In contrast to GD, ES explore diverse and flat local optima and do not preserve linear mode connectivity across sparsity levels and independent runs. The results highlight qualitative differences between evolution and gradient-based learning dynamics, which can be uncovered by the study of iterative pruning procedures. Machine Learning, ICML ## 1 Introduction Evolution strategies have recently shown to provide a competitive alternative to gradient-based training of neural networks (e.g. Such et al., 2017; Salimans et al., 2017). Instead of assuming explicit access to gradient evaluations, ES refine the sufficient statistics of a search distribution using information gathered from the black-box evaluation of sampled candidate solutions. In doing so, modern ES face several scaling challenges: The memory requirements for evaluating populations of large networks quickly become infeasible for common hardware settings. Furthermore, the estimation of the search covariance is often statistically inefficient. But is it really necessary to evolve full dense networks or can these challenges _in principle_ be circumvented by evolving sparse networks? The lottery ticket hypothesis (Frankle and Carbin, 2019, LTH) recently empirically established the existence of sparse network initializations that can be trained to similar performance levels as their dense counterparts. In this study, we set out to answer whether the existence of such winning initializations is fundamentally tied to gradient-based training or whether sparse trainability can also be achieved in the context of ES. Furthermore, the LTH has demonstrated yet another application of studying sparse trainability: The empirical analysis of learning dynamics and loss surfaces (Frankle et al., 2020, 2020). We, therefore, shed light on the differences between gradient descent-based and evolutionary optimization. Evci et al. (2020) previously showed that sparse lottery tickets suffer from reduced gradient flow. GD-based lottery tickets overcome this limitation by biasing the network to retrain to its original dense solution. But does this'regurgitating ticket interpretation' (Maene et al., 2021) also apply to the setting of gradient-free optimization? We summarize our contributions as follows: 1. We apply iterative magnitude pruning (Han et al., 2015, IMP; figure 1 left) to the ES setting and establish the existence of highly sparse evolvable initializations. 
They consistently exist across different ES, architectures (multi-layer perceptrons/MLP, & convolutional neural networks/CNN) and tasks (9 control & 3 vision tasks). The LTH phenomenon does, therefore, not depend on gradient-based training or the implied masked gradient flow (Section 3, Figure 1 right, top). 2. We introduce a novel iterative pruning procedure based on the signal-to-noise ratio (SNR) of the ES search distribution, which accounts for the loss geometry information encoded by the search covariance. This pruning scheme leads to initializations, which are trainable at higher degrees of sparsity (Figure 1 right, top). 3. ES tickets outperform sparse random initializations when transferred to related tasks (Section 4). They can be transferred between different evolutionary strategies and to GD-based training. Hence, ES tickets do not overfit to their training paradigm or task setting. Instead, they capture moderately task-general inductive biases, which can be amortized in different settings. 4. GD-based lottery tickets are not well suited for training moderately sized neural networks at very high sparsity levels (Section 3, Figure 1, bottom). For ES-derived lottery tickets, on the other hand, we find that they remain trainable at very high sparsity. For vision-based tasks the performance degradation of GD accelerates and correlates with a strong increase in the sharpness of obtained local optima. ES optima, on the other hand, remain flat for higher sparsity (Section 5). 5. While GD-based trained ticket initializations preserve linear mode connectivity at low levels of sparsity, ES tickets do not. Instead, they converge to a diverse set of flat local optima across sparsity levels (Section 3). This questions the'regurgitation ticket interpretation' and the implicit loss landscape bias of ES-based tickets: ES tickets exist despite the fact that they do not repeatedly converge to the same loss basin. This highlights the potential of ES-based ensemble methods in order to compensate for the expensive pruning procedure. ## 2 Background & SNR Pruning Procedure **Iterative Magnitude Pruning in Deep Learning.** Lottery ticket initializations are traditionally found using an expensive iterative pruning procedure (Figure 1, left; e.g. Han et al., 2015; Lange, 2020): Given a dense network initialization \(\theta_{0}\), one trains the parameters using GD. The final weights \(\theta\) are then used to construct a binary mask \(m\) based on a simple heuristic: Prune the fraction \(p\in(0,1)\) of smallest magnitude weights. The cutoff threshold is constructed globally and thereby implies a different pruning ratio per layer. Afterwards, one resets the remaining weights to their original initialization and iterates the procedure using the remaining non-zero weights, \(m\odot\theta_{0}\). The ticket effect for a given level of sparsity \((1-p)^{k}\) can be measured by the performance difference between a sparsity-matched randomly pruned network and the corresponding IMP initialization, \(m_{k}\odot\theta_{0}\). Previously, it has been shown that a positive ticket effect can be observed throughout different GD-based training paradigms including computer vision (Frankle and Carbin, 2019; Frankle et al., 2020), reinforcement learning (Yu et al., 2019; Vischer et al., 2021), natural language processing (Chen et al., 2020), graph neural networks (Chen et al., 2021) and generative models (Kalibhat et al., 2021). 
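For reference, a single round of the global magnitude-pruning step described above can be written in a few lines of NumPy; the per-layer dictionary layout is an illustrative assumption and the (re)training between pruning rounds is omitted.

```python
import numpy as np

def global_magnitude_mask(params, existing_masks, prune_frac=0.2):
    """One IMP pruning step: remove the prune_frac smallest-magnitude weights
    among the still-alive weights, using a single global cutoff threshold."""
    alive = np.concatenate(
        [np.abs(params[k][existing_masks[k] == 1]).ravel() for k in params]
    )
    threshold = np.quantile(alive, prune_frac)  # global cutoff across all layers
    return {
        k: ((np.abs(params[k]) > threshold) & (existing_masks[k] == 1)).astype(np.float32)
        for k in params
    }

# Toy example with two "layers"; masks start dense (all ones).
params = {"w1": np.random.randn(64, 32), "w2": np.random.randn(32, 10)}
masks = {k: np.ones_like(v) for k, v in params.items()}
masks = global_magnitude_mask(params, masks, prune_frac=0.2)
# The global threshold implies different keep ratios per layer.
print({k: float(m.mean()) for k, m in masks.items()})
```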
In contrast to previous work, we investigate whether the lottery ticket phenomenon is an idiosyncratic phenomenon for gradient-based optimization. Figure 1: **Left.** Differences between the iterative magnitude pruning procedures applied to GD and ES training. While GD-based training relies on explicit gradient computation via backpropagation, ES adapts a search distribution based on the principles of biological evolution. In the ES setting, IMP prunes the initial search distribution based on the ratio of the evolved mean magnitude (IMP) and standard deviation (SNR) at each pruning iteration. **Right, Top**. Task-normalized aggregated ticket effect in ES. IMP-derived lottery ticket initializations outperform random pruning for ES-based optimization. SNR pruning provides an additional sparse trainability improvement. **Right, Bottom**. The ES ticket procedure yields sparse initializations with performance on par with GD. At high sparsity levels ES tickets outperform GD. The task-specific results are normalized across conditions (random, magnitude, SNR) to lie within \([0,1]\) range. We average the normalized scores across 12 tasks, 2 different network classes, 3 ES and over 5 independent runs & plot standard error bars. **Evolution Strategies.** ES constitute a set of random search algorithms based on the principles of biological evolution. They adapt a parametrized distribution in order to iteratively search for well performing solutions. After sampling a population of candidates, their fitness is estimated using Monte Carlo (MC) evaluations. The resulting scores are used to update the search distribution. We focus on two representative ES classes, which have been used in neuroevolution. _Finite Difference-based ES_: A subset of ES use random perturbations to MC estimate a finite difference gradient: \[\nabla_{\theta}\mathbb{E}_{\epsilon\sim\mathcal{N}(0,I)}F(\theta+\sigma \epsilon)=\frac{1}{\sigma}\mathbb{E}_{\epsilon\sim\mathcal{N}(0,I)}[F(\theta +\sigma\epsilon)\epsilon]\] This estimate is then used along standard GD-based optimizers to refine the search distribution mean \(\theta\)(Salimans et al., 2017). They differ in their use of fitness shaping, anti-correlated noise, elite selection and the covariance structure. _Estimation-of-Distribution ES_: A second class of ES does not rely on noise perturbations or low-dimensional approximations to the fitness gradient. Instead, algorithms such as CMA-ES (Hansen and Ostermeier, 2001) rely on elite-weighted mean updates and iterative covariance matrix estimation. They aim to shift the search distribution into the direction maximizing the likelihood of fitness improvement. **Sparsity & Pruning in Gradient-Free Optimization.** The NEAT algorithm (Stanley and Miikkulainen, 2002) co-evolves network architectures and their weights. The connectivity is changed throughout the procedure, which often times naturally leads to the emergence of sparse architectures. Mocanu et al. (2018) used an dynamic sparse training algorithm inspired by evolutionary principle to train networks with dynamic topology. Finally, Mocanu et al. (2016) previously investigated the performance of sparse Boltzmann Machines, which do not rely on gradient computation via backpropagation. Here, we investigate the sparse trainability of otherwise static neural networks and the LTH in ES. 
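As a reference point for the finite-difference ES family introduced above, the following is a minimal NumPy sketch of the search-gradient estimator applied to a toy quadratic fitness; the population size, step sizes, and baseline subtraction are illustrative choices rather than the settings used in the experiments.

```python
import numpy as np

def es_gradient(theta, fitness, sigma=0.1, popsize=64, rng=np.random):
    """Monte Carlo estimate of (1/sigma) * E[F(theta + sigma*eps) * eps]."""
    eps = rng.standard_normal((popsize, theta.size))
    scores = np.array([fitness(theta + sigma * e) for e in eps])
    scores = scores - scores.mean()              # variance-reducing baseline
    return (eps * scores[:, None]).mean(axis=0) / sigma

# Toy usage: ascend the estimated gradient of F(x) = -||x - 1||^2.
fitness = lambda x: -np.sum((x - 1.0) ** 2)
theta = np.zeros(5)
for _ in range(300):
    theta = theta + 0.05 * es_gradient(theta, fitness)
print(np.round(theta, 2))  # close to the optimum at all-ones
```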
**Searching for Lottery Tickets in Evolution Strategies.** We introduce an ES-generalized iterative pruning procedure and focus on search strategies that adapt the sufficient statistics of a multivariate diagonal Gaussian distribution. At each pruning iteration the mean initialization is pruned based on its final estimate (Figure 1, middle). Furthermore, we consider a signal-to-noise ratio pruning heuristic, which prunes weights based on the ratio of the mean magnitude and the standard deviation of the search distribution, \(|\theta|/\boldsymbol{\sigma}\). SNR pruning implicitly incorporates information about the loss geometry/curvature into the sparsification. The procedure is summarized in Algorithm 1. We note that (Blundell et al., 2015) previously considered a SNR criterion in the context of zero-shot pruning of Bayesian neural networks. ``` Input: Pruning ratio \(p\in(0,1)\), iterations \(T\in\mathbb{Z}_{+}\), ES algorithm, \(G\) generations, \(N\) population size. Initialize the ES search distribution \(\theta_{0},\boldsymbol{\sigma}_{0}\in\mathbb{R}^{D}\). Initialize a dense pruning mask \(m_{0}=I_{D}\in\mathbb{R}^{D}\). for\(t=0\)to\(T-1\)do # Construct the sparse ES statistics: \(\theta_{0,t}=m_{t}\odot\theta_{0}\) and \(\boldsymbol{\sigma}_{0,t}=m_{t}\odot\boldsymbol{\sigma}_{0}\). # Evolve non-pruned weights only: for\(g=0\)to\(G-1\)do Sample: \(\mathbf{x}_{i}\sim\mathcal{N}(\theta_{g,t},\boldsymbol{\sigma}_{g,t}I_{D}), \forall i=1,\ldots,N\). Evaluate candidate fitness: \(\mathbf{f}_{i},\forall i=1,\ldots,N\). Update ES: \(\theta_{g+1,t},\boldsymbol{\sigma}_{g+1,t}\leftarrow\texttt{ES}(\theta_{g,t}, \boldsymbol{\sigma}_{g,t},\mathbf{x},\mathbf{f})\). endfor Compute sparsity level \(s_{t}=(1-p)^{t+1}\). Compute SNR threshold \(\rho(s_{t})\) by sorting \(|\theta_{G,t}|/\boldsymbol{\sigma}_{G,t}\). Construct the next mask \(m_{t+1}=\mathbf{1}_{|\theta_{G,t}|/\boldsymbol{\sigma}_{G,t}>\rho_{t}}\). endfor ``` **Algorithm 1** SNR-Based Iterative Pruning for ES Importantly, pruning for ES reduces the memory requirements due to the smaller dimensionality of the stored mean and covariance estimates. While most ES initialize the search mean at zero, we instead sample the initialization according to standard network initializations allowing for an initialization effect. This ensures comparability between GD and ES-based iterative pruning. Throughout the main text, we mainly focus on 4 ES including PGPE (Sehnke et al., 2010), SNES (Schaul et al., 2011), Sep-CMA-ES (Ros and Hansen, 2008) and DES (Lange et al., 2022). They all make use of a diagonal covariance, which enables the computation of weight-specific SNRs used to compute pruning thresholds. All ES and GD training algorithms were tuned using the same amount of compute.1 Footnote 1: We compare against PPO (Schulman et al., 2017) for control tasks and simple cross-entropy minimization with Adam for image classification. We refer the interested reader to the supplementary information (SI) for task-specific hyperparameters. **Measuring Ticket Effect Contributions.** The ticket effect can be decomposed into three ingredients: The pruning-derived mask, the implied initialization of the remaining non-pruned weights, and the layerwise pruning ratio. Vischer et al. (2021) proposed to estimate each contribution using a set of permutation experiments. By permuting the non-masked weights at each pruning iteration, one may estimate the contribution of the extracted weight initialization. 
If the performance of the sparsified networks remains unharmed, the weight effect is small. If additionally permuting the binary mask greatly damages the trainability, the mask effect is large. Finally, comparing a randomly pruned network with the doubly permuted baseline allows us to estimate the impact of the layerwise pruning ratio implied by the iterative pruning. In this work, we additionally consider the gap between IMP and SNR pruned network initializations. The difference can be attributed to the information resulting from curvature estimation of the ES covariance. ## 3 Winning Lottery Ticket Initializations Exist for Evolutionary Optimization Previous work (Evci et al., 2020; Maene et al., 2021) suggests that GD ticket initializations overcome the sparsity-induced decrease in gradient flow by biasing initializations to retrain to the same loss basin. Given that ES are prone to higher stochasticity during the learning process, it is likely to converge to more diverse loss basins, raising the question whether the LTH phenomenon is unique to GD training using backpropagation. In order to investigate this question, we start by exhaustively evaluating the existence of winning lottery tickets in ES. We focus on 12 tasks, different network architectures and 3 ES: First, we evolve MLP agents (1 to 3 layers) on 9 Reinforcement Learning tasks focusing on the continuous control setting. We evolve the agents to maximize episodic returns using Sep-CMA-ES, SNES and PGPE. The environments are implemented by the Brax (Freeman et al., 2021) package and the agent evaluation is done using the average return on a set of test episodes. Next, we evolve CNN architectures on both the standard MNIST, Fashion-MNIST (F-MNIST) and Kuzushiji-MNIST (K-MNIST) digit classification tasks and to minimize the cross-entropy loss. In this case we evaluate the performance via the test accuracy of the final mean search statistic \(\theta_{G,t}\). For each task-network-ES combination we run the ES-adapted pruning procedure and prune at each iteration 20 percent of the remaining non-pruned weights (\(p=0.2\)). To ensure trainability at higher levels of sparsity, we generously calibrated the amount of ES generations and refer the reader to the SI C for a detailed overview of the considered hyperparameters. **Winning lottery tickets consistently exist for various ES.** We find that the magnitude-based lottery ticket configuration outperforms the random-reinitialization baseline across the majority of tasks, network architectures and ES combinations (Figures 1 and 2, red versus yellow curves). While the qualitative effect is robust across most settings, the quantitative magnitude differs significantly across task complexities and the degree of network overparametrization: For the simpler classification and pendulum task the observed ticket effect is large compared to the more complex control tasks such as HalfCheetah and the Hopper task. This indicates a relationship between task complexity and the overparametrization required to achieve high performance. **SNR-based IMP results in sparser trainable tickets.** Next, we compare ES lottery tickets resulting from magnitude- and SNR-based pruning (Figures 1 and 2, black versus red curves). We find that SNR-derived tickets are consistently Figure 2: Existence of sparse evolvable initializations and benefits of SNR-based pruning. For the majority of considered settings we observe a lottery ticket effect (_IMP_ vs. _random_ pruning). 
Furthermore, the proposed SNR-based iterative pruning consistently discovers initializations that outperform IMP tickets across all sparsity level. This highlights the positive effect of accounting for loss curvature at the time of pruning. The size of the ticket effect depends on the task-network-ES setting, which indicates a difference in task-specific degree of overparametrization. **Top.** Sep-CMA-ES-based (Ros and Hansen, 2008) tickets. **Middle.** SNES-based (Schaul et al., 2011) tickets. **Bottom.** PGPE-based (Schnke et al., 2010) tickets. Results are averaged over 5 independent runs & we plot standard error bars. trainable at higher degrees of sparsity. The search standard deviation of a specific weight indirectly incorporates information about the local loss landscape geometry. Intuitively, weights with high associated standard deviation imply a strong robustness to perturbations and in turn a flat direction in the loss surface. Hence, we may expect weights with relatively small SNR to have a negligible performance contribution. **Highly sparse trainability can be achieved by ES**. In Figures 1 and 3 (green versus red curves) we compare the performance of GD-based IMP and ES-based SNR pruning. We find that sparse ES initializations are capable of outperforming GD-based training methods on several tasks. More importantly, we observe that ES-based methods are robustly better at training highly sparse networks. For the control tasks, ES can also outperform GD for moderate levels of sparsity (e.g. Hopper and Grasp tasks). For the vision-based tasks, on the other hand, GD-IMP starts to degrade in performance faster as the sparsity increases. In Section 5 we investigate these dynamics and relationship between the sharpness of GD-based local optima and sparsity. In summary, winning lottery tickets can be identified for different gradient-free ES, tasks and architectures. The size of the observed ticket effect and hence sparse trainability can be improved by considering not only the mean magnitude, but also the covariance as a reflection of loss curvature. The general phenomenon of sparse trainability of neural networks, therefore, is not unique to gradient descent and generalizes to algorithms with inherently higher stochasticity. The effect size and underlying contributions are task and network sensitive but largely robust to the optimization scheme. Finally, ES consistently train to higher performance at high sparsity levels. ## 4 ES-Based Lottery Tickets Transfer across Tasks, ES and to GD-Based Training The iterative pruning-based generation of winning tickets is a costly procedure requiring the sequential evolution of network initializations at different sparsity levels. Thereby, it \begin{table} \begin{tabular}{|l||l|l|} \hline \hline Task/ES & Pruning-Train & Transfer-Eval \\ \hline _Sep-CMA-ES_ & Ant Mass 10 & Ant Mass 15 \\ _SNES_ & F-MNIST & MNIST \\ \hline _Ant_ & Sep-CMA-ES & DES \\ _F-MNIST_ & SNES & PGPE \\ \hline _Ant_ & Sep-CMA-ES & PPO \\ _F-MNIST_ & SNES & SGD \\ \hline \hline \end{tabular} \end{table} Table 1: Lottery Ticket ES/Task Transfer Settings Figure 3: Performance comparison between GD and ES-based lottery ticket initializations. For the majority of considered task settings, ES-based training performs on par with GD for medium levels of sparsity. At high sparsity degrees, on the other hand, ES-SNR initializations tend to outperform the sparsity-matched GD initializations. **Top.** Sep-CMA-ES-based (Ros & Hansen, 2008) tickets. 
**Middle.** SNES-based (Schaul et al., 2011) tickets. **Bottom.** PGPE-based (Sehnke et al., 2010) tickets. Results are averaged over 5 independent runs & we plot standard error bars. remains impractical for most real world application. But is it possible to amortize the costly ticket generation for one task by successfully applying it to a different one? Previous work showed that the GD-IMP-derived input layer mask captures task-specific information (Vischer et al., 2021), discarding task irrelevant dimensions. We wondered whether ES tickets extract similar useful transferable inductive biases or whether they overfit to their generating setting (i.e., task, ES algorithm or training paradigm). If they remain transferable, we can hope for a task-general application of sparse ticket initializations in the context of ES. To answer this question, we test the transferability of sparse initializations generated with ES-based SNR pruning to new unseen but related tasks. We take inspiration by the work of Morcos et al. (2019) and re-train sparse initializations generated for one task-ES configuration on a different setting with a shared network architecture. We start by examining the transfer of SNR-based initializations between different but related task variants and consider several settings (Figure 4, top). Importantly, the source and transfer task share the same input/output dimensionality and are related (Table 1, top; different torso mass for Ant control and digit/cloth image classification transfer). **Winning ES tickets can be transferred to related tasks**. The transfer of a ticket initialization derived on a related task improves the performance of a network trained on a new'sibling' task. The effect measured by the performance difference of the transferred initializations (blue) and the random pruning baseline (yellow) is significant across both of the considered tasks. The transferred ticket does not outperform the task-native SNR-lottery ticket (at high sparsity), indicating that tickets tend to capture both task-general and task-specific information. This positive transfer effect can be observed both for control (MLP) and vision (CNN) tasks. Next, we investigated whether it is possible to evolve sparse initializations generated by one evolution strategy with a different optimization strategy. We consider the transfer within the class of finite difference-based ES, to a covariance matrix adaptation-style ES and to GD-based training. **Winning ES tickets can be transferred between ES**. Ticket initializations also transfer well between different ES optimization algorithms (Figure 4, bottom). Oftentimes the transferred initializations train to the same performance level of the ES-specific ticket initialization, indicating that a within-task transfer is easier as compared to the previous across-task setting. This observation again holds for both task settings (control and vision) as well as different ES combinations. **Winning ES tickets can be transferred to GD Training.** Finally, we repeat the procedure from the previous subparagraphs, but this time transfer sparse ES-derived initializa Figure 4: ES ticket initializations transfer between related tasks and between different ES. **Left.** Conceptual visualization of ticket initialization transfer between tasks/ES. We first run SNR-pruning on a task/ES setting to obtain a set of initializations at different sparsity levels. Afterwards, we evaluate their trainability in a different setting. 
**Top.** Task transfer between two torso masses (Ant task; Sep-CMA-ES) and image classification tasks (MNIST variants; SNES). **Bottom.** ES Transfer between Sep-CMA-ES and DES (Ant task) as well as SNES and PGPE (F-MNIST task). For both cases we can observe a positive transfer of the previously discovered pruning masks. Results are averaged over 5 independent runs & we plot standard error bars. tions to downstream training with GD. Again, we find a positive effect for transferring an initialization that was obtained by ES (Figure 5). As discussed in Section 3, ES tickets can perform worse than GD-based training for moderate levels of sparsity. In line with this observation, we find that the size of the transfer effect correlates with the relative performance differences between the two paradigms. We do not find a strong positive effect for sparsity levels where the GD ticket baseline outperforms the ES ticket (e.g. for the ant task). More interestingly, for very sparse networks the ES-transferred initialization can even outperform the GD-ticket indicating that highly sparse ES pruning masks are well transferable to GD training. ## 5 Linear Mode Connectivity & SNR Pruning **ES and GD optima are not linearly connected.** Based on the previous results, we wondered how the trained models obtained by ES and GD differed. Therefore, we compared the linear connectivity (Frankle et al., 2020) of the local optima across sparsity levels. We compute the test accuracy (\(\mathcal{A}\)) error barrier between two trained networks \(\max_{\alpha\in[0,1]}\mathcal{A}(\alpha\theta+(1-\alpha)\theta^{\prime})\) for a range of \(\alpha\), comparing network combinations at different IMP iterations. In line with previous work (Frankle et al., 2019), GD-based local optima remain strongly connected for moderate levels of sparsity (Figure 6, top). ES-based (ARS, Mania et al., 2018) solutions, on the other hand, already experience a performance barrier between early IMP iterations, but remain better connected at higher sparsity levels. The optima found by GD and ES are generally not linearly connectable, indicating that GD and ES find fundamentally different solutions. Furthermore, it questions a generalization of the regurgitating ticket interpretation to ES (Evci et al., 2020; Maene et al., 2021): ES ticket initializations exist despite the fact that they do not repeatedly train to the same loss basin. In SI Figure 11, we find that the lack of ES-GD connectivity can partially be explained by different weight magnitudes for the two training paradigms. In general, GD-based solutions have higher magnitude weights and tend to prune the input layer less. **ES tends to converge to flatter optima.** A natural follow-up question is: How do the curvatures of local optima obtained by the different training paradigms differ? We use one-dimensional random projections (Li et al., 2018) of the test loss \(\mathcal{L}(\theta;\xi)=\mathcal{L}(\theta+\xi_{\eta})\) with \(\eta\sim\mathcal{N}(\mathbf{0},\mathbf{I})\) to examine the sensitivity of the discovered local optimum for different strengths \(\xi\in[-1,1]\). We quantify the approximate curvature by fitting a quadratic polynomial to the projected loss as a function of the perturbation strength. In Figure 6 (bottom) we observe that the approximate sharpness of the GD optima increases rapidly with the sparsity level. For ES-based optima, on the other hand, the curvature increases at a smaller rate across IMP iterations. 
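The two diagnostics just described, the accuracy barrier along a linear interpolation of two solutions and the curvature estimate from one-dimensional random projections, can be sketched in a few lines. The following is an illustrative implementation and not the authors' code: it assumes flat parameter vectors, generic `loss_fn`/`accuracy_fn` callables, and one common convention for the barrier (worst accuracy drop relative to the endpoint mean); direction normalisation for the projections is omitted for brevity.

```python
import numpy as np

def accuracy_barrier(theta_a, theta_b, accuracy_fn, n_points=21):
    # Evaluate test accuracy along the linear path between two trained solutions.
    alphas = np.linspace(0.0, 1.0, n_points)
    accs = np.array([accuracy_fn(a * theta_b + (1.0 - a) * theta_a) for a in alphas])
    # One common barrier definition: worst accuracy drop relative to the
    # mean endpoint accuracy (0 means the two optima are linearly connected).
    return max(0.0, 0.5 * (accs[0] + accs[-1]) - accs.min())

def projected_curvature(theta, loss_fn, n_dirs=5, n_xi=21, seed=0):
    # 1D random projections of the loss, L(theta + xi * eta) with eta ~ N(0, I),
    # followed by a quadratic fit; the xi^2 coefficient approximates the
    # sharpness of the local optimum along random directions.
    rng = np.random.default_rng(seed)
    xis = np.linspace(-1.0, 1.0, n_xi)
    curvatures = []
    for _ in range(n_dirs):
        eta = rng.standard_normal(theta.shape)
        losses = [loss_fn(theta + xi * eta) for xi in xis]
        quad, _, _ = np.polyfit(xis, losses, deg=2)
        curvatures.append(quad)
    return float(np.mean(curvatures))

if __name__ == "__main__":
    # Toy check with a quadratic "loss" and a dummy "accuracy".
    loss_fn = lambda w: float(np.sum(w ** 2))
    accuracy_fn = lambda w: 1.0 / (1.0 + loss_fn(w))
    theta_a, theta_b = np.full(10, 0.5), np.full(10, -0.5)
    print(accuracy_barrier(theta_a, theta_b, accuracy_fn))
    print(projected_curvature(theta_a, loss_fn))
```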
We provide visualizations of the 2D projected loss surface in SI Figure 10. **SNR pruning dynamically accounts for fitness curvature.** Finally, we investigate conditions under which SNR-based pruning improves over IMP. In Figure 7 we plot the correlation of weight magnitudes and SNRs across pruning stages. The correlation decreases for both SNR and IMP-based pruning runs as sparsity increases. The SNR-IMP performance gap is closely related to the relative correlation dynamics: If the correlation decreases faster for IMP than SNR (left; Fetch task), one also observes a positive impact of SNR pruning on the performance. Otherwise, we do not (right; F-MNIST task). This indicates that SNR-based pruning can account for non-isotropic changes in the fitness landscape curvature caused by sparsification. Dimensions with high sharpness (small \(\sigma\)) will have a larger SNR, which makes them less prone to pruning. Future work will have to uncover the mechanistic underpinnings of this phenomenon. Figure 5: Transferability of ES tickets to GD-based training. **Left.** Conceptual visualization of ticket transfer between tasks. We first run SNR-based pruning using ES to obtain a set of initializations at different sparsity levels. Afterwards, we evaluate their trainability with GD. **Middle/Right.** For the two considered task-network-ES settings (Ant task and Sep-CMA-ES, F-MNIST and SNES), we observe a positive transfer effect between the two training paradigms when compared to a random pruning baseline. Furthermore, the ES initializations transfer their trainability at high levels of sparsity over to the GD training setting. The results are averaged over 5 independent runs & we plot standard error bars. ## 6 Discussion **Summary.** We establish the existence of sparse trainable initializations in evolutionary optimization. Sparse trainability, therefore, does not require a specific masked gradient flow. The exact size of the ticket effect depend on ES, architecture and task. The resulting sparse initializations are transferable across training paradigms and to related tasks. Tickets in ES do not necessarily retrain to the same loss basin but still remain trainable across sparsity levels. **Ethical Considerations.**Hooker et al. (2020) show that compression can amplify bias and decrease fairness in the GD-based training setting. As we scale ES, future work will have to assess whether these problems transfer to ES model compression and how to mitigate them. **Limitations.** This work is limited by its empirical nature and scalability of ES. Furthermore, our study focus on medium network sizes. This is partially due to ES suffering from a lack of hyperparameter understanding and the adoption of tools tailored towards GD-based optimization (optimizers, etc.). Finally, our analysis is based on the computationally costly iterative pruning procedure, which requires multiple sequential training runs. **Future Work.** Dynamic sparse training with ES provides a direction for future work and may enable protocols which simultaneously grow and prune networks. Furthermore, a full understanding of sparse trainability requires a theoretical treatment of the effect of sparsity on the fitness landscape. Figure 6: Connectivity barriers & local minima sharpness in GD and ES (ARS, Mania et al., 2018) on MNIST. **Top.** While for low sparsity GD-based optima can be linearly connected with nearly no drop in test accuracy, ARS-based optima suffer from small barriers. 
GD and ARS optima cannot be connected without strong performance drops, indicating that they find different optima. **Bottom.** Low-dimensional loss projections (Li & Zhang, 2017) and curvature estimates at different iterations. The sharpness of GD-based optima increases rapidly with the sparsity level. ES-based optima remain more flat throughout the IMP procedure. Results are averaged over 5 independent runs. Figure 7: **Top.** Relative performance difference between IMP and SNR pruning for Fetch & F-MNIST tasks. **Middle.** Pruning thresholds for IMP & SNR. **Bottom.** Correlation between SNRs and weight magnitudes across pruning iterations. The correlation remains high in the case of a positive performance difference. The results are averaged over 5 independent runs.
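For concreteness, below is a minimal sketch of one SNR-based pruning step and of the SNR-magnitude correlation tracked in Figure 7. It assumes the ES maintains a diagonal search distribution with per-weight mean `mu` and standard deviation `sigma`; the 20% schedule follows the text, while array names and the toy data are purely illustrative. Magnitude-based IMP is recovered as the special case of a constant `sigma`.

```python
import numpy as np

def snr_prune_step(mu, sigma, mask, frac=0.2):
    # One iteration of SNR-based pruning: remove the `frac` fraction of the
    # still-active weights with the smallest signal-to-noise ratio |mu| / sigma,
    # where mu and sigma parameterise the ES search distribution.
    snr = np.abs(mu) / (sigma + 1e-12)
    active = np.flatnonzero(mask)
    n_prune = int(np.floor(frac * active.size))
    prune_idx = active[np.argsort(snr[active])[:n_prune]]
    new_mask = mask.copy()
    new_mask[prune_idx] = 0.0
    return new_mask

def magnitude_snr_correlation(mu, sigma, mask):
    # Pearson correlation between |mu| and the SNR over surviving weights,
    # the quantity plotted across pruning iterations in Figure 7 (bottom).
    active = np.flatnonzero(mask)
    snr = np.abs(mu[active]) / (sigma[active] + 1e-12)
    return float(np.corrcoef(np.abs(mu[active]), snr)[0, 1])

rng = np.random.default_rng(0)
mu, sigma = rng.standard_normal(1000), rng.uniform(0.05, 0.5, 1000)
mask = np.ones(1000)
for _ in range(3):  # three pruning iterations at p = 0.2
    # In the full procedure mu and sigma are re-estimated by re-running the ES
    # from the rewound initialization between iterations; omitted here.
    mask = snr_prune_step(mu, sigma, mask)
    print(mask.mean(), magnitude_snr_correlation(mu, sigma, mask))
```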
2306.17539
Polarized K3 surfaces with an automorphism of order 3 and low Picard number
In this paper, for each $d>0$, we study the minimum integer $h_{3,2d}\in \mathbb{N}$ for which there exists a complex polarized K3 surface $(X,H)$ of degree $H^2=2d$ and Picard number $\rho (X):=\textrm{rank } \textrm{Pic } X = h_{3,2d}$ admitting an automorphism of order $3$. We show that $h_{3,2}\in\{ 4,6\}$ and $h_{3,2d}=2$ for $d>1$. Analogously, we study the minimum integer $h^*_{3,2d}\in \mathbb{N}$ for which there exists a complex polarized K3 surface $(X,H)$ as above plus the extra condition that the automorphism acts as the identity on the Picard lattice of $X$. We show that $h^*_{3,2d}$ is equal to $2$ if $d>1$ and equal to $6$ if $d=1$. We provide explicit examples of K3 surfaces defined over $\mathbb{Q}$ realizing these bounds.
Dino Festi
2023-06-30T10:50:47Z
http://arxiv.org/abs/2306.17539v2
# Polarized K3 surfaces with an automorphism of order 3 and low Picard number ###### Abstract. In this paper we study, for each \(d>0\), what is the minimum integer \(h_{3,2d}\in\mathbb{N}\) for which there exists a complex polarized K3 surface \((X,H)\) of degree \(H^{2}=2d\) and Picard number \(\rho(X):=\operatorname{rank}\operatorname{Pic}X=h_{3,2d}\) admitting an automorphism of order 3. We show that \(h_{3,2d}=6\) if \(d=1\) and \(h_{3,2d}=2\) if \(d>1\). We provide explicit examples of K3 surfaces defined over \(\mathbb{Q}\) realizing these bounds. ## 1. Introduction The study of automorphisms of K3 surfaces has seen a very intense activity in the last 40 years. In the 80's Nikulin ad Stark proved that a group acting purely non-symplectically on a K3 is cyclic an finite [6, 7]. In [5], Machida and Oguiso prove that such a group can have order at most 66; if the group has prime order, then its maximal order is 19. In these notes we consider non-symplectic automorphisms of order 3, a topic extensively treated in [1, 8]. In particular, we focus on the interplay between the existence of non-symplectical automorphism of order 3, a given polarization, and the Picard number of the surface, as already done in [2] for non-symplectic involutions. More precisely, let \((X,H)\) denote a complex polarized K3 surface of degree \(2d\), that is, \(H\) is an ample divisor of \(X\) and \(H^{2}=2d\). Assume that \(X\) admits an automorphism \(\alpha\in\operatorname{Aut}X\) of order 3. Then \(\alpha\) induces an action \(\alpha^{*}\) on \(H^{2,0}(X)=\langle\omega\rangle\), and hence \(\alpha^{*}=\zeta\omega\), with \(\zeta^{3}=1\). If \(\zeta=1\), then \(\alpha\) is called symplectic and \(\rho(X)\geq 13\)[3, SS15.1.2]. If \(\zeta\neq 1\), that is, \(\zeta\) is a primitive third root of unity, then \(\rho(X)\geq 2\)[1]. One may ask, as in [2], when is this lower bound realized depending on the degree of the polarization. In other words, if \[\mathcal{H}_{3,2d}:=\{(X,H,\alpha)\}\] denotes the set of complex polarized K3 surfaces \((X,H)\) such that \(H^{2}=2d\) and \(X\) admits an automorphism \(\alpha\) of order 3, one can then define \[h_{3,2d}=\min_{X\in\mathcal{H}_{3,2d}}\{\rho(X)\}\;.\] In this work we prove the following result. **Theorem 1.1**.: _The following equality holds:_ \[h_{3,2d}=\begin{cases}2&\text{ if }d>1\;,\\ 6&\text{ if }d=1\;.\end{cases}\] _Both bounds can be realised by surfaces defined over \(\mathbb{Q}\)._ The paper is structured as follows: in SS2 we briefly review the background of complex K3 surfaces with an automorphism of order \(3\); in SS3 we prove the statement of Theorem1.1 for \(d>1\); the statement of Theorem1.1 for \(d=1\) is finally proved in SS4. ## Acknowledgments While working on this subject I greatly benefited from the numerous conversations with Alice Garbagnati and Bert van Geemen. I would also thank Bartosz Naskrecki and Wim Nijgh for their precious contribution to the final part of the work. ## 2. Projective K3 surfaces with a non-symplectic automorphism of order \(3\) There are several results about complex K3 surfaces with an automorphism of order \(3\), in these notes we will mostly use the results presented in [1]. Let \(X\) be a complex projective K3 surface and assume it admits an automorphism \(\alpha\in\operatorname{Aut}X\) of order \(3\). Also assume that \(\alpha\) is non-symplectic. 
Hence \(\alpha^{3}=1\) and \(\alpha^{*}(\omega)=\zeta\omega\), where \(\omega\) is the class generating \(H^{2,0}(S)\) and \(\zeta\) is a primitive third root of unity. In what follows, \(\zeta\) will always denote a primitive third root of unity. Recall that if \(L\) is a lattice, we denote by \(A_{L}=Hom(L,\mathbb{Z})/L\) its _discriminant group_. **Definition 2.1**.: We define \(N(\alpha):=(H^{2}(S,\mathbb{Z}))^{\alpha^{*}}\), the sublattice of \(H^{2}(X,\mathbb{Z})\) fixed by \(\alpha\). **Definition 2.2**.: Let \(\mathcal{E}=\mathbb{Z}[\zeta]\) denote the ring of Eisenstein integers. A _\(\mathcal{E}\)-lattice_ is a couple \((L,\sigma)\) where \(L\) is a lattice and \(\sigma\) is a point-free isometry of order \(3\) on \(L\). If \(\sigma\) acts as the identity on \(A_{L}\), then \((L,\sigma)\) is called an _\(\mathcal{E}^{*}\)-lattice_. **Proposition 2.3**.: _Let \((X,\alpha)\) be a complex K3 surface with a non-symplectic automorphism of order \(3\). Then_ 1. \(N(\alpha)\) _is a primitive_ \(3\)_-elementary sublattice of_ \(\operatorname{Pic}X\)_;_ 2. \((N(\alpha)^{\perp},alpha^{*})\) _is a_ \(\mathcal{E}^{*}\)_-lattice, where_ \(N(\alpha)^{\perp}\) _is the orthogonal complement of_ \(N(\alpha)\) _inside_ \(H^{2}(X,\mathbb{Z})\)_;_ 3. \((T_{X},\alpha^{*})\) _is a_ \(\mathcal{E}\)_-sublattice of_ \(N(\alpha)^{\perp}\)_, where_ \(T_{X}\) _denotes the transcendental lattice of_ \(X\)_._ Proof.: This is the reformulation of [6, Theorem 0.1] and [5, Lemma 1.1] as in [1, Theorem 1.4]. **Lemma 2.4**.: _The following statements hold:_ 1. _Any_ \(\mathcal{E}\)_-lattice has even rank;_ 2. _Any_ \(\mathcal{E}^{*}\)_-lattice is_ \(3\) _elementary._ **Corollary 2.5**.: _Let Let \((X,\alpha)\) be a complex K3 surface with a non-symplectic automorphism of order \(3\). Then \(\rho(X)\) is even._ Proof.: By Proposition2.3 we have that \((T_{X},\alpha^{*})\) is a \(\mathcal{E}\)-lattice. Then from Lemma2.4 it follows that \(\operatorname{rk}T_{X}\) is even. As \(\rho(X)=22-\operatorname{rk}T_{X}\), we conclude the argument. **Remark 2.6**.: If \((X,\alpha)\) is generic enough, then \(\operatorname{Pic}X=N(\alpha)\) and \(T_{X}=N(\alpha)^{\perp}\). Then it follows that \(A_{\operatorname{Pic}X}\cong(\mathbb{Z}/3\mathbb{Z})^{\oplus a}\) for some \(a\in\mathbb{N}\) such that \(a\leq\rho(X)\). **Remark 2.7**.: Later we will use the following fact due to the classification of binary forms: the only indefinite \(3\)-elementary lattices of rank \(2\) are \(U\) and \(U(3)\); the only definite \(3\)-elementary lattices of rank \(2\) are \(A_{2}\) and \(A_{2}(-1)\). The main result of [1] is the complete classification of the K3 surfaces \((X,\alpha)\) in terms of the fixed locus of \(\alpha\) and \(N(\alpha)\). Moreover, for each case they also provide a projective model realizing it. Their result can be summarized as follows. **Theorem 2.8**.: _[_1_, Theorems 2.8 and 3.4]_ _Let \((X,\alpha)\) be a complex K3 surface with an automorphism \(\alpha\) of order \(3\). Then \(\operatorname{Fix}\alpha\) consists of \(n\leq 9\) points and \(k\leq 6\) curves. The couple \((n,k)\) uniquely determines \(N(\alpha)\). All the possible triples \((n,k,N(\alpha))\) are listed in [1, Table 2]._ _Conversely, for every triple \((n,k,N(n,k))\) in [1, Table 2] there exists a complex K3 surface \(X\) with a non-symplectic automorphism \(\alpha\) of order \(3\) such that \(\operatorname{Fix}\alpha\) consists of \(n\) points and \(k\) curves and \(\operatorname{Pic}X=N(\alpha)\cong N(n,k)\). 
For each triple \((n,k,N(n,k))\) a projective model of such \(X\) is given._ As we are only interested in K3 surfaces with low Picard number we only include the first entries of [1, Table 2], omitting the transcendental lattice and indicating the type of projective model provided by Artebani and Sarti. This result is very convenient because it tells us where to look in order to find polarized K3 surfaces of any degree admitting an automorphism of order \(3\), as shown in the following sections. **Remark 2.9**.: The K3 surfaces with a given marking and an automorphism of order \(3\) form a _subfamily_ of K3 surfaces with the same marking. To see this, let \((X,\alpha)\) be a very generic complex K3 surface with a non-symplectic automorphism \(\alpha\) of order \(3\). Denote by \(L\) its Picard lattice. Let \(V\) denote the \(\mathbb{C}\)-vector space given by \(H^{2}(X,\mathbb{Z})\otimes\mathbb{C}\). Then \(\alpha^{*}\) acts on \(V\) and its action induces an orthogonal decomposition of \(V\) into eigenspaces: \[V=V_{1}\oplus V_{\zeta}\oplus V_{\zeta^{2}}\;.\] As \(\alpha\) is non-symplectic we can assume that \(H^{0,2}(X)\subseteq V_{\zeta^{2}}\). We know that \(N(\alpha)\subseteq\operatorname{Pic}X\); as we assumed \(X\) to be very generic, we have that \(\operatorname{Pic}X=N(\alpha)=V_{1}\) and hence \(T_{X}\otimes\mathbb{C}=V_{\zeta}\oplus V_{\zeta^{2}}\). As \(V_{\zeta}\) and \(V_{\zeta^{2}}\) are swapped by \(\alpha^{*}\), they have the same dimension, and so we conclude that \[\operatorname{rk}T_{X}=\dim(T_{X}\otimes\mathbb{C})=2\dim V_{\zeta}.\] This means that if \((X,\alpha)\) is a K3 surface with an automorphism of order \(3\), its period \(\omega\) lies in \(\mathbb{P}(V_{\zeta})\) which has dimension \[\dim\mathbb{P}(V_{\zeta})=(\operatorname{rk}T_{X})/2-1=10-\rho(X)/2.\] On the other, if we just consider a marked K3 surface \(X\) with \(\operatorname{Pic}X\cong L\), without any other assumption, then the period of \(X\) will lie in \(\mathbb{P}(L^{\perp})\), which has dimension \(\operatorname{rk}T_{X}-1\). \begin{table} \begin{tabular}{|c|c|c|l|} \hline \(n\) & \(k\) & \(N\) & model for \(X_{n,k}\) \\ \hline 0 & 1 & \(U(3)\) & Quadric \(\cap\) cubic \(\subset\mathbb{P}^{4}\) \\ & 2 & \(U\) & Weierstrass model \\ \hline 1 & 1 & \(U(3)\oplus A_{2}\) & Quartic in \(\mathbb{P}^{3}\) \\ & 2 & \(U\oplus A_{2}\) & Weierstrass model \\ \hline 2 & 1 & \(U(3)\oplus A_{2}^{\oplus 2}\) & Double cover of \(\mathbb{P}^{2}\) \\ & 2 & \(U\oplus A_{2}^{\oplus 2}\) & Weierstrass model \\ \hline \end{tabular} \end{table} Table 1. Table of possible cases of \((n,k,N(\alpha))\) for \((X,\alpha)\) with \(\rho(X)\leq 6\). In the last column we indicate the type of projective model provided in [1]. ## 3. The case \(d\geq 2\) Let \((X,U,\alpha)\) be a complex projective K3 surface with an automorphism of order \(3\) and \(\operatorname{Pic}X\cong U\). We know that such K3 surface exists by Theorem2.8. **Example 3.1**.: In [1, Propositions 4.2 and 3.2], the authors also provide a projective model: \(X\) will be given by an equation of the form \[y^{2}=x^{3}+p(t) \tag{1}\] with \(p(t)\) a polynomial of degree \(12\) with only simple roots. **Lemma 3.2**.: 1. \(X\) _admits an elliptic fibration;_ 2. \(\operatorname{Pic}X=\langle F,O\rangle\)_, where_ \(F\) _is the class of the fiber of the elliptic fibration and_ \(O\) _is the class of the unique section._ 3. \(\alpha^{*}(F)=F\) _and_ \(\alpha^{*}(O)=O\)_._ Proof.: As \(U\) represents \(0\), there is a genus-\(1\) fibration on \(X\). 
We are left to prove that the fibration admits a section. Let \(e,f\) denote the generators of \(U\), i.e., \(e^{2}=0=f^{2}\) and \(e.f=1\). As \((e-f)^{2}=-2\), we may assume without loss of generality that \((e-f)\) is effective. Let \(f=F\) represent the class of the fiber. As \((e-f).f=1\), we conclude that \(e-f=O\) is the class of a section. Hence \(S\) has an elliptic fibration, proving the first statement. It is immediate to see that \(\langle O,F\rangle\cong[0,1,2]\cong U\), proving the second statement. To prove the third statement just recall \(\operatorname{rk}N(\alpha)\geq 1\) is even and \(N(\alpha)\subseteq\operatorname{Pic}X\) is primitive Remark2.6. It follows that \(N(\alpha)=\operatorname{Pic}X\), proving the statement. **Proposition 3.3**.: _Let \(X\) be a complex K3 surface with \(\operatorname{Pic}S\cong U\). Let \(e,f\) be the generators of \(U\). Then, up to a choice of signs, the ample cone is given by the divisors \(D=xe+yf\) such that \(y>x>0\)._ Proof.: Notice that in \(U\) there are only two \(-2\)-classes: \(\pm(e-f)\). Assume \(O=(e-f)\) is effective. Hence \(O\) is the only effective \(-2\)-curve of \(S\). The positive cone of \(X\) is given by divisors \(xe+yf\) such that \(xy>0\). Hence the ample cone is given by divisors \(D=xe+yf\) such that \(xy>0\) and \(D.(e-f)>0\). As \(D.(e-f)=-x+y\), we obtain the desired statement. **Proposition 3.4**.: _Let \(d>1\) be an integer. Then \(h_{3,2d}=2\)._ Proof.: Let \(X\) be as in Example3.1 and consider the divisor \(D=e+df\). Then, by Proposition3.3, \(D\) is ample. As \(D^{2}=2d\) and \(S\) has an automorphism of order \(3\), this proves the statement. **Remark 3.5**.: We are left to show that, for every integer \(d>1\), the bound \(h_{3,2d}=2\) can be attained over \(\mathbb{Q}\). In fact, Theorem2.8 only assures the existence of a K3 surface \((X,U,\alpha)\) over \(\mathbb{C}\). The authors give the projective model (1) and one might expect that for a random choice of coefficients of \(p(t)\) one obtains a K3 surface with Picard number \(2\). A practical problem arises though: computing the Picard number of a generic K3 surface in Weierstrass form is not easy. Luckily we can use the beautiful K3 surface \[X_{66}\colon y^{2}=x^{3}+t(t^{11}-1)\] considered by Kondo in [4]. This surface has the remarkable property of being the _unique_ K3 surfaces (up to isomorphism) to admit a \(\mathbb{Z}/66\mathbb{Z}\)-action. The Picard lattice of \(X_{66}\) is indeed \(U\)[4, Example 3.0.1]. ## 4. The case \(d=1\) The classification of the fixed locus of an order \(3\) non-symplectic automorphism given in [1] helps us also in establishing \(h_{3,2}\). **Proposition 4.1**.: \(2<h_{3,2}\leq 6\)_._ Proof.: Consider \((X,H,\alpha)\) with \(H^{2}=2\) and let \(N(\alpha)\subset\operatorname{Pic}X\) be the fixed locus of \(\alpha^{*}\). As the the fixed locus of \(\alpha\) is not empty [1, Theorem 2.2] we have that \(\rho(X)\geq\operatorname{rk}N(\alpha)\geq 2\). If \(\rho(X)=2\) it follows that \(\operatorname{Pic}X=N(\alpha)=U\) or \(U(3)\) Theorem 2.8. In both cases \(\operatorname{Pic}X\) does not admit an ample divisor of degree \(2\), hence a contradiction with the hypothesis that \(H^{2}=2\). From this it follows that \(h_{3,2}>2\). On the other hand, from Table 1 we see that there exists a complex K3 surface \(Y\) with an automorphism \(\sigma\) of order \(3\) and \(\operatorname{Pic}Y=N(\sigma)\cong U(3)\oplus A_{2}^{\oplus 2}\)[1, Theorem 3.3] and this \(Y\) can be realized as double cover of \(\mathbb{P}^{2}\). 
From this it follows that \(h_{3,2}\leq 6\). **Lemma 4.2**.: _Assume \((X,H,\alpha)\) has \(H^{2}=2\) and \(2<\rho(X)<6\). Then \(\rho(X)=4\) and_ \[\operatorname{Pic}X\cong\begin{cases}U\oplus A_{2}\;\text{or}\\ U(3)\oplus A_{2}\;.\end{cases}\] Proof.: From Corollary 2.5 we know that \(2<\rho(X)<6\) is even. Hence the first statement. To prove the second statement we have to distinguish two cases: whether \(\operatorname{Pic}X\) is equal to \(N:=N(\alpha)\) or not. If \(\operatorname{Pic}X=N\) then the statement follows directly from Theorem 2.8. So let us assume that \(N(\alpha)\subsetneqq\operatorname{Pic}X\) and let \(L:=N^{\perp}\subset\operatorname{Pic}X\) be the orthogonal complement of \(N\) in \(\operatorname{Pic}X\). As \(N(\alpha)\) is primitive in \(\operatorname{Pic}X\) and it is hyperbolic, it follows that \(\operatorname{Pic}X=N\oplus L\) and \(L\) is negative definite of rank \(2\). As \(L\) is not contained in \(N=N(\alpha)\), it follows that \(\alpha^{*}\) induces an order \(3\) action on \(L\), i.e. using the same notation as in [1], \(L\) is an \(\mathcal{E}\)-lattice. On the one hand, \(N\) can be either \(U\) or \(U(3)\), as we have already seen; on the other hand, there is only one negative definite \(\mathcal{E}\)-lattice of rank \(2\), namely \(A_{2}\). This concludes the proof. **Theorem 4.3**.: \(h_{3,2}=6\)_._ Proof.: Combining Proposition 4.1 and Lemma 4.2 we immediately get that \(h_{3,2}\in\{4,6\}\). Assume \(h_{3,2}=4\). Then there is a K3 surface \((X,H,\alpha)\) with \(H^{2}=2\) and \(\rho(X)=4\). From Lemma 4.2 we have that \(\operatorname{Pic}X\) is isometric to either \(U\oplus A_{2}\) or \(U(3)\oplus A_{2}\). We claim that neither of these two lattices admits an ample \(2\)-divisor. First we show that \(U(3)\oplus A_{2}\) does not admit \(2\)-divisors at all. Let \(u_{1},u_{2}\) and \(a_{1},a_{2}\) denote the generators of \(U(3)\) and \(A_{2}\), respectively, and let \[D:=x_{1}u_{1}+x_{2}u_{2}+y_{1}a_{1}+y_{2}a_{2}\] be a \(2\)-divisor, that is, \(D^{2}=2\). Dividing by \(2\), we get the following equality: \[3x_{1}x_{2}-y_{1}^{2}+y_{1}y_{2}-y_{2}^{2}=1\;. \tag{2}\] This can be rewritten as \[3x_{1}x_{2}-1=y_{1}^{2}-y_{1}y_{2}+y_{2}^{2}. \tag{3}\] Reducing (3) modulo \(3\) yields the following equation: \[y_{1}^{2}-y_{1}y_{2}+y_{2}^{2}\equiv 2\mod 3\;. \tag{4}\] It is easy to see by direct computation that (4) has no solutions in \(\mathbb{Z}/3\mathbb{Z}\), proving the claim (a mechanical verification is sketched at the end of these notes). Assume then that \(\operatorname{Pic}X\cong U\oplus A_{2}\). This implies that \(U\hookrightarrow N(\alpha)\) and hence \(X\) is elliptic and can be described by the following Weierstrass equation [1, Proposition 4.2]: \[y^{2}=x^{3}+p_{12}(t),\] where \(p_{12}\) is a polynomial of degree \(12\). As \(\operatorname{Pic}X\cong U\oplus A_{2}\) we see that \(X\) has a singular fiber of Kodaira type IV. This implies that \(\operatorname{Pic}X\) is generated by the class of the fiber \(F\), the class of the section \(O\) and the two components \(E_{1},E_{2}\) of the bad fiber not meeting \(O\). Using these four generators, the Gram matrix of \(\operatorname{Pic}X\) is the following. \[\begin{pmatrix}0&1&0&0\\ 1&-2&0&0\\ 0&0&-2&1\\ 0&0&1&-2\end{pmatrix}\] Let us write \(H=aO+fF+e_{1}E_{1}+e_{2}E_{2}\) and notice that the third component of the bad fiber can be written as \(E_{3}=F-E_{1}-E_{2}\). Then \(H^{2}=2\) implies \[af+e_{1}e_{2}=e_{1}^{2}+e_{2}^{2}+a^{2}+1\;.
\tag{5}\] As \(H\) is ample, its intersection with all the \(-2\)-curves is positive, that is, \[\begin{cases}H.O&=f-2a>0\;,\\ H.E_{1}&=-2e_{1}+e_{2}>0\;,\\ H.E_{2}&=e_{1}-2e_{2}>0\;,\\ H.E_{3}&a+e_{1}+e_{2}>0\;.\end{cases}\] From the above inequalities we deduce that \[\begin{cases}0<2a<f\;,\\ e_{1},e_{2}<0\;,\\ 0<-e_{1}-e_{2}<a\;.\end{cases}\] Consider then the quantity \(2a^{2}+1\). As \(2a<f\) and \(e_{1}e_{2}>0\) we can write \[2a^{2}+1<af+e_{1}e_{2}<af+3e_{1}e_{2}\;.\] Using (5), we can substitute \(af+e_{1}e_{2}\), hence obtaining \[2a^{2}+1<e_{1}^{2}+e_{2}^{2}+a^{2}+1+2e_{1}e_{2}=(-e_{1}-e_{2})^{2}+a^{2}+1<2a^ {2}+1\] as \(-e_{1}-e_{2}\) is strictly smaller than \(a\). In this way we get \[2a^{2}+1<2a^{2}+1\;,\] a contradiction, proving that \(U\oplus A_{2}\) admits no ample \(2\)-divisors. This concludes the proof. **Remark 4.4**.: It is easy to show that the bound \(h_{3,2}\) can be attained over \(\mathbb{Q}\). Indeed the paper [1] tells us how to find explicit examples of K3 surfaces of degree \(2\) with Picard number equal to \(6\), just by considering a surface as [1, Proposition 4.11] that is generic enough. For example, consider the K3 surface \(X_{2,1}\) given by the double cover of \(\mathbb{P}^{2}\) branched along the curve \[B\colon F_{6}(x_{0},x_{1})+F_{3}(x_{0},x_{1})x_{2}^{3}+bx_{2}^{6}\] with \[F_{6} :=-x_{0}^{6}+2x_{0}^{5}x_{1}-x_{0}^{4}x_{1}^{2}-2x_{0}^{3}x_{1}^{3}-x _{0}^{2}x_{1}^{4}+x_{0}x_{1}^{5}-x_{1}^{6}\;,\] \[F_{3} :=2x_{0}^{2}x_{1}-x_{1}^{3}\;,\] \[b :=2\;.\] From [1, Proposition 4.11] we know that \(\rho(X_{2,1})\geq 6\). By reducing modulo a prime of good reduction, e.g. 11, one can see that \(\rho(X_{2,1})=6\) and hence \(\operatorname{Pic}X_{2,1}\cong U\oplus A_{2}^{\oplus 2}\).
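The elementary claim in the proof of Theorem 4.3 that equation (4) has no solutions in \(\mathbb{Z}/3\mathbb{Z}\) can be verified mechanically. A short, purely illustrative script enumerating all residue pairs:

```python
# Exhaustive check that y1^2 - y1*y2 + y2^2 is never congruent to 2 mod 3,
# i.e. that equation (4) in the proof of Theorem 4.3 has no solutions.
residues = {(y1, y2): (y1 * y1 - y1 * y2 + y2 * y2) % 3
            for y1 in range(3) for y2 in range(3)}
assert 2 not in residues.values()
print(sorted(set(residues.values())))  # -> [0, 1]
```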
2309.11915
Probing a scale dependent gravitational slip with galaxy strong lensing systems
Observations of galaxy-scale strong gravitational lensing systems enable unique tests of departures from general relativity at the kpc-Mpc scale. In this work, the gravitational slip parameter $\gamma_{\rm PN}$, measuring the amplitude of a hypothetical fifth force, is constrained using 130 elliptical galaxy lens systems. We implement a lens model with a power-law total mass density and a deprojected De Vaucouleurs luminosity density, favored over a power-law luminosity density. To break the degeneracy between the lens velocity anisotropy, $\beta$, and the gravitational slip, we introduce a new prior on the velocity anisotropy based on recent dynamical data. For a constant gravitational slip, we find $\gamma_{\rm PN}=0.90^{+0.18}_{-0.14}$ in agreement with general relativity at the 68\% confidence level. Introducing a Compton wavelength $\lambda_g$, effectively screening the fifth force at small and large scales, the best fit is obtained for $\lambda_g \sim 0.2$ Mpc and $\gamma_{\rm PN} = 0.77^{+0.25}_{-0.14}$. A local minimum is found at $\lambda_g \sim 100$ Mpc and $\gamma_{\rm PN}=0.56^{+0.45}_{-0.35}$. We conclude that there is no evidence in the data for a significant departure from general relativity and that using accurate assumptions and having good constraints on the lens galaxy model is key to ensuring reliable constraints on the gravitational slip.
Sacha Guerrini, Edvard Mörtsell
2023-09-21T09:26:56Z
http://arxiv.org/abs/2309.11915v2
# Probing a scale dependent gravitational slip with galaxy strong lensing systems ###### Abstract Observations of galaxy-scale strong gravitational lensing systems enable unique tests of departures from general relativity at the kpc-Mpc scale. In this work, the gravitational slip parameter \(\gamma_{\rm PN}\), measuring the amplitude of a hypothetical fifth force, is constrained using 130 elliptical galaxy lens systems. We implement a lens model with a power-law total mass density and a deprojected De Vaucouleurs luminosity density, favored over a power-law luminosity density. To break the degeneracy between the lens velocity anisotropy, \(\beta\), and the gravitational slip, we introduce a new prior on the velocity anisotropy based on recent dynamical data. For a constant gravitational slip, we find \(\gamma_{\rm PN}=0.90^{+0.18}_{-0.14}\) in agreement with general relativity at the 68% confidence level. Introducing a Compton wavelength \(\lambda_{g}\), effectively screening the fifth force at small and large scales, the best fit is obtained for \(\lambda_{g}\sim 0.2\) Mpc and \(\gamma_{\rm PN}=0.77^{+0.25}_{-0.14}\). A local minimum is found at \(\lambda_{g}\sim 100\) Mpc and \(\gamma_{\rm PN}=0.56^{0.45}_{-0.35}\). We conclude that there is no evidence in the data for a significant departure from general relativity and that using accurate assumptions and having good constraints on the lens galaxy model is key to ensure reliable constraints on the gravitational slip. ## I Introduction Together with quantum field theory, Einstein's theory of general relativity (GR) is a cornerstone of modern physics. Those two theories yield a description of the history of the Universe from a fraction of a second after the Big Bang to today, in what's called the cosmological concordance model \(\Lambda\)CDM [1]. The latter model is not fully understood however. In particular, the accelerated cosmic expansion remains one of the most puzzling questions in cosmology and in physics in general [2]. It may be formally understood as a cosmological constant added to Einstein equations expressing the link between space-time curvature and the stress-energy tensor \(T_{\mu\nu}\). The required cosmological constant is very small and presents a discrepancy of \(\gtrsim 60\) orders of magnitude with theoretical estimates, refered to as the _cosmological constant problem_[3]. Another perspective for understanding cosmic acceleration is to modify Einstein's theory of gravity [4]. So far, GR has been confirmed in all experiments, especially at the Solar System scale [5; 6; 7] but the true gravity theory might deviate from GR at cosmological scales. Therefore, determining whether dark energy or modified gravity (MG) drives cosmic expansion can potentially be adressed with a test of GR at cosmological scales. Many MG theories can be embedded in a phenomenological description [8], allowing for measurements of general departures from GR. The validity of GR can be tested by constraining the gravitational slip parameter \(\gamma_{\rm PN}\)[9], which describes how much space curvature is provided by the unit rest mass of objects. In addition, screening mechanisms appear naturally in many MG theories and restore GR on small and large scales [3]. Several cosmological probes allow tests of GR under screening. Among them, strong gravitational lensing (SGL) occurs due to the curving of space-time induced by mass. 
Strong lensing more precisely refers to the formation of multiple source images by a lens mass located close to the line of sight towards the source. In recent years, great efforts have been put into estimating cosmological parameters [10; 11], measuring the Hubble constant \(H_{0}\)[12; 13], the cosmic curvature [14] and the distribution of matter in massive galaxies acting as lenses [15; 16]. Provided reasonable prior assumptions and appropriate descriptions of the internal structure of lensing galaxies, it is possible to constrain the gravitational slip \(\gamma_{\rm PN}\) using SGL [17; 18; 19; 20]. Recent publications introduced a phenomenological screening model as a step discontinuity in \(\gamma_{\rm PN}\) at a scale \(r_{V}\)[20; 21; 22]. The obtained constraint in Ref. 21 is \(|\gamma_{\rm PN}-1|\leq 0.2\times(r_{V}/100\;{\rm kpc})\) with \(r_{V}=10-200\) kpc using two gravitationally lensed quasars time-delay measurements. Fast radio burst time-delay simulations [22], predict constraints \(|\gamma_{\rm PN}-1|\leq 0.04\times(r_{V}/100\;{\rm kpc})\times[N/10]^{-1/2}\) where \(N\) is the sample size. Ten events alone could place constraints at a level of 10% in the range \(r_{V}=10-300\) kpc. In this work, we take advantage of a recently compiled sample of 130 SGL systems [16] to investigate a gravitational slip under screening effects. Here, we assume that only massless photons will be affected by the fifth force i.e. only the longitudinal potential, \(\Psi\), varies. This is a common assumption [16; 20] motivated by the fact that we only probe the difference between massive and massless particles. We introduce a phenomenological description of screening at small and large scales respectively, parameterised by the Vainshtein radius, \(r_{V}\), and the Compton wavelength of the theory, \(\lambda_{g}\). The combination of lensing and stellar kinematics data is used to constrain possible discrepancies in the gravitational effects on massless (photons) and massive (stars, gas,...) particles. We introduce a deprojected De Vaucouleurs luminosity density to be compared with the commonly used power-law luminosity profile. We assess the influence of the lens mass model on our estimation of the gravitational slip and finally study the degeneracy between the gravitational slip and the Compton wavelength of the theory for \(\lambda_{g}=1\mathrm{pc}-100\,\mathrm{Gpc}\). This paper is organised as follows: in Section II, we introduce the model used to evaluate the velocity dispersion of lensing galaxies and our phenomenological screening description. We further introduce our SGL sample, the cosmological model as well as the model parameters for which we perform a Markov Chain Monte Carlo (MCMC) analysis. In Section III, we present and discuss our results. The case without screening is first used to asses the influence of the lens mass model on the fit before studying the degeneracy between the Compton wavelength and the gravitational slip. Conclusions are summarised in Section IV. ## II Methodology ### The Model #### ii.1.1 The general framework The general idea is to measure the mass enclosed inside the Einstein radius of the lens using both massless photons and massive stars as probes of the gravitational potential. Besides the imaging data of the SGL, spectroscopic data of the system is needed to measure the velocity dispersion of the lens galaxy. 
The comparison of the projected gravitational and dynamical masses (\(M_{\mathrm{grav}}\) and \(M_{\mathrm{dyn}}\) respectively) provides a promising test of GR at the galactic scales. From the theory of gravitational lensing, the gravitational mass is \(M_{\mathrm{grav}}=\Sigma_{\mathrm{cr}}\pi R_{E,\mathrm{GR}}^{2}\)[23] in GR where \(R_{E,\mathrm{GR}}=\theta_{E,\mathrm{GR}}D_{l}\) is the Einstein radius wherein \(\theta_{E,\mathrm{GR}}\) is the Einstein angle and \(D_{l}\) the angular distance between the observer and the lens. The critical surface density is defined by, \[\Sigma_{\mathrm{cr}}=\frac{c^{2}}{4\pi G}\frac{D_{s}}{D_{l}D_{ls}}, \tag{1}\] where \(D_{s}\) and \(D_{ls}\) are the angular distances between the observer and the source and between the lens and the source respectively. A mass distribution model of the lens galaxy \([\rho(r),\nu(r),\beta]\) is required to compute the velocity dispersion in the lens galaxy and the dynamical mass \(M_{\mathrm{dyn}}\). \(\rho\) is the total mass density, \(\nu\) the luminosity density of stars and \(\beta\) the anisotropy of the velocity dispersion assumed to be constant in this work. Assuming spherical symmetry, the Jeans equation [24] is given by \[\frac{d}{dr}[\nu(r)\sigma_{r}^{2}]+2\frac{\beta}{r}\nu(r)\sigma_{r}^{2}=-\nu( r)\frac{d\Phi}{dr}, \tag{2}\] where the gravitational potential is given by \[\frac{d\Phi}{dr}=\frac{GM(r)}{r^{2}}, \tag{3}\] where \(M(r)\) denotes the mass enclosed inside a sphere of radius \(r\). After integration, \[\sigma_{r}^{2}(r)=\frac{G\int_{r}^{\infty}dr^{\prime}\nu(r^{\prime})r^{\prime 2 \beta-2}M(r^{\prime})}{r^{2\beta}\nu(r)}. \tag{4}\] In an observational context, we do not measure \(\sigma_{r}^{2}\) but rather the luminosity-weighted average along the line of sight (los) and over the effective spectroscopic aperture \(R_{A}\)[16]. This can be expressed mathematically, \[\sigma_{\parallel}^{2}(\leq R_{A})=\frac{\int_{0}^{R_{A}}dR2\pi R\int_{-\infty }^{\infty}dZ\sigma_{los}^{2}\nu(r)}{\int_{0}^{R_{A}}dR2\pi R\int_{-\infty}^{ \infty}dZ\nu(r)}, \tag{5}\] where \(\sigma_{\mathrm{los}}\) is the velocity dispersion along the line of sight, \[\sigma_{\mathrm{los}}^{2}=(\sigma_{r}\cos\theta)^{2}+(\sigma_{t}\sin\theta)^{ 2}, \tag{6}\] where \(\sigma_{t}\) is the tangential velocity dispersion, \(\sigma_{r}\) the radial velocity dispersion and \(\theta\) the angle between the line of sight and the radial direction. Note that \(\sigma_{r}^{2}\) contains \(M_{\mathrm{dyn}}\) since we use the equality \(M_{\mathrm{grav}}=M_{\mathrm{dyn}}\) to fix the normalisation constant of the density \(\rho\). #### ii.1.2 Lens mass models In this work we use the following lens mass model: \[\left\{\begin{array}{rcl}\rho(r)&=&\rho_{0}\left(\frac{r}{r_{0}}\right)^{- \gamma},\\ \nu(r)&=&\nu_{0}\left(\frac{r}{a}\right)^{-\delta}\exp\left(-\left(\frac{r}{a }\right)^{1/4}\right),\\ \beta&=&1-\frac{\sigma_{r}^{2}}{\sigma_{r}^{2}},\end{array}\right. \tag{7}\] where \(\rho\) follows a commonly used power-law distribution [16; 17; 20] and \(\nu\) a deprojected De Vaucouleurs density profile [25] where \(a=R_{\text{eff}}/b^{4}\) with \(b=7.66925\) and \(\delta=0.8556\). It will be compared to the commonly used power-law \(\nu_{\text{pl}}(r)=\nu_{0}(r/r_{0})^{-\delta}\). The latter is convenient since the velocity dispersion can be expressed analytically [16]. 
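To make the model of Eq. (7) concrete, the sketch below evaluates the radial dispersion integral of Eq. (4) numerically for a power-law total mass and a deprojected De Vaucouleurs luminosity density. It is an illustration only, not the paper's pipeline: normalisations and \(G\) are set to unity (in the analysis they are fixed by equating the dynamical mass to the lensing mass inside the Einstein radius), and the aperture averaging of Eq. (5) is not included.

```python
import numpy as np
from scipy.integrate import quad

DELTA = 0.8556     # slope of the deprojected De Vaucouleurs profile
B_DEV = 7.66925    # b, so that a = R_eff / b^4

def nu(r, a):
    # Deprojected De Vaucouleurs luminosity density of Eq. (7), normalisation 1.
    return (r / a) ** (-DELTA) * np.exp(-((r / a) ** 0.25))

def mass(r, gamma):
    # Enclosed mass for a power-law density, M(r) ∝ r^(3 - gamma), gamma < 3.
    return r ** (3.0 - gamma)

def sigma_r2(r, gamma, beta, a):
    # Eq. (4): radial velocity dispersion from the spherical Jeans equation,
    # with G and all normalisation constants set to 1.
    integrand = lambda rp: nu(rp, a) * rp ** (2.0 * beta - 2.0) * mass(rp, gamma)
    num, _ = quad(integrand, r, np.inf)
    return num / (r ** (2.0 * beta) * nu(r, a))

if __name__ == "__main__":
    a = 1.0 / B_DEV ** 4          # a = R_eff / b^4 with R_eff = 1 (arbitrary units)
    for r in (0.5, 1.0, 2.0):     # radii in units of R_eff
        print(r, sigma_r2(r, gamma=2.0, beta=0.2, a=a))
```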
The case of the De Vaucouleurs deprojected luminosity density requires numerical integration: \[\begin{split}\sigma_{\parallel}^{2}(\leq R_{A})=\frac{2c^{2}}{ \sqrt{\pi}}\frac{D_{s}}{D_{ls}}\theta_{E,\text{GR}}\frac{\Gamma(\gamma/2)}{ \Gamma\left(\frac{\gamma-1}{2}\right)}\left(\frac{R_{\text{eff}}}{R_{E}} \right)^{2-\gamma}\\ \times\frac{1}{b^{4(2-\gamma)}}\frac{A(\gamma,\beta;R_{A},R_{ \text{eff}})}{B(R_{A},R_{\text{eff}})},\end{split} \tag{8}\] where \[\begin{split} A(\gamma,\beta;R_{A},R_{\text{eff}})& =\int_{0}^{\frac{R_{A}}{R_{\text{eff}}}b^{4}}\int_{-\infty}^{ \infty}d\kappa dZ\frac{R}{(R^{2}+Z^{2})^{\beta}}\\ &\times\Gamma\left(4+4(2\beta-\gamma-\delta+1),(R^{2}+Z^{2})^{1 /8}\right)\left(1-\beta\frac{R^{2}}{R^{2}+Z^{2}}\right).\end{split} \tag{9}\] and \[B(R_{A},R_{\text{eff}})=\int_{0}^{\frac{R_{A}}{R_{\text{eff}}}b^{4}}\int_{- \infty}^{\infty}d\kappa dZ\frac{R}{(R^{2}+Z^{2})^{\beta/2}}\exp\left(-(R^{2}+ Z^{2})^{1/8}\right), \tag{10}\] with \(\Gamma(.,.)\) the upper incomplete gamma function: \[\Gamma(s,x)=\int_{x}^{\infty}t^{s-1}e^{-t}dt. \tag{11}\] \(A\) and \(B\) are numerically expensive to compute. As shown in Section II.2 Eq. (22), \(A\) and \(B\) do not depend on \(R_{A}\) and \(R_{\text{eff}}\), as \(B\) is constant and \(A\) can be obtained through interpolation in the \((\gamma,\beta)\)-plane. We use a Gaussian Process with a Matern 5/2 kernel to avoid the untimely call to a numerical integrator. We thus have an expression of the velocity dispersion depending on the Einstein radius of GR. We will mainly focus our interest on the De Vaucouleurs deprojected luminosity profile but will compare its results to those of the power-law model (See eq. (14)). #### ii.1.3 Gravitational slip and screening mechanisms So far, we have not introduced the gravitational slip. This can be done by making the link between the observed Einstein radius, \(\theta_{E,\text{obs}}\), and the one predicted by GR, \(\theta_{E,\text{GR}}\), given the lens mass distribution derived from the observed velocity dispersion. The post-Newtonian variables are applied to quantify the behaviour of gravity and deviations from GR. We express the metric on cosmological scales as [26], \[ds^{2}=a^{2}(\eta)[-(1+2\Phi)d\eta^{2}+(1-2\Psi)d\vec{x}^{2}], \tag{12}\] where \(a^{2}(\eta)\) is the cosmological scale factor, \(\eta\) the conformal time and \(\Phi\) and \(\Psi\) are the Newtonian and longitudinal gravitational potentials. In the weak-field limit, GR predicts \(\Phi=\Psi\) making it possible to constrain possible departures from GR using the gravitational slip parameter \(\gamma_{\text{PN}}=\Phi/\Psi\). MG theories such as \(f(R)\)[27], Brans-Dicke gravity [8] or massive gravity [28; 29] all predict a difference bewteen the two potentials \(\Phi\neq\Psi\). In many MG theories, \(\gamma_{\text{PN}}=1\) is expected at small and/or large scales due to screening effects and a limited range of the additional fifth force. Gravitational screening suppresses the additional gravitational degrees of freedom introduced by MG theories within a certain scale, in massive gravity theory refered to as the Vainshtein radius \(r_{V}\). At large scales, the Compton wavelength of the massive graviton \(\lambda_{g}\) sets the characteristic length of the Yukawa decay. Photons follow null geodesics, \(ds^{2}=0\) and are thus affected by a potential \(\Sigma\equiv\Phi+\Psi\) (12). 
We can model the departure from GR phenomenologically, \[\Sigma=[2+(\gamma_{\text{PN}}-1)\epsilon(r;r_{V},\lambda_{g})]\Phi(r), \tag{13}\] where \(\epsilon\) is a slip profile parameterised by \(r_{V}\) and \(\lambda_{g}\). Note that the functional form of \(\epsilon\) depends on the specific MG theory studied. \(\epsilon=1\) corresponds to a scale-independent deviation from GR [17; 19; 30], discussed in Section III.1. In Refs. [20; 21; 22], a step function corresponding to \(\epsilon(r;r_{V},\lambda_{g})=\Theta(r-r_{V})\) is employed. This description covers a large variety of models. The key feature is the computation of the deflection angle \(\alpha(\theta)\)[29] \[\alpha=\frac{1}{c^{2}}\frac{D_{ls}}{D_{s}}\int_{-\infty}^{\infty}\nabla_{\perp }\Sigma dZ, \tag{14}\] where \(\nabla_{\perp}\) is the gradient perpendicular to the direction of the photon. We distinguish the deflection angle in GR and the additional contribution from the fifth force parameterised by \(\gamma_{\text{PN}}\), \(r_{V}\) and \(\lambda_{g}\): \[\alpha_{\text{GR}}(\theta) =\frac{2}{c^{2}}\frac{D_{ls}}{D_{s}}\int_{-\infty}^{\infty}\frac{ \partial\Phi}{\partial R}dZ, \tag{15}\] \[\Delta\alpha(\theta) =\frac{\gamma_{\text{PN}}-1}{c^{2}}\frac{D_{ls}}{D_{s}}\int_{- \infty}^{\infty}\frac{\partial}{\partial R}(\epsilon(r;r_{V},\lambda_{g}) \Phi(r))dZ. \tag{16}\] The lens equation, with \(\beta\) the source angular position, is given by \[\beta(\theta)=\theta-\alpha_{\text{GR}}(\theta)-\Delta\alpha(\theta). \tag{17}\] Setting \(\beta=0\) draws a map from the observed Einstein radius, \(\theta_{E,\text{obs}}\), and the one predicted by GR, \(\theta_{E,\text{GR}}\). The slip profile considered in this work is \[\epsilon(r;r_{V},\lambda_{g})=\frac{r^{f}}{r_{V}^{f}+r^{f}}e^{-r/\lambda_{g}}, \tag{18}\] where \(f\) is an additional parameter tuning the sharpness of the cutoff at small scales. It models a polynomial screening at small scales and an exponential decay at large scales in keeping with bimetric gravity [29; 31]. For consistency with the latter theory, we fix \(f=3\) throughout this work. In more general terms, we model \(\gamma_{\rm PN}\) as radius dependent. \(\epsilon\) encodes the radial profile of \(\gamma_{\rm PN}\) which goes to 1 for \(r\ll r_{V}\) and \(r\gg\lambda_{g}\) and the remaining degree of freedom is the maximum deviation of the gravitational slip from unity. The deflection angle associated with this screening function is \[\alpha_{\rm GR}(\theta)=\theta_{E,\rm GR}^{-1}\theta^{2-\gamma}. \tag{19}\] \[\Delta\alpha(\theta)=\frac{2\pi\rm{B}-1}{2\gamma\sqrt{\pi}} \frac{\Gamma(\frac{\gamma}{2})}{\Gamma(\frac{\gamma}{2})}(\theta_{E,\rm GR}D _{L})^{\gamma-1}\theta\] \[\times\underbrace{\int_{-\infty}^{\infty}dZe^{-r/\lambda_{g}}r^{ \prime}/\gamma\left(\left[1-\frac{r}{\lambda_{g}(2-\gamma)}\right]\frac{1}{r _{V}^{2}+r^{\prime}}+\frac{f}{2-\gamma}\frac{r_{V}^{\prime}}{(r_{V}^{\prime}+ r^{\prime})^{2}}\right)}_{I(D_{l}\theta;\lambda_{g},r_{V},\gamma,f)}}_{I(D_{l} \theta;\lambda_{g},r_{V},\gamma,f)}.\] giving, using (17): \[\theta_{E,\rm GR}=\left(\theta_{E,\rm obs}^{1-\gamma}+\frac{2\rm{PN}-1}{2 \sqrt{\pi}}\frac{\Gamma(\gamma/2)}{\Gamma((\gamma-1)/2)}D_{l}^{\gamma-1}I(D_{l }\theta_{E,\rm obs}\lambda_{g},r_{V},f,\gamma)\right)^{\frac{1}{1-\gamma}}. \tag{21}\] ### Data sample In this work, we recovered a subsample of the data used in Ref. [16] (See Table 1 in the latter), having the benefit of being a recently compiled dataset for strong lensing. 
It contains 130 galaxy-scale SGL systems selected to approximately comply with the assumption of spherical symmetry via the following criteria: * The lens galaxy should be an Early-Type Galaxy with E/S0 morphologies. * The lens galaxy should not have significant substructure or close massive companion. Among those 130 systems, 57 comes from the SLACS survey [32], 38 from the SLACS extension SLACS for the Masses [33], 21 from the BELLS [34] and 14 from the BELLS for GALaxt-Ly\(\alpha\) EmitteR sYstems (BELLS GALLERY, [35]). This dataset provides the following information of relevance to compute the theoretical velocity dispersion from equations (14) or (8): * \(z_{l}\), the lens redshift. * \(z_{s}\), the source redshift. * \(\theta_{E,\rm obs}\), the observed Einstein angle. * \(\sigma_{\rm ap}\), the velocity dispersion of the lens galaxy in the corresponding spectroscopic aperture. * \(\Delta\sigma_{\rm ap}\), the associated measurement error. * \(\theta_{\rm ap}\), the spectroscopic aperture angular radius. * \(\theta_{\rm eff}\), the half-light angular radius of the lens galaxy. * \(\delta\), power-law index of the luminosity density1. Footnote 1: When required this index has been fitted on the high-resolution HST imaging data for the galaxies in our sample, see [16] for details. To take into account the effect of the aperture size on the measurements of the velocity dispersions \(\sigma_{\rm ap}\), we normalise all velocity dispersion to the typical physical aperture \(\theta_{\rm eff}/2\): \[\sigma_{\parallel}^{\rm obs}=\sigma_{\rm ap}\left(\frac{\theta_{\rm eff}}{2 \theta_{\rm ap}}\right)^{\eta}. \tag{22}\] We adopt the best-fit value of \(\eta=-0.066\pm 0.035\) from Ref. [36]. The total uncertainty of \(\sigma_{\parallel}^{\rm obs}\) can thus be written [19]: \[(\Delta\sigma_{\parallel}^{\rm tot})^{2}=\left[\frac{\Delta\sigma_{\rm ap}^{ 2}}{\sigma_{\rm ap}^{2}}+\Delta\sigma_{\rm sys}^{2}+\left[\ln\left(\frac{ \theta_{\rm eff}}{2\theta_{\rm ap}}\right)\Delta\eta\right]^{2}\right]( \sigma_{\parallel}^{\rm obs})^{2}, \tag{23}\] where we include a systematic error of \(\Delta\sigma_{\rm sys}\), e.g., taking into account possible extra mass contribution from matter along the LOS [37]. Previous work introduced a systematic error of 3%. To assess the uncertainty linked to the mass model, we run an MCMC analysis with \(\gamma_{\rm PN}=1\) and \(\Delta\sigma_{\rm sys}\) as an additional parameter. The fitted value for the systematic error is \(\Delta\sigma_{\rm sys}=9.52\pm 0.01\%\) larger than the one used in previous work. In what follows, the latter value for the systematic error is used. The corresponding theoretical prediction of the velocity dispersion is obtained by evaluating equations (14) and (8) at \(R_{A}=R_{\rm eff}/2\), \[\sigma_{\parallel}^{\rm th}=\sigma_{\parallel}(\leq R_{\rm eff}/2). \tag{24}\] In our analysis, we assume a Gaussian likelihood: \[\mathcal{L}\propto e^{-\chi^{2}/2}, \tag{25}\] where \[\chi^{2}=\sum_{i=1}^{N}\left(\frac{\sigma_{\parallel,i}^{\rm th}-\sigma_{ \parallel,i}^{\rm obs}}{\Delta\sigma_{\parallel,i}^{\rm tot}}\right)^{2}, \tag{26}\] with \(N\) being the number of SGL systems. In the following analysis, we derive the posterior probability distributions of model parameters using an affine-invariant Markov Chain Monte Carlo (MCMC) Ensemble sampler (emcee; [38]). 
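As a concrete illustration of how Eqs. (22), (23), (25) and (26) are assembled for the fit, a minimal sketch follows. The constants are the values quoted in the text; the array names and the wiring of the theoretical dispersion are assumptions for illustration.

```python
import numpy as np

ETA, D_ETA = -0.066, 0.035   # aperture-correction exponent and its error [36]
SIGMA_SYS = 0.0952           # fitted systematic error (9.52 %)

def sigma_obs(sigma_ap, theta_eff, theta_ap):
    # Eq. (22): normalise the aperture velocity dispersion to R_eff / 2.
    return sigma_ap * (theta_eff / (2.0 * theta_ap)) ** ETA

def delta_sigma_tot(sigma_ap, d_sigma_ap, theta_eff, theta_ap):
    # Eq. (23): measurement, systematic and aperture-correction uncertainties.
    rel2 = ((d_sigma_ap / sigma_ap) ** 2 + SIGMA_SYS ** 2
            + (np.log(theta_eff / (2.0 * theta_ap)) * D_ETA) ** 2)
    return np.sqrt(rel2) * sigma_obs(sigma_ap, theta_eff, theta_ap)

def log_likelihood(sigma_th, sigma_obs_arr, d_sigma_tot_arr):
    # Eqs. (25)-(26): Gaussian likelihood over the N = 130 lenses; sigma_th
    # would come from Eq. (8) or (14) given (gamma_PN, gamma, beta).
    chi2 = np.sum(((sigma_th - sigma_obs_arr) / d_sigma_tot_arr) ** 2)
    return -0.5 * chi2

# A log-posterior combining this likelihood with the priors of Sec. II.4 is
# what would be handed to the affine-invariant sampler, e.g.
#   sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=(data,))
```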
### Cosmological model In equations (14) and (8), we use a \(\Lambda\)CDM cosmology such that the angular distance between redshift \(z_{1}\) and \(z_{2}\) is given by \[D(z_{1},z_{2};H_{0},\Omega_{m})=\frac{c}{H_{0}(1+z_{2})}\int_{z_{1}}^{z_{2}} \frac{dz}{E(z;\Omega_{m})}, \tag{27}\] \[E(z;\Omega_{m})=\sqrt{\Omega_{m}(1+z)^{3}+(1-\Omega_{m})}, \tag{28}\] where \(H_{0}=67.37\) km/s/Mpc and \(\Omega_{m}=0.315\)[39]. It is common in the literature to use a cosmology-independent approach to compute the angular distance, usually using Type Ia supernovae data to get luminosity distances up to redshift \(z\simeq 2\) (See Refs. [14; 16; 20]). We chose not to adopt such an approach and argue that the cosmological model has only negligible influence on the results. Also, since we only need ratios of angular distances \(D_{s}/D_{ls}\), the results do not depend on the Hubble constant \(H_{0}\). As evident from Figure 2 in [16], the influence of \(\Omega_{m}\) on the ratio is quite small, at least for lenses at small redshift. Finally, Figure 4 and Table 1 in [19] show that the use of distance calibration yields only minor modifications to the fitted values. The reader should nevertheless keep in mind that using \(\Lambda\)CDM to measure distances and constrain GR should be considered an approximation employed for simplicity, motivated by the fact that a polynomial fit of Type Ia supernova data will only yield small differences in the estimation of angular distances. ### Model parameters and priors We run MCMC chains to fit the gravitational slip, \(\gamma_{\rm PN}\), the mass density slope \(\gamma\), and the velocity anisotropy, \(\beta\). The gravitational slip is our main interest but it requires accurate constrains on the lens mass model [15; 17]. \(\gamma\) corresponds to a common total density slope across our sample. We adopt flat priors for \(\gamma_{\rm PN}\) and \(\gamma\) on sufficiently wide ranges. We cannot independently measure \(\beta\) for individual lensing system with the spectroscopic data available. The latter is thus considered as a nuisance parameter, and therefore needs an informative prior. #### ii.4.1 Prior on the velocity anisotropy A truncated Gaussian prior on the velocity anisotropy \(\beta\) is commonly used with \(\beta=0.18\pm 0.13\) truncated at \([\overline{\beta}-2\sigma_{\beta},\overline{\beta}+2\sigma_{\beta}]\)[16; 19; 20; 30]. This constraint is obtained from a well-studied sample of nearby elliptical galaxies [40]. We assess the influence of the prior on \(\beta\) by introducing a new prior based on the most recent dynamical data of E/S0 galaxies from the combined analysis of the Dynamical and stellar Population (DynPop) for the MaNGA survey in the final SDSS Data Release 17 [41]. It contains dynamical data of \(\sim 10^{4}\) galaxies in the local universe analysed using the axisymmetric Jean Anisotropic Modelling (JAM) method. The latter is based on the Jeans equation with the velocity anisotropy \(\beta\) as a parameter. In line with our spherically symmetric assumption, we consider the models using JAM\({}_{\rm sph}\). We moreover only use the NFW and gNFW mass models since the mass-follows-light and the fixed NFW do not recover the density profiles very well. Finally, to avoid bias, we only select E/S0 galaxies using the method in Ref. [42]. To only select the most reliable data, we further impose \[|\beta_{NFW}-\beta_{gNFW}|<0.05. 
\tag{29}\] The threshold has been chosen to ensure a reasonable trade-off between the amount of data and the quality of the fit of \(\beta\). Our final sample contains 1136 galaxies to which we fit several distributions in order to find the most realistic prior. We finally chose a logistic prior to be compared with the histograms of our data in Figure 1, \[f(x;\mu,s)=\frac{e^{-(x-\mu)/s}}{s(1+e^{-(x-\mu)/s^{2}})}, \tag{30}\] where \(f\) is the logistic's density and \(\mu\) and \(s\) are the location and scale parameters fitted to the histograms. The logistic's wings are wider than the Gaussian's so it will allow the MCMC analysis to allow for larger range of values for the velocity anisotropy. The logistic is truncated at \(3\sigma\) to prevent \(\beta\) from taking nonphysical values, e.g., \(\beta>1\). #### ii.4.2 Grid analysis of screening mechanisms In Section III.2, we will introduce screening mechanisms by performing the fit for various values of the Compton wavelength of the theory, \(\lambda_{g}\). The latter will span order of magnitudes from the pc scale to the Gpc scale. Motivated by bimetric theory, we make the Vainshtein radius \(r_{V}\) dependent of \(\lambda_{g}\) and the mass of the lens galaxy [29], \[r_{V}=(r_{S}\lambda_{g}^{2})^{1/3}, \tag{31}\] where \(r_{S}\) is the Schwarzschild radius of the lens considered given by the mass inside its Einstein radius \(\theta_{E,{\rm obs}}\). The Vainshtein radius is therefore different for each galaxy in our sample. Varying \(\lambda_{g}\) explore regimes where the lenses in our samples are screened or unscreened explaining why we rather perform sampling of the gravitational slip for various \(\lambda_{g}\) rather than including it in our parameters. The former case allows to study the dependency of the constraints of \(\gamma_{\rm PN}\) on \(\lambda_{g}\) whereas the latter would not sample the full range of \(\lambda_{g}\). ## III Results and discussion We first assess the influence of the lens mass model in the case of a scale independent gravitational slip in Section III.1. We then study the constraints on a scale dependent gravitational slip (Section III.2) and discuss the results in Section III.3. ### Constant gravitational slip For \(\epsilon(r;r_{V},\lambda_{g})=1\), the relation between \(\theta_{E,\rm GR}\) and \(\theta_{E,\rm obs}\) is obtained from (21) with \(r_{V}=0\) and \(\lambda_{g}\to\infty\), \[\theta_{E,\rm GR}=\theta_{E,\rm obs}\left(\frac{\gamma_{\rm PN}+1}{2}\right)^{ -\frac{1}{\gamma-1}}. \tag{32}\] We perform the analysis for a power-law luminosity density and a De Vaucouleurs luminosity density (Figure 2). We moreover study the influence of the prior on the velocity anisotropy \(\beta\) by considering three priors: 1. Logistic distribution fitted to MaNGA DynPop dynamical data (See Section II.4 and Figure 1) truncated at \([\mu-3\sigma,\mu+3\sigma]\) with \((\mu,\sigma)=(0.22,0.2)\). 2. Truncated Gaussian with \((\mu,\sigma)=(0.3,0.14)\) between \([\mu-3\sigma,\mu+3\sigma]\). 3. Truncated Gaussian with \((\mu,\sigma)=(0.18,0.13)\) between \([\mu-2\sigma,\mu+2\sigma]\) used in previous work. The results are summarised in Table 1. We use the Akaike Information Criterion (AIC) [43] and the Bayesian Information Criterion (BIC) [44] as statistical criterion for model selection AIC \[= 2k+\chi^{2}_{\rm min},\] (33) BIC \[= k\ln(N)+\chi^{2}_{\rm min},\] (34) where \(k\) is the number of parameters and \(N\) the number of data points. They award models with few parameters giving good fits to the data. 
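Before turning to the results, a short sketch of the two ingredients defined above: the truncated logistic log-prior of Eq. (30) and the mass- and \(\lambda_{g}\)-dependent Vainshtein radius of Eq. (31) used to grid the screening analysis. Physical constants are rounded, the exact truncation convention is left as an argument, and the example lens mass is an assumption for illustration.

```python
import numpy as np

G, C = 6.674e-11, 2.998e8        # SI units
M_SUN, PC = 1.989e30, 3.086e16   # kg, m

def log_logistic_prior(beta, loc, scale, lo, hi):
    # Eq. (30), truncated to [lo, hi]; the renormalisation constant of the
    # truncation is dropped (it is irrelevant for the MCMC).
    if not (lo < beta < hi):
        return -np.inf
    z = (beta - loc) / scale
    return -z - np.log(scale) - 2.0 * np.log1p(np.exp(-z))

def vainshtein_radius_pc(mass_msun, lambda_g_pc):
    # Eq. (31): r_V = (r_S * lambda_g^2)^(1/3), with r_S the Schwarzschild
    # radius of the mass inside the observed Einstein radius.
    r_s_pc = 2.0 * G * mass_msun * M_SUN / C ** 2 / PC
    return (r_s_pc * lambda_g_pc ** 2) ** (1.0 / 3.0)

# Compton wavelengths gridded from pc to 100 Gpc scales (1 Gpc = 1e9 pc).
lambda_grid_pc = np.logspace(0.0, 11.0, 23)
r_v_example = vainshtein_radius_pc(1e11, lambda_grid_pc)  # one 1e11 M_sun lens
```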
Here, models containing additional parameters for either screening or the lens mass are penalized in terms of the IC's, unless they supply significant better fits compared to the baseline model. Only the relative difference in AIC and BIC is relevant to favor a model over another. The best-fit values of a constant \(\gamma_{\rm PN}\) are all consistent with GR at the 68% confidence level. Particularly, in the case of a logistic prior (P1), we find a best-fit value of the gravitational slip of \(\gamma_{\rm PN}=1.14^{+0.22}_{-0.18}\) in the case of power-law luminosity densities and \(\gamma_{\rm PN}=0.90^{+0.18}_{-0.14}\) in the case of deprojected De Vaucouleurs luminosity densities. The gravitational slip and the velocity anisotropy are positively correlated and the prior on \(\beta\) can influence the fitted value of \(\gamma_{\rm PN}\) in the case of a power-law luminosity density (See Figure 2). Our choice of prior based on recent dynamical data [41] is slightly favored upon commonly used Gaussian priors but the IC's is not significantly better. We however underline that the posterior of the velocity anisotropy \(\beta\) is biased towards low values in the case of a logistic prior. The fitted value of the gravitational slip \(\gamma_{\rm PN}\) is therefore prior dependent. The best-fit values of the gravitational slip in the case of a deprojected De Vaucouleurs profile depend less on the prior choice for \(\beta\). We find \(\gamma_{\rm PN}=0.90\), \(0.96\) and \(0.88\) for priors (P1), (P2) and (P3), results agreeing at the 68% confidence level and being consistent with GR. We further note that the De Vaucouleurs luminosity profile improves the AIC with a value of \(146.8\) against \(156.2\) in the power-law case using a logistic prior on \(\beta\). Hereafter, we use the logistic prior on the velocity anisotropy \(\beta\) since it represents well the most recent dynamical data. In the GR case (\(\gamma_{\rm PN}=1\)) with this logistic prior, the fitted lens mass model gives an \(\rm AIC_{GR,DV}=151.1\) and \(\chi^{2}_{GR,DV}=147.1\) for a De Vaucouleurs luminosity profile. The GR case is favored over the constant gravitational slip case, since adding a constant gravitational slip does not give a significantly better representation of the data. The GR-case will serve as our reference model. Figure 1: Distribution of the anisotropy parameter \(\beta\) from MaNGA DynPop modelling [41]. The blue and green histograms correspond to the distribution obtained with an NFW and a gNFW model, respectively. The red solid curve corresponds to the best-fit of the histograms obtained with a logistic distribution (See eq. (30)). In the case of a power-law luminosity density, we get \(\text{AIC}_{\text{GR,PL}}=158.8\) and \(\chi^{2}_{\text{GR,PL}}=154.8\) which performs better than the case with a gravitational slip parameter. We underline that the value of \(\gamma\) is positively correlated with the gravitational slip. Our result \(\gamma\in[1.9,2.1]\) is consistent with previous studies fitting values of E/S0 galaxies density slope close to the Singular Isothermal Sphere (SIS) value of \(\gamma=2\)[45]. ### Gravitational slip under screening We now introduce a scale dependent slip parameterised by the Compton wavelength \(\lambda_{g}\). The Vainshtein radius is computed using equation (31). We fit the gravitational slip and the lens mass parameters for values of the Compton wavelength spanning from pc to Gpc scales. 
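To make the gridding over \(\lambda_{g}\) concrete, the minimal sketch below evaluates the Vainshtein radius of equation (31) on a logarithmic grid of Compton wavelengths; the fiducial lens mass of \(10^{11}\,M_{\odot}\) inside the Einstein radius is an illustrative assumption, not a value taken from the sample.

```python
import numpy as np

# Constants (SI) and an assumed fiducial Einstein-radius mass of 1e11 M_sun.
G, c = 6.674e-11, 2.998e8
M_sun = 1.989e30
Mpc = 3.086e22  # metres

M_lens = 1e11 * M_sun
r_s = 2 * G * M_lens / c**2 / Mpc        # Schwarzschild radius in Mpc

# Logarithmic grid of Compton wavelengths from pc to Gpc scales.
lambda_g = np.logspace(-6, 3, 10)          # Mpc
r_V = (r_s * lambda_g**2) ** (1.0 / 3.0)   # Eq. (31), Mpc

for lg, rv in zip(lambda_g, r_V):
    print(f"lambda_g = {lg:9.2e} Mpc  ->  r_V = {rv:9.2e} Mpc")
```

For this fiducial mass the grid reproduces the scales quoted in the text, e.g. \(r_{V}\sim 1\) kpc for \(\lambda_{g}\sim 0.2\) Mpc.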
Our interest here is how constraints on \(\gamma_{\text{PN}}\) evolve with the Compton wavelength \(\lambda_{g}\). Figure 3 shows the 95% confidence region of \(\gamma_{\text{PN}}\) depending on \(\lambda_{g}\) for a deprojected De Vaucouleurs luminosity density only. As we can see in the bottom panel of figure 3, there are two competing local \(\chi^{2}\)-minima for \(\lambda_{g}\sim 0.2\) Mpc and \(\lambda_{g}\sim 100\) Mpc. Note that the contour plot obtained with the Compton wavelength \(\lambda_{g}\) as a free parameters would look different since the two local minima correspond to slightly different best-fit values for the gravitational slip and the samples are drawn from different region of phase space. Gridding over \(\lambda_{g}\)'s allows for an analysis of the degeneracy between the Compton wavelength and the gravitational slip. The dependence of the gravitational slip on the Compton wavelength allow us to draw qualitative conclusions. We first highlight the inability of our model to constrain \(\gamma_{\text{PN}}\) for \(\lambda_{g}\leq 10^{-4}\) Mpc and \(\lambda_{g}\geq 10^{3}\) Mpc. In the latter case, the Vainshtein radius for a galaxy of mass \begin{table} \begin{tabular}{c c c c c c c} Luminosity density & Prior on \(\beta\) & Parameters & & \(\chi^{2}_{\text{min}}\) & AIC & BIC \\ \hline Power-law & (P1) & \(\gamma_{\text{PN}}=1.14^{+0.22}_{-0.18}\) & \(\gamma=1.99^{+0.04}_{-0.04}\) & \(\beta=-0.02^{+0.16}_{-0.19}\) & 156.2 & 162.2 & 170.8 \\ Power-law & (P2) & \(\gamma_{\text{PN}}=1.31^{+0.20}_{-0.17}\) & \(\gamma=2.00^{+0.04}_{-0.04}\) & \(\beta=0.14^{+0.12}_{-0.12}\) & 158.4 & 164.4 & 173.0 \\ Power-law & (P3) & \(\gamma_{\text{PN}}=1.22^{+0.17}_{-0.13}\) & \(\gamma=2.00^{+0.04}_{-0.04}\) & \(\beta=0.06^{+0.12}_{-0.12}\) & 157.2 & 163.2 & 171.8 \\ De Vaucouleurs & (P1) & \(\gamma_{\text{PN}}=0.90^{+0.14}_{-0.14}\) & \(\gamma=1.92^{+0.04}_{-0.04}\) & \(\beta=0.23^{+0.19}_{-0.19}\) & 146.8 & 152.8 & 161.4 \\ De Vaucouleurs & (P2) & \(\gamma_{\text{PN}}=0.96^{+0.15}_{-0.14}\) & \(\gamma=1.92^{+0.04}_{-0.05}\) & \(\beta=0.31^{+0.14}_{-0.13}\) & 146.8 & 152.8 & 161.4 \\ De Vaucouleurs & (P3) & \(\gamma_{\text{PN}}=0.88^{+0.14}_{-0.13}\) & \(\gamma=1.92^{+0.05}_{-0.05}\) & \(\beta=0.19^{+0.13}_{-0.13}\) & 146.8 & 152.8 & 161.4 \\ \hline Power-law & (P1) & \(\gamma_{\text{PN}}=1\) & \(\gamma=1.97^{+0.03}_{-0.03}\) & \(\beta=-0.15^{+0.11}_{-0.08}\) & 154.8 & 158.8 & 169.4 \\ De Vaucouleurs & (P1) & \(\gamma_{\text{PN}}=1\) & \(\gamma=1.94^{+0.04}_{-0.03}\) & \(\beta=0.30^{+0.11}_{-0.11}\) & 147.1 & 151.1 & 161.7 \\ \end{tabular} \end{table} Table 1: The 1D marginalized limit (68% confidence regions) for model parameters constrained from the truncated sample with 130 SGL systems with various priors on \(\beta\) for two different models of the luminosity density. The bottom panel corresponds to the case of GR where we fit the lens mass model to the data with \(\gamma_{\text{PN}}=1\). Figure 2: 1D and 2D marginalised probability distribution at the \(1\sigma\) and \(2\sigma\) confidence level for the gravitational slip parameter \(\gamma_{\text{PN}}\) and the lens mass model parameters in the case of power-law profile (_left panel_) or a De Vaucouleurs profile (_right panel_) for the luminosity density. The dashed lines represent \(\gamma_{\text{PN}}=1\) predicted by GR and \(\gamma=2\) expected for a Singular Isothermal Sphere. \(M\sim 10^{11}M_{\odot}\) is of the order \(r_{V}\sim 10^{3}-10^{4}\) kpc. 
As a result, lens galaxies in our sample are completely screened from fifth force lensing effects. Analogously, for Compton wavelengths below \(\sim 100\) parsec, Einstein radii \(\sim 10\) kpc correspond to large numbers of e-folds of the fifth force Yukawa decay. In both regimes, we end up fitting models effectively equivalent to the reference GR case. We note some discrepancies from GR when fixing the Compton wavelength to order \(\lambda_{g}\sim 10^{-2}\) Mpc and \(\lambda_{g}\sim 1\) Mpc. However, for these \(\lambda_{g}\) and values between, the obtained constraints on the gravitational slip have no statistically significant departures from GR. Quantitatively, for intermediate Compton wavelength \(\lambda_{g}\), various constraints on the gravitational slip are obtained but computing the \(\chi^{2}\) hints at the most likely configuration. The best fit is obtained for \(\lambda_{g}\sim 0.2\) Mpc i.e \(r_{V}\sim 1\) kpc. The corresponding gravitational slip \(\gamma_{\rm PN}=0.77^{+0.43}_{-0.23}\) at the 95% confidence level with \(\chi^{2}_{\rm min}=134.9\) yielding \({\rm AIC}_{\rm min}=142.9\). Including screening mechanisms provides a better fit to the data but the result is consistent with GR at the 95% confidence level. Note that the bottom panels of Figure 3 shows that \(\lambda_{g}\sim 100\) Mpc presents a local minimum with a \(\chi^{2}=136.5\), slightly larger than for \(\lambda_{g}\sim 0.2\) Mpc (See Table 2). It appears that the AIC is significantly decreased when we take screening effects into account. Screening mechanisms modify the shape of the likelihood used in the GR case adding sharp variations of the \(\chi^{2}\) sensitive to both \(\gamma_{\rm PN}\) and the lens mass model \((\gamma,\beta)\). However, the likelihood only slightly varies in some direction in the \((\gamma_{\rm PN},\gamma)\)-plane up to the GR case where the \(\chi^{2}_{\rm GR}\) amounts to \(\sim\)147 explaining the size of the error bars on the gravitational slip. This phenomenon will be further discussed in section III.3. We finally underline that the tightening of the constraints for \(\lambda_{g}\sim 1\) Mpc and \(\lambda_{g}\sim 10^{-3}\) Mpc correspond to the cases where the Vainshtein radius and the Compton wavelength cross the typical Einstein radii in our samples, respectively. As a result, only part of the systems are screened yielding tighter constraints on the gravitational slip. ### Discussion Our above results present no statistically significant departure from GR, except for possible hints at \(\lambda_{g}\sim 1\) kpc and \(\lambda_{g}\sim 1\) Mpc. However, these Compton wavelengths are not favored in terms of the quality of their \begin{table} \begin{tabular}{c c c c} \(\lambda_{g}\) [Mpc] & Grav. slip \(\gamma_{\rm PN}\) & \(\chi^{2}_{\rm min}\) & \(\Delta{\rm AIC}_{\rm GR}\) \\ \hline 0.2 & \(0.77^{+0.25}_{-0.14}\) & 134.9 & 8.2 \\ 100 & \(0.56^{+0.45}_{-0.35}\) & 136.5 & 6.6 \\ \end{tabular} \end{table} Table 2: The 1D marginalized limit of the gravitational slip constrained from the truncated sample with 130 SGL systems with confidence regions at the 68% confidence level for relevant Compton wavelengths \(\lambda_{g}\). The \(\Delta{\rm AIC}\) is computed between the best-fit values reported and the AIC obtained in GR \(\chi^{2}_{\rm GR}=147.1\) and \({\rm AIC}_{\rm GR}=151.1\). Figure 4: Confidence regions at the 68% and 95% confidence level of the fitted parameters for a Compton wavelength \(\lambda_{g}=100\) Mpc using a De Vaucouleurs luminosity density. 
Figure 3: Fitted values of \(\gamma_{\rm PN}\) for various Compton wavelength \(\lambda_{g}\) using a De Vaucouleurs luminosity profile. The upper panel shows the evolution of the estimated \(\gamma_{\rm PN}\) as well as its confidence interval at the 68% and 95% levels. Shaded areas correspond to regions of phase space ruled out by our constraints at the 95% confidence level. The lower panel shows the corresponding value of the \(\chi^{2}-\chi^{2}_{\rm min}\) for each Compton wavelength. The dashed purple line corresponds to the minimum of the \(\chi^{2}_{\rm min}=134.9\). fits, or \(\chi^{2}_{\rm min}\), meaning that if the Compton wavelength was fitted as a parameter in the MCMC analysis, the obtained contours would not include those values of the Compton wavelength. The obtained results however present important lessons regarding the importance of systematic uncertainties. These systematic uncertainties could be linked to the dependency of the gravitational slip on the lens mass model. In this work, we fit a common total-density slope \(\gamma\) for all the lens galaxies, possibly a too simplified approximation. Figure 4 shows confidence regions for a Compton wavelength \(\lambda_{g}=100\) Mpc, from which it is evident that the gravitational slip \(\gamma_{\rm PN}\) is negatively correlated with the total density slope \(\gamma\). This degeneracy explains the width of error bars even though the likelihood have important variations along \(\gamma_{\rm PN}\). This degeneracy could be broken by having independent constraints on the total-density slope from more detailed lens mass modelling, and shows that the lens mass model is a key feature to obtain good constraints on the gravitational slip. To further assess the influence of the lens mass model on the best-fit value of the gravitational slip, we run an MCMC analysis where the total density slope of each galaxy in our sample is a model parameter for a Compton wavelength \(\lambda_{g}=100\) Mpc. Together with the gravitational slip and the velocity anisotropy, we thus fit 132 parameters where we assume a De Vaucouleurs luminosity density and a logistic prior on \(\beta\). By doing so, we are able to study whether fitting a common matter density slope is a good assumption. Figure 5 presents a scatter plot of the fitted velocity dispersion against the observed one for the model with a common \(\gamma\) and the case of different \(\gamma\)'s for each lens. It first appears that the case where all \(\gamma\)'s are free performs better than the case studied in this work with a \(\chi^{2}_{\rm min}\) of \(\sim\)70 against \(\sim\)135 even though the IC's are worse. Moreover, it measures no departure from GR with a best-fit gravitational slip of \(\gamma_{\rm PN}=0.99^{+0.027}_{-0.033}\). The latter shows that a single total density slope \(\gamma\) poorly takes into account outliers (see black boxes in Figure 5) listed in Table 3. Most of them comes from BELLS survey and Ref. [45] fitted the total density slope for those lenses. We note that for each of those outliers, the power-law index \(\gamma\) is either poorly constrained or deviates significantly from \(\gamma=2\) which is the mean power-law index fitted in Ref. [45]. Correctly constraining the lens mass model is therefore key to find an unbiased estimate of the gravitational slip. Previous work added extra degree of freedoms using dependency on the lens redshift or its surface density [16, 19]. 
Those correlations are however not evident in MaNGA DynPop data [41] and should be used with caution. Fitting the power-law index \(\gamma\) for each lens, convergence of the MCMC analysis is difficult to assess and, even though we were able to reduce the \(\chi^{2}\) with this method, it is likely that the lens mass model has not converged for every lens in our sample2. Further investigations of the lens mass modelling should lead to significant improvement in the measurement of the gravitational slip. We suggest two directions to further investigate gravitational slip constraints. The first being an approach where we ensure a good control of the mass model. To do so, we select a small number of systems for which we have the required photometric and spectroscopic data to constrain the lens mass model individually for each system, e.g, using packages like lenstronomy [46]. We argue that this approach could prevent the presence of outliers in our dataset and yields more reliable constraints on the slip by lifting the degeneracy between the gravitational slip and the total-density slope. Second, it could be worthwhile to keep investigating ways to model as precisely as possible lens galaxies for larger samples of systems. Implementing an NFW total density profile could, for example, improve the modelling of galaxy-scale strong lensing systems. On the other hand, stage IV surveys will likely increase the amount of available strong lensing data by several Figure 5: Scatter plot of the predicted velocity dispersion by the model against the observed velocity dispersion for \(\lambda_{g}=100\) Mpc. The red solid line correspond to the ideal case where the model fits perfectly the observations. The blue dots correspond to a model where we fit a power-law index \(\gamma\) for each lens system in our sample. Green crosses are obtained with a single total density slope \(\gamma\). Black boxes correspond to empirically identified outliers listed in Table 3. They were selected as manifest outliers in both the De Vaucouleurs model and the model where all density slopes are fitted. orders of magnitude, possibly mitigating the effect of outliers and thus potentially overcoming issues related to the lens mass model. We finally underline that our model is good at constraining the gravitational slip with fixed screening scales but yields poor constraints on cosmological parameters such as the curvature, dark densities or matter density. This can be attributed to the poor sensitivity of the angular distances ratio to cosmological densities. Time-delay cosmography measurements could however be of interest to constrain the Hubble constant \(H_{0}\). ## IV Conclusion In this work, we used galaxy-scale strong gravitational lensing to constrain deviations from general relativity at the kpc-Mpc scale. A zoo of modified gravity theories have been developed in the past decades to come up with solutions to one or several drawbacks of the concordance model of cosmology \(\Lambda\)CDM, e.g., to unveil the nature of dark matter and dark energy. We used a phenomenological description of modified gravity theories in the weak-field limit where the gravitational slip parameter \(\gamma_{\rm PN}\) captures the deviation from general relativity. Strong lensing data from early-type galaxies with E/S0 morphologies from SLACS and BELLS samples constrain the gravitational slip by measuring the mass of the lens galaxy with two different messengers. 
On the one hand, using the deflection angle of massless photons in the lens potential and on the other hand by measuring the velocity dispersion of stars and gas in the galactic potential. To do so, a power-law index \(\gamma\) models the total density in the lens galaxy. The luminosity density of stars is modelled with a deprojected De Vaucouleurs profile to be compared with the commonly used power-law luminosity density. A degeneracy exists between the gravitational slip \(\gamma_{\rm PN}\) and the velocity anisotropy \(\beta\); the greater \(\beta\), the greater \(\gamma_{\rm PN}\). The power-law luminosity density model is sensitive to \(\beta\)'s prior and can lead to biased estimates of the \(\gamma_{\rm PN}\) whereas the De Vaucouleurs profile leads to results quite independent of the prior. A logistic prior on \(\beta\) correctly fits recent ETGs data from MaNGA DynPop dynamical modelling. For a constant slip, \(\gamma_{\rm PN}=1.14^{+0.22}_{-0.18}\) at the 68% confidence level for a power-law luminosity density and \(\gamma_{\rm PN}=0.90^{+0.18}_{-0.14}\) for a deprojected De Vaucouleurs profile, consistent with GR. Screening effects are ubiquitous in modified gravity theories and appear in high-density regions where general relativity is tested with great precision e.g in the solar system. Inspired by bimetric massive gravity, we parameterise a scale dependent slip by introducing the Vainshtein radius \(r_{V}\) and the Compton wavelength \(\lambda_{g}\) of the theory which represent characteristic scales for screening at small and large scales respectively. We fit the gravitational slip and the power-law index of the total density for various values of the Compton wavelength \(\lambda_{g}\) from the pc to Gpc scales, making the Vainshtein radii of the lens galaxies depend on their mass and \(\lambda_{g}\). We find no statistically significant deviation from GR. Using a De Vaucouleurs deprojected luminosity density, the best-fit is obtained for \(\lambda_{g}\sim 0.2\:{\rm Mpc}\) with \(\gamma_{\rm PN}=0.77^{0.25}_{-0.14}\) at the 68% confidence level. We also find a local minimum for \(\lambda_{g}\sim 100\:{\rm Mpc}\) with \(\gamma_{\rm PN}=0.56^{0.45}_{-0.35}\). We shed light on the fact that the best-fit obtained for the gravitational slip is correlated with the lens mass model. Having realistic constraints on the lens mass model is a key feature to find good and reliable constraints on the gravitational slip \(\gamma_{\rm PN}\) and, a fortiori, any other cosmological parameter of interest. Further investigations on the influence of the lens mass model on cosmological parameters would be worthwhile. Restraining the dataset to fewer samples with excellent knowledge of the lens mass model should reduce the effects associated to outliers and provide more reliable measurements of the gravitational slip. Constraining the deviation from GR is of rising interest with the cosmological surveys to come, e.g Euclid and LSST. Euclid, for example, is expected to provide millions of photometric and spectroscopic galactic observations leading to a sample of strong lenses several order of magnitudes larger than the one employed in this study. It will thus prove of interest, in the years to come, to apply our model to larger samples to see if such an amount of data is able to smooth out effects attributed to outliers. Moreover, strong lensing is not the only way to probe gravity. 
Fast radio bursts [22] or time-delay cosmography [21] are but examples of useful probes to detect gravitational discrepancies from the current concordance model. Time-delay measurements would be of interest since they allow one to study the existence of degeneracies between the Hubble constant and the gravitational slip. Beyond our use of strong lensing data, our work has investigated ways to constrain the lens mass model on the one hand and to include screening mechanisms on the other. ###### Acknowledgements. We thank Robert R. Caldwell and Kai Shu for helpful discussions as well as Yun Chen for sharing his strong lensing data sample. EM acknowledges support from the Swedish Research Council under Dnr VR 2020-03384. ## Appendix A Empirically identified outliers in our fitted data In Figure 5, we identified persistent outliers between the analysis using a common power-law index \(\gamma\) and individual \(\gamma\)'s for each lens system. Most of those systems were observed in the BELLS survey and studied in Ref. [45]. The density slope \(\gamma\) fitted is either outside the range \(\gamma\in[1.9,2.1]\) usually obtained, or has unusually large error bars \(\Delta\gamma\sim 0.5\). ## Appendix B Analytical expression of the velocity dispersion for a power-law luminosity density The velocity dispersion in the case of a power-law luminosity density is obtained in Ref. [16] using the Jeans equation (2) to obtain the radial velocity dispersion (4). The luminosity-weighted average along the line of sight and over the effective spectroscopic aperture \(R_{A}\) is obtained with equation (5) and yields, for a luminosity density \(\nu_{\rm pl}=\nu_{0}(r/r_{0})^{-\delta}\): \[\begin{split}\bar{\sigma}^{2}_{\rm pl}(\leq R_{A})&=\frac{c^{2}}{2\sqrt{\pi}}\frac{D_{s}}{D_{ls}}\,\theta_{E,\rm GR}\,\frac{3-\delta}{(\xi-2\beta)(3-\xi)}\\ &\times\left[\frac{\Gamma\left(\frac{\xi-1}{2}\right)}{\Gamma\left(\frac{\xi}{2}\right)}-\beta\,\frac{\Gamma\left(\frac{\xi+1}{2}\right)}{\Gamma\left(\frac{\xi+2}{2}\right)}\right]\frac{\Gamma\left(\frac{\gamma}{2}\right)\Gamma\left(\frac{\delta}{2}\right)}{\Gamma\left(\frac{\gamma-1}{2}\right)\Gamma\left(\frac{\delta-1}{2}\right)}\left(\frac{\theta_{A}}{\theta_{E,\rm GR}}\right)^{2-\gamma},\end{split} \tag{20}\] where \(\xi=\gamma+\delta-2\), \(\Gamma\) is the Gamma function and \(\theta_{A}\) is the angular spectroscopic aperture.
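As a numerical check, the sketch below transcribes the expression above directly and evaluates it for illustrative values of the slopes, anisotropy and angles (an Einstein radius of 1 arcsec, an aperture of 1.5 arcsec, \(D_{s}/D_{ls}=2\), \(\gamma=2\), \(\delta=2.4\), \(\beta=0.2\)); none of these numbers come from the lens sample, and the function is a sketch rather than the analysis code used in this work.

```python
import numpy as np
from scipy.special import gamma as G

def sigma2_powerlaw(theta_E_GR, theta_A, Ds_over_Dls, gam, delta, beta,
                    c_kms=2.998e5):
    """Aperture-averaged velocity dispersion (km^2/s^2) for a power-law
    luminosity density nu ~ r^-delta and total density slope gam (angles in rad)."""
    xi = gam + delta - 2.0
    pref = c_kms**2 / (2.0 * np.sqrt(np.pi)) * Ds_over_Dls * theta_E_GR
    aniso = (3.0 - delta) / ((xi - 2.0 * beta) * (3.0 - xi))
    brack = G((xi - 1) / 2) / G(xi / 2) - beta * G((xi + 1) / 2) / G((xi + 2) / 2)
    slopes = G(gam / 2) * G(delta / 2) / (G((gam - 1) / 2) * G((delta - 1) / 2))
    return pref * aniso * brack * slopes * (theta_A / theta_E_GR) ** (2.0 - gam)

# Illustrative values only: 1 arcsec Einstein radius, 1.5 arcsec aperture,
# distance ratio 2, slopes gamma=2, delta=2.4, anisotropy beta=0.2.
arcsec = np.pi / 180.0 / 3600.0
s2 = sigma2_powerlaw(1.0 * arcsec, 1.5 * arcsec, 2.0, 2.0, 2.4, 0.2)
print(f"sigma ~ {np.sqrt(s2):.0f} km/s")   # of order a few hundred km/s
```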
2309.05960
First constraints on Helium $^{+}{\rm He}^3$ evolution in $z=3-4$ using the 8.67GHz hyperfine transition
We present the first constraints on the cross-correlation power spectrum of HeII ($^{+}{\rm He}^3$) signal strength using the redshifted 8.67GHz hyperfine transition between $z=2.9$ and $z=4.1$ and with interferometric data obtained from the public archive of the Australia Telescope Compact Array. 210 hours of observations of the primary calibrator source B1934-638 were extracted from data obtained with the telescope from 2014--2021, and coherently combined in a power spectrum pipeline to measure the HeII power across a range of spatial scales, and at three redshifts that span the period of Helium reionization. Our best limit places the brightness temperature fluctuation to be less than 557$\mu$K on spatial scales of 30 arcmin at $z=2.91$, and less than 755$\mu$K on scales of 30 arcmin at $z=4.14$ (2-sigma noise-limited). We measure a temperature of 489$\mu$K at $z=2.91$. ATCA's few antennas and persistent remaining RFI in the data prevent deeper integrations improving the results. This work is a proof of principle to demonstrate how this type of experiment can be undertaken to reach the 0.01--1$\mu$K level expected for the Helium signal at $z \sim 4$.
Cathryn M. Trott, Randall B. Wayth
2023-09-12T04:54:35Z
http://arxiv.org/abs/2309.05960v2
First constraints on Helium \({}^{+}\)He\({}^{3}\) evolution in \(z=3-4\) using the 8.67GHz hyperfine transition ###### Abstract We present the first constraints on the cross-correlation power spectrum of HeII (\({}^{+}\)He\({}^{3}\)) signal strength using the redshifted 8.67GHz hyperfine transition between \(z=2.9\) and \(z=4.1\) and with interferometric data obtained from the public archive of the Australia Telescope Compact Array. 210 hours of observations of the primary calibrator source B1934-638 were extracted from data obtained with the telescope from 2014-2021, and coherently combined in a power spectrum pipeline to measure the HeII power across a range of spatial scales, and at three redshifts that span the period of Helium reionization. Our best limit places the brightness temperature fluctuation to be less than 557\(\mu\)K on spatial scales of 30 arcmin at \(z=2.91\), and less than 755\(\mu\)K on scales of 30 arcmin at \(z=4.14\) (2-sigma noise-limited). We measure a temperature of 489\(\mu\)K at \(z=2.91\). ATCA's few antennas and persistent remaining RFI in the data prevent deeper integrations improving the results. This work is a proof of principle to demonstrate how this type of experiment can be undertaken to reach the 0.01-1\(\mu\)K level expected for the Helium signal at \(z\sim 4\). Cosmology -- instrumentation: radio 0000-0002-4882-8858]Cathryn M. Trott 0000-0002-4882-7885]Randall B. Wayth ## 1 Introduction The reionization of neutral hydrogen in the first billion years of the Universe (HI\(\rightarrow\)HII) is a major phase transition in the intergalactic medium (IGM), with 75 percent of the IGM composed of hydrogen gas. This key period is being studied extensively through a number of observational tracers because it traces the formation of the first stars and galaxies (likely responsible for the reionization; Furlanetto et al., 2006). One major observational tracer is the hyperfine transition of neutral hydrogen due to the energy-level splitting from the coupling of the spin states of the proton and electron, with a rest frequency of 1420 MHz (21 cm). The brightness temperature of this line encodes key information about the thermal and radiative state of the IGM, thereby indirectly probing the nature of the first stars and galaxies (Furlanetto et al., 2006; Koopmans et al., 2015). The ionisation energy of ground state hydrogen is 13.6eV, an energy available to be ionised by photons emitted in the ultraviolet part of the spectrum. The redshifted 21 cm transition is being pursued by many international experiments (Barry et al., 2019; Garsden et al., 2021; HERA Collaboration et al., 2023; Mertens et al., 2020; Gehlot et al., 2019; Trott et al., 2020). Hydrogen combined at \(z\approx 1100\) when the Universe had expanded and cooled sufficiently to bind the electron to the proton. At higher redshifts, \(z\approx 6000\), Helium recombined (HeIII \(\rightarrow\) HeII), when its inner electron, with a binding energy of 54.4eV, recombined (Switzer and Hirata, 2008). Helium-4 comprises almost 24 percent, by number, of the IGM after recombination, but Helium-3, which has a non-zero magnetic dipole moment that can produce hyperfine splitting, has a substantially smaller abundance (1 part in \(10^{5}\), Kneller and Steigman, 2004). 
The first electron of neutral helium has a similar binding energy as hydrogen and is expected to reionize at similar redshifts to hydrogen (\(z>6\)), however the reionization of singly- to double-ionised helium is expected to occur much later, when AGN can provide sufficient high energy photons to unbind the 54.4eV second electron. This HeII to HeIII process therefore is the second major reionization period of the Universe, liberating further electrons into the IGM. Like hydrogen, the hydrogenic single-electron \({}^{+}\)He\({}^{3}\) ion has a hyperfine splitting of energy states, emitting a 8.67 GHz photon (3.5 cm, rest). There is indirect evidence for Helium reionization being complete by \(z\sim 3\), including optical depth to HeII (Worseck et al., 2011, 2016), increased IGM temperature (Lidz et al., 2010; Makan et al., 2021), and hardness of background radiation from metal ionization potentials (Turner et al., 2016; Morrison et al., 2019). Fast Radio Bursts (FRBs) also have the potential to probe this era because the spectral dispersion of their signals is linearly proportional to the line-of-sight electron component of the IGM (Caleb et al., 2019). Worseck et al. (2011, 2016) have studied the spectra of \(z>3.5\) AGN for evidence of Gunn-Peterson absorption of helium Ly-\(\alpha\) Forest of the redshifted 304 Angstrom line, reporting that the IGM was highly-ionized by \(z=3.4\) due to large transmission regions, and high variance between sightlines. Like the hydrogen Ly-\(\alpha\) Forest, this probe provides detailed information along skewers through the IGM, but is less sensitive to the large-scale signal evolution. Similarly, the large optical depth to the Ly-\(\alpha\) line (and the saturation of the line at low neutral fraction) makes this probe sensitive primarily to the end of reionization. McQuinn and Switzer (2010) considered the HeI Ly-\(\alpha\) forest, with rest wavelength of 584 Angstrom, as a tracer of the reionization of HeII. Unlike the HeII forest, which saturates easily and has a shorter wavelength, this tracer is weakly absorbed, providing the potential for a quantitative study of singly-ionised helium. Radio observation of the reionization of helium is via the radio hyperfine line, line-of-sight absorption to background AGN (Ly-\(\alpha\) Forest - 304 Angstrom line), and indirectly in the IGM electron density (e.g., through observations of the dispersion measure - redshift relation with Fast Radio Bursts and other high-redshift transients). Optical tracers can probe Helium-4 because they rely on electron transitions. To-date, there has been no detection of the Helium-3 hyperfine transition, or attempts to undertake this experiment. Observationally, its small optical depth relative to hydrogen (due primarily to its very small primordial abundance) is balanced somewhat by the lower system temperature and reduced radio foreground at higher frequencies. A number of studies have predicted theoretically (McQuinn and Switzer, 2009; Bagla and Loeb, 2009), or simulated (Khullar et al., 2020), the helium hyperfine signal in \(z=3-4\), predicting temperature fluctuations spanning 0.1-50\(\mu\)K on different scales. Khullar et al. (2020) considered the temperature fluctuations of HeII around QSOs in simulations at higher redshifts, commensurate with hydrogen reionization. Detection of intergalactic \({}^{+}\)He\({}^{3}\) requires a contrast between the resonant signal temperature and the CMB temperature. 
McQuinn and Switzer (2009) argue that both weak Wouthuysen-Field coupling and small collisional coupling in the IGM would make the temperature contrast small and the signal very difficult to measure. Instead, they consider three avenues for using \({}^{+}\)He\({}^{3}\) to understand the evolution of the Universe. Firstly, because absorption of continuum light from a background source by HeII is proportional to the source flux density (and not the CMB), identifying and performing spectral absorption measurements on \(z>3.5\) QSOs would be possible. Secondly, at higher redshift, when HeI is transitioning to HeII, first producing the 8.67GHz hyperfine signal, absorption would probe HeI-HeII reionization. Finally, HeII that is self-shielded in halos with sufficient density for collisional coupling to produce an emission signal could be used to trace the biased signal evolution (similar to post-reionization hydrogen). In this paper, we aim to place constraints on the spherically-averaged (three-dimensional) power spectrum of brightness temperature fluctuations of singly-ionised Helium-3 over \(z=2.9-4.2\). This work acts as a demonstration of how to undertake this experiment with real telescope data. In Section 2 we introduce the methods, including theoretical predictions and observations; in Section 2.3 the power spectrum methodology is introduced; in Section 3 the results are presented, before being discussed in Section 4. ## 2 Methods ### Theoretical predictions and observational approaches The optical depth to \({}^{3}\)He\({}^{+}\) is given by McQuinn and Switzer (2009): \[\tau_{\rm He^{+}}=\frac{c^{3}\hbar A_{10}}{16k_{B}T_{s}\nu_{10}^{2}}\frac{n_{{}^{3}He^{+}}}{(1+z)dv/dy}, \tag{1}\] where \(A_{10}\) is the spontaneous emission rate, \(\nu_{10}\)=8666MHz is the rest frequency, \(n_{{}^{3}He^{+}}\) is the Helium-3 gas density, which is computed as a fractional abundance of the hydrogen density (1.04 \(\times\) 10\({}^{-5}\)), and \(dv/dy\) is the proper velocity per unit conformal distance. Although the helium-to-hydrogen fraction is small, the spontaneous emission rate is 670 times larger, offsetting some of the reduction. The spin temperature, \(T_{s}\), is set by the relative occupancy of the two energy states, and, like hydrogen, is set by the thermal, collisional and radiative properties of the medium. The differential brightness temperature of the emission (temperature relative to the CMB) is then given by (Bagla and Loeb, 2009; Furlanetto et al., 2006): \[\delta T_{b}\simeq 18\mu{\rm K}\ x_{\rm HeII}\left(1-\frac{T_{\rm CMB}}{T_{s}}\right)\left(\frac{1+\delta}{10}\right)\left(\frac{1+z}{9}\right)^{2}\times\left(\frac{n_{{}^{3}{\rm He}}/n_{\rm H}}{1.1\times 10^{-5}}\right)\left[\frac{H(z)/(1+z)}{dv_{\parallel}/dr_{\parallel}}\right]. \tag{2}\] During hydrogen reionization (\(z=6-10\)), brightness temperature fluctuations in Helium-3 are driven by spin temperature coupling to the gas temperature. Bagla and Loeb (2009) suggest that helium fluctuations at the time of hydrogen reionization are driven by density inhomogeneities and coupling to the gas kinetic temperature, which can be \(10^{4}\)K in ionized regions. In the hydrogen post-reionization era, they propose that high-energy photons from AGN can reionize Helium-3, and that brightness temperature fluctuations will be dominated by ionization status and gas kinetic temperature inhomogeneities. Equation 2 shows that a non-zero emission signal relies on \(T_{\rm CMB}\neq T_{s}\).
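For a rough sense of scale, the short sketch below evaluates equation (2) for illustrative fiducial values (\(z=4\), mean density, fully singly-ionised helium, a spin temperature far above the CMB so the contrast term is unity, and a pure Hubble-flow velocity gradient); these choices are assumptions of the sketch, not statements from the text.

```python
# Fiducial evaluation of Eq. (2): delta T_b for singly-ionised Helium-3.
z = 4.0
x_HeII = 1.0               # fully singly-ionised helium
delta = 0.0                # mean density
abundance_ratio = 1.1e-5   # n_3He / n_H
spin_term = 1.0            # (1 - T_CMB/T_s), assuming T_s >> T_CMB
velocity_term = 1.0        # pure Hubble-flow velocity gradient assumed

dTb_uK = (18.0 * x_HeII * spin_term
          * (1.0 + delta) / 10.0
          * ((1.0 + z) / 9.0) ** 2
          * (abundance_ratio / 1.1e-5)
          * velocity_term)

print(f"delta T_b ~ {dTb_uK:.2f} uK")   # ~0.6 uK, matching the prefactor in Eq. (3)
```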
At \(z=3.6\), McQuinn and Switzer (2009) argue that the low collisional coupling in the IGM and weak radiative spin coupling prevents \(T_{s}\) from coupling to the gas kinetic temperature and instead keeps it coupled to the CMB temperature. Observation of emission signal is therefore only feasible for HeII that is self-shielded from reionization in halos, where the density is such for collisional coupling to increase the spin temperature. These overdense regions (\(\Delta_{b}\geq 100,n_{e}\geq 3\times 10^{-6}cm^{-3}\)) may comprise \(X=1-10\%\) of the gas mass density. In IllustrisTNG simulations, Martizzi et al. (2019) found that 1-15% of gas resided at these densities from \(z=4-2\). In this scenario, the helium signal traces the halo distribution with an order-unity bias, \(b\), giving a nominal power of: \[P(k) \simeq \left(18\mu{\rm K}\left(\frac{1+\delta}{10}\right)\left(\frac{1+z }{9}\right)^{2}\right)^{2}X^{2}b^{2}P_{m}(k) \tag{3}\] \[= (0.6\mu{\rm K})^{2}X^{2}b^{2}P_{m}(k)\ h^{-3}{\rm Mpc}^{3}\] where \(P_{m}(k)\) is the matter power spectrum, and we have assumed \(z=4\) and \(\delta=0\). We use the CAMB software1(Lewis and Bridle, 2002; Lewis, 2013), to generate the matter power spectrum between \(z=2.9-4.1\). Footnote 1: [https://camb.info/](https://camb.info/) At \(z\simeq 3-4\), the redshifted hyperfine line is observed at 1700-2200 MHz, in contrast to hydrogen's 21cm line being redshifted to 100-200 MHz. Observationally, the sky temperature at higher frequencies (which dominates the system temperature at low frequencies), is substantially lower, yielding lower radiometric noise (\(T\propto\nu^{-2.6}\)). Other key factors in determining the utility of observations are the field-of-view (\(\propto\lambda\)), the instantaneous bandwidth (for continuum foreground fitting and removal) and spectral resolution (for matching ionization regions), and the amplitude of ionospheric refraction (\(\propto\lambda^{2}\)). Simulations of Compostella et al. (2013) and McQuinn et al. (2009) show that IGM kinetic temperature and ionization fraction fluctuations can be observed on spatial scales of 1-50 cMpc at \(z=3.5\), corresponding to 1-20 arcmin angular scales and 1-30 MHz spectral scales from an observational standpoint. Any experiment to attempt the intergalactic signal would be suited to an array with baselines of \(x=u\lambda=\lambda/\Delta\theta\simeq 30-500\) metres (at \(\lambda=17\) cm), and a dish aperture of \(d=\lambda/\)FOV \(<30\) metres. These parameters are well-matched to the ATCA, and provide the starting point for our analysis. ### Observations The Australia Telescope Compact Array2(ATCA) is a 6-element dish-based interferometer that is mostly operated in an East-West configuration, at the Paul Wild Observatory near Narrabri NSW. Each dish has a diameter of 22 m, and is fed by multiple interchangeable feeds, operating at centimeter and millimetre wavelengths. The CABB backend, installed in 2009, allows for an instantaneous 2048 MHz of bandwidth (Wilson et al., 2011). For our experiment, we limit data to those observed in the 1.1-3.1 GHz CABB band, with 1 MHz spectral resolution over 2048 channels. The ATCA operates in a number of array configurations. For this work, the archive was searched for all 6km, 1.5km and 750m configurations, with array minimum and maximum baselines spanning 30.6 m to 6000 m. The shorter of these baselines, and the spectral resolution and bandwidth of the CABB, are well-suited for an interferometric helium experiment. 
What is less well-suited is the low sensitivity afforded by the small number of antennas. With the 6th antenna (fixed at a minimum baseline of 3 km from the others) effectively unusable for this science, there are only 10 baselines available for integration. This will limit the sensitivity of the experiment compared with another array with better \(uv\)-coverage and sensitivity. Footnote 2: [https://www.narrabri.atnf.csiro.au/](https://www.narrabri.atnf.csiro.au/) Due to the low sensitivity of the ATCA, and the desire to undertake a proof-of-principle experiment, we chose to use public data from the extensive ATCA archive, rather than propose for new multi-year observations. To undertake a coherent experiment (where the noise can be reduced most effectively by observing the same patch of sky for the full observation), a field is required where the ATCA has spent substantial observing time, and preferably with a simple foreground source that can be easily removed. The primary calibrator source, B1934-638, fits both of these criteria: it has been one of the most-observed calibrator sources for data over the CABB lifetime, and it is well modelled by a single power-law spectrum with a flux density of \(\sim\)12 Jy at 1500 MHz. The ATCA archive3 was searched for observations containing source B1934-638, using the CABB backend over 1.1-3.1 GHz with 1 MHz spectral resolution, for all 6km, 1.5km and 750m baseline configurations, and for dates spanning 2014-2022 (public data). These individual observations were inspected manually to extract those with \(>\)30 minutes continuous observation of the source. This provided a set of longer observations to reduce the number of observations to be flagged. The search yielded 410 hours of data. Footnote 3: [https://atoa.atnf.csiro.au/](https://atoa.atnf.csiro.au/) ### Data analysis We used the data reduction software Miriad (Sault et al., 1995) to flag and calibrate data for further analysis. The CABB backend outputs data across two IFs. For the data used in this project, the channels were listed in reverse order (decreasing frequency). After visual inspection of a few datasets using uvspec, it was clear that there were bands of persistent RFI that affect all CABB data. These were flagged across all datasets using uvflag. In addition, the top and bottom of the band were flagged due to large regions of RFI, and the desire to analyse data outside of the local 21cm band (1420 MHz) and below 2.2 GHz. The first 400 and final 827 channels were flagged, leaving 650 channels spanning 1676-2325 MHz (\(z=4.2-2.7\)). Each dataset was then manually inspected using uvspec, with auto- and cross-correlations visualised after averaging each 60 minutes of data. RFI identified by visual inspection was manually flagged using uvflag. In addition, poorly-performing baselines or antennas were also flagged. After the first round of flagging, data were calibrated using mfcal. Because B1934-638 is the primary calibrator for ATCA, no other field data were required. After calibration, datasets were inspected again with uvspec. All data that appeared RFI-free and showed cross-correlations consistent with expectations (flat \(\sim\)12 Jy continuum source) were retained for further analysis. After rejection of poorly-calibrated observations, there were 210 hours of data. The foreground source, B1934-638, is well-fitted by a first-order polynomial. Fitting across the full 650 MHz does not affect the Helium-3 structures of interest to this work, due to the very broad backend.
First-order fits were subtracted from the visibilities using the uvlin procedure. Residual data and weights were written to UVFITS format using the Miriad FITS procedure for the XX and YY polarisations, and all baselines, with a temporal and spectral resolution of 10 seconds and 1 MHz, respectively. ### Power spectrum estimation For hydrogen reionization, there are many power spectrum estimation pipelines available. For the Murchison Widefield Array EoR project, we employ the CHIPS software to estimate the spherically-averaged power spectrum (Trott et al., 2016). CHIPS natively reads the uvfits files produced by Miriad. We adjusted the software to create CHeIIPS, a package that uses the same methodology but with parameters updated to match that for the Helium-3 reionization experiment (e.g., gridding kernel size, uv-plane resolution). Full details of the methodology are available in Trott et al. (2016), and is briefly reproduced here. Channelised calibrated visibilities are cumulatively gridded onto a three-dimensional uv-plane grid \((u,v,\nu)\) as complex doubles, along with a separate weights grid that accumulates the gridding kernel. The kernel would optimally be set to the Fourier Transform of the telescope primary beam; for CHIPS and CHeIIPS, a matched-size 2D Blackman-Harris window is employed due to its better sidelobe suppression performance. In Trott et al. (2016), this choice was shown to not produce signal loss. Data are split into two datasets, interleaved in time, where each consecutive 10 second timestep is assigned alternately to even and odd grids. This allows for the data cross power spectrum to be computed, whereby noise power that would be present due to squaring of a quantity is absent (noise being uncorrelated between timesteps). This choice comes at the expense of slightly higher noise uncertainty. In the first stage of analysis, data were coherently gridded into 20 sets of 10 hours each. This allowed for inspection of individual sets of power spectra to identify any poorly-behaved observations that had passed calibration assessment. After coherent gridding, the data are normalised by the weights to produce the averaged gridded data: \[\tilde{V}(u,v,\nu)=\frac{\sum_{i}V_{i}W_{i}}{\sum_{i}W_{i}}, \tag{4}\] where \(W_{i}\) is the visibility weight multiplied by the gridding kernel. The final frequency transform to Fourier Space \((\nu-\eta)\) is performed for each \(uv\)-cell. For CHIPS, where the residual foregrounds are bright and complex, a Blackman-Harris window function is employed to reduce foreground sidelobe contamination. This is undertaken at the cost of a broader main lobe in \(\eta\) space. For these Helium-3 data, the foreground source has been effectively removed by fitting the first-order polynomial, and no spectral window function is required. After spectral transform via a DFT, the final \((u,v,\eta)\) data and weights cubes are used for power spectrum analysis. Transform across the full 650 MHz is not appropriate, because the signal evolves and is not ergodic. In addition, the many missing spectral channels due to flagging of persistent RFI leave gaps in the data that produce very poor power spectral results. As such, we choose bands of gridded data that contain no missing data, and limit the line-of-sight transform to a maximum of 100 MHz (\(\Delta z\simeq 0.25\)). Eight over-lapping subbands were identified that met these criteria. These are described in Table 3. 
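A minimal sketch of the coherent gridding and normalisation of equation (4), together with the even/odd time interleaving used later for the cross power, is given below; the grid sizes, the `grid_visibility` helper and the omitted gridding loop are placeholders for illustration and do not reproduce the actual CHIPS/CHeIIPS implementation.

```python
import numpy as np

n_u, n_v, n_ch = 64, 64, 60            # placeholder uv-grid and subband size

# Accumulators for even/odd interleaved timesteps (Eq. 4 numerator/denominator).
data = np.zeros((2, n_u, n_v, n_ch), dtype=complex)
weights = np.zeros((2, n_u, n_v, n_ch))

def grid_visibility(t_index, iu, iv, vis, kernel):
    """Accumulate one visibility spectrum (vis, kernel: length-n_ch arrays)
    into the even (t even) or odd (t odd) grid at uv-cell (iu, iv)."""
    g = t_index % 2
    data[g, iu, iv, :] += vis * kernel
    weights[g, iu, iv, :] += kernel

# ... a loop over the calibrated 10-second visibilities would call
#     grid_visibility here ...

# Normalise by the accumulated weights (Eq. 4), avoiding empty cells.
with np.errstate(invalid="ignore", divide="ignore"):
    vbar = np.where(weights > 0, data / weights, 0.0)

# Spectral transform of each uv-cell to (u, v, eta) space.
vbar_eta = np.fft.fft(vbar, axis=-1)
```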
Cylindrically- and spherically-averaged power spectra are produced by squaring the gridded, transformed visibilities and incoherently averaging them using the gridded weight data. The cylindrical transform plots angular (\(k_{\perp}=\sqrt{k_{x}^{2}+k_{y}^{2}}\), \(k_{x}\propto u\)) and line-of-sight (\(k_{\parallel}\propto\eta\)) modes separately: \[P(k_{\perp},k_{\parallel})=\frac{\sum_{i\in k}V_{i}^{*}V_{i}W_{i}^{2}}{\sum_{i\in k}W_{i}^{2}}\ \mathrm{Jy^{2}Hz^{2}}, \tag{5}\] where \(W_{i}\) is the summed weights over all frequency channels. The power is transformed from observational units of Jy\({}^{2}\) Hz\({}^{2}\) to cosmological units of mK\({}^{2}\) h\({}^{-3}\) Mpc\({}^{3}\) using standard transforms and the transverse comoving distance at the sub-band centres. Finally, the dimensionless power spectrum is computed as: \[\Delta^{2}(k)=\frac{k^{3}P(k)}{2\pi^{2}}\ \mu\mathrm{K}^{2}, \tag{6}\] where a final conversion from mK to \(\mu\)K is performed. In one dimension, we display the square-root of this quantity, equivalent to the spectrum of temperature fluctuations. ## 3 Results The 2D and 1D power spectra were inspected for each subset of 10 hours of observations, for each sub-band described in Table 3. Most subsets displayed reasonable results, where the power was consistent with thermal noise across most wavemodes. One subset showed clear systematic contamination and was removed from the final coherent average. The remaining subsets were coherently averaged to \(\simeq\)190 hours of data, except for the \(z=4.14\) subband, where only 120 hours of data showed good behaviour. The eight individual subbands showed different systematic behaviour. Three that spanned the full band displayed the cleanest data, with most modes consistent with thermal noise. These subbands are italicised in the table, and correspond to power at \(z=2.91,3.39,4.14\). These were processed to the final power spectra. Figure 1 displays a 2D power spectrum at \(z=3.39\) in units of mK\({}^{2}\) h\({}^{-3}\) Mpc\({}^{3}\). The DC (\(k_{\parallel}=0\)) mode has been omitted. The data show positive and negative modes, consistent with thermal noise when cross power spectra are considered. The positive modes at high \(k_{\perp}\), low \(k_{\parallel}\) show residual foreground power that has not been removed by the linear fitting to the visibilities. The three subbands of data are then spherically-averaged to 1D, for each polarization, and square-rooted to produce the spectrum of temperature fluctuations shown in Figure 2. In each, the blue shaded region denotes the \(2\sigma\) thermal noise with noise-dominated regions showing shading to the bottom of the plot. The red points and blue line denote the measured power, while the green line denotes the estimated \(2\sigma\) noise level. \begin{table} \begin{tabular}{|c||c|c|} \hline \(\nu_{\mathrm{low}}\) (MHz) & \(z_{\mathrm{cent}}\) & \(N_{\mathrm{ch}}\) \\ \hline \hline 1676 & 4.17 & 60 \\ 1686 & 4.14 & 80 \\ 1781 & 3.86 & 54 \\ 1902 & 3.55 & 96 \\ 1933 & 3.48 & 96 \\ 1974 & 3.39 & 54 \\ 2110 & 3.10 & 42 \\ 2216 & 2.91 & 60 \\ \hline \end{tabular} \end{table} Table 1: Starting frequencies, central redshift and number of spectral channels for each subband selected from the data. Italicised subbands were used for the final analysis. Figure 1: Cylindrically-averaged power spectrum at \(z=3.4\) over 54 MHz of bandwidth, and the YY polarisation. Missing data denote negative cross-power, highlighting noise-like regions. This figure has units of mK\({}^{2}\)\(h^{-3}\) Mpc\({}^{3}\).
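Following the estimator definitions in equations (5) and (6), the sketch below forms the even/odd cross power and collapses it directly into one-dimensional \(k\) bins; the conversion factor from Jy\({}^{2}\)Hz\({}^{2}\) to cosmological units is left as a placeholder argument since it depends on the sub-band centre, and this is an illustrative simplification rather than the CHeIIPS code.

```python
import numpy as np

def cross_power_1d(vbar_eta, weights, kperp, kpar, k_edges, conv=1.0):
    """Spherically averaged even/odd cross power, binned in k = sqrt(kperp^2 + kpar^2).
    vbar_eta, weights: (2, n_u, n_v, n_eta) even/odd gridded cubes and weights;
    kperp: (n_u, n_v); kpar: (n_eta,); conv is the (placeholder) unit conversion."""
    cross = np.real(np.conj(vbar_eta[0]) * vbar_eta[1])     # noise-free cross power
    w2 = weights[0] * weights[1]
    kmag = np.sqrt(kperp[:, :, None] ** 2 + kpar[None, None, :] ** 2)

    kc, pk = [], []
    for lo, hi in zip(k_edges[:-1], k_edges[1:]):
        sel = (kmag >= lo) & (kmag < hi) & (w2 > 0)
        if sel.any():
            pk.append(conv * np.sum(cross[sel] * w2[sel]) / np.sum(w2[sel]))
            kc.append(0.5 * (lo + hi))
    kc, pk = np.array(kc), np.array(pk)

    # Eq. (6): dimensionless power, then square-root for the temperature fluctuation.
    delta2 = kc ** 3 * pk / (2 * np.pi ** 2)
    return kc, pk, np.sqrt(np.abs(delta2))
```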
Figure 2: Spherically-averaged temperature fluctuation (\(\mu\)K) as a function of wavenumber for \(z=4.1,3.4,2.9\) (top-to-bottom) across 80 MHz of bandwidth and XX (left) and YY (right) polarisations. The blue shaded region denotes the \(2\sigma\) thermal noise with noise-dominated regions showing shading to the bottom of the plot. The red points and blue line denote the measured power, while the green line denotes the estimated \(2\sigma\) noise level. Many modes are consistent with thermal noise. The best measured temperature is at \(z=2.91\) with \(\Delta(k)\) = \(489\mu\)K, and a \(2\sigma\) upper limit of \(557\mu\)K. Table 2 displays the measured and upper limit temperature fluctuations for the three subbands and their angular scale. For all redshifts, the YY power spectrum yielded cleaner results, and these are reported here. The XX polarisation results are only marginally poorer, and this likely points to localised RFI that affects one polarisation more than the other. In each case, the angular mode corresponds to the shortest baseline available to the ATCA (30.6 m), which is typically occupied by a single baseline, and therefore has poor sensitivity. For comparison, Figure 3 shows the expected temperature fluctuation of the shielded model from Equation 3, for \(z=2.9,4.1\) and assuming that 10% of gas is collisionally-coupled. The broad level is \(\sim 0.01-0.1\mu\)K, which is 3-4 orders of magnitude below the measurements, demonstrating the difficulty of the experiment. ## 4 Discussion This work is an attempt to place upper limits on the power spectrum of temperature fluctuations for Helium-3, and provides an approach for how to undertake such an experiment. Despite not having the sensitivity to theoretically detect the signal, our data exhibit two important characteristics whose absence currently impedes interferometric 21 cm experiments pursuing hydrogen reionization: (1) a single clean and simple foreground source that can be removed effectively, due to the wide instantaneous bandwidth; (2) sufficiently-low residual RFI that the data are mostly noise-limited at 200 hours. The accuracy of the subtraction of B1934-638 is sufficient at the 200 hour level, but there are indications that both a first- and second-order polynomial fit leave residuals that are not consistent with zero-mean noise in some parts of the band. The residual signal in a 1 MHz channel should not exceed 1 \(\mu\)Jy, according to McQuinn and Switzer (2009), requiring a dynamic range of \(10^{7}\) for a 12 Jy source. If the source can be fitted by a low-order polynomial, then the full band can be used for foreground estimation and subtraction, providing sufficient SNR. E.g., across 650 channels and fitting for a single parameter, 200 hours with all 15 ATCA baselines yields a dynamic range of \(2\times 10^{6}\). Increasing the order of the polynomial fit increases the chances of signal loss, by introducing spectral structure that has a non-zero projection onto the spectral Helium-3 structure (i.e., complex spectral structure may be due to Helium-3, and not the foreground source), whereas a first-order polynomial over 650 MHz will not impact the underlying Helium-3 structure (any global Helium-3 signal evolution has already been removed by omitting the auto-correlations). Residual RFI and spectral structure do affect much of the band, which is why only three out of the eight bands were used for the final analysis.
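The dynamic-range argument above can be reproduced, to order of magnitude, with a simple radiometer estimate; the per-antenna SEFD of roughly 350 Jy assumed below is an illustrative value for ATCA in this band and is not taken from the text.

```python
import numpy as np

sefd = 350.0            # assumed ATCA per-antenna SEFD in Jy (illustrative)
bw = 1.0e6              # channel width in Hz
t_int = 200 * 3600.0    # 200 hours in seconds
n_bl = 15               # ATCA baselines (6 antennas)
n_ch = 650              # channels used in the first-order continuum fit

# Radiometer noise on a single 1 MHz channel after combining all baselines.
sigma_chan = sefd / np.sqrt(2 * bw * t_int * n_bl)

# Noise on a single fitted continuum parameter, averaging over all channels.
sigma_fit = sigma_chan / np.sqrt(n_ch)

source_jy = 12.0
print(f"per-channel noise   ~ {sigma_chan * 1e6:.1f} uJy")
print(f"continuum-fit noise ~ {sigma_fit * 1e6:.2f} uJy")
print(f"dynamic range       ~ {source_jy / sigma_fit:.1e}")   # same order as 2e6
```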
Figure 4 displays YY power spectra for a poorly-performing subband of 96 channels near \(z=3.6\). For these data, there is residual spectral structure across the subband visible in the gridded visibility spectra, leading to increased leakage from foregrounds modes. It may be noted that the auto-correlation data may be used to study the global temperature evolution of the Helium-3 signal. With a single power-law continuum source as the only bright foreground in the field, a broad non-power law evolution of the spectrum may indicate the global reionization of Helium (similar to that undertaken with the EDGES experiment for hydrogen; Bowman et al., 2018). However, inspection of the auto-correlations from ATCA show high-amplitude oscilla \begin{table} \begin{tabular}{|c||c|c|c|c|} \hline \(z_{\rm cent}\) & Meas. \(\Delta(k)\)\(\mu\)K & \(2\sigma\) limit & k (h\({}^{-1}\) Mpc) & \(\Delta\theta\) (’) \\ \hline 4.14 & 534 & 757 & 0.15 & 29 \\ 3.39 & 689 & 760 & 0.16 & 29 \\ 2.91 & 489 & 557 & 0.17 & 29 \\ \hline \end{tabular} \end{table} Table 2: Measured temperature and \(2\sigma\) upper limits for each of the three subbands and the YY polarization, and the angular mode of the lowest temperature. Figure 3: The expected temperature fluctuation of the shielded model from Equation 3, for \(z=2.9,4.1\) and assuming that 10% of gas is collisionally-coupled. tions, consistent with reflections off structures within the dishes. The systematics in these data were not deemed able to be modelled sufficiently to undertake this experiment. Similarly, one may perform a Helium-3 absorption study with the cross-correlation spectra, _if_ B1934-638 were a background, high-redshift source (instead, it is \(z=0.12\)). McQuinn and Switzer (2009) predicted that absorption against a background continuum source provided the highest detectability, because the absorption flux density is dictated by the strength of the source and the optical depth. Given the lack of expected emission signal from the IGM, and the low signal expectation from the self-shielded gas in halos, the absorption study may be the best observational approach to tackle. ATCA is well-suited to undertake this experiment, due to its wide CABB backend and small field-of-view, but is ultimately hampered by its few baselines and lack of sensitivity. Prospects for detection with other current and future telescopes is dependent on: (1) overall sensitivity; (2) field-of-view (too small and the results are sample variance-limited, and too large and the foregrounds become complicated); (3) low-RFI environment to retain spectral smoothness and afford bands without flagged channels; (4) short baselines to access large-scale structures; (5) a broad, instantaneous bandwidth to measure the full signal evolution and for accurate foreground removal. Few telescopes have all of these properties; in particular the wide-band backend system. SKA-Mid is an obvious instrument for this experiment. Band 3, which is not proposed to be available initially4, spans 1400 MHz instantaneously across 1650-3050 MHz. Bands 1 and 2 would probe Helium at higher redshift. Using the SKA-Mid baseline distribution (Braun et al., 2019), there will be \(\simeq\)90 baselines with physical lengths shorter than 50 m. With this configuration, a 100-hour, 100 MHz bandwidth experiment should yield noise levels of 1\(\mu\)K at \(k=0.14h\)Mpc\({}^{-1}\), in the absence of RFI and other spectral systematics. 
The precursor MeerKAT array has \(\simeq\)10 baselines shorter than 50 m (minimum 29 m), and with an existing wide-band S-band receiver, should reach 1\(\simeq\mu\)K noise level after 1000 hours. Other facilities with good \(uv\)-coverage have narrower fractional bandwidth at these frequencies, which will hinder their ability to perform accurate foreground subtraction without the potential for signal loss (e.g., VLA). Higher-redshift \({}^{+}\)He\({}^{3}\), as explored by Khullar et al. (2020), is accessible at lower frequencies, but there are few radio interferometers with feeds that access 800-1200 MHz. For gas around \(z=5.1\) (1420 MHz), the signal will be confused with local neutral hydrogen, making its observation almost impossible. Footnote 4: [http://www.skao.int](http://www.skao.int) ## 5 Conclusions We attempted a measurement of the spherically-averaged power spectrum of \({}^{+}\)He\({}^{3}\) (single-ionised Helium-3) using the 3.5 cm (8.67 GHz) hyperfine line, at \(z=2.9-4.1\) using 190 hours of publicly-available data from the ATCA archive. After RFI flagging, identification of spectral bands with no missing data, and a simple foreground subtraction, power spectra were found to be noise-limited across many angular wavemodes. The upper limits on the power are not cosmologically-interesting, being 3-4 orders of magnitude larger in temperature than theoretical expectations, but is the first attempt to measure this signal. This work demonstrates the potential for this experiment to yield improved results, which would be cosmologically-relevant, with a telescope with higher sensitivity. Figure 4: Spherically-averaged temperature fluctuation (\(\mu\)K) as a function of wavenumber for \(z=3.6\) across 96 MHz of bandwidth and the YY polarisation. These data are affected by residual spectral structure from RFI or the foreground fitting leading to poor performance at low \(k\). The authors would like to specifically thank the referee for their detailed comments and correction of our understanding of the theory. This clarification has made a significant improvement to the manuscript. The authors would also like to thank Jishnu Thekkeppattu and Jamie Stevens for helpful discussions. This research was partly supported by the Australian Research Council Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO 3D), through project number CE170100013. CMT was supported by an ARC Future Fellowship under grant FT180100321. The International Centre for Radio Astronomy Research (ICRAR) is a Joint Venture of Curtin University and The University of Western Australia, funded by the Western Australian State government. Miriad[https://www.atnf.csiro.au/computing/software/](https://www.atnf.csiro.au/computing/software/)
2310.20473
Improved Roundtrip Spanners, Emulators, and Directed Girth Approximation
Roundtrip spanners are the analog of spanners in directed graphs, where the roundtrip metric is used as a notion of distance. Recent works have shown existential results of roundtrip spanners nearly matching the undirected case, but the time complexity for constructing roundtrip spanners is still widely open. This paper focuses on developing fast algorithms for roundtrip spanners and related problems. For any $n$-vertex directed graph $G$ with $m$ edges (with non-negative edge weights), our results are as follows: - 3-roundtrip spanner faster than APSP: We give an $\tilde{O}(m\sqrt{n})$-time algorithm that constructs a roundtrip spanner of stretch $3$ and optimal size $O(n^{3/2})$. Previous constructions of roundtrip spanners of the same size either required $\Omega(nm)$ time [Roditty, Thorup, Zwick SODA'02; Cen, Duan, Gu ICALP'20], or had worse stretch $4$ [Chechik and Lifshitz SODA'21]. - Optimal roundtrip emulator in dense graphs: For integer $k\ge 3$, we give an $O(kn^2\log n)$-time algorithm that constructs a roundtrip \emph{emulator} of stretch $(2k-1)$ and size $O(kn^{1+1/k})$, which is optimal for constant $k$ under Erd\H{o}s' girth conjecture. Previous work of [Thorup and Zwick STOC'01] implied a roundtrip emulator of the same size and stretch, but it required $\Omega(nm)$ construction time. Our improved running time is near-optimal for dense graphs. - Faster girth approximation in sparse graphs: We give an $\tilde{O}(mn^{1/3})$-time algorithm that $4$-approximates the girth of a directed graph. This can be compared with the previous $2$-approximation algorithm in $\tilde{O}(n^2, m\sqrt{n})$ time by [Chechik and Lifshitz SODA'21]. In sparse graphs, our algorithm achieves better running time at the cost of a larger approximation ratio.
Alina Harbuzova, Ce Jin, Virginia Vassilevska Williams, Zixuan Xu
2023-10-31T14:07:36Z
http://arxiv.org/abs/2310.20473v1
# Improved Roundtrip Spanners, Emulators, and Directed Girth Approximation ###### Abstract Roundtrip spanners are the analog of spanners in directed graphs, where the roundtrip metric is used as a notion of distance. Recent works have shown existential results of roundtrip spanners nearly matching the undirected case, but the time complexity for constructing roundtrip spanners is still widely open. This paper focuses on developing fast algorithms for roundtrip spanners and related problems. For any \(n\)-vertex directed graph \(G\) with \(m\) edges (with non-negative edge weights), our results are as follows: * **3-roundtrip spanner faster than APSP:** We give an \(\tilde{O}(m\sqrt{n})\)-time algorithm that constructs a roundtrip spanner of stretch 3 and optimal size \(O(n^{3/2})\). Previous constructions of roundtrip spanners of the same size either required \(\Omega(nm)\) time [Roditty, Thorup, Zwick SODA'02; Cen, Duan, Gu ICALP'20], or had worse stretch 4 [Chechik and Lifshitz SODA'21]. * **Optimal roundtrip emulator in dense graphs:** For integer \(k\geq 3\), we give an \(O(kn^{2}\log n)\)-time algorithm that constructs a roundtrip _emulator_ of stretch \((2k-1)\) and size \(O(kn^{1+1/k})\), which is optimal for constant \(k\) under Erdos' girth conjecture. Previous work of [Thorup and Zwick STOC'01] implied a roundtrip emulator of the same size and stretch, but it required \(\Omega(nm)\) construction time. Our improved running time is near-optimal for dense graphs. * **Faster girth approximation in sparse graphs:** We give an \(\tilde{O}(mn^{1/3})\)-time algorithm that 4-approximates the girth of a directed graph. This can be compared with the previous 2-approximation algorithm in \(\tilde{O}(n^{2},m\sqrt{n})\) time by [Chechik and Lifshitz SODA'21]. In sparse graphs, our algorithm achieves better running time at the cost of a larger approximation ratio. Introduction A \(t\)-spanner of a graph is a subgraph that approximates all pairwise distances within a factor of \(t\). Spanners are useful in many applications since they can be significantly sparser than the graphs they represent, yet are still a good representation of the shortest paths metric. As many algorithms are much faster on sparse graphs, running such algorithms on a spanner rather than the graph itself can be significantly more efficient, with only a slight loss in approximation quality. For undirected graphs, the spanner question is very well understood. It is known that for all integers \(k\geq 2\), every \(n\)-vertex undirected (weighted) graph contains a \((2k-1)\)-spanner on \(O(n^{1+1/k})\) edges [1] and this is optimal under Erdos' girth conjecture [13]. For directed graphs, however, there can be no non-trivial spanners under the usual shortest paths metric: consider for instance a complete bipartite graph, with edges directed from one partition to the other. Omitting a single edge \((u,v)\) would cause the distance \(d(u,v)\) to go from \(1\) to \(\infty\). Nevertheless, one can define a notion of a spanner in directed graphs based on the _roundtrip_ metric defined by Cowen and Wagner [14]: \(d(u\leftrightarrows v)=d(u,v)+d(v,u)\). A roundtrip \(t\)-spanner of a directed graph is a subgraph that preserves all pairwise roundtrip distances within a factor of \(t\). 
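As a toy illustration of the roundtrip metric (our own example, not taken from the paper): in a directed triangle \(u\to v\to w\to u\) with unit edge weights, the one-way distances are asymmetric, \(d(u,v)=1\) while \(d(v,u)=2\), but the roundtrip distance is symmetric by construction,
\[d(u\leftrightarrows v)=d(u,v)+d(v,u)=1+2=3=d(v\leftrightarrows u),\]
and it also satisfies the triangle inequality, so roundtrip distances form a genuine metric on which spanners can be meaningfully defined.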
Cen, Duan and Gu [11] showed that basically the same existential results are possible for roundtrip spanners as in undirected graphs: for every integer \(k\geq 2\) every \(n\)-vertex directed graph contains a \((2k-1)\)-roundtrip spanner on \(O(kn^{1+1/k}\log n)\) edges. For the special case of \(k=2\), it was known earlier that every \(n\)-vertex graph contains a \(3\)-roundtrip spanner on \(O(n\sqrt{n})\) edges [10]. The known results on algorithms for constructing spanners and roundtrip spanners differ drastically however. Baswana and Sen [1] presented a randomized linear time algorithm for computing an \(O(kn^{1+1/k})\)-edge \((2k-1)\)-spanner of any \(n\)-vertex weighted graph (which was later derandomized [10]). Meanwhile, the algorithms for constructing roundtrip spanners are much slower. The first construction of roundtrip spanners was given by Roditty, Thorup and Zwick in [10], where they gave the construction of \((2k+\varepsilon)\)-roundtrip spanners on \(\tilde{O}((k^{2}/\varepsilon)n^{1+1/k})\) edges for any graph with edge weights bounded by \(\operatorname{poly}n\) (\(\log nW\) dependence in the size otherwise) in \(O(mn)\) time. Later, Zhu and Lam [11] derandomized this construction and improved the sparsity of the spanner to contain \(\tilde{O}((k/\varepsilon)n^{1+1/k})\) edges. Most recently, Chechik and Lifshitz constructed a \(4\)-roundtrip spanner on \(O(n^{3/2})\) edges in \(\tilde{O}(n^{2})\) time. All currently known results on constructions of roundtrip spanners are summarized in Table 1. Notice that for all cases with running time faster than \(mn\), the stretch is suboptimal for the used sparsity. This motivates the following: _Question: What is the best construction time for roundtrip spanners of optimal stretch-sparsity tradeoff?_ Alongside the construction of roundtrip spanners, another closely related problem is approximating the girth (i.e. the length of the shortest cycle) in directed graphs. The first nontrivial algorithm is by Pachocki, Roditty, Sidford, Tov, Vassilevska Williams [10], who gave an \(O(k\log n)\) approximation algorithm running in \(\tilde{O}(mn^{1/k})\) time. Further improvements by [12, 14] followed. Most recently, Chechik and Lifshitz [12] obtained a \(2\)-approximation in \(\tilde{O}(\min\{n^{2},m\sqrt{n}\})\) time, which is optimal for dense graphs. The current known results are summarized in Table 2. While the \(2\)-approximation result is optimal for dense graphs, and while a \(2-\varepsilon\)-approximation is (conditionally) impossible in \(O((mn)^{1-\delta})\) time [14], it is unclear what other approximations (2.5\(\farcm\)3?) are possible with faster algorithms. This motivates the following question: _Question: what is the best running time-approximation tradeoff for the girth of directed graphs?_ ### Our Results Throughout this paper, we consider directed graphs on \(n\) vertices and \(m\) edges with non-negative edge weights. We use \(\tilde{O}(\cdot)\) to hide \(\operatorname{poly}\log(n)\) factors. All our algorithms are Las Vegas randomized. **Theorem 1.1**: _There is a randomized algorithm that computes a 3-roundtrip spanner of \(O(n^{3/2})\) size in \(\tilde{O}(m\sqrt{n})\) time._ This can be compared with the 4-roundtrip spanner of \(O(n^{3/2})\) size constructable in \(O(n^{2}\log n)\) time from [1]. 
Alongside spanners, another important object of study are _emulators_: sparse graphs that approximate all pairwise distances; the difference here is that emulators are not required to be subgraphs, and can be weighted even if the original graph was unweighted. Similar to roundtrip spanners being analogs of spanners in directed graphs, we consider roundtrip emulators which are the analogs of emulators in directed graphs. While emulators are very well studied in undirected graphs [1, 2, 1, 1, 2, 3, 4, 5, 6, 7, 8], the authors are not aware of any results, for the roundtrip metric. The only known construction of roundtrip emulators is implied from using the roundtrip metric in Thorup-Zwick's distance oracle in [11], which has \((2k-1)\)-stretch and \(O(kn^{1+1/k})\) edges but requires \(\tilde{O}(mn)\) construction time. We obtain a very fast algorithm that constructs essentially optimal roundtrip emulators (up to the Erdos girth conjecture). **Theorem 1.2**: _For integers \(k\geq 3\), there is a randomized algorithm that computes a \((2k-1)\)-roundtrip emulator of \(O(kn^{1+1/k})\) size in \(O(kn^{2}\log n)\) time._ While the result is only for roundtrip emulators, rather than spanners, it achieves a much faster running time than any result on roundtrip spanners with optimal approximation-size tradeoff. This is the first algorithm that achieves a sub-\(mn\) running time for the problem. We next focus on the closely related question of girth approximation. We prove: **Theorem 1.3**: _There is a randomized algorithm that computes a 4-multiplicative approximation of the girth of a directed graph in \(\tilde{O}(mn^{1/3})\) time._ Let us compare with the previous known directed girth approximation algorithms. Compared with the 2-approximation in \(\tilde{O}(n^{2},m\sqrt{n})\) time from [1], Theorem 1.3 achieves a better running time for \(m\leq o(n^{5/3})\) while raising the approximation ratio to 4. Dalirrooyfard and Vassilevska W. [4] gave for every constant \(\varepsilon>0\), a \((4+\varepsilon)\)-approximation algorithm running in \(\tilde{O}(mn^{\sqrt{2}-1})\) time. Our algorithm removes the \(\varepsilon\) from the approximation factor and further improves the running time. ### Paper organization After introducing useful notations and terminologies in Section2, we give a high level overview of our techniques in Section3. Then, in Section4 we describe our 3-roundtrip spanner algorithm (Theorem1.1). In Section5 we describe our roundtrip emulator algorithm. In Section6 we describe our girth approximation algorithm. We conclude with a few open questions in Section7. \begin{table} \begin{tabular}{|l|c|c|c|} \hline **Citation** & **Stretch** & **Sparsity** & **Time** \\ \hline Roditty, Thorup, Zwick [14]\({}^{\triangle}\) & \(2k+\varepsilon\) & \(\tilde{O}\left(\frac{k^{2}}{\varepsilon}n^{1+1/k}\right)\) & \(O(mn)\) \\ \hline Pachocki, Roditty, Sidford, Tov, Vassilevska W. 
[13] & \(O(k\log n)\) & \(\tilde{O}(n^{1+1/k})\) & \(\tilde{O}(mn^{1/k})\) \\ \hline Chechik, Liu, Rotem, Sidford [12] & \(O(k\log k)\) & \(\tilde{O}(n^{1+1/k})\) & \(\tilde{O}(m^{1+1/k})\) \\ \hline Cen, Duan, Gu [1] & \(2k-1\) & \(\tilde{O}(kn^{1+1/k})\) & \(\tilde{O}(mn\log W)\) \\ \hline Chechik, Liu, Rotem, Sidford [12]\({}^{\triangle}\) & \(8+\varepsilon\) & \(\tilde{O}(n^{3/2}/\varepsilon^{2})\) & \(\tilde{O}(m\sqrt{n})\) \\ \hline Dalirrooyfard and Vassilevska & \(5+\varepsilon\) & \(\tilde{O}(n^{3/2}/\varepsilon^{2})\) & \(\tilde{O}(m\sqrt{n})\) \\ \hline Chechik and Lifshitz [12] & \(4\) & \(O(n^{3/2})\) & \(\tilde{O}(n^{2})\) \\ \hline **New** & \(3\) & \(O(n^{3/2})\) & \(\tilde{O}(m\sqrt{n})\) \\ \hline \end{tabular} \end{table} Table 1: Known results on constructions of roundtrip spanners on a weight directed graph on \(n\) vertices and \(m\) edges with edge weight bounded by \(W\). Results marked with \(\triangle\) are subsumed by other results. \begin{table} \begin{tabular}{|l|c|c|} \hline **Citation** & **Approximation Factor** & **Time** \\ \hline Pachocki, Roditty, Sidford, Tov, Vassilevska W. [13] & \(O(k\log n)\) & \(\tilde{O}(mn^{1/k})\) \\ \hline Chechik, Liu, Rotem, Sidford [12]\({}^{\triangle}\) & \(3\) & \(\tilde{O}(m\sqrt{n})\) \\ \hline Chechik, Liu, Rotem, Sidford [12] & \(O(k\log k)\) & \(\tilde{O}(m^{1+1/k})\) \\ \hline Dalirrooyfard and Vassilevska & \(4+\varepsilon\) & \(\tilde{O}(mn^{\sqrt{2}-1})\) \\ \hline Dalirrooyfard and Vassilevska & \(2+\varepsilon\) & \(\tilde{O}(m\sqrt{n})\) \\ \hline Dalirrooyfard and Vassilevska & \(2\) & \(\tilde{O}(mn^{3/4})\) (unweighted) \\ \hline Chechik and Lifshitz [12] & \(2\) & \(\tilde{O}(\min\{n^{2},m\sqrt{n}\})\) \\ \hline **New** & \(4\) & \(\tilde{O}(mn^{1/3})\) \\ \hline \end{tabular} \end{table} Table 2: Known results on girth approximation on a weight directed graph on \(n\) vertices and \(m\) edges with edge weight bounded by \(W\). Results marked with \(\triangle\) are subsumed by other results. ## 2 Preliminaries We use \(\tilde{O}(\cdot)\) to hide \(\operatorname{poly}\log(n)\) factors, where \(n\) is the number of vertices in the input graph. In this paper, the input graph \(G=(V,E)\) is always a weighted directed graph with vertex set \(V\) of size \(|V|=n\) and edge set \(E\) of size \(|E|=m\) with non-negative edge weights. Without loss of generality, we assume \(G\) does not have parallel edges. We use \(\operatorname{wt}(u,v)\) to denote the weight of the directed edge \((u,v)\in E\). For any two vertices \(u,v\in V\), we use \(d_{G}(u,v)\) to denote the distance (length of the shortest path) from \(u\) to \(v\) in \(G\), and we use \(d_{G}(u\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\lower 4.3pt\hbox{ $\sim$}}}}v):=d_{G}(u,v)+d_{G}(v,u)\) to denote the _roundtrip distance_ between \(u\) and \(v\). When the context is clear, we simply use \(d(u,v)\) and \(d(u\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\lower 4.3pt\hbox{ $\sim$}}}}v)\). For a subset of vertices \(W\subseteq V\), we use \(G[W]\) to denote the subgraph of \(G\) induced by the vertex set \(W\). The _girth_ of \(G\) is the length (total edge weight) of the shortest cycle in \(G\). 
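Throughout the paper, single-source roundtrip distances are obtained from two shortest-path computations: one Dijkstra run on \(G\) for \(d(s,\cdot)\) and one on the reversed graph for \(d(\cdot,s)\) (the out- and in-Dijkstra runs used below). The following minimal Python sketch is our own illustration of this; the adjacency-list format and function names are assumptions, not part of the paper.

```python
import heapq

def dijkstra(adj, s):
    """Standard Dijkstra from source s on an adjacency list {u: [(v, w), ...]}."""
    dist = {u: float("inf") for u in adj}
    dist[s] = 0.0
    heap = [(0.0, s)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(heap, (dist[v], v))
    return dist

def reverse_graph(adj):
    """Reverse every edge direction."""
    radj = {u: [] for u in adj}
    for u in adj:
        for v, w in adj[u]:
            radj[v].append((u, w))
    return radj

def roundtrip_from(adj, s):
    """Roundtrip distances d(s <-> v) = d(s, v) + d(v, s) for every vertex v."""
    d_out = dijkstra(adj, s)                # out-Dijkstra: d(s, .)
    d_in = dijkstra(reverse_graph(adj), s)  # in-Dijkstra:  d(., s)
    return {v: d_out[v] + d_in[v] for v in adj}
```

In this notation, the length of the shortest cycle through a fixed vertex \(s\) is simply \(\min_{v\neq s}d(s\leftrightarrows v)\), which is how cycles through sampled vertices are found later in the paper.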
We say a graph \(H=(V,E^{\prime})\) is an \(\alpha\)-_roundtrip emulator_ of graph \(G=(V,E)\), if for every two vertices \(u,v\in V\) it holds that \(d_{G}(u\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\lower 4.3pt\hbox{ $\sim$}}}}v)\leq d_{H}(u\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\lower 4.3pt\hbox{ $\sim$}}}}v)\leq\alpha\cdot d_{G}(u\mathrel{\mathop{\kern 0.0pt\hbox to 0.0pt{\lower 4.3pt \hbox{ $\sim$}}}}v)\). Furthermore, if \(H\) is a subgraph of \(G\), we say \(H\) is an \(\alpha\)-_roundtrip spanner_ of \(G\). Without loss of generality, we may assume \(G\) is strongly-connected, since otherwise we can run the algorithm for girth approximation (or roundtrip spanner/emulator) on each strongly-connected component. In addition, we may assume the maximum degree of \(G\) is bounded by \(O(m/n)\). This is due to the following regularization lemma shown in [10]. This assumption will be used in Section6. **Lemma 2.1** (Regularization [10]): _Given a directed weighted graph \(G=(V,E)\) on \(n\) vertices and \(m\) edges, one can construct a graph \(H\) on \(O(n)\) vertices and \(O(m)\) edges with non-negative edge weights and maximum degree \(O(m/n)\) in \(O(m)\) time such that all of the following holds:_ 1. _All roundtrip distances between pairs of vertices in_ \(G\) _are the same in_ \(H\) _as in_ \(G\)_._ 2. _Given a cycle in_ \(H\)_, one can find a cycle of the same length in_ \(G\) _in_ \(O(m)\) _time._ 3. _Given a subgraph_ \(H^{\prime}\) _in_ \(H\)_, one can find in_ \(O(m)\) _time a subgraph_ \(G^{\prime}\) _of_ \(G\) _such that_ \(|E(G^{\prime})|\leq|E(H^{\prime})|\) _and the roundtrip distances in_ \(G^{\prime}\) _are the same as in_ \(H^{\prime}\)_._ In our algorithms, we often use Dijkstra's algorithm to compute single-source distances. On a weighted directed graph \(G=(V,E)\), we use _out-Dijkstra_ from a source \(s\in V\) to refer to Dijkstra algorithm computing distances \(d(s,\cdot)\) from \(s\), and use _in-Dijkstra_ from \(s\) to refer to Dijkstra algorithm computing distances \(d(\cdot,s)\) into \(s\). ## 3 Technical Overview ### Previous Work Throughout this paper, our techniques are based on the following key observation introduced in [10]. **Lemma 3.1** (Key Observation [10]): _Let \(G=(V,E)\) be a weighted directed graph with nonnegative edge weights. For vertices \(u,v,r\in V\), if_ \[2\cdot d(v,r)+d(r,u)\leq 2\cdot d(v,u)+d(u,r), \tag{1}\] _then_ \[d(u\eqsim r)\leq 2\cdot d(u\eqsim v).\] An important property of the above observation is that 1 is symmetric with respect to the roles of \(u\) and \(r\). This symmetry is crucial to the analysis of the applications of 3 in the previous work [1] as well as in our new algorithms, so we first describe it in more details as follows. The Symmetry ArgumentConsider the following routine that sparsifies a graph \(G=(V,E)\) on \(n\) vertices using a random sample \(S\subseteq V\). For every vertex \(v\in V\), we check for every vertex \(s\in S\cap N(v)\) and \(u\in N(v)\) where \(N(v)\) denotes the out neighborhood of \(v\), if \(2d(v,s)+d(s,u)\leq 2d(v,u)+d(u,s)\) then remove the edge \((v,u)\). We say that we use the set \(S\) as _eliminators_ to perform the sparsification since we are comparing the distance \(d(v,u)\) using the distance information involving \(s\in S\). 
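For concreteness, the single elimination round just described can be written as follows; this is a minimal sketch of ours, assuming the relevant pairwise distances are available in a dictionary `d` (in the actual algorithms only distances from and to the sampled vertices are ever computed, via Dijkstra).

```python
def elimination_round(adj, d, S):
    """One round of the sparsification routine described above: for every vertex v,
    every sampled out-neighbour s in S may eliminate an out-neighbour u of v,
    in which case the edge (v, u) is removed.
    adj[v]    : set of out-neighbours N(v)
    d[(x, y)] : distance d(x, y)
    S         : random sample used as eliminators."""
    new_adj = {}
    for v, out in adj.items():
        eliminators = [s for s in out if s in S]
        kept = set()
        for u in out:
            eliminated = any(
                2 * d[(v, s)] + d[(s, u)] <= 2 * d[(v, u)] + d[(u, s)]
                for s in eliminators
            )
            if not eliminated:
                kept.add(u)
        new_adj[v] = kept
    return new_adj
```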
For any two neighbors \(u,u^{\prime}\in N(v)\) (possible \(u=u^{\prime}\)), notice that the condition involving \(v,u,u^{\prime}\) compares the distances \(2d(v,u)+d(u,u^{\prime})\) against \(2d(v,u^{\prime})+d(u^{\prime},u)\), which is the same as if we switch the roles of \(u\) and \(u^{\prime}\). This means that either \(u\) eliminates \(u^{\prime}\) or \(u^{\prime}\) eliminates \(u\). (We say "\(u\) eliminates \(u^{\prime}\)" meaning that, if \(u\in S\), then the edge \((v,u^{\prime})\) will be removed, namely \(u^{\prime}\) is eliminated from \(N(v)\).) So given a random \(u\in N(v)\), in expectation half of the pairs \((u,u^{\prime})\) falls in the case where \(u\) can eliminate \(u^{\prime}\) and additionally \(u\) can eliminate \(u\) itself. Thus, if \(N(v)\cap S\neq\varnothing\), then in expectation the procedure will remove at least \(|N(v)|/2\) edges. This implies that the graph sparsification can effectively remove a constant fraction of the edges adjacent to the vertices with high out-degree. More specifically, since on expectation, the sample \(S\) can hit vertex sets with size \(\Omega(n/|S|)\), this procedure can remove a constant fraction of the outgoing edges adjacent to vertices with degree \(\Omega(n/|S|)\). So if we repeat this process \(\Theta(\log n)\) rounds, on expectation we can reduce the out-degree of every vertex to at most \(O(n/|S|)\). Applications of the key observationNow we are ready to explain how the above 3 is useful for constructing roundtrip spanners and approximating directed cycles. 1. **Girth approximation: reduce search space.** Suppose we can take a small random subset of vertices \(S\subseteq V\) and for each vertex \(s\in S\) and set the current girth estimate as the length of the shortest cycle passing through any vertex in \(S\). Then if \(r\in S\), 3 shows that we no longer have to consider the shortest cycle passing through \(v\) and \(u\) satisfying 1. This is because the shortest cycle passing through \(u\) and \(v\) can already be \(2\)-approximated by the shortest cycle passing through \(r\). Then if we want to search for cycles passing through that cannot be \(2\)-approximated, we would not need to consider the vertex \(u\). Thus, using the sample \(S\), we can compute a pruned vertex set \(B(v)\subseteq V\) that contains all the vertices \(u\in V\) such that Eq.1 does not hold for any \(r\in S\). By the symmetry argument, each sample that hits the set \(B(v)\) can reduce the size of \(B(v)\) by a constant fraction. So over \(\Theta(\log n)\) rounds, we can obtain a pruned set of size roughly \(O(n/|S|)\). This technique is used in the \(2\)-approximation in [1] and will be used in our algorithm for computing a \(4\)-approximation of the girth in Section6. 2. **Roundtrip spanners: graph sparsification.** Suppose we take a random subset \(S\subseteq V\) and add all the in/out shortest path tree from \(S\) to our spanner \(H\). We apply Lemma3.1 to the vertices \(u,v,r\in V\) where \(u,r\) are out-neighbors of \(v\). If \(r\in S\), Lemma3.1 implies that we can delete the edge \((v,u)\) since the shortest cycle containing \((v,u)\) can be \(2\)-approximated by the cycle passing through \(r\) and \(u\), which is already added to the spanner \(H\). As explained previously by the symmetry argument, in expectation we can reduce the out-degree of every vertex to roughly \(O(n/|S|)\) if we repeat this process for \(\Theta(\log n)\) rounds. 
This technique was used in the construction of \(4\)-roundtrip spanners in [1] and will be used in our construction of \(3\)-roundtrip spanner in Section4 and our \((2k-1)\)-roundtrip emulator in Algorithm2. ### Our Techniques Our techniques consist of a collection of extensions to the techniques introduced in [1]. We now highlight the novel components in each of our algorithms. \(3\)-Roundtrip Spanner in \(\tilde{O}(m\sqrt{n})\) TimeOur algorithm follows from a modification of Chechik and Lifshitz's [1] \(4\)-roundtrip spanner algorithm, which was based on the graph sparsification approach mentioned earlier. Our new idea lies in a more careful analysis of the stretch of the spanner: instead of directly bounding the roundtrip distance \(d_{H}(u\leftrightarrows v)\) between vertices \(u,v\) in the spanner \(H\) as Chechik and Lifshitz did, we separately bound the one-way distances \(d_{H}(u,v),d_{H}(v,u)\) and add them up. After a slight change in their algorithm (namely, by computing distances in the original graph rather than in the sparsified graph in each round), this analysis enables us to improve the stretch from \(4\) to \(3\). \((2k-1)\)-Roundtrip Emulator in \(\tilde{O}(n^{2})\) TimeThe celebrated approximate distance oracle result of Thorup and Zwick [14] immediately yields \((2k-1)\)-emulators of \(O(kn^{1+1/k})\) size for any metric. But a straightforward implementation of their generic algorithm in the roundtrip metric would require computing single source shortest paths from all vertices, in \(\tilde{O}(mn)\) total time. For the easier case of undirected graphs, [14] reduced the construction time to \(O(kmn^{1/k})\), but unfortunately these techniques based on balls and bunches do not yield a speedup in our roundtrip distance setting. Our faster roundtrip emulator algorithm combines Thorup and Zwick's technique [14] with the graph sparsification approach of [1]. The intuition is that, since the bottleneck of the generic Thorup-Zwick algorithm lies in computing single source shortest paths, a natural idea is to use [1]'s approach to gradually sparsify the graph so that Dijkstra's algorithm can run faster. More specifically, recall that the Thorup-Zwick algorithm takes a sequence of nested vertex samples \(S_{1}\subseteq\cdots\subseteq S_{k}=V\) which serve as intermediate points for routing approximate shortest paths. In our case, these vertex samples also play the same role as in the graph sparsification approach described earlier, where short cycles going through these vertex samples can approximate the cycles we care about. This results in a multi-round algorithm that interleaves graph sparsification steps and running Dijkstra from vertices of \(S_{i}\) (with gradually increasing size) in \(\tilde{O}(n^{2})\) total time. It is not obvious that the \((2k-1)\)-stretch of Thorup-Zwick still holds after adding these graph sparsification steps, but it turns out the stretch analysis of Thorup-Zwick fits nicely with the cycle approximation arguments, and with a careful analysis we are still able to show \((2k-1)\) stretch when \(k\geq 3\). For some technical reason related to the sampling argument of Thorup-Zwick, we had to slightly simplify the graph sparsification techniques of [1], in order to avoid an undesirable extra logarithmic factor in the sparsity bound of our roundtrip emulator. See the discussion in Remark4.1 and the proof of Lemma5.2. 
\(4\)-Approximation of Girth in \(\tilde{O}(mn^{1/3})\) TimeOur algorithm vastly extends the technique of the \(2\)-approximate girth algorithm in \(\tilde{O}(\min\{n^{2},m\sqrt{n}\})\) time by Chechik and Lifshitz [1]. In the \(2\)-approximation algorithm, one takes a sample \(S\) of size \(O(\sqrt{n})\) and uses in/out Dijkstra's to exactly compute the shortest cycle going through every \(s\in S\). Then using \(S\) as eliminators, compute for every vertex \(v\in V\) a pruned vertex set \(B(v)\) of size \(O(\sqrt{n})\), and search for short cycles from \(v\) on \(G[B(v)]\). A natural attempt to improve the running time is to generalize this framework to multiple levels: take a sequence of vertex samples of increasing sizes \(S_{1},\ldots,S_{k-1},S_{k}=V\) and compute a sequence of pruned vertex subsets \(V=B_{1}(v),B_{2}(v),\ldots,B_{k}(v)\) of decreasing sizes for every \(v\), so that one can run Dijkstra from/to every vertex in \(S_{i}\) on \(G[B_{i}(v)]\) in \(\tilde{O}(mn^{1/k})\) time. However, it is unclear how to do this since one can no longer check the condition Eq.1 due to not having all the distance information from/to every vertex \(s\in S_{i}\), and thus we cannot compute the sets \(B_{i}(v)\) as desired. In this work we are able to implement the above plan for \(k=3\), obtaining a \(4\)-approximation girth algorithm in \(\tilde{O}(mn^{1/3})\) time. We deal with the problem of not having enough distance information to compute \(B_{3}(v)\) by using a certain distance underestimate obtained from the distance information from \(S_{1}\), and enforcing a stricter set of requirements on the vertices that we explore, so that we always have their distance information available. We also apply more novel structural lemmas about cycle approximation that extend the key observation Lemma3.1 of [1] in various ways, which may be of independent interest. As a result, our \(4\)-approximation algorithm becomes more technical than the previous \(2\)-approximation algorithm in \(\tilde{O}(m\sqrt{n})\) time. Here, we highlight the key structural lemma (Lemma6.16) that enabled us to overcome the above described difficulty. It is illustrated in the following Fig.2, which can be viewed as an extension of Lemma3.1 from 3 vertices to \(4\) vertices. As illustrated, if there exists some vertex \(r_{2}\) that is in a short cycle with \(v\) but not in a short cycle with \(u\), then we can find some vertex \(r_{1}\) such that the cycle \(v\rightsquigarrow r_{2}\rightsquigarrow r_{1}\rightsquigarrow u\rightsquigarrow v\) (highlighted in green) can approximate the shortest cycle passing through \(u\) and \(v\) (the cycle in red). Then similar to how we can use Lemma3.1, we can ignore the vertex \(u\) in our search for the shortest cycle passing through \(v\). Furthermore, we note that we had to introduced a number of technicalities and a new structural theorem just to implement our proposed generalization for \(k=3\). So it is entirely unclear how to further generalize this approach for \(k\geq 4\). Moreover, even if one successfully implements the proposed generalization naively, one would only obtain a \(2^{k-1}\)-approximation in \(\tilde{O}(mn^{1/k})\) time, which is far from being desirable. ## 4 3-Roundtrip Spanner In this section, we present our algorithm for constructing a 3-roundtrip spanner with \(O(n^{3/2})\) edges in time \(\tilde{O}(m\sqrt{n})\) (Theorem 1.1). 
Our algorithm closely follows the previous \(\tilde{O}(n^{2})\)-time 4-roundtrip spanner algorithm by Chechik and Lifshitz [14], but we use a more careful analysis to improve the stretch from 4 to 3. ### Algorithm and stretch analysis Our algorithm (see pseudocode in Algorithm 1) has a similar structure as in [14]: We iteratively sample vertex subsets \(S_{i}\subseteq V\) with geometrically increasing expected sizes \(\mathbb{E}[|S_{i}|]\) up to \(\sqrt{n}\). In each iteration \(i\), we add the shortest path trees from/to every \(s\in S_{i}\) into the spanner, and sparsify the input graph \(G\) using the method of [14] based on \(S_{i}\) (Line 8 - Line 11). Finally, we are able to sparsify the graph to contain only \(O(n^{3/2})\) edges in expectation, and we will add these remaining edges to the spanner. Over all iterations, we add a total of \(2n\cdot O(\sqrt{n})=O(n^{3/2})\) edges to the spanner, and we only run \(O(\sqrt{n})\) instances of Dijkstra which take \(\tilde{O}(m\sqrt{n})\) total time. The main difference from [14] lies in the sparsification rule at Line 10. Our rule is based on comparing distances in the original input graph \(G\), while Chechik and Lifshitz's rule was based on distances in the sparsified graph \(G_{i}\). **Remark 4.1**: _Readers familiar with [14] may notice some other technical differences between Algorithm 1 and [14]: in order to remove a \(\log n\) factor from the spanner size, Chechik and Lifshitz [14] had to resample \(S_{i}\) in case it is "unsuccessful" (i.e., Line 11 did not remove sufficiently many edges), whereas our Algorithm 1 achieves the same goal without resampling. Another difference is that we fix the sample rate of each iteration \(i\) at Line 5, while [14] determines sample rate based on the current size \(|E(G_{i})|\)._ Figure 2: If there exists a vertex \(r_{2}\) that is in a short cycle with \(v\) but not in a short cycle with \(u\), then we can find a vertex \(r_{1}\) such that the cycle passing through \(v\rightsquigarrow r_{2}\rightsquigarrow r_{1}\rightsquigarrow u\) (the cycle highlighted in green) can approximate the shortest cycle passing through \(u\) and \(v\) (the cycle in red). _These modifications are not essential for obtaining this \(3\)-spanner result. In particular, our algorithm is equivalent to simply sampling \(\sum_{i=0}^{\Delta-1}|S_{i}|=O(\sqrt{n})\) vertices all at once. Nonetheless, we present it in this way because it leads to cleaner implementation and analysis. Furthermore, it will be useful later for our emulator algorithm in Section 5 (where we require \(S_{i}\) to be uniformly and independently sampled)._ Now we prove the stretch of the spanner constructed by Algorithm 1. Our proof mostly follows [10]; the key difference is that [10] estimated \(d_{H}(u\mathrel{\hbox to 0.0pt{\kern 2.0pt\lower 4.3pt\hbox{$\sim$}}\raise 1.0pt \hbox{$\Rightarrow$}}v)\) as a whole, while our improvement comes from separately estimating \(d_{H}(u,v)\) and \(d_{H}(v,u)\) and combine them to obtain an upper bound for \(d_{H}(u\mathrel{\hbox to 0.0pt{\kern 2.0pt\lower 4.3pt\hbox{$\sim$}}\raise 1.0pt \hbox{$\Rightarrow$}}v)\). 
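Because the proofs below refer to Algorithm 1 by its line numbers, the following Python sketch records our reconstruction of its iteration structure from the description above. The input format (precomputed distance dictionaries from the Line 6 Dijkstra runs), all names, and the omission of the actual shortest-path-tree edge bookkeeping of Line 7 are our own simplifying assumptions; this is not the authors' pseudocode.

```python
import math
import random

def spanner3_sketch(V, adj, wt, d_from, d_to):
    """Hypothetical reconstruction of the iteration structure of Algorithm 1.
    V          : list of vertices
    adj[x]     : set of out-neighbours of x in the input graph G
    wt[(x, y)] : weight of edge (x, y)
    d_from[s]  : distances d_G(s, .) from an out-Dijkstra on G (Line 6)
    d_to[s]    : distances d_G(., s) from an in-Dijkstra on G (Line 6)
    Returns (sampled, remaining): vertices whose in/out shortest-path trees are
    added to the spanner (Line 7), and the surviving edges added at Line 12."""
    n = len(V)
    Delta = max(1, math.ceil(math.log(math.sqrt(n), 1.5)))   # Line 3
    alpha = math.sqrt(n) ** (1.0 / Delta)
    G_i = {x: set(adj[x]) for x in V}                        # G_0 = G
    sampled = []
    for i in range(Delta):
        p_i = (alpha ** i) / n                               # Line 5: sampling rate
        S_i = [s for s in V if random.random() < p_i]
        sampled.extend(S_i)
        G_next = {x: set(G_i[x]) for x in V}
        for x in V:                                          # Lines 8-11: sparsification
            for s in S_i:
                if s not in G_i[x]:
                    continue                                 # eliminators are out-neighbours of x
                for y in G_i[x]:
                    # Line 10: distances are compared in the ORIGINAL graph G
                    if 2 * d_to[s][x] + d_from[s][y] <= 2 * wt[(x, y)] + d_to[s][y]:
                        G_next[x].discard(y)                 # Line 11: remove (x, y)
        G_i = G_next
    remaining = {(x, y) for x in V for y in G_i[x]}          # Line 12
    return sampled, remaining
```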
**Lemma 4.2**: _For any two vertices \(u,v\in V\),_ \[d_{H}(u,v)\leq 2d_{G}(u,v)+d_{G}(v,u).\] _As a consequence, \(d_{H}(u\mathrel{\hbox to 0.0pt{\kern 2.0pt\lower 4.3pt\hbox{$\sim$}}\raise 1.0pt \hbox{$\Rightarrow$}}v)\leq 3d_{G}(u\mathrel{\hbox to 0.0pt{\kern 2.0pt\lower 4.3pt \hbox{$\sim$}}\raise 1.0pt\hbox{$\Rightarrow$}}v)\) for any \(u,v\in V\)._ Proof.: Let \(P\) denote the shortest path from \(u\) to \(v\) in \(G\). If \(P\) is completely contained in the final \(G_{\Delta}\), then by Line 12 clearly \(d_{H}(u,v)=d_{G}(u,v)\) and we are done. For the remaining case, consider any iteration \(i\) in which some edge \((x,y)\) of \(P\) is removed from \(G_{i+1}\) at Line 11. By Line 10, there is a vertex \(s\in S_{i}\) such that \[2d_{G}(x,s)+d_{G}(s,y)\leq 2\operatorname{wt}(x,y)+d_{G}(y,s),\] which means \[d_{G}(x,s)+d_{G}(s,y) \leq 2\operatorname{wt}(x,y)+d_{G}(y,s)-d_{G}(x,s)\] \[\leq 2\operatorname{wt}(x,y)+d_{G}(y,x). \tag{2}\] Since \(H\) contains the shortest path trees in \(G\) from \(s\) and to \(s\) (by Line 7), we have \[d_{H}(u,v) \leq d_{H}(u,s)+d_{H}(s,v)\] \[=d_{G}(u,s)+d_{G}(s,v)\] \[\leq d_{G}(u,x)+d_{G}(x,s)+d_{G}(s,y)+d_{G}(y,v)\] \[\leq d_{G}(u,x)+2\operatorname{wt}(x,y)+d_{G}(y,x)+d_{G}(y,v).\] (by Eq. ( 2 )) Then, using \(d_{G}(y,x)\leq d_{G}(y,v)+d_{G}(v,u)+d_{G}(u,x)\), we immediately obtain \[d_{H}(u,v) \leq 2\big{(}d_{G}(u,x)+\operatorname{wt}(x,y)+d_{G}(y,v)\big{)}+ d_{G}(v,u)\] \[=2d_{G}(u,v)+d_{G}(v,u).\qed\] ### Analysis of sparsity and running time Now we analyze the expected size of \(H\) and the running time of Algorithm 1. We first prove the following lemma that bounds the expected number of edges in \(G_{i}\). From now on we use \(m_{i}:=|E(G_{i})|\). Recall from Line 3 that \(\Delta=\lceil\log_{3/2}\sqrt{n}\rceil\), \(\alpha=(\sqrt{n})^{1/\Delta}\), and note that \(\alpha\in[5/4,3/2]\) when \(\sqrt{n}\geq 2\). **Lemma 4.3**: _For \(i=0,\ldots,\Delta\), we have_ \[\mathbb{E}[m_{i}]\leq 2n^{2}/\alpha^{i}.\] Proof.: In the \(i\)-th iteration, we sample \(S_{i}\subseteq V\) by including each vertex independently with probability \(p_{i}:=\alpha^{i}/n\). In the following, we focus on a particular vertex \(x\in V\), and let \(\deg_{i}(x)=|N_{G_{i}}(x)|\) denote the out-degree of \(x\) in \(G_{i}\). For any two out-neighbors \(v_{s},v_{y}\in N_{G_{i}}(x)\), we say \(v_{s}\)_eliminates_\(v_{y}\), if the inequality at Line 10 holds for \((s,y):=(v_{s},v_{y})\). Observe that the inequality at Line 10 is (essentially) symmetric with respect to \(y\) and \(s\), and one immediately observes that for any two \(v,v^{\prime}\in N_{G_{i}}(x)\) (possibly \(v=v^{\prime}\)), either \(v\) eliminates \(v^{\prime}\), or \(v^{\prime}\) eliminates \(v\).1 Then, Line 11 indicates that, for any \(v_{s},v_{y}\in N_{G_{i}}(x)\), if \(v_{s}\in S_{i}\) and \(v_{s}\) eliminates \(v_{y}\), then \(v_{y}\notin N_{G_{i+1}}(x)\). Therefore, \(\deg_{i+1}(x)\) is the number of out-neighbors of \(x\) that are not eliminated by anyone from \(S_{i}\). Footnote 1: In more detail, by symmetry we can pick \((s,y):=(v,v^{\prime})\) or \((v^{\prime},v)\) to satisfy \(2d_{G}(x,s)+d_{G}(s,y)\leq 2d_{G}(x,y)+d_{G}(y,s)\). Then, the inequality at Line 10 holds due to \(d_{G}(x,y)\leq\operatorname{wt}(x,y)\). For every \(v\in N_{G_{i}}(x)\), let \(e_{v}\) denote the number of \(v^{\prime}\in N_{G_{i}}(x)\) that eliminates \(v\) (including \(v\) itself). We have \[1\leq e_{v}\leq\deg_{i}(x) \tag{3}\] and \[\frac{1}{|N_{G_{i}}(x)|}\sum_{v\in N_{G_{i}}(x)}e_{v}=\frac{\deg_{i}(x)+1}{2}. 
\tag{4}\] Then, over a uniformly independently sampled set \(S_{i}\) of eliminators, we analyze the expected number of out-neighbors of \(x\) that are not eliminated, as follows: \[\operatorname*{\mathbb{E}}_{S_{i}}[\deg_{i+1}(x)\mid G_{i}] =\sum_{v\in N_{G_{i}}(x)}(1-p_{i})^{e_{v}}\] \[\leq\frac{|N_{G_{i}}(x)|}{2}\cdot(1-p_{i})^{1}+\frac{|N_{G_{i}}(x )|}{2}\cdot(1-p_{i})^{\deg_{i}(x)}\] (by convexity of \[f(x)=(1-p_{i})^{x}\], and Eqs. ( 3 ) and ( 4 ) ) \[=\frac{\deg_{i}(x)}{2}\cdot(1-p_{i}+(1-p_{i})^{\deg_{i}(x)})\] \[\leq\frac{\deg_{i}(x)}{2}\cdot(1+e^{-p_{i}\deg_{i}(x)}).\] Multiplying both sides by \(p_{i+1}\), \[\operatorname*{\mathbb{E}}_{S_{i}}[p_{i+1}\deg_{i+1}(x)\mid G_{i}] \leq\frac{p_{i+1}\deg_{i}(x)}{2}\cdot(1+e^{-p_{i}\deg_{i}(x)})\] \[=\frac{\alpha}{2}(p_{i}\deg_{i}(x)+p_{i}\deg_{i}(x)e^{-p_{i}\deg_ {i}(x)})\] (by \[p_{i}=\alpha^{i}/n\] ) \[<\frac{\alpha}{2}(p_{i}\deg_{i}(x)+1).\] Hence, \[\operatorname*{\mathbb{E}}[p_{i+1}\deg_{i+1}(x)]\leq\frac{\alpha}{2}( \operatorname*{\mathbb{E}}[p_{i}\deg_{i}(x)]+1).\] Since \(0\leq p_{0}\deg_{0}(x)\leq\frac{1}{n}\cdot(n-1)<1\), by induction we obtain \(\operatorname*{\mathbb{E}}[p_{i}\deg_{i}(x)]<\alpha/(2-\alpha)\) for all \(i\) (recall \(\alpha\leq 3/2<2\)). Summing over all \(x\in V\), we obtain \[\operatorname*{\mathbb{E}}[m_{i}]=\frac{1}{2}\sum_{x\in V}\operatorname*{ \mathbb{E}}[\deg_{i}(x)]\leq\frac{n}{2}\cdot\frac{\alpha/(2-\alpha)}{p_{i}}= \frac{\alpha}{4-2\alpha}n^{2}/\alpha^{i}<2n^{2}/\alpha^{i}.\qed\] Now we are ready to present the analysis of the sparsity of \(H\) and the running time of our algorithm. SparsityFor each iteration \(i\) of Algorithm 1, by definition (Line 5) we have expected sample size \[\operatorname*{\mathbb{E}}[|S_{i}|]=\alpha^{i},\] and we add the shortest path trees from / to every vertex in \(s\in S_{i}\) in \(G\), which contain \(|S_{i}|\cdot 2(n-1)\) edges. So summing over all iterations, the number of edges we add in expectation is at most (recall \(\alpha\geq 5/4\)) \[\operatorname*{\mathbb{E}}\left[\sum_{i=0}^{\Delta-1}|S_{i}|\cdot 2(n-1) \right]=2(n-1)\cdot\sum_{i=0}^{\log_{\alpha}\sqrt{n}-1}\alpha^{i}=\frac{(2n-1) (\sqrt{n}-1)}{\alpha-1}<8n^{3/2}.\] In the last step (Line 12), we add all the edges in \(G_{\Delta}\) to \(H\). By Lemma 4.3, we have \[\operatorname*{\mathbb{E}}[|E(G_{\Delta})|]\leq 2n^{2}/\alpha^{\Delta}=2n^{3/2}.\] Thus, in expectation we add \(O(n^{3/2})\) edges to \(H\) in total as desired. Running TimeIn each iteration \(i\), the bottleneck is at Line 6 where we run \(|S_{i}|\) instances of Dijkstra on \(G\), each taking \(O(m+n\log n)\) time. The sparsification steps (Line 8 - Line 11) can be implemented in \(O(|S_{i}|\cdot|E(G_{i})|)\leq O(|S_{i}|\cdot m)\) time. So in expectation the total time taken by the algorithm is bounded by \[\mathbb{E}\left[\sum_{i=0}^{\Delta-1}|S_{i}|\cdot O(m+n\log n)\right]=O(m+n\log n )\sum_{i=0}^{\log_{\alpha}\sqrt{n}-1}\alpha^{i}=O(m\sqrt{n}+n\sqrt{n}\log n).\] ## 5 \((2k-1)\)-roundtrip emulator in nearly quadratic time In this section, we give the construction of a \((2k-1)\)-roundtrip emulator on \(O(kn^{1+1/k})\) edges running in \(O(kn^{2}\log n)\) time for \(k\geq 3\) (Theorem 1.2). Our algorithm does not work for \(k=2\). (For \(k=2\), our 3-roundtrip spanner algorithm from Section 4 has \(\tilde{O}(m\sqrt{n})\) time complexity, which is slower than \(\tilde{O}(n^{2})\) for any nontrivial input size \(m\gg n^{1.5}\).) 
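The algorithm below maintains, for every vertex \(u\), a Thorup-Zwick-style _pivot_ (the nearest sampled vertex in roundtrip distance) and _bunch_ (the sampled vertices strictly closer to \(u\) than the previous round's pivot). The following few lines are a sketch of ours, with assumed dictionary inputs, showing this computation for a single vertex \(u\); it corresponds to Lines 10-11 of Algorithm 2.

```python
def pivot_and_bunch(S_i, rtd_i, prev_pivot_dist):
    """Pivot and bunch of a fixed vertex u in the roundtrip metric (sketch).
    S_i             : current vertex sample
    rtd_i[s]        : roundtrip distance d_{G_i}(u <-> s) for each s in S_i
    prev_pivot_dist : d_{G_{r*Delta-1}}(u <-> p_{r*Delta-1}(u)) from the previous round
    """
    pivot = min(S_i, key=lambda s: rtd_i[s])                  # Line 10: nearest sampled vertex
    bunch = {s for s in S_i if rtd_i[s] < prev_pivot_dist}    # Line 11: strictly closer than old pivot
    return pivot, bunch
```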
### Algorithm Our algorithm carefully combines ideas from Thorup-Zwick distance oracle [11] and the graph sparsification technique introduced in [10]. The pseudocode of our algorithm is given in Algorithm 2. The main body contains \((k-1)\Delta=\Theta(\log n)\) iterations (indexed by \(i=r\Delta+t\)), divided into \((k-1)\) rounds (indexed by \(r\in\{0,\ldots,k-2\}\)), where each round consists of \(\Delta\) iterations (indexed by the inner loop variable \(t\in\{0,\ldots,\Delta-1\}\)). The \(i\)-th iteration samples a vertex subset \(S_{i}\), whose expected size \(\mathbb{E}[|S_{i}|]\) gradually increases from \(1\) in the 0-th iteration to \(\Theta(n^{(k-1)/k})\) in the last iteration. In each iteration we run in/out-Dijkstra from every sampled vertex \(s\in S_{i}\) on the current (sparsified) graph \(G_{i}\subseteq G\). Using the obtained distance information from/to \(S_{i}\), we not only perform the graph sparsification steps (Line 14-Line 17) as in [10], but also compute pivots \(p_{i}(u)\in S_{i}\) and bunches \(B_{i}(u)\subseteq S_{i}\) used in Thorup and Zwick's algorithm [11] (in the roundtrip metric) and adds edges to the emulator \(H\) accordingly (Line 10 - Line 13). The main complication compared to [11] is that we now have a sequence of (gradually sparsified) graphs \(G_{i}\) involved rather than a single graph \(G\), and the pivots \(p_{i}(u)\) are defined using the distances on the current graph \(G_{i}\), while the bunches \(B_{i}(u)\) are defined with respect to the pivot \(p_{r\Delta-1}(u)\) on the graph \(G_{r\Delta-1}\) from the _previous round_ of the outer loop \(r\). By our parameter setting, we expect each round in the outer loop to roughly decrease the size of the current graph by a factor of \(n^{1/k}\). After running all \((k-1)\) rounds, we can show the remaining graph \(G_{(k-1)\Delta}\) has \(O(n^{1+1/k})\) edges in expectation, and we add all of them to the emulator \(H\). ### Analysis of sparsity and running time We can without loss of generality assume \(k\leq\log n\), since otherwise we can run the algorithm for \(k=\lfloor\log n\rfloor\) and still satisfy all the requirements. Recall from Line 3 that \(\Delta=\lceil\log_{3/2}n^{1/k}\rceil\), \(\alpha:=(n^{1/k})^{1/\Delta}\), and note that \(\alpha\in[5/4,3/2]\). Algorithm 2 has \((k-1)\Delta=\log_{\alpha}n^{1-1/k}\) iterations. It has a similar structure as our earlier Algorithm 1 for 3-roundtrip spanner (except for the additional Line 10 - Line 13 here). For each iteration \(i=r\cdot\Delta+t\) where \(r\in\{0,\ldots,k-2\}\) and \(t\in\{0,1,\ldots,\Delta-1\}\), by Line 8 we have \[\mathbb{E}[|S_{i}|]=\alpha^{i}.\] ``` Input: A weighted directed graph \(G=(V,E)\) Output: A \((2k-1)\)-roundtrip emulator \(H\) of \(G\) 1\(H\leftarrow(V(G),\varnothing)\) 2\(G_{0}\gets G\) 3 Let \(\Delta:=\lceil\log_{3/2}n^{1/k}\rceil\), and \(\alpha:=(n^{1/k})^{1/\Delta}\). // \(\alpha\in[5/4,3/2]\) when \(n^{1/k}\geq 2\) 4 Let \(G_{-1}=\) empty graph and \(p_{-1}(u):=\bot\) for all \(u\in V\). // \(d_{G_{-1}}(u\leftrightarrows p_{-1}(u))=+\infty\). 
5for\(r\gets 0,\ldots,k-2\)do 6for\(t\gets 0,\ldots,\Delta-1\)do 7 Let \(i:=r\Delta+t\) 8 Sample \(S_{i}\subseteq V\) by including each vertex with probability \(\alpha^{i}/n\) independently 9 Compute \(d_{G_{i}}(s,v),d_{G_{i}}(v,s)\) for all \(s\in S_{i}\) and \(v\in V\) using Dijkstra 10 Define pivot \(p_{i}(u):=\operatorname*{arg\,min}_{s\in S_{i}}d_{G_{i}}(u\leftrightarrows)\) for all \(u\in V\) 11 Define bunch \(B_{i}(u):=\{s\in S_{i}:d_{G_{i}}(u\leftrightarrows)<d_{G_{r\Delta-1}}(u \leftrightarrows p_{r\Delta-1}(u))\}\). 12for\(u\in V,s\in\{p_{i}(u)\}\cup B_{i}(u)\)do 13 Add edge \((u,s)\) with weight \(d_{G_{i}}(u,s)\) and edge \((s,u)\) with weight \(d_{G_{i}}(s,u)\) to \(H\) 14\(G_{i+1}\gets G_{i}\) 15for\((x,y),(x,s)\in E(G_{i})\) such that \(s\in S_{i}\)do 16if\(2d_{G_{i}}(x,s)+d_{G_{i}}(s,y)\leq 2\operatorname*{wt}(x,y)+d_{G_{i}}(y,s)\)then 17 Remove the edge \((x,y)\) from \(G_{i+1}\) 18 19\(H\gets H\cup G_{(k-1)\Delta}\) return\(H\) ``` **Algorithm 2**\((2k-1)\)-Emulator(\(G\)) (for \(k\geq 3\)) Similar to the analysis of our 3-roundtrip spanner algorithm, we have the following lemma on the expected number edges \(m_{i}:=|E(G_{i})|\). **Lemma 5.1**_In Algorithm 2, for \(0\leq i\leq(k-1)\Delta\) we have_ \[\mathbb{E}[m_{i}]\leq 2n^{2}/\alpha^{i}.\] The proof of Lemma 5.1 is identical to the proof of Lemma 4.3 for Algorithm 1, and is omitted here. Note that in Algorithm 2, Line 10 - Line 13 do not affect edges of \(G_{i}\), and the remaining part of the algorithm is almost identical to Algorithm 1 except that the number of iterations is changed from \(\Delta\) to \((k-1)\Delta\) (and \(\alpha\) is changed accordingly), and the sparsification rule (Line 16) now depends on distances of \(G_{i}\) instead of \(G\). These modifications do not affect the proof of Lemma 4.3. Running TimeOver all iterations of the inner **for** loop, for every \(i=0,\ldots,(k-1)\Delta-1\), the bottleneck is to run \(|S_{i}|\) instances of in/out-Dijkstras on \(G_{i}\) (Line 9), each taking \(O(m_{i}+n\log n)\) time. The sparsification steps (Line 14 - Line 17) can be implemented in \(O(|S_{i}|\cdot m_{i})\) time. Thus by Lemma 5.1, the expected total running time of our algorithm can be bounded by (note that \(S_{i}\) and \(m_{i}\) are independent random variables) \[\mathbb{E}[\sum_{i=0}^{(k-1)\Delta-1}|S_{i}|\cdot O(m_{i}+n\log n)] \leq\sum_{i=0}^{(k-1)\Delta-1}\alpha^{i}\cdot O\big{(}2n^{2}/ \alpha^{i}+n\log n\big{)}\] \[=\sum_{i=0}^{(k-1)\Delta-1}\alpha^{i}\cdot O\big{(}2n^{2}/\alpha^ {i}\big{)}\] \[=O(n^{2}\cdot(k-1)\Delta)\] \[=O(n^{2}\log n).\] SparsitySimilar to in [11], we first bound the expected size of the bunches defined in Line 11. As mentioned earlier in Remark 4.1, here we rely on the property that the vertex samples \(S_{i}\) are uniform and independent. **Lemma 5.2**_For each \(i=r\cdot\Delta+t\) (where \(r\in\{0,\ldots,k-2\},t\in\{0,\ldots,\Delta-1\}\)) and each vertex \(u\in V\), we have_ \[\mathbb{E}[|B_{i}(u)|]=\alpha^{t+1}.\] Proof.: By definition of \(B_{i}\) at Line 11, since \(G_{i}\subseteq G_{r\Delta-1}\) and thus \(d_{G_{r\Delta-1}}(\cdot,\cdot)\leq d_{G_{i}}(\cdot,\cdot)\), we have \[|B_{i}(u)| =|\{s\in S_{i}:d_{G_{i}}(u\leftrightarrows)<d_{G_{r\Delta-1}}(u \leftrightarrows p_{r\Delta-1}(u))\}|\] \[\leq|\{s\in S_{i}:d_{G_{r\Delta-1}}(u\leftrightarrows)<d_{G_{r \Delta-1}}(u\leftrightarrows p_{r\Delta-1}(u))\}|.\] Sort all \(v\in V\) in increasing order of \(d_{G_{r\Delta-1}}(u\leftrightarrows)\). 
Then \(p_{r\Delta-1}(u)=\operatorname*{arg\,min}_{s\in S_{r\Delta-1}}d_{G_{r\Delta- 1}}(u\leftrightarrows)\) is the first vertex in this ordering that is included in \(S_{r\Delta-1}\), and \(|B_{i}(u)|\) is bounded by the number of vertices included in \(S_{i}\) that occur before \(p_{r\Delta-1}(u)\) in this ordering. Since \(S_{r\Delta-1}\) and \(S_{i}\) are sampled uniformly and independently (conditioned on this ordering determined by \(G_{r\Delta-1}\)), the expected number of vertices included by \(B_{i}(u)\) is at most \[\sum_{j=1}^{n}\frac{\alpha^{i}}{n}\cdot\Big{(}1-\frac{\alpha^{r\Delta-1}}{n} \Big{)}^{j}\leq\frac{\alpha^{i}/n}{\alpha^{r\Delta-1}/n}=\alpha^{t+1}.\qed\] As a direct corollary, we can bound the expected total bunch size. **Corollary 5.3** ().: \[\mathbb{E}\left[\sum_{i=0}^{(k-1)\Delta-1}\sum_{u\in V}|B_{i}(u)|\right]\leq O(kn ^{1+1/k}).\] Proof.: For each \(r\in\{0,\ldots,k-2\}\), by Section5.2 and linearity of expectation, we have \[\mathbb{E}\left[\sum_{t=0}^{\Delta-1}|B_{r\Delta+t}(u)|\right]=\sum_{t=0}^{ \log_{\alpha}n^{1/k}-1}\alpha^{t+1}=O(n^{1/k})\] for each \(u\in V\). Summing over all \(r\in\{0,\ldots,k-2\}\) and \(u\in V\), \[\mathbb{E}\left[\sum_{i=0}^{(k-1)\Delta-1}\sum_{u\in V}|B_{i}(u)|\right]=\sum _{u\in V}\sum_{r=0}^{k-2}\mathbb{E}\left[\sum_{t=0}^{\Delta-1}|B_{r\Delta+t}(u )|\right]\leq O(kn^{1+1/k}).\qed\] Now we can analyze the size of the emulator constructed by Section2. **Lemma 5.4** ().: _The emulator \(H\) returned by Section2 has expected size_ \[\mathbb{E}[|H|]\leq O(kn^{1+1/k}).\] Proof.: By Section5.3, the total number edges added at Section1 has expectation at most \[\sum_{i=0}^{(k-1)\Delta-1}\sum_{u\in V}2(|B_{i}(u)|+1)\leq O(kn^{1+1/k}).\] In the end at Section1, we add all the edges in \(G_{(k-1)\Delta}\) to \(H\). By Section5.1, we know that \[\mathbb{E}[m_{(k-1)\Delta}]\leq 2n^{2}/\alpha^{(k-1)\Delta}=2n^{2}/\alpha^{(k- 1)\log_{\alpha}n^{1/k}}=2n^{1+1/k}.\] Thus the expected size of \(H\) is \(O(kn^{1+1/k})\) as desired. ### Stretch analysis By construction, it is clear that \(d_{H}(u,v)\geq d_{G}(u,v)\) for all \(u,v\in V\). From now on we fix a pair of \(u,v\in V\) and consider the shortest cycle \(C\) of length \(g:=d_{G}(u\leftrightarrows v)\) containing the vertices \(u,v\). We will prove \(d_{H}(u\leftrightarrows v)\leq(2k-1)d_{G}(u\leftrightarrows v)\). If \(C\) is included in the final sparsified graph \(G_{(k-1)\Delta}\), then by Section1 we know \(C\) is included in the emulator \(H\) and thus \(d_{H}(u\leftrightarrows v)=d_{G}(u\leftrightarrows v)\). Hence, in the following we assume \(C\not\subseteq G_{(k-1)\Delta}\), and let \(0\leq i<(k-1)\Delta\) be the first iteration in which \(C\) is destroyed by the sparsification steps, that is, \(C\subseteq E(G_{i})\) but \(C\not\subseteq E(G_{i+1})\). We first prove the following Section5 (which is essentially from [2]), which shows that when \(C\) is destroyed in iteration \(i\), it can be \(2\)-approximated by a cycle going through some sampled vertex in iteration \(i\). **Lemma 5.5**: _Then there exists some \(s\in S_{i}\) such that_ \[d_{G_{i}}(v\leftrightarrows)\leq 2g,\text{ and }d_{G_{i}}(u\leftrightarrows)\leq 2g. \tag{5}\] Proof.: By definition of \(i\), \(d_{G_{i}}(u\leftrightarrows v)=g=d_{G}(u\leftrightarrows v)\). Let \((x,y)\in C\setminus E(G_{i+1})\) be an edge on the cycle that is removed. Assume without loss of generality that \((x,y)\) lies on the shortest path from \(u\) to \(v\) (otherwise, we can swap the roles of \(u\) and \(v\)). 
By Line 16, this means that there exists some \(s\in S_{i}\) where \[2d_{G_{i}}(x,s)+d_{G_{i}}(s,y)\leq 2\operatorname{wt}(x,y)+d_{G_{i}}(y,s), \tag{6}\] which implies the following estimate on the length of the shortest cycle going through \(u,s,v\) in \(G_{i}\): \[d_{G_{i}}(u,s)+d_{G_{i}}(s,v)+d_{G_{i}}(v,u)\] \[\leq d_{G_{i}}(u,x)+d_{G_{i}}(x,s)+d_{G_{i}}(s,y)+d_{G_{i}}(y,v)+d_ {G_{i}}(v,u)\] (triangle inequality) \[\leq d_{G_{i}}(u,x)+2\operatorname{wt}(x,y)+d_{G_{i}}(y,s)-d_{G_{i}}(x,s)+d_{G_{i}}(y,v)+d_{G_{i}}(v,u)\] (by Eq. ( 6 ) \[\leq d_{G_{i}}(u,x)+2\operatorname{wt}(x,y)+(d_{G_{i}}(y,v)+d_{G_ {i}}(v,u)+d_{G_{i}}(u,x)+d_{G_{i}}(x,s))\] \[\quad-d_{G_{i}}(x,s)+d_{G_{i}}(y,v)+d_{G_{i}}(v,u)\] (expanding \[d_{G_{i}}(y,s)\] using triangle inequality) \[= 2d_{G_{i}}(u,x)+2\operatorname{wt}(x,y)+2d_{G_{i}}(y,v)+2d_{G_ {i}}(v,u)\] \[= 2d_{G_{i}}(u,v)+2d_{G_{i}}(v,u)\] ( \[(x,y)\] lies on shortest path from \[u\] to \[v\] ) \[= 2g.\] Thus \[d_{G_{i}}(u\leftrightarrows)\leq d_{G_{i}}(u,s)+d_{G_{i}}(s,v)+d_{G_{i}}(v,u) \leq 2g\] and the same holds for \(d_{G_{i}}(v\leftrightarrows)\) as desired. By Lemma 5.5 and the definition of the pivots \(p_{i}(u):=\operatorname*{arg\,min}_{s\in S_{i}}d_{G_{i}}(u\leftrightarrows s ),p_{i}(v):=\operatorname*{arg\,min}_{s\in S_{i}}d_{G_{i}}(v\leftrightarrows)\) (Line 10), we have \[d_{G_{i}}(u\leftrightarrows p_{i}(u))\leq 2g,\text{ and }d_{G_{i}}(v \leftrightarrows p_{i}(v))\leq 2g. \tag{7}\] We first consider the case when both \(s\in B_{i}(u)\) and \(s\in B_{i}(v)\) hold (where \(s\) is defined in Lemma 5.5). In this case, we have \[d_{H}(u\leftrightarrows v) \leq d_{H}(u\leftrightarrows)+d_{H}(s\leftrightarrows v)\] \[\leq d_{G_{i}}(u\leftrightarrows)+d_{G_{i}}(s\leftrightarrows v)\] (by Line 13 ) \[\leq 4g\] (by Eq. ( 5 ) \[\leq(2k-1)g\] (since \[k\geq 3\] ) as desired. Hence it remains to consider the case when either \(s\notin B_{i}(u)\) or \(s\notin B_{i}(v)\). In the following we only consider \(s\notin B_{i}(v)\), and the other case where \(s\notin B_{i}(u)\) follows from an analogous argument. By definition of bunches at Line 11, \(s\notin B_{i}(v)\) implies \[d_{G_{i}}(v\leftrightarrows)\geq d_{G_{r\Delta-1}}(v\leftrightarrows p_{r \Delta-1}(v)), \tag{8}\] where \(i=r\Delta+t\) (\(r\in\{0,\ldots,k-2\},t\in\{0,\ldots,\Delta-1\}\)). Now we use an induction similar to [13]. **Lemma 5.6** _Suppose integer \(J\geq 0\) satisfies_ * \(p_{(r-j)\Delta-1}(v)\notin B_{(r-j)\Delta-1}(u)\) _for all even_ \(0\leq j<J\)_, and_ * \(p_{(r-j)\Delta-1}(u)\notin B_{(r-j)\Delta-1}(v)\) _for all odd_ \(0\leq j<J\)_._ _Then,_ * _If_ \(J\) _is even, then_ \[d_{(r-J)\Delta-1}(v\leftrightarroweq p_{(r-J)\Delta-1}(v))\leq(J+2)g.\] * _If_ \(J\) _is odd, then_ \[d_{(r-J)\Delta-1}(u\leftrightarroweq p_{(r-J)\Delta-1}(u))\leq(J+2)g.\] Proof.: We prove by induction on \(J\). The base case \(J=0\) follows from \[d_{G_{r\Delta-1}}(v\leftrightarroweq p_{r\Delta-1}(v)) \leq d_{G_{i}}(v\leftrightarroweq s)\] (by _ _ _(_ _ Then, \[d_{H}(u\leftrightarrow v) \leq d_{H}(u\leftrightarrow p_{(r-J)\Delta-1}(u))+d_{H}(p_{(r-J) \Delta-1}(u)\leftrightarrow v)\] (by Line 13) \[\leq d_{G_{(r-J)\Delta-1}}(u\leftrightarrow p_{(r-J)\Delta-1}(u))+d _{G_{(r-J)\Delta-1}}(p_{(r-J)\Delta-1}(u)\leftrightarrow v)\] (by Line 13 ) \[\leq 2d_{G_{(r-J)\Delta-1}}(u\leftrightarrow p_{(r-J)\Delta-1}(u))+ d_{G_{(r-J)\Delta-1}}(u\leftrightarrow v)\] (by Line 13 and Eq. (10)) \[\leq 2(J+2)g+d_{G_{(r-J)\Delta-1}}(u\leftrightarrow v)\] (triangle inequality ) \[=(2J+5)g.\] (by Eq. 
(11)) **Lemma 5.8** ().: \(d_{H}(u\leftrightarrow v)\leq(2k-1)g\)_._ Proof.: By Line 4 and Line 11, we know \(B_{\Delta-1}(u)=B_{\Delta-1}(v)=S_{\Delta-1}\). In particular, this means \(J\) cannot satisfy the assumption of Lemma 5.6 if \(J\geq r\). Hence, the maximum \(J\) that could possibly satisfy the assumption of Lemma 5.6 is at most \(r-1\leq k-3\). Then, by Lemma 5.7, we have \(d_{H}(u\leftrightarrow v)\leq(2J+5)g\leq(2(k-3)+5)g=(2k-1)g\). ## 6 \(4\)-Approximation of girth in \(\tilde{O}(mn^{1/3})\) time In this section, we present our algorithm for computing a \(4\)-approximation of the girth in a weighted directed graph (Theorem 1.3). In general, we follow the approach of Chechik and Lifshitz [2] which uses uniformly random vertex samples and certain elimination rules to prune the search space for each vertex \(v\in V\). Our running time improvement comes from extending the framework of [2] by one more layer, using several novel structural and algorithmic ideas. Throughout this section, \(d(u,v)\) always means \(d_{G}(u,v)\), where \(G=(V,E)\) is the input directed graph. ### Main Algorithm By Lemma 2.1, we assume each vertex in \(G\) has degree at most \(O(m/n)\). Before describing our algorithm in detail, we first give a high-level overview of the structure of our algorithm. Our algorithm runs in three phases: 1. **Phase I.** Take a random sample \(S_{1}\subseteq V\) of \(O(n^{1/3})\) vertices. For each \(s_{1}\in S_{1}\) use Dijkstra to find the shortest cycle going through \(s_{1}\). 2. **Phase II.** Take a sample \(S_{2}\subseteq V\) of \(O(n^{2/3})\) vertices. Based on the distance information from \(S_{1}\) obtained in Phase I, for every \(s_{2}\in S_{2}\) we use the elimination rule from [2] (Lemma 3.1) to compute the pruned sets \(B_{\text{out}}^{(2)}(s_{2}),B_{\text{in}}^{(2)}(s_{2})\subseteq V\) of size \(\tilde{O}(n^{2/3})\). For each \(s_{2}\in S_{2}\) use Dijkstra to find the shortest cycle going through \(s_{2}\) and some \(u\in B_{\text{out}}^{(2)}(s_{2})\cap B_{\text{in}}^{(2)}(s_{2})\). 3. **Phase III.** Based on the distance information obtained from Phase I and II, use our novel elimination rules (Definition 6.17 and Definition 6.20, which are more technical than [10]) to compute for every vertex \(v\in V\) a pruned set \(\tilde{B}^{\prime}(v)\subseteq V\) of size \(\tilde{O}(n^{1/3})\). For each \(v\in V\) use Dijkstra to find the shortest cycle going through \(v\) in the induced subgraph \(G[\tilde{B}^{\prime}(v)]\). Finally output the length of the shortest cycle encountered in the three phases as the girth estimate. We present our main algorithm in Algorithm 3 as follows. It follows the three-phase structure described above (indicated by the comments), but involves more definitions and subroutines that will be explained in the following sections. The main statements for the correctness and running time of Algorithm 3 will be given in Theorem 6.24 and Theorem 6.26. 
``` Input: A strongly connected directed graph \(G=(V,E)\) with maximum degree \(O(m/n)\) Output: An estimate \(g^{\prime}\) such that \(g\leq g^{\prime}\leq 4g\), where \(g\) is the girth of \(G\) 1 Initialize \(g^{\prime}\leftarrow\infty\) // Phase I 2 Sample \(S_{1}\subseteq V\) of size \(|S_{1}|=O(n^{1/3})\) 3for\(s_{1}\in S_{1}\)do 4 From \(s_{1}\) run in- and out-Dijkstra on \(G\) 5\(g^{\prime}\leftarrow\min_{u\in V\setminus\{s_{1}\}}d(s_{1}\mathrel{\mathrel{ \mathrel{\mathrel{\mathrel{\mathrel{\mathrel{\mathrel{\mathrel{\mathrel{ \mathrel{\mathrel{\mathrel{\mathrel{\mathrel{\mathrel{\mathrel{\mathrel{\mathrel{ \mathrel{ \mathrel{ \mathrel{ }}}}}}{}{}}{{{{{{{{{{{{{{{{{{ }}}}}}}}}}{{{{{{{{{{{{ } } \mid}{{{{\mid} \mid}\mid}\mid}\mathord{{{{{\mid}\ piece missing from [10] but necessary for us is a certain closedness property of the pruned sets \(B^{(2)}_{\mathrm{out}}(v)\), which allows us to find all vertices in \(B^{(2)}_{\mathrm{out}}(v)\) by simply running Dijkstra from \(v\) (Lemma 6.6).2 Footnote 2: We need to compute these pruned sets \(B^{(2)}_{\mathrm{out}}(v)\) in order to prepare for the later Phase III, which was not required in [10]’s two-phase algorithm. Phase I (Line 2-Line 5) uniformly samples a set \(S_{1}\) of \(O(n^{1/3})\) vertices, and runs \(O(n^{1/3})\) Dijkstra instances on \(G\) to find the shortest cycle going through any vertex in \(S_{1}\). **Observation 6.1**: _Phase I of Algorithm 3 runs in \(\tilde{O}(mn^{1/3})\) total time._ In Phase II we try to find other short cycles in \(G\) that are not \(2\)-approximated by the estimate obtained in Phase I. The first step (Line 6) computes eliminators \(R_{1,\mathrm{out}}(v),R_{1,\mathrm{in}}(v)\subseteq S_{1}\) of small size \(|R_{1,\mathrm{out}}(v)|,|R_{1,\mathrm{in}}(v)|\leq O(\log n)\) for all \(v\in V\). Intuitively these eliminators retain the usefulness of the sample \(S_{1}\) in effectively pruning the search space, while being small enough for the benefit of time efficiency. We defer the algorithm for computing eliminators (Algorithm 4) to the end of this subsection; instead we first present the following important definition that relies on these eliminators \(R_{1,\mathrm{out}}(v),R_{1,\mathrm{in}}(v)\subseteq S_{1}\). **Definition 6.2** (\(B^{(2)}_{\mathrm{out}}(v)\) and \(B^{(2)}_{\mathrm{in}}(v)\), [10]): For \(v\in V\), given \(R_{1,\mathrm{out}}(v),R_{1,\mathrm{in}}(v)\subseteq V\), we define vertex subsets \[B^{(2)}_{\mathrm{out}}(v)=\{u\in V:2d(v,r_{1})+d(r_{1},u)>2d(v,u)+d(u,r_{1})\text { for all }r_{1}\in R_{1,\mathrm{out}}(v)\},\] and symmetrically, \[B^{(2)}_{\mathrm{in}}(v)=\{u\in V:2d(r_{1},v)+d(u,r_{1})>2d(u,v)+d(r_{1},u) \text{ for all }r_{1}\in R_{1,\mathrm{in}}(v)\}.\] Definition 6.2 is motivated by the following lemma, which follows from the key observation (Lemma 3.1) of [10]. Intuitively it says \(B^{(2)}_{\mathrm{out}}(\cdot)\) captures cycles that cannot be \(2\)-approximated by the estimate in phase I. 3 Footnote 3: We use superscript (2) in the notation of \(B^{(2)}_{\mathrm{out}}(v)\) for this reason, to distinguish it from the set \(B^{(4)}_{\mathrm{out}}(v)\) that will be introduced later in Section 6.3. 
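As a small aid for parsing Definition 6.2 (a sketch of ours, not part of the paper), membership of a vertex \(u\) in \(B^{(2)}_{\mathrm{out}}(v)\) simply says that no eliminator certifies the condition of Lemma 3.1. Note that the algorithm never evaluates this test by precomputing all of \(d(v,\cdot)\); instead it enumerates the set with a modified Dijkstra described later in this section.

```python
def in_B2_out(v, u, R1_out, d):
    """Membership test for u in B^(2)_out(v), following Definition 6.2 (sketch).
    R1_out : the eliminator set R_{1,out}(v), a small subset of S_1
    d      : dict with the relevant pairwise distances d[(x, y)] in G
    u is kept iff the strict inequality holds for EVERY eliminator r1."""
    return all(
        2 * d[(v, r1)] + d[(r1, u)] > 2 * d[(v, u)] + d[(u, r1)]
        for r1 in R1_out
    )
```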
**Lemma 6.3** (\(2\)-approximation [10]): _If \(u\notin B^{(2)}_{\mathrm{out}}(v)\), then there exists \(r_{1}\in R_{1,\mathrm{out}}(v)\subseteq S_{1}\) such that \(d(r_{1}\leftrightharpoons u)\leq 2d(u\leftrightharpoons v)\)._ _The same statement holds if we replace "\(\mathrm{out}\)" by "\(\mathrm{in}\)"._ Proof.: By Definition 6.2, since \(u\notin B^{(2)}_{\mathrm{out}}(v)\), there exists \(r_{1}\in R_{1,\mathrm{out}}(v)\) such that \[2d(v,r_{1})+d(r_{1},u)\leq 2d(v,u)+d(u,r_{1}).\] Then applying Lemma 3.1 to \(u,v,r_{1}\), we have \(d(r_{1}\leftrightharpoons u)\leq 2d(u\leftrightharpoons v)\). The statement with "\(\mathrm{out}\)" replaced by "\(\mathrm{in}\)" can be proved symmetrically by reversing the edge directions. The following corollary of Lemma 6.3 shows that cycles passing through some \(s_{2}\in S_{2}\) are \(2\)-approximated by Phase I and II of Algorithm 3. This is essentially how [10] obtained their \(2\)-approximation. **Corollary 6.4** ([21]): _Let \(s_{2}\in S_{2}\) and \(C\) be the shortest cycle in \(G\) going through \(s_{2}\). Then, the girth estimate \(g^{\prime}\) obtained by the end of Phase II of Algorithm 3 satisfies \(g^{\prime}\leq 2g\), where \(g\) denotes the length of \(C\)._ Proof.: We can assume \(S_{1}\cap C=\varnothing\), since otherwise the Phase I of Algorithm 3 can find \(C\) and hence \(g^{\prime}\leq g\). If \(C\not\subseteq B_{\mathrm{out}}^{(2)}(s_{2})\), let \(u\in C\setminus B_{\mathrm{out}}^{(2)}(s_{2})\). Then by Lemma 6.3 there exists \(r_{1}\in S_{1}\) such that \(d(r_{1}\Leftarrow u)\leq 2d(u\Leftarrow s_{2})=2g\), so Phase I of Algorithm 3 will update \(g^{\prime}\) with \(d(r_{1}\Leftarrow u)\leq 2g\) (since \(r_{1}\in S_{1}\) and \(u\neq r_{1}\)). Similarly, if \(C\not\subseteq B_{\mathrm{in}}^{(2)}(s_{2})\), we also have \(g^{\prime}\leq 2g\). The remaining case is \(C\subseteq B_{\mathrm{out}}^{(2)}(s_{2})\cap B_{\mathrm{in}}^{(2)}(s_{2})\). Then, Line 10 in Phase II of Algorithm 3 updates \(g^{\prime}\) with \(g\). The following key lemma (which will be proved later) states that the sets \(B_{\mathrm{in}}^{(2)}(v),B_{\mathrm{out}}^{(2)}(v)\) defined using \(R_{1,\mathrm{in}}(v)\), \(R_{1,\mathrm{out}}(v)\) returned by compute-eliminators-\(1(G,S_{1})\) (Algorithm 4) have small sizes. Intuitively, this is due to the symmetry of the elimination rule in Definition 6.2 and the sample size being \(|S_{1}|=O(n^{1/3})\). **Lemma 6.5** (Sizes of \(B_{\mathrm{in}}^{(2)}(v),B_{\mathrm{out}}^{(2)}(v)\), [21]): _With high probability4 over the random sample \(S_{1}\subseteq V\), we have \(|B_{\mathrm{in}}^{(2)}(v)|,|B_{\mathrm{out}}^{(2)}(v)|\leq\tilde{O}(n^{2/3})\) for all \(v\in V\)._ Footnote 4: We use “with high probability” to mean probability \(1-1/n^{c}\) for arbitrary given constant \(c\geq 1\). Then, the next step of Phase II is to uniformly sample a set \(S_{2}\) of \(O(n^{2/3})\) vertices (Line 7). We then run out-Dijkstra from every \(s_{2}\in S_{2}\) on the induced subgraph \(G[B_{\mathrm{out}}^{(2)}(s_{2})]\), and update the girth estimate \(g^{\prime}\) with the found cycles going through \(s_{2}\) (Line 8-Line 10). In order to implement the out-Dijkstra on \(G[B_{\mathrm{out}}^{(2)}(s_{2})]\) at Line 9, we need the following Lemma 6.6 which states that the set \(B_{\mathrm{out}}^{(2)}(s_{2})\) (as well as distances \(d(s_{2},u)\) for all \(u\in B_{\mathrm{out}}^{(2)}(s_{2})\)) can be efficiently computed given the eliminators \(R_{1,\mathrm{out}}(v)\) due to its special structure. 
Recall that \(d(\cdot,\cdot)\) always denotes distances in the input graph \(G\). **Lemma 6.6** (Compute \(B_{\mathrm{out}}^{(2)}(v)\)): _For any vertex \(v\in V\), given \(R_{1,\mathrm{out}}(v)\) of size \(O(\log n)\), there exists an algorithm running in \(\tilde{O}(\frac{m}{n}\cdot|B_{\mathrm{out}}^{(2)}(v)|)\) time that computes the set \(B_{\mathrm{out}}^{(2)}(v)\), and the distances \(d(v,u)\) for all \(u\in B_{\mathrm{out}}^{(2)}(v)\)._ _The same statement holds if we replace "\(\mathrm{out}\)" by " \(\mathrm{in}\)" and replace \(d(v,u)\) by \(d(u,v)\)._ Proof.: We run a modified out-Dijkstra from \(v\) on graph \(G\), and let \(D[u]\) denote the length of the shortest path from \(v\) to \(u\) found by this out-Dijkstra. The modification is that whenever we pop a vertex \(u\) from the heap, we relax the out-neighbors of \(u\) only if \(u\) satisfies \[2d(v,r_{1})+d(r_{1},u)>2D[u]+d(u,r_{1}),\text{ for all }r_{1}\in R_{1,\mathrm{ out}}(v). \tag{12}\] Comparing Eq. (12) with the definition of \(B_{\mathrm{out}}^{(2)}\) (Definition 6.2), the difference is that we use \(D[u]\) in place of \(d(v,u)\). Note that the other three terms in Eq. (12) are already computed in Phase I because \(r_{1}\in S_{1}\). To show the correctness of the modified out-Dijkstra, the key claim is the following closedness property of \(B_{\mathrm{out}}^{(2)}(v)\): **Claim 6.7** If \(u\in B^{(2)}_{\mathrm{out}}(v)\), then for every vertex \(x\) on the shortest path from \(v\) to \(u\) in \(G\), it holds that \(x\in B^{(2)}_{\mathrm{out}}(v)\). Proof.: For all \(r_{1}\in R_{1,\mathrm{out}}(v)\), we have \[2d(v,r_{1})+d(r_{1},x) \geq 2d(v,r_{1})+d(r_{1},u)-d(x,u)\] (by triangle inequality) \[>2d(v,u)+d(u,r_{1})-d(x,u)\] (by \[u\in B^{(2)}_{\mathrm{out}}(v)\] ) \[=2d(v,x)+2d(x,u)+d(u,r_{1})-d(x,u)\] (by assumption on \[x\] ) \[\geq 2d(v,x)+d(x,r_{1}).\] (by triangle inequality) Hence, we have \(x\in B^{(2)}_{\mathrm{out}}(v)\) by definition. By Claim 6.7, it is clear that our modified out-Dijkstra visits exactly all the vertices \(u\in B^{(2)}_{\mathrm{out}}(v)\), and correctly computes distances \(D[u]=d(v,u)\) for all \(u\in B^{(2)}_{\mathrm{out}}(v)\). Since \(|R_{1,\mathrm{out}}(v)|=O(\log n)\), checking the condition 12 for all \(r_{1}\in R_{1,\mathrm{out}}(v)\) only takes \(O(\log n)\) time per vertex \(u\in V\). By our assumption that the degree of every vertex is at most \(O(\frac{m}{n})\), it follows that the modified Dijkstra runs in time \(\tilde{O}(\frac{m}{n}\cdot|B^{(2)}_{\mathrm{out}}(v)|)\). Hence, we observe the following corollary: **Corollary 6.8** ().: _Line 8-Line 10 of Algorithm 3 take total time \(\tilde{O}(mn^{1/3})\)._ Proof.: By 6, the modified out-Dijkstra from all \(s_{2}\in S_{2}\) takes total time \[\tilde{O}(\frac{m}{n}\sum_{s_{2}\in S_{2}}|B^{(2)}_{\mathrm{out}}(s_{2})|) \leq\tilde{O}(\frac{m}{n}\cdot|S_{2}|\cdot n^{2/3})\leq\tilde{O}(mn^{1/3}),\] where we used \(|B^{(2)}_{\mathrm{out}}(s_{2})|\leq\tilde{O}(n^{2/3})\) from 6.5. The update step at 10 takes \(O(m/n)\cdot|B^{(2)}_{\mathrm{out}}(s_{2})|\) time for each \(s_{2}\in S_{2}\), which also sums up to \(\tilde{O}(mn^{1/3})\). Computing eliminators.Finally, we describe how to compute the eliminators \(R_{1,\mathrm{out}}(v),R_{1,\mathrm{in}}(v)\subseteq S_{1}\) (12 of Algorithm 3). This subroutine is basically the same as in [11], but we present it here using our notation for completeness. 
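Before turning to the pseudocode of compute-eliminators-\(1\), here is a minimal Python sketch of the modified out-Dijkstra from the proof of Lemma 6.6. The adjacency-list layout, the dictionary `d1` of Phase I distances involving vertices of \(S_{1}\), and all function names are assumptions made for illustration; the only essential point is that a popped vertex relaxes its out-neighbors only if it passes condition (12).

```python
import heapq

def pruned_out_dijkstra(adj, v, R1_out_v, d1):
    """Sketch of the modified out-Dijkstra from Lemma 6.6.

    adj[u]     : list of (w, weight) out-edges of u
    R1_out_v   : eliminator set R_{1,out}(v), of size O(log n)
    d1[(x, y)] : distances d(x, y) known from Phase I (pairs involving S_1)
    Returns a dict mapping each u in B^(2)_out(v) to d(v, u).
    """
    D = {v: 0.0}
    heap = [(0.0, v)]
    done = set()
    B2_out = {}
    while heap:
        dist, u = heapq.heappop(heap)
        if u in done:
            continue
        done.add(u)
        # Condition (12): the tentative distance D[u] stands in for d(v, u);
        # the other three terms are Phase I distances since r1 lies in S_1.
        if u != v and not all(
            2 * d1[(v, r1)] + d1[(r1, u)] > 2 * dist + d1[(u, r1)]
            for r1 in R1_out_v
        ):
            continue  # u is eliminated: its out-neighbors are not relaxed
        B2_out[u] = dist
        for w, wt in adj[u]:
            nd = dist + wt
            if nd < D.get(w, float("inf")):
                D[w] = nd
                heapq.heappush(heap, (nd, w))
    return B2_out
```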
See the pseudocode of compute-eliminators-\(1(G,S_{1})\) in Algorithm 4, which takes the uniform vertex sample \(S_{1}\subseteq V\), and returns \(R_{1,\mathrm{out}}(v)\subseteq S_{1}\) for all \(v\). The algorithm for computing \(R_{1,\mathrm{in}}(v)\) is analogous: we simply run Algorithm 4 on the graph obtained by reversing the edge orientations of \(G\), and we omit the detailed descriptions here. ``` Input: The input graph \(G=(V,E)\), and \(S_{1}=\{s_{1},s_{2},\ldots,s_{|S_{1}|}\}\subseteq V\) of size \(|S_{1}|=O(n^{1/3})\) sampled uniformly and independently (with replacement) Output: Sets \(R_{1,\mathrm{out}}(v)\subseteq S_{1}\) of size \(O(\log n)\) for every vertex \(v\in V\) 1\(T^{(0)}(v),R^{(0)}(v)\leftarrow\varnothing\) for every \(v\in V\) 2for\(i\in\{1,\ldots,k\}\) where \(k=10\log n\)do 3\(S^{(i)}\leftarrow\) the next \(10n^{1/3}/\log n\) samples from \(S_{1}\) 4 Run in- and out-Dijkstra from every \(s\in S^{(i)}\) on \(G\) 5for\(v\in V\)do 6\(T^{(i)}(v)\leftarrow\{s\in S^{(i)}\mid\forall t\in R^{(i-1)}(v),2d(v,s)+d(s,t) <2d(v,t)+d(t,s)\}\) 7if\(T^{(i)}(v)\neq\varnothing\)then 8\(t\leftarrow\) a random vertex \(t\in T^{(i)}(v)\) 9\(R^{(i)}(v)\gets R^{(i-1)}(v)\cup\{t\}\) 10else 11\(R^{(i)}(v)\gets R^{(i-1)}(v)\) 12 13return\(R_{1,\mathrm{out}}(v)\gets R^{(k)}(v)\) for each \(v\in V\) ``` **Algorithm 4**compute-eliminators-\(1(G,S_{1})\) Algorithm 4 runs in \(k=10\log n\) iterations. In each iteration, it takes \(10n^{1/3}/\log n\) fresh vertex samples (from \(S_{1}\)), and runs Dijkstra from them on \(G\). Then, based on the obtained distance information, it possibly adds one sampled vertex \(t\) to each \(R_{1,\mathrm{out}}(v)\). By inspecting Algorithm 4, one immediately observes the following properties. **Observation 6.9**: _Algorithm 4 runs in time \(\tilde{O}(mn^{1/3})\), and outputs sets \(R_{1,\mathrm{out}}(v)\subseteq S_{1}\) for all \(v\in V\) of size \(|R_{1,\mathrm{out}}(v)|=O(\log n)\)._ Proof.: First note that the total number of vertex samples required at Line 3 is \(|S^{(1)}\uplus\cdots\uplus S^{(k)}|=k\cdot 10n^{1/3}/\log n=100n^{1/3}\leq|S_{1}|\). In each iteration \(1\leq i\leq k\), the algorithm only adds at most one sampled vertex \(t\in S_{1}\) to the set \(R^{(i)}(v)\) for each \(v\in V\) (Line 7-Line 11), so each output set \(R_{1,\mathrm{out}}(v)=R^{(k)}(v)\subseteq S_{1}\) and has size \(|R_{1,\mathrm{out}}(v)|\leq k\leq O(\log n)\). In each iteration, the Dijkstra instances at Line 4 take time \(|S^{(i)}|\cdot O(m+n\log n)\leq\tilde{O}(mn^{1/3})\). Then, to compute \(T^{(i)}(v)\subseteq S^{(i)}\) at Line 6 for each \(v\in V\), we check for every \(s\in S^{(i)}\) whether \(s\in T^{(i)}(v)\), by simply going over all \(t\in R^{(i-1)}(v)\) and checking the condition \(2d(v,s)+d(s,t)<2d(v,t)+d(t,s)\). Note that all four terms in this inequality have already been computed by the in- and out-Dijkstras since \(s\in S^{(i)}\) and \(t\in S^{(1)}\cup\cdots\cup S^{(i-1)}\). So \(T^{(i)}(v)\) can be computed in time \(O(|S^{(i)}|\cdot|R^{(i-1)}(v)|)=O((n^{1/3}/\log n)\cdot\log n)=O(n^{1/3})\) for each \(v\in V\). Thus each iteration runs in time \(O(mn^{1/3})\) time and over all \(k=O(\log n)\) iterations, Algorithm 4 runs in total time \(\tilde{O}(mn^{1/3})\). Now we prove the key Lemma 6.5, which states that Algorithm 4 guarantees \(B^{(2)}_{\mathrm{out}}(v)\) and \(B^{(2)}_{\mathrm{in}}(v)\) to have small size with high probability. Proof of Lemma 6.5.: The proof more or less follows from Section 6 in [21]. 
For purpose of the proof, we define the sets \[B_{i}(v)=\{u\in V\mid 2d(v,u)+d(u,r)<2d(v,r)+d(r,u)\,\forall r\in R^{(i)}(v)\}.\] Then note that by definition \(B_{\text{out}}^{(2)}(v)=\{u\in V\mid 2d(v,u)+d(u,r)<2d(v,r)+d(r,u)\,\forall r\in R_{1, \text{out}}(v)\}=B_{k}(v)\). We want to show that \[\Pr\left[|B_{k}(v)|>n^{2/3}\log n\right]\leq\frac{1}{n^{2}}.\] We first show that if \(|B_{i}(v)|>n^{2/3}\log n\), then \[\mathbb{E}\left[|B_{i}(v)|\,\Big{|}\,\big{|}B_{i-1}(v)\big{|}\right]\leq\frac {3}{4}|B_{i-1}(v)|.\] By symmetry 5 of the condition \(2d(v,u)+d(u,s)<2d(v,s)+d(s,u)\) with respect to \(u\in B_{i-1}(v)\) and \(s\in S^{(i)}\cap B_{i}(v)\), for any pair of vertices \(u,u^{\prime}\in B_{i-1}(v)\), either \(u\) can eliminate \(u^{\prime}\) or \(u^{\prime}\) can eliminate \(u\). Thus given a random \(s\in S^{(i)}\cap B_{i}(v)\), on expectation \(s\) can eliminate half of the vertices in \(B_{i-1}(v)\). So conditioned on the event that \(S^{(i)}\cap B_{i}(v)\neq\varnothing\), we have the expected size of \(B_{i}(v)\) is at most half the size of \(B_{i-1}(v)\). Specifically we have Footnote 5: For more details, refer to the proof of Lemma 3.3 in [1] \[\mathbb{E}\left[|B_{i}(v)|\,\Big{|}\,S^{(i)}\cap B_{i}(v)\neq\varnothing\right] \leq\frac{1}{2}|B_{i-1}(v)|.\] Now since \(|S^{(i)}|=10n^{1/3}/\log n\) is a uniform random sample, we can compute \(\Pr[S^{(i)}\cap B_{i}(v)=\varnothing]\) as \[\Pr\left[S^{(i)}\cap B_{i}(v)=\varnothing\right] =\left(1-\frac{|B_{i}(v)}{n}\right)^{10n^{1/3}/\log n}\approx \exp\left(-\frac{|B_{i}(v)|\cdot 10n^{1/3}}{n\log n}\right)\] \[\leq\left(\frac{1}{4}\right)^{\frac{|B_{i}(v)|}{n^{2/3}\log n}} \leq\frac{1}{4}.\] Thus we have \[\mathbb{E}\left[|B_{i}(v)|\,\Big{|}\,|B_{i-1}(v)|\right] =\mathbb{E}\left[|B_{i}(v)|\,\Big{|}\,|B_{i-1}(v)|,S^{(i)}\cap B_ {i}(v)\neq\varnothing\right]\cdot\Pr\left[S^{(i)}\cap B_{i}(v)\neq\varnothing\right]\] \[\quad+\mathbb{E}\left[|B_{i}(v)|\,\Big{|}\,|B_{i-1}(v)|,S^{(i)} \cap B_{i}(v)=\varnothing\right]\cdot\Pr\left[S^{(i)}\cap B_{i}(v)=\varnothing\right]\] \[\leq\frac{1}{2}|B_{i-1}(v)|+\frac{1}{4}|B_{i-1}(v)|=\frac{3}{4}|B _{i-1}(v)|\] as desired. Now we can easily finish the proof by applying Markov's inequality. \[\Pr\left[|B_{k}(v)|>n^{2/3}\log n\right]\leq\frac{\mathbb{E}[|B_{k}(v)|]}{n^{2 /3}\log n}\leq\frac{\left(\frac{3}{4}\right)^{k}n}{\left(n^{2/3}\log n\right)} \leq\left(\frac{3}{4}\right)^{k}n^{1/3}\leq\frac{1}{n^{2}}.\] **Proposition 6.10**: _Phase II of Algorithm 3 runs in \(\tilde{O}(mn^{1/3})\) total time._ Proof.: Follows from Observation 6.9 and Corollary 6.8. ### New lemmas for \(4\)-approximation In this section we describe our new structural lemmas that are useful for \(4\)-approximation. We start with the following Lemma6.11, which naturally extends the \(2\)-approximation lemma (Lemma6.3) for \(B_{\mathrm{in}}^{(2)}(v)\) from one layer to two layers by exploiting the second sample set \(S_{2}\). **Lemma 6.11**: _Let \(u,v\in V\)\((u\neq v)\) and \(r_{2}\in S_{2}\). 
Suppose_ \[2d(r_{2},v)+d(u,r_{2})\leq 2d(u,v)+d(r_{2},u).\] _Then, the girth estimate \(g^{\prime}\) obtained by the end of Phase II of Algorithm3 satisfies \(g^{\prime}\leq 4d(u\leftrightarrows v)\)._ Proof.: Apply Lemma3.1 (_with edge direction reversed_) to \(u,v,r_{2}\), and obtain \[d(r_{2}\leftrightarrows u)\leq 2d(u\leftrightarrows v).\] If \(u\notin B_{\mathrm{out}}^{(2)}(r_{2})\), then by Lemma6.3 there exists \(r_{1}\in R_{1,\mathrm{out}}(r_{2})\subseteq S_{1}\) such that \[d(r_{1}\leftrightarrows u)\leq 2d(u\leftrightarrows r_{2})\leq 4d(u \leftrightarrows v).\] This implies \(g^{\prime}\leq 4d(u\leftrightarrows v)\) due to the update at Line5 in Phase I of Algorithm3 for \(r_{1}\in S_{1}\).6 Footnote 6: This argument requires \(r_{1}\neq u\). This can be ensured by assuming \(u\notin S_{1}\) without loss of generality: if \(u\in S_{1}\), then Phase I of Algorithm3 will update \(g^{\prime}\) using \(d(u\leftrightarrows v)\). Similarly, if \(u\notin B_{\mathrm{in}}^{(2)}(r_{2})\), then we also have \(g^{\prime}\leq 4d(u\leftrightarrows v)\). It remains to consider the case where \(u\in B_{\mathrm{in}}^{(2)}(r_{2})\cap B_{\mathrm{out}}^{(2)}(r_{2})\). In this case, Line10 of Algorithm3 updates \(g^{\prime}\) with \(d(r_{2}\leftrightarrows u)\leq 2d(u\leftrightarrows v)\) (here we need to assume \(u\neq r_{2}\); the \(u=r_{2}\) case is already covered by Corollary6.4). Hence, we always have \(g^{\prime}\leq 4d(u\leftrightarrows v)\). In light of Lemma6.11, a natural attempt for a \(4\)-approximation algorithm is to imimate Phase II and focus on for each \(v\in V\) the pruned vertex set \(\{u\in V:2d(r_{2},v)+d(u,r_{2})>2d(u,v)+d(r_{2},u)\text{ for all }r_{2}\in R(v)\}\) for some suitably defined \(R(v)\subseteq S_{2}\). As mentioned in the technical overview, this attempt would require distance information for all \(r_{2}\in S_{2}\), which is infeasible to compute efficiently enough due to the large size \(|S_{2}|=O(n^{2/3})\). Thus, we need to use more structural lemmas for our algorithm, described as follows. First, we generalize the key observation (Lemma3.1) of [10] to the following Lemma6.12. Note that Lemma3.1 corresponds to the \(k=2\) case of Lemma6.12. See Fig.1 (the same figure as Lemma3.1) for an illustration. **Lemma 6.12** (Generalized key observation): _For any \(k\geq 1\) and vertices \(u,v,r\), if_ \[k\cdot d(v,r)+d(r,u)\leq k\cdot d(v,u)+(k-1)\cdot d(u,r),\] _then_ \[d(r\leftrightarrows u)\leq k\cdot d(u\leftrightarrows v).\] Proof.: Note that by triangle inequality, we have \(d(u,v)\geq d(u,r)-d(v,r)\), so \[k\cdot d(v,u)+k\cdot d(u,v) \geq k\cdot d(v,u)+k\cdot d(u,r)-k\cdot d(v,r)\] \[\geq\left(k\cdot d(v,r)+d(r,u)-(k-1)\cdot d(u,r)\right)+k\cdot d (u,r)-k\cdot d(v,r)\] \[=d(r,u)+d(u,r).\qed\] Lemma 6.12 inspires the following definition of \(B^{(4)}_{\mathrm{out}}(v)\) and a \(4\)-approximation lemma (Corollary 6.14), which are analogous to \(B^{(2)}_{\mathrm{out}}(v)\) (Definition 6.2) and the \(2\)-approximation lemma (Lemma 6.3). 
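Before turning to that definition, Lemma 6.12 can be spot-checked numerically. The sketch below (purely illustrative, not part of the paper) builds a random complete digraph, computes all-pairs shortest-path distances, and verifies the implication for random triples and several values of \(k\).

```python
import random

def random_apsp(n, seed=0):
    """All-pairs shortest-path distances of a random complete digraph."""
    rng = random.Random(seed)
    d = [[0.0 if i == j else rng.uniform(1.0, 10.0) for j in range(n)]
         for i in range(n)]
    for k in range(n):  # Floyd-Warshall
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

def check_lemma_6_12(trials=20000, n=12):
    d = random_apsp(n)
    rng = random.Random(1)
    for _ in range(trials):
        u, v, r = rng.sample(range(n), 3)
        k = rng.randint(1, 6)
        if k * d[v][r] + d[r][u] <= k * d[v][u] + (k - 1) * d[u][r]:
            # conclusion of Lemma 6.12: d(r <-> u) <= k * d(u <-> v)
            assert d[r][u] + d[u][r] <= k * (d[u][v] + d[v][u]) + 1e-9
    return True
```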
**Definition 6.13** (\(B^{(4)}_{\mathrm{out}}(v)\)): For \(v\in V\), given \(R_{1,\mathrm{out}}(v)\subseteq V\), we define vertex subsets \[B^{(4)}_{\mathrm{out}}(v)=\{u\in V:4d(v,r_{1})+d(r_{1},u)>4d(v,u)+3d(u,r_{1})\text { for all }r_{1}\in R_{1,\mathrm{out}}(v)\}.\] **Corollary 6.14** (\(4\)-approximation): _If \(u\notin B^{(4)}_{\mathrm{out}}(v)\), then there exists \(r_{1}\in R_{1,\mathrm{out}}(v)\) such that \(d(r_{1}\leftrightarrows u)\leq 4d(u\leftrightarrows v)\)._ Proof.: By Definition 6.13, since \(u\notin B^{(4)}_{\mathrm{out}}(v)\), there exists \(r_{1}\in R_{1,\mathrm{out}}(v)\) such that \[4d(v,r_{1})+d(r_{1},u)\leq 4d(v,u)+3d(u,r_{1}).\] Then applying Lemma 6.12 with \(k=4\) to \(u,v,r_{1}\), we have \(d(r_{1}\leftrightarrows u)\leq 4d(u\leftrightarrows v)\). We also have the following relationship between \(B^{(4)}_{\mathrm{out}}(v)\) and \(B^{(2)}_{\mathrm{out}}(v)\). **Lemma 6.15**: _For all \(v\in V\), \(B^{(4)}_{\mathrm{out}}(v)\subseteq B^{(2)}_{\mathrm{out}}(v)\)._ _As a consequence, the algorithm of Lemma 6.6 for computing \(B^{(2)}_{\mathrm{out}}(v)\) can also compute \(B^{(4)}_{\mathrm{out}}(v)\) in the same running time._ Proof.: If \(u\in B^{(4)}_{\mathrm{out}}(v)\), then by Definition 6.13 for all \(r_{1}\in R_{1,\mathrm{out}}(v)\), \[2d(v,r_{1})+d(r_{1},u) >4d(v,u)+3d(u,r_{1})-2d(v,r_{1})\] \[=2d(v,u)+d(u,r_{1})+2\big{(}d(v,u)+d(u,r_{1})-d(v,r_{1})\big{)}\] \[\geq 2d(v,u)+d(u,r_{1}).\] So \(u\in B^{(2)}_{\mathrm{out}}(v)\) by Definition 6.2. Now we state and prove our main novel technical lemma, which is a key ingredient of our \(4\)-approximation algorithm. **Lemma 6.16** (\(4\)-Approximation Filtering Lemma): _Consider vertices \(r_{2},v,u\in V\) such that \(v\in B^{(4)}_{\mathrm{out}}(r_{2})\) and \(u\not\in B^{(2)}_{\mathrm{out}}(r_{2})\). Then there exists \(r_{1}\in R_{1,\mathrm{out}}(r_{2})\) such that \(d(v\leftrightarrows r_{1})\leq 4d(v\leftrightarrows u)\)._ Proof.: Since \(u\not\in B^{(2)}_{\mathrm{out}}(r_{2})\), by Definition6.2 there exists \(r_{1}\in R_{1,\mathrm{out}}(r_{2})\) such that \[2d(r_{2},u)+d(u,r_{1})\geq 2d(r_{2},r_{1})+d(r_{1},u). \tag{13}\] Since \(v\in B^{(4)}_{\mathrm{out}}(r_{2})\) and \(r_{1}\in R_{1,\mathrm{out}}(r_{2})\), by Definition6.13 we have \[4d(r_{2},r_{1})+d(r_{1},v)>4d(r_{2},v)+3d(v,r_{1}). \tag{14}\] Adding Eq.13 multiplied by 2 with Eq.14, and cancelling \(4d(r_{2},r_{1})\) on both sides, we get \[4d(r_{2},u)+2d(u,r_{1})+d(r_{1},v)>2d(r_{1},u)+4d(r_{2},v)+3d(v,r_{1}).\] Combining with \(4d(r_{2},v)+4d(v,u)\geq 4d(r_{2},u)\) (triangle inequality), this implies \[4d(v,u)+2d(u,r_{1})+d(r_{1},v)>2d(r_{1},u)+3d(v,r_{1}).\] Adding \(4d(u,v)\) to both sides gives \[4d(u\Leftrightarrow v)+2d(u,r_{1})+d(r_{1},v) >\big{(}2d(r_{1},u)+2d(u,v)\big{)}+\big{(}2d(u,v)+2d(v,r_{1}) \big{)}+d(v,r_{1})\] \[\geq 2d(r_{1},v)+2d(u,r_{1})+d(v,r_{1}),\] which immediately simplifies to \[4d(u\Leftrightarrow v)>d(r_{1},v)+d(v,r_{1})=d(v\Leftrightarrow r_{1}).\qed\] ### Phase III Now we are ready to describe Phase III, the most technical part of our Algorithm3. It has a similar structure as Phase II: we first compute eliminators \(R_{2,\mathrm{in}}(v)\subseteq S_{2}\) of size \(|R_{2,\mathrm{in}}(v)|=O(\log n)\) for all \(v\in V\), and then use these eliminators to define pruned vertex sets \(\tilde{B}^{\prime}(v)\) (which is a subset of \(B^{\prime}(v)\cup\{v\}\) which we will define shortly) to search for short cycles. 
Figure 3: Illustration of the relationship between the vertices involved in Lemma 6.16. The bold black cycle is relatively short and the dashed cycle is relatively long; the goal is to approximate the red cycle using the cycle highlighted in green. As labeled, \(v\in B^{(4)}_{\mathrm{out}}(r_{2})\), meaning that \(v\) and \(r_{2}\) are in a short cycle, and \(u\not\in B^{(2)}_{\mathrm{out}}(r_{2})\), meaning that \(u\) and \(r_{2}\) are in a relatively long cycle. Then we can find some \(r_{1}\) in the set of eliminators for \(r_{2}\) such that the cycle passing through \(v\) and \(r_{1}\) (highlighted in green) approximates the red cycle passing through \(v\) and \(u\).

In light of the \(4\)-approximation filtering lemma (Lemma 6.16), we will ensure the eliminators satisfy the following property (it will be later shown in Observation 6.25): \[\text{For every $v\in V$ and $r_{2}\in R_{2,\text{in}}(v)$, we have $v\in B^{(4)}_{\text{out}}(r_{2})$.} \tag{15}\] Again, we defer the algorithm for computing the eliminators \(R_{2,\text{in}}(v)\) to the end of this subsection. We first make the following technical definition of pruned vertex sets \(B^{\prime}(v)\), which is directly motivated by the structural lemmas from Section 6.3.

**Definition 6.17** (\(B^{\prime}(v)\)): For \(v\in V\), let \(B^{\prime}(v)\) denote the set of vertices \(s\in V\) that satisfy all the following conditions: 1. \(v\in B^{(4)}_{\text{out}}(s)\), and 2. \(s\in B^{(2)}_{\text{out}}(r_{2})\) for all \(r_{2}\in R_{2,\text{in}}(v)\), and 3. \(2d(s,v)+d(r_{2},s)<2d(r_{2},v)+\underline{d}(s,r_{2})\) for all \(r_{2}\in R_{2,\text{in}}(v)\) (where \(\underline{d}\) is defined in Lemma 6.18). In this definition, condition 1 is motivated by the \(4\)-approximation lemma (Corollary 6.14), condition 2 is motivated by our \(4\)-approximation filtering lemma (Lemma 6.16) and Eq. (15), and condition 3 is motivated by Lemma 6.11. For technical reasons, condition 3 involves a certain distance underestimate that is easier to compute, defined as follows (readers are encouraged to think of the underestimate as the original distance, and skip this definition at first read):

**Lemma 6.18** (Under-estimate of \(d(u,r_{2})\)): _For all \(u\in V\) and \(r_{2}\in V\), define \(\underline{d}(u,r_{2})\) as follows:_ * _Case_ \(u\in B^{(2)}_{\text{in}}(r_{2})\)_: Let_ \(\underline{d}(u,r_{2}):=d(u,r_{2})\)_._ * _Case_ \(u\notin B^{(2)}_{\text{in}}(r_{2})\)_: Let_ \[\underline{d}(u,r_{2}):=\frac{1}{2}\min_{r_{1}\in R_{1,\text{in}}(r_{2})}(2d(r_{1},r_{2})+d(u,r_{1})-d(r_{1},u)). \tag{16}\] _Then, \(\underline{d}(u,r_{2})\leq d(u,r_{2})\) holds._ Proof.: In order to prove \(\underline{d}(u,r_{2})\leq d(u,r_{2})\), it suffices to focus on the second case, \(u\notin B^{(2)}_{\text{in}}(r_{2})\). By definition of \(B^{(2)}_{\text{in}}(r_{2})\) (Definition 6.2), there exists \(r_{1}\in R_{1,\text{in}}(r_{2})\) such that \[2d(r_{1},r_{2})+d(u,r_{1})\leq 2d(u,r_{2})+d(r_{1},u).\] This immediately implies that \(\underline{d}(u,r_{2})\) as defined in Eq. (16) satisfies \(2\underline{d}(u,r_{2})\leq 2d(u,r_{2})\). The following key lemma (analogous to Lemma 6.5 from Phase II) bounds the size of \(B^{\prime}(v)\). 
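Before that lemma, note that once the relevant distance values and pruned-set oracles are available, the three conditions of Definition 6.17 translate directly into a membership test. The Python sketch below is illustrative only; the oracle interfaces and names are assumptions, not anything specified in the paper.

```python
def in_B_prime(s, v, R2_in_v, d, d_under, in_B4_out, in_B2_out):
    """Membership test for B'(v), following Definition 6.17.

    d[(x, y)]       : shortest-path distance d(x, y)
    d_under[(x, y)] : the under-estimate of d(x, y) from Lemma 6.18
    in_B4_out(a, b) : oracle, True iff a lies in B^(4)_out(b)
    in_B2_out(a, b) : oracle, True iff a lies in B^(2)_out(b)
    """
    if not in_B4_out(v, s):                                   # condition 1
        return False
    for r2 in R2_in_v:
        if not in_B2_out(s, r2):                              # condition 2
            return False
        if not (2 * d[(s, v)] + d[(r2, s)]
                < 2 * d[(r2, v)] + d_under[(s, r2)]):         # condition 3
            return False
    return True
```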
**Lemma 6.19** (size of \(B^{\prime}(v)\)): _With high probability over the random samples \(S_{1},S_{2}\subseteq V\), we have \(|B^{\prime}(v)|\leq\tilde{O}(n^{1/3})\) for all \(v\in V\)._ Intuitively, this is due to the symmetry of the elimination rule (Condition 3 in Definition 6.17 of \(B^{\prime}(v)\)), and because the sample size is \(|S_{2}|=O(n^{2/3})\). We will prove Lemma 6.19 later after describing the algorithm computing eliminators \(R_{2,\mathrm{in}}(v)\). Our actual algorithm performs a modified in-Dijkstra from every \(v\in V\) on the induced subgraph \(G[\tilde{B}^{\prime}(v)]\) (Line 13), where \(\tilde{B}^{\prime}(v)\) is a slight variant of \(B^{\prime}(v)\), which we shall define shortly. The reason for not using \(B^{\prime}(v)\) is because our modified in-Dijkstra algorithm does not know the true distance \(d(s,v)\) needed for checking the condition 1 and 3 in the definition of \(B^{\prime}(v)\).7 Instead, we use the current distance found by the in-Dijkstra to replace \(d(s,v)\). The formal definition is as follows (again, readers are encouraged to skip this definition at first read, and think of \(\tilde{B}^{\prime}(v)\) as the same as \(B^{\prime}(v)\) for intuition): Footnote 7: Note that we introduced the under-estimate \(\underline{d}(s,r_{2})\) in condition 3 of Definition 6.17 for the same reason. **Definition 6.20** (Modified in-Dijkstra and \(\tilde{B}^{\prime}(v)\)): For \(v\in V\), consider the following modified in-Dijkstra algorithm starting from \(v\) on graph \(G\), where we let \(D[u]\) denote the length of the shortest path from \(u\) to \(v\) found by this in-Dijkstra. The modification is that whenever we pop a vertex \(s\neq v\) from the heap, we relax the in-neighbors of \(s\) only if \(s\) satisfies all the following three conditions: 1. \(4d(s,r_{1})+d(r_{1},v)>4D[s]+3d(v,r_{1})\) for all \(r_{1}\in R_{1,\mathrm{out}}(s)\), and 2. \(s\in B^{(2)}_{\mathrm{out}}(r_{2})\) for all \(r_{2}\in R_{2,\mathrm{in}}(v)\), and 3. \(2D[s]+d(r_{2},s)<2d(r_{2},v)+\underline{d}(s,r_{2})\) for all \(r_{2}\in R_{2,\mathrm{in}}(v)\). Let \(\tilde{B}^{\prime}(v)\) denote the set of vertices \(s\) that are popped out from the heap and satisfy all the three conditions above, and additionally we also let \(v\in\tilde{B}^{\prime}(v)\). (Note that the source vertex \(v\) always relaxes all its in-neighbors in the beginning of in-Dijkstra) **Observation 6.21**: \(\tilde{B}^{\prime}(v)\subseteq B^{\prime}(v)\cup\{v\}\) _for all \(v\in V\)._ Proof.: Note that the three conditions in Definition 6.20 are the same as the three conditions in Definition 6.17 except that the terms \(d(s,v)\) in condition 1 and 3 are replaced by \(D[s]\). Since the distance \(D[s]\) found by the in-Dijkstra from \(v\) must be greater than or equal to the true distance \(d(s,v)\), we see that both condition 1 and 3 are strengthened. Hence, \(\tilde{B}^{\prime}(v)\subseteq B^{\prime}(v)\). Phase III of our algorithm (Line 13) is implemented by the modified in-Dijkstra described in Definition 6.20. It remains to show that we can implement it efficiently. In particular, we need to show that checking the three conditions in Definition 6.20 is efficient. We first show that the underestimate \(\underline{d}(u,r_{2})\) from Lemma 6.18 can be computed efficiently. **Lemma 6.22** (Compute \(\underline{d}(u,r_{2})\)): _For \(r_{2}\in V\), assume we know \(B^{(2)}_{\mathrm{in}}(r_{2})\) and \(d(x,r_{2})\) for all \(x\in B^{(2)}_{\mathrm{in}}(r_{2})\). 
Then \(\underline{d}(u,r_{2})\) can then be computed for any \(u\in V\) in \(O(\log n)\) time._ Proof.: According to the definition in Lemma 6.18, we first check whether \(u\in B^{(2)}_{\mathrm{in}}(r_{2})\). In the first case where \(u\in B^{(2)}_{\mathrm{in}}(r_{2})\), the answer is \(d(u,r_{2})\), which we know by assumption. In the second case where \(u\notin B^{(2)}_{\mathrm{in}}(r_{2})\), we need to compute Eq. (16) by going over all \(O(\log n)\) many \(r_{1}\in R_{2,\mathrm{in}}(r_{2})\). The expression of Eq. (16) only involves distances \(d(r_{1},\cdot)\) and \(d(\cdot,r_{1})\) for \(r_{1}\in R_{1,\mathrm{in}}(r_{2})\subseteq S_{1}\), which are already computed in Phase I of Algorithm 3. So we can compute the answer in \(O(\log n)\) time. Now we show \(\tilde{B}^{\prime}(v)\) can be computed efficiently. **Lemma 6.23**: _The modified in-Dijkstra of Definition 6.20 computes \(\tilde{B}^{\prime}(v)\) in \(\tilde{O}(\frac{m}{n}\cdot|\tilde{B}^{\prime}(v)|)\) time._ Proof.: Suppose the modified in-Dijkstra pops vertex \(s\) from the heap. * The condition 1 of Definition 6.20 can be checked in \(O(1)\) time because we already know \(d(r_{1},\cdot),d(\cdot,r_{1})\) for all \(r_{1}\in S_{1}\) from Phase I of Algorithm 3. * The condition 2 can be checked in \(O(|R_{2,\mathrm{in}}(v)|)\leq O(\log n)\) time since we already computed \(B_{\mathrm{out}}^{(2)}(r_{2})\) for all \(r_{2}\in S_{2}\) in Phase II of Algorithm 3. * For condition 3, we need to check \(2D[s]+d(r_{2},s)<2d(r_{2},v)+\underline{d}(s,r_{2})\) for all \(r_{2}\in R_{2,\mathrm{in}}(v)\). * Since the check for condition 2 has passed, we have \(s\in B_{\mathrm{out}}^{(2)}(r_{2})\). So we know the value of \(d(r_{2},s)\) from Phase II of Algorithm 3 (note that \(r_{2}\in S_{2}\)). * Since \(r_{2}\in R_{2,\mathrm{in}}(v)\), we have \(v\in B_{\mathrm{out}}^{(4)}(r_{2})\subseteq B_{\mathrm{out}}^{(2)}(r_{2})\) by Eq. (15). So we know the value of \(d(r_{2},v)\) from Phase II of Algorithm 3. * We can compute \(\underline{d}(s,r_{2})\) in \(O(\log n)\) time due to Lemma 6.22 and Phase II of Algorithm 3. Hence, we can check whether \(s\in\tilde{B}^{\prime}(v)\) in \(O(\log^{2}n)\) time. Now we are ready to prove that our Algorithm 3 achieves \(4\)-approximation. **Theorem 6.24** (Correctness of Algorithm 3): _Algorithm 3 returns \(g^{\prime}\) satisfying \(g\leq g^{\prime}\leq 4g\), where \(g\) is the girth of the input directed graph \(G\)._ Proof.: Let \(C\) be the shortest cycle of \(G\) with length \(g\). Consider an arbitrary vertex \(v\) on \(C\). If \(v\in S_{1}\), then \(C\) is found in Phase I of Algorithm 3 and hence \(g^{\prime}=g\). If all vertices on \(C\) are contained in \(\tilde{B}^{\prime}(v)\), then it is eventually found at Line 1 in Algorithm 3, and \(g^{\prime}=g\). Hence, in the following we assume \(v\notin S_{1}\), and there is some vertex \(u\in C\) that is not included in \(\tilde{B}^{\prime}(v)\). We choose \(u\) to be the first ancestor of \(v\) on the cycle that is not in \(\tilde{B}^{\prime}(v)\) (in particular, \(u\) is a minimizer of \(d(u,v)\) among \(u\in C\setminus\tilde{B}^{\prime}(v)\)). Note that \(u\neq v\) because \(v\in\tilde{B}^{\prime}(v)\) by definition. Let \(x\in C\) denote the out-neighbor of \(u\) on the cycle \(C\). By our definition of \(u\), we know the entire shortest path from \(x\) to \(v\) on \(C\) are contained in \(\tilde{B}^{\prime}(v)\). 
Then, \(x\) must have relaxed its in-neighbor \(u\) during the modified in-Dijkstra, which makes \(D[u]\) equal to the true distance \(d(u,v)\). The fact that \(u\notin\tilde{B}^{\prime}(v)\) then means some of the three conditions in Definition 6.20 is violated for \(u\), which then implies \(u\notin B^{\prime}(v)\), as these conditions are equivalent to the three conditions in the definition of \(B^{\prime}(v)\) (Definition 6.17) due to \(D[u]=d(u,v)\). As \(u\notin B^{\prime}(v)\), we now divide into three cases depending on which condition in Definition 6.17 fails for \(u\). * Condition 1 fails, i.e., \(v\notin B_{\mathrm{out}}^{(4)}(u)\). Then by Corollary 6.14, there exists \(r_{1}\in R_{1,\mathrm{out}}(u)\) such that \(4d(u\leftrightharpoons v)\geq d(r_{1}\leftrightharpoons v)\geq g^{\prime}\) (due to the update at Line 5 for \(r_{1}\in R_{1,\mathrm{out}}(u)\subseteq S_{1}\) during Phase I of Algorithm 3; note that \(v\neq r_{1}\) since \(v\notin S_{1}\)). * Condition 2 fails, i.e., \(u\notin B^{(2)}_{\mathrm{out}}(r_{2})\) for some \(r_{2}\in R_{2,\mathrm{in}}(v)\). Since \(r_{2}\in R_{2,\mathrm{in}}(v)\), by Eq.15 we have \(v\in B^{(4)}_{\mathrm{out}}(r_{2})\). Then, by the \(4\)-approximation filtering lemma (Lemma6.16), there exists \(r_{1}\in R_{1,\mathrm{out}}(r_{2})\) such that \(4d(v\leftrightarrows u)\geq d(v\leftrightarrows r_{1})\geq g^{\prime}\) (due to the update at Line5 in Phase I of Algorithm3). * Condition 3 fails, and Conditions 1,2 hold. This is saying that there exists \(r_{2}\in R_{2,\mathrm{in}}(v)\) such that \[2d(u,v)+d(r_{2},u)\geq 2d(r_{2},v)+\underline{d}(u,r_{2}).\] (17) And, we have \(v\in B^{(4)}_{\mathrm{out}}(u)\) (by Condition1) and \(u\in B^{(2)}_{\mathrm{out}}(r_{2})\) (by Condition2). We further divide into two cases: * Case \(u\in B^{(2)}_{\mathrm{in}}(r_{2})\): In this case we have \(\underline{d}(u,r_{2})=d(u,r_{2})\) by Lemma6.18. So we apply Lemma6.11 to Eq.17 and obtain \(g^{\prime}\leq 4d(u\leftrightarrows v)\). 
* Case \(u\not\in B^{(2)}_{\mathrm{in}}(r_{2})\): Plugging the definition \(\underline{d}(u,r_{2})=\min_{r_{1}\in R_{1,\mathrm{in}}(r_{2})}\frac{1}{2}(2d(r_{1},r_{2})+d(u,r_{1})-d(r_{1},u))\) (from Lemma 6.18) into Eq. (17), we obtain that there exists \(r_{1}\in R_{1,\mathrm{in}}(r_{2})\) such that \[2d(u,v)+d(r_{2},u)\geq 2d(r_{2},v)+\frac{1}{2}(2d(r_{1},r_{2})+d(u,r_{1})-d(r_{1},u)).\] Multiplying both sides by \(2\), and then adding \(4d(v,u)-2d(r_{2},u)\) to both sides, we get \[4d(u,v)+4d(v,u)\geq 4d(r_{2},v)+2d(r_{1},r_{2})+d(u,r_{1})-d(r_{1},u)+4d(v,u)-2d(r_{2},u)\geq 2d(r_{2},v)+2d(r_{1},r_{2})+d(u,r_{1})-d(r_{1},u)+2d(v,u),\] where the second inequality uses the triangle inequality \(2d(r_{2},v)+2d(v,u)\geq 2d(r_{2},u)\). Applying the triangle inequalities \(d(r_{2},v)+d(v,u)\geq d(r_{2},u)\) and \(d(r_{1},r_{2})+d(r_{2},u)\geq d(r_{1},u)\) once more, the right-hand side is at least \(2d(r_{1},u)+d(u,r_{1})-d(r_{1},u)=d(r_{1}\leftrightarrows u)\). Hence \(4d(u\leftrightarrows v)\geq d(r_{1}\leftrightarrows u)\), and so \(g^{\prime}\leq d(r_{1}\leftrightarrows u)\leq 4d(u\leftrightarrows v)\) due to the update at Line 5 in Phase I of Algorithm 3 (note that \(r_{1}\in S_{1}\), and we may assume \(u\notin S_{1}\), since otherwise Phase I already finds \(C\) exactly).

In all cases we obtain \(g^{\prime}\leq 4d(u\leftrightarrows v)\leq 4g\) (as \(u,v\in C\) implies \(d(u\leftrightarrows v)\leq g\)). Since every value with which \(g^{\prime}\) is updated is the length of an actual cycle in \(G\), we also have \(g^{\prime}\geq g\), which completes the proof.

Computing eliminators. It remains to describe the subroutine compute-eliminators-\(2(G,S_{2})\) (Algorithm 5) that computes the eliminators \(R_{2,\mathrm{in}}(v)\subseteq S_{2}\) used at Line 11 of Algorithm 3. It has the same iterative structure as Algorithm 4: it runs for \(k=10\log n\) iterations, and in iteration \(i\) it takes the next \(10n^{2/3}/\log n\) samples \(S^{(i)}\) from \(S_{2}\) (Line 3), computes the pruned sets \(B^{(2)}_{\mathrm{out}}(s),B^{(2)}_{\mathrm{in}}(s),B^{(4)}_{\mathrm{out}}(s)\) and the associated distances for every \(s\in S^{(i)}\) using Lemma 6.6 (Line 5), and then for each \(v\in V\) forms the set \(T^{(i)}(v)\subseteq S^{(i)}\) of samples \(s\) with \(v\in B^{(4)}_{\mathrm{out}}(s)\) such that, for every \(t\in R^{(i-1)}(v)\), both \(s\in B^{(2)}_{\mathrm{out}}(t)\) and \(2d(s,v)+d(t,s)<2d(t,v)+\underline{d}(s,t)\) hold (Line 7); if \(T^{(i)}(v)\neq\varnothing\), a uniformly random vertex of \(T^{(i)}(v)\) is added to \(R^{(i)}(v)\), and the output is \(R_{2,\mathrm{in}}(v)\gets R^{(k)}(v)\).

By inspecting Algorithm 5, we observe the following properties (analogous to Observation 6.9 for Algorithm 4 from Phase II).

**Observation 6.25**: _Algorithm 5 runs in time \(\tilde{O}(mn^{1/3})\), and outputs sets \(R_{2,\mathrm{in}}(v)\subseteq S_{2}\) of size \(|R_{2,\mathrm{in}}(v)|=O(\log n)\) for all \(v\in V\). Moreover, for every \(v\in V\) and \(s\in R_{2,\mathrm{in}}(v)\), we have \(v\in B^{(4)}_{\mathrm{out}}(s)\)._

Proof.: First note that the total number of vertex samples required at Line 3 is \(|S^{(1)}\uplus\cdots\uplus S^{(k)}|=k\cdot 10n^{2/3}/\log n=100n^{2/3}\leq|S_{2}|\). In each iteration \(1\leq i\leq k\), the algorithm only adds at most one sampled vertex \(t\in S_{2}\) to the set \(R^{(i)}(v)\) for each \(v\in V\), so each output set \(R_{2,\mathrm{in}}(v)=R^{(k)}(v)\subseteq S_{2}\) and has size \(|R_{2,\mathrm{in}}(v)|\leq k\leq O(\log n)\). To prove the moreover part, note that by definition of \(T^{(i)}(v)\) at Line 7, \(v\in B^{(4)}_{\mathrm{out}}(s)\) holds for all \(s\in T^{(i)}(v)\) and thus for all \(s\in R_{2,\mathrm{in}}(v)\). It remains to bound the running time. In each iteration, Line 5 takes time \(\tilde{O}(\frac{m}{n}\cdot n^{2/3})\) for each \(s\in S^{(i)}\) by Lemma 6.6 (recall that \(|B^{(2)}_{\mathrm{out}}(s)|,|B^{(2)}_{\mathrm{in}}(s)|,|B^{(4)}_{\mathrm{out}}(s)|\leq\tilde{O}(n^{2/3})\) by Lemma 6.5 and Lemma 6.15), which sums to \(\tilde{O}(n^{2/3})\cdot\tilde{O}(\frac{m}{n}\cdot n^{2/3})=\tilde{O}(mn^{1/3})\) in total. To implement Line 7 efficiently, for any given \(v\in V\) we want to quickly go over all \(s\in S^{(i)}\) such that \(v\in B^{(4)}_{\mathrm{out}}(s)\). This can be achieved by a preprocessing stage that iterates over \(s\in S^{(i)}\) and inserts \(s\) to the \(v\)-th bucket for every \(v\in B^{(4)}_{\mathrm{out}}(s)\), in \(\tilde{O}(n^{2/3}\cdot n^{2/3})=\tilde{O}(n^{4/3})\) total time. 
Then, we show that all four terms in the inequality at Line 7 are known from the computation at Line 5 or can be computed efficiently from there: \(d(s,v)\) is known because \(v\in B^{(4)}_{\mathrm{out}}(s),s\in S^{(i)}\), \(d(t,s)\) is known because \(s\in B^{(2)}_{\mathrm{out}}(t)\) and \(t\in S_{2}\), \(d(t,v)\) is known because \(v\in B^{(4)}_{\mathrm{out}}(t)\) and \(t\in S_{2}\), and \(\underline{d}(s,t)\) can be computed by Lemma 6.22 in \(O(\log n)\) time because we know \(B^{(2)}_{\mathrm{in}}(t)\) and \(d(x,t)\) for all \(x\in B^{(2)}_{\mathrm{in}}(t)\). Thus overall \(k=O(\log n)\) iterations, Algorithm 5 takes \(\tilde{O}(mn^{1/3})\) time. Now we prove the key Lemma 6.19, which states that Algorithm 5 guarantees \(B^{\prime}(v)\) to have small size with high probability. Proof of Lemma 6.19.: The proof is based on symmetry of elimination, which is similar to the earlier proof of Lemma 6.5. Fix the \(v\in V\) from Definition 6.17. Due to Item 1 of Definition 6.17, here we only need to consider vertices from \(C_{v}:=\{s\in V:v\in B^{(4)}_{\mathrm{out}}(s)\}\). We make the following definition motivated by Item 2 and Item 3 of Definition 6.17: for two vertices \(s,t\in C_{v}\), we say \(t\)_eliminates_\(s\), if \(s\notin B^{(2)}_{\mathrm{out}}(t)\) or \(2d(s,v)+d(t,s)\geq 2d(t,v)+\underline{d}(s,t)\). Then, observe that \(B^{\prime}(v)\) consists of exactly the vertices \(s\in C_{v}\) that are not eliminated by any vertex in \(R_{2,\mathrm{in}}(v)\). Now we show that for any \(s,t\in C_{v}\), either \(s\) eliminates \(t\) or \(t\) eliminates \(s\). Suppose to the contrary that \(s\) does not eliminate \(t\), and \(t\) does not eliminate \(s\). Then we have inequalities \[2d(s,v)+d(t,s)<2d(t,v)+\underline{d}(s,t)\leq 2d(t,v)+d(s,t)\] and \[2d(t,v)+d(s,t)<2d(s,v)+\underline{d}(t,s)\leq 2d(s,v)+d(t,s),\] which are contradicting each other. Having proved this symmetry property, the rest of the arguments is the same as in Lemma 6.5, and we omit it here. Finally, we can state the time complexity of the entire Algorithm 3. **Theorem 6.26** (Running time of Algorithm 3): _Algorithm 3 runs in \(\tilde{O}(mn^{1/3})\) time with high probability._ Proof.: The running time of Phase I is \(\tilde{O}(mn^{1/3})\) by Observation 6.1. The running time of Phase II is \(\tilde{O}(mn^{1/3})\) by Proposition 6.10. For Phase III, Line 11 (computing eliminators \(R_{2,\mathrm{in}}(v)\) for all \(v\in V\)) takes \(\tilde{O}(mn^{1/3})\) time by Observation 6.25. Then, the **for** loop takes \(\tilde{O}(\frac{m}{n}\cdot|\tilde{B}^{\prime}(v)|)\) time for each \(v\in V\). Since \(\tilde{B}^{\prime}(v)\subseteq B^{\prime}(v)\cup\{v\}\) (by Observation 6.21) and \(|B^{\prime}(v)|\leq\tilde{O}(n^{1/3})\) (by Lemma 6.19), the total time for this loop is \(n\cdot\tilde{O}(\frac{m}{n}\cdot n^{1/3})=\tilde{O}(mn^{1/3})\). Thus, the overall running time of Algorithm 3 is \(\tilde{O}(mn^{1/3})\). ## 7 Conclusion We conclude with a few open questions: 1. Can we compute \(3\)-roundtrip spanner in \(\tilde{O}(n^{2})\) time (or even faster)? 2. Can we compute \((2k-1)\)-approximate roundtrip emulators faster on sparse graphs? 3. For the \(O(mn^{1/k})\)-time roundtrip spanner (or directed girth) algorithm of [10], can we improve its \(O(k\log k)\) approximation ratio to \(O(k)\)? Can our technique be combined with the divide-and-conquer techniques of [12, 10]? 4. Can we show fine-grained lower bounds for the task of computing roundtrip spanners? 
In particular, can we rule out \(\tilde{O}(m)\)-time algorithms for computing \((2k-1)\)-roundtrip spanners of sparsity \(O(n^{1+1/k})\)?
2309.16908
Accurate estimate of $C_5$ dispersion coefficients of the alkali atoms interacting with different material media
By inferring the dynamic permittivity of different material media from the observations and calculating dynamic electric dipole polarizabilties of the Li through Cs alkali atoms, precise values of $C_3$ coefficients were estimated in Phys. Rev. A {\bf 89}, 022511 (2014) and Phys. Lett. A {\bf 380}, 3366 (2016). Since significant contribution towards the long range van der Waals potential is given by the quadrupole polarization effects, we have estimated the $C_5$ coefficients in this work arising from the quadrupole polarization effects of all the alkali atoms interacting with metal (Au), semiconductor (Si) and four dielectric materials (SiO$_2$, SiN$_x$, YAG and sapphire). The required dynamic electric quadrupole (E2) polarizabilities are evaluated by calculating E2 matrix elements of a large number of transitions in the alkali atoms by employing a relativistic coupled-cluster method. Our finding shows that contributions from the $C_5$ coefficients to the atom-wall interaction potentials are pronounced at short distances (1$-$10 nm). The $C_3$ coefficients of Fr atom interacting with the above material media are also reported. These results can be useful in understanding the interactions of alkali atoms trapped in different material bodies during the high-precision measurements.
Harpreet Kaur, Vipul Badhan, Bindiya Arora, B. K. Sahoo
2023-09-29T00:25:28Z
http://arxiv.org/abs/2309.16908v1
Accurate estimate of \(C_{5}\) dispersion coefficients of the alkali atoms interacting with different material media ###### Abstract By inferring the dynamic permittivity of different material media from the observations and calculating dynamic electric dipole polarizabilities of the Li through Cs alkali atoms, precise values of \(C_{3}\) coefficients were estimated in Phys. Rev. A **89**, 022511 (2014) and Phys. Lett. A **380**, 3366 (2016). Since significant contribution towards the long range van der Waals potential is given by the quadrupole polarization effects, we have estimated the \(C_{5}\) coefficients in this work arising from the quadrupole polarization effects of all the alkali atoms interacting with metal (Au), semiconductor (Si) and four dielectric materials (SiO\({}_{2}\), SiN\({}_{x}\), YAG and sapphire). The required dynamic electric quadrupole (E2) polarizabilities are evaluated by calculating E2 matrix elements of a large number of transitions in the alkali atoms by employing a relativistic coupled-cluster method. Our finding shows that contributions from the \(C_{5}\) coefficients to the atom-wall interaction potentials are pronounced at short distances (1\(-\)10 nm). The \(C_{3}\) coefficients of Fr atom interacting with the above material media are also reported. These results can be useful in understanding the interactions of alkali atoms trapped in different material bodies during the high-precision measurements. ## I Introduction Dispersion coefficients due to van der Waals (vdW) interactions between atoms and material walls have gained significant interest in the last two decades [1] after their numerous applications in physisorption [2; 3], storage [4], nano electromechanical systems [5], quantum reflection [6], atomic clocks [7], atomic chips [8], atom vapour sensors [9] and so on. The attractive potential between an atom and wall arises from quantum fluctuations at the zero contact point due to resonant coupling of virtual photons emitted from the atom with different electromagnetic modes of the surface of the wall [10]. This phenomenon can be described by non-pairwise additive Lifshitz theory [11]. However, often crude approximations have been made in this theory for simplicity by considering only the dipole polarization effects due to their predominant contributions. Following the perturbation theory analysis, the atom-wall interaction potentials can be expressed as a sum of contributions from multipole-polarizability (i.e. dipole, quadrupole, octupole, etc.) effects of atoms [12]. It has been pointed out that the corrections to the total potential due to multipole polarizations in atom-wall systems must be taken into account in the vicinity of physisorption rendered by the vdW interactions [13]. Leibsch investigated the importance of the quadrupole contributions of atomic properties in the determination of atom-metal attractive interaction potentials and found 5-10% enhancements in the contributions to the interaction potentials using the density functional theory (DFT) [13]. For atoms placed closed to surfaces, some of the selective quadrupole resonances could play pivotal roles to enhance the atom-surface interaction significantly such that their contributions to the potentials can be higher than the dipole component contributions as noted by KloInv _et al._ using an analytical analysis [14]. 
The dispersion coefficients arising from dipole (\(C_{3}\)) and quadrupole (\(C_{5}\)) interactions with the material walls were inspected by Tao _et al._ between different atoms and metal surfaces using the DFT method and showed that \(C_{5}\) term makes about 20% contribution to the long range part [15]. There are many other works that highlight the importance of higher-order multipole contributions to the atom-wall interaction potentials [16; 17; 18; 19; 20; 21], whereas Lach _et al._[21] have provided a more accurate description of the vdW potentials for the interactions of atoms with the surfaces of perfect conductors and dielectrics materials by taking into account contributions from the dipole, quadrupole, octupole and hexapole polarizabilities of the atoms within the framework of Lifshitz theory. In the past decade, alkali atoms have been used to understand the behaviour of the vdW interactions with different materials using various theoretical and experimental techniques due to their fairly simple electronic configuration [22; 23; 24; 25; 9; 25]. To gain the insight into the importance of quadrupole polarizability contributions from the alkali atoms towards their interaction potential with different material walls, we have evaluated the vdW dispersion coefficients arising due to the dipole term (\(C_{3}\)) and next higher-order quadrupole term (\(C_{5}\)) for all alkali-metal atoms with different materials including metal, semiconductor and dielectrics over an arbitrary range of separation distance. Our work is in accordance with the previous studies that indicated the dominance of quadrupole polarization effects evaluated by other methods [13; 14; 15]. Particularly, we have probed the range of separation distance for which quadrupole effects are more significant. The \(C_{3}\) and \(C_{5}\) coefficients depend upon the polarizabilities of atoms and permittivity of the material walls at imaginary frequencies. The accuracy of these coefficients can be achieved by using the appropriate methods to calculate these properties. We have used a relativistic all-order (AO) method to calculate the polarizabilities of the alkali atoms and Kramers-Kronig relation is used to determine the permittivity of materials at imaginary frequencies. Using these vdW coefficients, we have computed and investigated the potential curves for considered atom-wall systems. In the following sections, we have provided brief theory related to the interaction potential at arbitrary separation, the method of evaluation of required properties of materials and atom in Sec. III, results and discussion in Sec. IV and finally concluded our work in Sec. V. Unless stated otherwise, atomic units (a.u.) are being used through out the manuscript. ## II Theory The exact theory for the calculation of vdW interaction potential between an atom and material surface has been given in Ref. [21]. Here, we give only a brief outline of the expressions for the atom-wall vdW interaction potentials due to multipole dispersion coefficients. The general expression of total attractive interaction potential (\(U_{total}\)) arising from the fluctuating multipole moments of an atom interacting with its image in the surface is given by [20] \[U_{total}(z)=U_{d}(z)+U_{q}(z)+..., \tag{1}\] where \(U_{d}\), \(U_{q}\), and so on are the contributions from the dipole, quadrupole etc. contributions and \(z\) is the separation distance between atom and wall in nm. 
Due to the predominant nature of the dipole component, \(U_{total}(z)\) is often approximated as \(U_{d}(z)\), but we also estimate contributions from \(U_{q}(z)\) in this work. In terms of the permittivity of the material and the dynamic polarizabilities of the atoms, we can express [21; 26; 27] \[U_{d}(z)=-\frac{\alpha_{fs}^{3}}{2\pi}\int_{0}^{\infty}d\omega\,\omega^{3}\alpha_{d}(\iota\omega)\int_{1}^{\infty}d\chi\,e^{-2\chi\alpha_{fs}\omega z}H(\chi,\epsilon_{r}(\iota\omega)) \tag{2}\] and \[U_{q}(z)=-\frac{\alpha_{fs}^{5}}{12\pi}\int_{0}^{\infty}d\omega\,\omega^{5}\alpha_{q}(\iota\omega)\int_{1}^{\infty}d\chi\,e^{-2\chi\alpha_{fs}\omega z}(2\chi^{2}-1)H(\chi,\epsilon_{r}(\iota\omega)). \tag{3}\] In the above two expressions, \(\alpha_{fs}\) is the fine-structure constant, \(\chi\) is the Matsubara frequency, and \(\alpha_{d}(\iota\omega)\) and \(\alpha_{q}(\iota\omega)\) are the dynamic dipole and quadrupole polarizabilities of the ground state of the considered atom at imaginary frequencies. The function \(H(\chi,\epsilon_{r})\) is given by [28] \[H(\chi,\epsilon_{r})=(1-2\chi^{2})\frac{\chi^{\prime}-\epsilon_{r}\chi}{\chi^{\prime}+\epsilon_{r}\chi}+\frac{\chi^{\prime}-\chi}{\chi^{\prime}+\chi}, \tag{4}\] where \(\chi^{\prime}=\sqrt{\chi^{2}+\epsilon_{r}-1}\) and \(\epsilon_{r}\) is the real part of the dynamic permittivity of the material wall at imaginary frequency. Approximating the total potential up to quadrupole effects, at short distances (\(z\to 0\)) the preceding formulas reduce to \[U_{total}(z)=-\frac{C_{3}}{z^{3}}-\frac{C_{5}}{z^{5}}, \tag{5}\] where the \(C_{3}\) and \(C_{5}\) coefficients are defined as \[C_{3}=\frac{1}{4\pi}\int_{0}^{\infty}d\omega\,\alpha_{d}(\iota\omega)\frac{\epsilon_{r}(\iota\omega)-1}{\epsilon_{r}(\iota\omega)+1}, \tag{6}\] and \[C_{5}=\frac{1}{4\pi}\int_{0}^{\infty}d\omega\,\alpha_{q}(\iota\omega)\frac{\epsilon_{r}(\iota\omega)-1}{\epsilon_{r}(\iota\omega)+1}. \tag{7}\]

## III Method of evaluation

As mentioned in the previous section, evaluation of the \(C_{3}\) and \(C_{5}\) coefficients requires knowledge of \(\epsilon_{r}(\iota\omega)\) and \(\alpha(\iota\omega)\) of the material media and atoms, respectively. The real part of the permittivity at imaginary frequencies, \(\epsilon_{r}(\iota\omega)\), cannot be obtained experimentally, but its values can be inferred from the imaginary part of the permittivity at real frequencies using the Kramers-Kronig relations. Similarly, accurate determination of the dynamic \(\alpha_{d}\) and \(\alpha_{q}\) values at imaginary frequencies is challenging in the _ab initio_ approach. However, for alkali atoms these can be evaluated very accurately using the sum-over-states approach. Below, we discuss the evaluation procedures for \(\epsilon_{r}(\iota\omega)\) and \(\alpha(\iota\omega)\).

### Dynamic electric permittivity

The imaginary part of the dynamic electric permittivity \(\epsilon_{i}(\omega)\) is given by \[\epsilon_{i}(\omega)=2n(\omega)\kappa(\omega), \tag{8}\] where \(n(\omega)\) and \(\kappa(\omega)\) are the refractive indices and extinction coefficients of the materials at real frequencies, respectively. Discrete \(n(\omega)\) and \(\kappa(\omega)\) values of the considered material media for a wide range of frequencies are tabulated in the Handbook of optical constants of solids by Palik [29]. Using these values, we have extrapolated \(\epsilon_{i}(\omega)\) over a continuous and large frequency range.
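Once \(\epsilon_{r}(\iota\omega)\) (obtained as described next) and the atomic polarizabilities are tabulated on a common frequency grid, Eqs. (6) and (7) reduce to one-dimensional quadratures. The Python sketch below is illustrative only; the toy single-oscillator models and grid choices are placeholder assumptions, not the data or procedure used in this work.

```python
import numpy as np

def dispersion_coefficient(omega, alpha_iw, eps_iw):
    """C_3 (with dipole alpha) or C_5 (with quadrupole alpha) via Eqs. (6)-(7).

    omega    : imaginary-frequency grid (a.u.), increasing
    alpha_iw : atomic polarizability alpha(i*omega) on that grid (a.u.)
    eps_iw   : wall permittivity eps_r(i*omega) on that grid
    """
    integrand = alpha_iw * (eps_iw - 1.0) / (eps_iw + 1.0)
    # trapezoidal rule, written out explicitly
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(omega))
    return integral / (4.0 * np.pi)

# toy placeholders (NOT the calculated polarizabilities or permittivities)
omega = np.linspace(0.0, 20.0, 8001)
alpha_d = 160.0 / (1.0 + (omega / 0.07) ** 2)    # toy alpha_d(i*omega)
alpha_q = 4000.0 / (1.0 + (omega / 0.10) ** 2)   # toy alpha_q(i*omega)
eps_r = 1.0 + 10.0 / (1.0 + (omega / 0.5) ** 2)  # toy eps_r(i*omega)

C3 = dispersion_coefficient(omega, alpha_d, eps_r)
C5 = dispersion_coefficient(omega, alpha_q, eps_r)
print(f"toy C3 = {C3:.3f} a.u., toy C5 = {C5:.3f} a.u.")
```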
Now using the Kramers-Kronig relation, we can express the real part of dynamic permittivities (\(\epsilon_{r}(\iota\omega)\)) at imaginary frequencies such that \[\epsilon_{r}(\iota\omega)=1+\frac{2}{\pi}\int_{0}^{\infty}d\omega^{ \prime}\frac{\omega^{\prime}\epsilon_{i}(\omega^{\prime})}{\omega^{2}+\omega^{ \prime 2}}. \tag{9}\] For the case of semiconductor - Si and dielectrics - SiO\({}_{2}\), YAG, ordinary sapphire (oSap) and extraordinary sapphire (eSap), we have used the optical constants ranging from 0.1 eV to 10000 eV from Handbook of optical constants by Palik [29]. For the case of Au, the \(\epsilon\) values at very small energies are very significant, hence in addition to the experimental values from Ref. [29], we have extrapolated the values of real permittivity at imaginary frequencies using the Drude model for metals as \[\epsilon_{r}(\iota\omega)=1-\frac{\omega_{p}^{2}}{\omega(\omega+ \iota\gamma)}, \tag{10}\] where \(\omega_{p}\) is the plasma frequency and \(\gamma\) is the relaxation frequency. We have used \(\omega_{p}\)=9.0 eV and \(\gamma\)=0.035 eV as referred in [27; 30]. For the case of SiN\({}_{x}\), an amorphous dielectric material, we use Tauc-Lorentz model for amorphous materials [31] for estimating the electric permittivity at imaginary frequencies, the expression of which is given as follows \[\epsilon_{r}(\iota\omega)=\frac{\omega^{2}+(1+g_{0})\omega_{0}^{ 2}}{\omega^{2}+(1-g_{0})\omega_{0}^{2}}, \tag{11}\] where the parameters \(g_{0}=0.588\) and \(\omega_{0}=0.005\) are the SiN\({}_{x}\)'s response functions [31]. ### Dynamic polarizabilities We have already reported \(\alpha_{d}(\iota\omega)\) values of the Li to Cs alkali atoms in our previous works [28; 32]. Here, we give \(\alpha_{d}(\iota\omega)\) values of the Fr atom and the \(\alpha_{q}(\iota\omega)\) values of all the alkali-metal atoms by evaluating them as given in the following procedures. Total electron correlation contributions to \(\alpha_{d}(\iota\omega)\) and \(\alpha_{q}(\iota\omega)\) of atomic states of the alkali atoms can be expressed as [33] \[\alpha_{l}(\iota\omega)=\alpha_{l,core}(\iota\omega)+\alpha_{l, vc}(\iota\omega)+\alpha_{l,val}(\iota\omega), \tag{12}\] where \(l=d\) corresponds to dipole polarizability and \(l=q\) corresponds to quadrupole polarizability. Subscripts \(core\), \(vc\) and \(val\) corresponds to core, valence-core and valence contributions, respectively, to the total polarizability. In the alkali atoms, \(\alpha_{l,val}(\iota\omega)\) contributes predominantly followed by \(\alpha_{l,core}(\iota\omega)\) and contributions from \(\alpha_{l,vc}(\iota\omega)\) are negligibly small. These contributions are estimated in the following way. To begin with, the electronic configuration of alkali atoms is divided into a closed-core and a valence orbital in order to obtain the mean-field Dirac-Fock (DF) wave function of the respective closed-shell (\(|0_{c}\rangle\)) using DF method. The mean-field wave functions of the atomic states of the alkali atoms are then defined by appending the respective valence orbital \(v\) as \[|\phi_{v}\rangle=a_{v}^{\dagger}|0_{c}\rangle. \tag{13}\] Using these mean-field DF wave functions, we calculated the \(vc\) contributions to the dipole and quadrupole polarizability using the following formula \[\alpha_{l,vc}(\iota\omega)=\frac{2}{(2L+1)(2J_{v}+1)}\] \[\times\sum_{m}^{N_{c}}\frac{(\mathcal{E}_{m}-\mathcal{E}_{v})| \langle\psi_{v}||\mathbf{O}_{L}||\psi_{m}\rangle_{DF}|^{2}}{(\mathcal{E}_{m}- \mathcal{E}_{v})^{2}+\omega^{2}}. 
\tag{14}\] where \(J_{v}\) corresponds to total angular momentum of the state. Similarly, the core contributions can be given by \[\alpha_{l,core}(\iota\omega) = \frac{2}{(2L+1)} \tag{15}\] \[\times\sum_{a}^{N_{c}}\sum_{m}^{B}\frac{(\mathcal{E}_{m}- \mathcal{E}_{a})|\langle\psi_{a}||\mathbf{O}_{L}||\psi_{m}\rangle|^{2}}{( \mathcal{E}_{m}-\mathcal{E}_{a})^{2}+\omega^{2}},\] where the first sum for core orbitals is restricted from \(a\) to total core orbitals \(N_{c}\), second sum is restricted by involving intermediate states \(m\) up to allowed bound states \(B\) using the respective dipole and quadrupole selection rules, \(L=1\) and \(\mathbf{O}_{1}=\mathbf{D}\) is for the dipole operator to give \(\alpha_{d}\), \(L=2\) and \(\mathbf{O}_{2}=\mathbf{Q}\) is for quadrupole operator to give \(\alpha_{q}\) and \(\mathcal{E}_{i}\) is the DF energy of the state. We have adopted the random phase approximation (RPA) to evaluate the above expression to account for the core correlations [34]. The major contributions to the total dipole and quadrupole polarizabilities are provided by the \(val\) contributions, hence it is important to calculate the same with accurate methods. We divide the \(val\) contributions into two parts - main and tail. The main part corresponds to polarizability contributions by the low-lying dominant transitions responsible for very large polarizability contributions. For the evaluation of the main part of the \(val\) contribution of total polarizability, we have employed the AO method to evaluate the accurate wave functions. These wave functions \(|\psi_{v}\rangle\), with \(v\) denoting the valence orbital, are represented using singles and doubles (SD) approximation of AO method as [39] \[|\psi_{v}\rangle_{SD}=\left[1+\sum_{ma}\rho_{ma}a_{m}^{\dagger}a_{ a}+\frac{1}{2}\sum_{mrab}\rho_{mrab}a_{m}^{\dagger}a_{r}^{\dagger}a_{b}a_{a}\right.\] \[\left.+\sum_{m\neq v}\rho_{mre}a_{m}^{\dagger}a_{v}+\sum_{mla} \rho_{mrea}a_{m}^{\dagger}a_{r}^{\dagger}a_{a}a_{v}\right]|\phi_{v}\rangle, \tag{16}\] where \(a^{\dagger}\) and \(a\) represent the second-quantization creation and annihilation operators, respectively, whereas excitation coefficients are denoted by \(\rho\). The subscripts \(m,r\) and \(a,b\) refer to the virtual and core orbitals, respectively. \(\rho_{ma}\) and \(\rho_{mv}\) are the single whereas \(\rho_{mrab}\) and \(\rho_{mrva}\) are the double excitation coefficients. To take into account the important experimental contributions these _ab initio_ wave functions are modified by changing the valence excitation coefficient with modified \(\rho_{mv}\) using the scaling procedure such that \[\rho^{\prime}_{mv}=\rho_{mv}\frac{\delta E_{v}^{expt}}{\delta E_{v}^{theory}}. \tag{17}\] After obtaining wave functions of the considered states of alkali-metal atoms using AO method, we determine matrix elements with \(k\) as intermediate state using the following expression [40] \[O_{L,vk}=\frac{\langle\psi_{v}|\mathbf{O}_{L}|\psi_{k}\rangle}{\sqrt{\langle \psi_{v}|\psi_{v}\rangle\langle\psi_{k}|\psi_{k}\rangle}}, \tag{18}\] where \(O_{L,vk}\) corresponds to either dipole E1 or quadrupole E2 matrix elements depending on \(\mathbf{D}\) or \(\mathbf{Q}\) operators, respectively. 
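Once the reduced matrix elements and excitation energies are in hand, the sum-over-states evaluation of the dynamic polarizabilities (the same functional form as Eqs. (14) and (15)) is a short loop. The following Python sketch is illustrative only; the example transition data are placeholders, not the calculated AO values.

```python
def alpha_imag_freq(transitions, J_v, L, omega):
    """Dynamic 2^L-pole polarizability alpha_L(i*omega) in atomic units.

    transitions : iterable of (reduced_matrix_element, excitation_energy)
                  pairs, i.e. |<psi_v||O_L||psi_m>| and E_m - E_v in a.u.
    J_v         : total angular momentum of the reference state
    L           : 1 for the dipole (alpha_d), 2 for the quadrupole (alpha_q)
    """
    pref = 2.0 / ((2 * L + 1) * (2 * J_v + 1))
    return pref * sum(dE * me ** 2 / (dE ** 2 + omega ** 2)
                      for me, dE in transitions)

# placeholder E2 data for an S_1/2 ground state (illustrative numbers only)
example_e2 = [(30.0, 0.10), (37.0, 0.10), (5.0, 0.20)]
alpha_q_static = alpha_imag_freq(example_e2, J_v=0.5, L=2, omega=0.0)
alpha_q_dynamic = alpha_imag_freq(example_e2, J_v=0.5, L=2, omega=0.05)
```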
Using these matrix elements, the final expression for the main part of the \(val\) contribution to either the E1 or E2 polarizability at imaginary frequency is then given as \[\alpha_{l,Main}(\omega)=\frac{2}{(2L+1)(2J_{v}+1)}\] \[\times\sum_{m>N_{c},m\neq v}\frac{(E_{m}-E_{v})|\langle\psi_{v} ||\mathbf{O}_{L}||\psi_{m}\rangle|^{2}}{(E_{m}-E_{v})^{2}+\omega^{2}}. \tag{19}\] where the sum now restricted by entailing the intermediate states \(m\) after \(N_{c}\) and up to \(I\). We have considered 10 - 12 E1 and E2 matrix elements for the dominant transitions of considered atoms using the AO method. For precise calculations, we use experimental energies (\(E_{i}\)) from the National Institute of Science and Technology (NIST) database [41]. Contributions from the remaining high-lying states are referred as tail part and are evaluated as \[\alpha_{l,Tail}(\omega)=\frac{2}{(2L+1)(2J_{n}+1)}\] \[\times\sum_{m>I}\frac{(\mathcal{E}_{m}-\mathcal{E}_{n})|\langle \psi_{n}||\mathbf{O}_{L}||\psi_{m}\rangle_{DF}|^{2}}{(\mathcal{E}_{m}- \mathcal{E}_{n})^{2}+\omega^{2}}, \tag{20}\] where \(m>I\) means that states included in the main contribution evaluation are excluded here. Since the tail contributions are much smaller in comparison to the main part, we calculate them using the DF method. ## IV Results the \(C_{3}\) coefficients only for Fr with a number of material walls. We give the static \(\alpha_{d}(0)\) value along with reduced E1 matrix elements of Fr and compare with the other high-precision calculations in Table 1. We have taken the experimental values of E1 matrix elements of the dominant dipole transitions of Fr [38]. Other E1 matrix elements are calculated using the method given in Sec. III. Our value is in excellent agreement with value given by Derevianko _et al._ who used high precision experimental values for E1 matrix elements for the principal transitions \begin{table} \begin{tabular}{l l l l l l l l l} \hline \hline & \multicolumn{1}{c}{Li} & \multicolumn{3}{c}{Na} & \multicolumn{3}{c}{K} \\ \hline Contribution & E2 & \(\alpha_{q}(0)\) & Contribution & E2 & \(\alpha_{q}(0)\) & Contribution & E2 & \(\alpha_{q}(0)\) \\ Main & & & Main & & Main & & & Main \\ \(2S_{1/2}\) - \(3D_{3/2}\) & 17.340 & 421.9 & \(3S_{1/2}\) - \(3D_{3/2}\) & 19.79 & 589 & \(4S_{1/2}\) - \(3D_{3/2}\) & 30.68 & 1919 \\ \(2S_{1/2}\) - \(4D_{3/2}\) & 7.281 & 64 & \(3S_{1/2}\) - \(4D_{3/2}\) & 7.783 & 77.0 & \(4S_{1/2}\) - \(4D_{3/2}\) & 4.04 & 26 \\ \(2S_{1/2}\) - \(5D_{3/2}\) & 4.301 & 21 & \(3S_{1/2}\) - \(5D_{3/2}\) & 4.466 & \(23.64\) & \(4S_{1/2}\) - \(5D_{3/2}\) & 0.45 & 0.3 \\ \(2S_{1/2}\) - \(6D_{3/2}\) & 2.956 & 9 & \(3S_{1/2}\) - \(6D_{3/2}\) & 3.021 & 10 & \(4S_{1/2}\) - \(6D_{3/2}\) & 0.32 & 0.1 \\ \(2S_{1/2}\) - \(7D_{3/2}\) & 2.207 & 5 & \(3S_{1/2}\) - \(7D_{3/2}\) & 2.234 & 6 & \(4S_{1/2}\) - \(7D_{3/2}\) & 0.49 & 0.3 \\ \(2S_{1/2}\) - \(8D_{3/2}\) & 1.74 & 3 & \(3S_{1/2}\) - \(8D_{3/2}\) & 1.746 & 4 & \(4S_{1/2}\) - \(8D_{3/2}\) & 0.51 & 0.3 \\ \(2S_{1/2}\) - \(3D_{5/2}\) & 21.237 & 632.8 & \(3S_{1/2}\) - \(3D_{5/2}\) & 24.23 & 884 & \(4S_{1/2}\) - \(3D_{5/2}\) & 37.58 & 2879 \\ \(2S_{1/2}\) - \(4D_{5/2}\) & 8.917 & 95.3 & \(3S_{1/2}\) - \(4D_{5/2}\) & 9.532 & 115.4 & \(4S_{1/2}\) - \(4D_{3/2}\) & 4.94 & 39 \\ \(2S_{1/2}\) - \(5D_{5/2}\) & 5.27 & 31 & \(3S_{1/2}\) - \(5D_{5/2}\) & 5.470 & 35.46 & \(4S_{1/2}\) - \(5D_{5/2}\) & 0.55 & 0.4 \\ \(2S_{1/2}\) - \(6D_{5/2}\) & 3.620 & 14 & \(3S_{1/2}\) - \(6D_{5/2}\) & 3.700 & 16 & \(4S_{1/2}\) - \(6D_{5/2}\) & 0.39 & 0.2 \\ \(2S_{1/2}\) - \(7D_{5/2}\) & 
2.703 & 8 & \(3S_{1/2}\) - \(7D_{5/2}\) & 2.737 & 8 & \(4S_{1/2}\) - \(7D_{5/2}\) & 0.60 & 0.5 \\ \(2S_{1/2}\) - \(8D_{5/2}\) & 2.126 & 5 & \(3S_{1/2}\) - \(8D_{5/2}\) & 2.139 & 5 & \(4S_{1/2}\) - \(8D_{5/2}\) & 0.62 & 0.5 \\ Tail & & 114 & Tail & & & & & & 98 \\ Core & & 0.112 & Core & & & 1.52 & Core & & & 16.27 \\ vc & & 0 & vc & & 0 & vc & & 0 \\ Total & & 1424 & Total & & & 1879 & Total & & & 4980 \\ Others & & 1423 [37] & Others & & & 1895 [37] & Others & & & 4962 [37] \\ & & & & & & 1800 [19] & & & & \\ \hline \hline \multicolumn{1}{c}{Rb} & \multicolumn{3}{c}{Cs} & \multicolumn{3}{c}{Fr} \\ \hline Contribution & E2 & \(\alpha_{q}(0)\) & Contribution & E2 & \(\alpha_{q}(0)\) & Contribution & E2 & \(\alpha_{q}(0)\) \\ Main & & & Main & & Main & & & & \\ \(5S_{1/2}\) - \(4D_{3/2}\) & 32.94 & 2461 & \(6S_{1/2}\) - \(5D_{3/2}\) & 33.33 & 3362 & \(7S_{1/2}\) - \(6D_{3/2}\) & 32.91 & 2930 \\ \(5S_{1/2}\) - \(5D_{3/2}\) & 0.31 & 0.2 & \(6S_{1/2}\) - \(6D_{3/2}\) & 12.56 & 306 & \(7S_{1/2}\) - \(7D_{3/2}\) & 8.09 & 119 \\ \(5S_{1/2}\) - \(6D_{3/2}\) & 2.23 & 8 & \(6S_{1/2}\) - \(7D_{3/2}\) & 7.91 & 106 & \(7S_{1/2}\) - \(8D_{3/2}\) & 5.79 & 53 \\ \(5S_{1/2}\) - \(7D_{3/2}\) & 2.08 & 6 & \(6S_{1/2}\) - \(8D_{3/2}\) & 5.442 & 46.7 & \(7S_{1/2}\) - \(9D_{3/2}\) & 4.169 & 26.4 \\ \(5S_{1/2}\) - \(8D_{3/2}\) & 1.75 & 4 & \(6S_{1/2}\) - \(9D_{3/2}\) & 4.047 & 24.9 & \(7S_{1/2}\) - \(10D_{3/2}\) & 3.172 & 14.6 \\ \(5S_{1/2}\) - \(9D_{3/2}\) & 1.47 & 3.0 & \(6S_{1/2}\) - \(10D_{3/2}\) & 3.173 & 15.0 & \(7S_{1/2}\) - \(11D_{3/2}\) & 2.521 & 9 \\ \(5S_{1/2}\) - \(4D_{5/2}\) & 40.37 & 3696 & \(6S_{1/2}\) - \(5D_{5/2}\) & 41.23 & 5111 & \(7S_{1/2}\) - \(6D_{5/2}\) & 40.98 & 4487 \\ \(5S_{1/2}\) - \(5D_{5/2}\) & 0.33 & 0.2 & \(6S_{1/2}\) - \(6D_{5/2}\) & 14.76 & 423 & \(7S_{1/2}\) - \(7D_{5/2}\) & 8.73 & 138 \\ \(5S_{1/2}\) - \(6D_{5/2}\) & 2.69 & 11 & \(6S_{1/2}\) - \(7D_{5/2}\) & 9.432 & 150 & \(7S_{1/2}\) - \(8D_{5/2}\) & 6.516 & 67 \\ \(5S_{1/2}\) - \(7D_{5/2}\) & 2.51 & 9 & \(6S_{1/2}\)- \(8D_{5/2}\) & 6.525 & 67.2 & \(7S_{1/2}\) - \(9D_{3/2}\) & 4.755 & 33.8 \\ \(5S_{1/ and other E1 values by SD method [35]. The value given by Safronova _et al._ is evaluated using SD method which deviates from our value by around 1% [36]. Though the method opted by us is same as of Refs. [35; 36] to calculate the dipole polarizability of Fr but we have also scaled the E1 matrix elements using experimental energies as explained in Sec. III. Recently, Smialkowski _et al._ used a molecular MOLPRO package to evaluate the dipole polarizability of Fr which is overestimated and diverges by \(\sim\)3% from our value [37]. Using reduced E1 matrix elements given in the same table, we have estimated the dynamic \(\alpha_{d}(\iota\omega)\) values and used them to estimate different contributions to \(C_{3}\) as given in Table 2. The dominant contributor of \(C_{3}\) coefficients is the main part followed by core, tail and \(vc\). The total value of \(C_{3}\) coefficient differs from material to material. Consequently, the various contributions have been added up to provide a final value of \(C_{3}\) coefficient. The core is providing 27%-38% of the share of total value of \(C_{3}\) which is in accordance with the work by Derevianko _et al._[35] where they emphasized the sensitivity of core towards dipole \(C_{3}\) values. On the other hand, the tail contribution is \(\sim 1\%\) of the total value. ### Quadrupole polarizabilities To evaluate the \(C_{5}\) coefficients, we require quadrupole polarizability of alkali atoms. 
In Table 3, we present the static values of quadrupole polarizability \(\alpha_{q}(0)\) of the ground states of alkali-metal atoms and compared our resulted values with the available literature. We calculated the static polarizability by putting \(\omega=0\) in the Eq. 12. Since E2 matrix elements are required for calculation of dynamic polarizability, therefore we have provided the matrix elements of the dominant E2 transitions in Table 3 for all the alkali atoms. The breakdown of polarizability into the main, tail, core and \(vc\) polarizabilities are also presented. The main part of valence polarizability provides the dominant contribution followed by tail and core polarizability. The \(vc\) contributions for Li, Na and K are zero due to non-availability of \(D\) orbitals in the core of these atoms whereas very insignificant contributions have been encountered for Rb, Cs and Fr. For the final value of total static polarizability, we have added the core polarizability values from RPA. We did not find experimental \(\alpha_{q}(0)\) results for any alkali atom to compare our theoretical values with. However, in the same table, we have compared our results with the most recent work by Smialkowski _et al._ where they calculated the static quadrupole polarizability of alkali atoms using MOLPRO package of _ab initio_ programs [37]. Our static value of quadrupole polarizability deviates from the values reported by Smialkowski _et al._ by less than 1% for Li to Cs alkali atoms whereas for Fr, the discrepancy is about 8%. In another work [19], Jiang _et al._ evaluated the dynamic quadrupole polarizability of Na and Cs using oscillator method. We believe that our values are much more reliable than the compared ones due to accurate calculations of the matrix elements evaluated using the AO method. Using the E2 matrix elements, dynamic quadrupole polarizability \(\alpha_{q}(\iota\omega)\) of the alkali atoms over a range of frequencies has been calculated as presented in Fig. 1. Since our static values are accurate, we believe that the dynamic values are also reliable. We find the static RPA and DF values for the core contributions are quite close, so we have estimated the dynamic values of core polarizabilities using the DF method without losing much accuracy. As the frequency increases the polarizability value decreases and reaches a small value beyond \(\omega=1\) a.u.. This trend is seen for every atom considered in the present work. Since these dynamic polarizability values can be important for experimental purposes, we have inferred these values at a particular frequency by providing a fitting model. In our previous work [42], we gave the fitting formula for dipole polarizability of alkali atoms at imaginary frequencies. Here, we have fitted the quadrupole polarizabilities of all the alkali atoms using the following fitting formula \[\alpha_{q}(\iota\omega)=\frac{A}{1+B\omega+C^{2}\omega}, \tag{21}\] where \(A,B\) and \(C\) are the fitting parameter given in Table 4. ### \(C_{5}\) dispersion coefficients Table 5 presents the calculated dispersion coefficients for all considered atoms due to \(C_{5}\) contributions of polarizability in total potential interacting with different material walls. Using the resulted dynamic polarizability of considered atoms and dynamic permittivity values of material walls at imaginary frequencies, we have obtained the \(C_{5}\) vdW dispersion coefficients by using Eq. (7). We Figure 1: Calculated dynamic quadrupole polarizability \(\alpha_{q}\) (in a.u.) 
at imaginary frequencies of the alkali-metal atoms. have used an exponential grid to evaluate the integral in Eq. (7). In Table 5, the core, \(vc\), main and tail contributions to the dispersion coefficients are given; these follow directly from the corresponding contributions to the polarizability. The final value of the \(C_{5}\) coefficient is obtained by adding up all the contributions. The increasing size of the alkali atoms increases both the individual contributions and the total dispersion coefficients, which is due to the increasing polarizability of the atom for any particular material wall. Among the various contributions, the main part is the dominant contributor to the total \(C_{5}\) dispersion coefficient, followed by the tail, core and \(vc\) parts. In Ref. [35], Derevianko _et al._ emphasized that the \(C_{3}\) coefficients are sensitive to the core contribution. For the \(C_{5}\) coefficients, however, the core contributions are much smaller, amounting to roughly 5% of the total \(C_{5}\) value, whereas the tail \(C_{5}\) contributions are prominent. The vanishing \(vc\) contribution to the \(C_{5}\) coefficient for Li, Na and K follows from the zero value of the corresponding quadrupole polarizability contribution. Comparing the present \(C_{5}\) coefficients with the \(C_{3}\) coefficients already reported in our previous work [28; 32], we observe that the \(C_{3}\) values are at least 25 times smaller than the \(C_{5}\) values for any particular system. This difference stems from the much larger quadrupole polarizability of the alkali atoms compared to their dipole polarizability. Comparing the materials considered in the present work, the largest \(C_{5}\) values are observed for the metal Au, followed by the semiconductor Si and then the dielectrics sapphire, YAG, SiN\({}_{x}\) and SiO\({}_{2}\). We have also compared our values with the available theoretical values of the \(C_{5}\) coefficients. Jiang _et al._ reported \(C_{5}\) coefficients for Na and Cs with different materials including Au [19]. Their value for the Na-Au system is in close agreement with ours, but for the Cs-Au system our value deviates from the reported value by 35%. The discrepancy can be traced to the methods used to calculate the dynamic polarizability of the atom and the dynamic permittivity of the material. The oscillator method was used to calculate the quadrupole polarizability of Na and Cs; this method overestimates the polarizability values, especially for heavier systems [43]. It can also be seen from the table that the \(\alpha_{d}(0)\) value for Cs is not a reliable one. This could be one of the reasons behind the overestimated value of the \(C_{5}\) coefficient reported by Jiang _et al._ The dynamic polarizability values of Na and Cs by Jiang _et al._ were evaluated using the oscillator method, which gives results that deviate slightly from ours, whereas the dynamic permittivity of Au was evaluated using a single-frequency Lorentzian approximation; as a result, the \(C_{5}\) coefficients reported by them are exaggerated. In another report, Tao _et al._ calculated the \(C_{5}\) coefficients for Li, Na and K with Au using an _ab initio_ DFT+vdW method [15]. Though our values support the values reported in Ref. [15], the deviation of our values from the reported ones increases with increasing atomic size.
As it is commonly known that the exchange correlation functional and nonlocal correlation energies are not treated properly in DFT method, we believe our values are accurate and more reliable than the values given by Tao _et al._. ### Total vdW potentials The primary findings of the present work have been given in this section. Fig. 2 presents the potential curves due to dipole and quadrupole effects evaluated using Eqs. (2) and (3), respectively for alkali-metal atoms interacting with SiN\({}_{x}\) system. Most of the experiments have been conducted with SiN\({}_{x}\) diffraction grating [44; 45; 46; 47; 48], so for demonstration purposes, we have chosen SiN\({}_{x}\) wall to observe the total potential curves with alkali atoms. The total potential curve has been obtained till the first higher-order interaction of atom-wall system within the framework of Lifshitz theory. The individual dipole and quadrupole potential curve have also been plotted in the same graphs. We have evaluated \(U_{d}\) using our previous value of dipole polarizability [28; 32]. It can be observed that quadrupole contribution gives very small contribution towards total potential. If we scrutinize these graphs at a very short separation distances, i.e., from \(z=1\) to \(z=10\) a.u., as presented in the insets of graphs 2(a)-2(f), one can observe the overwhelming contribution provided by quadrupole contribution of the atom-wall potential. The quadrupole contribution is more dominant than dipole from 1 nm to 6 nm for all the alkali-metal atoms. As the separation distance increases, the long range dispersion interaction is completely imparted by dipole effect of polarizability of the atom as depicted in the figures. These results suggest that for a particular \begin{table} \begin{tabular}{l c c c c c c} \hline \hline \multicolumn{2}{c}{Parameter} & \multicolumn{5}{c}{Atom} \\ & Li & Na & K & Rb & Cs & Fr \\ \hline \(A\) & 1425.04 & 1879.5 & 4980.82 & 6470.66 & 10420.9 & 8514.71 \\ \(B\) & 0.1296 & 0.1193 & 0.2097 & 0.3640 & 1.1656 & 0.9549 \\ \(C\) & 42.0983 & 49.7813 & 97.0513 & 115.692 & 163.194 & 138.994 \\ \hline \hline \end{tabular} \end{table} Table 4: Fitting parameters for the dynamic quadrupole polarizabilities of the alkali-metal atoms at imaginary frequencies. \begin{table} \begin{tabular}{l l l l l l l l} \hline \hline & & \multicolumn{6}{c}{**Li**} \\ \hline & Au & Si & SiO\({}_{2}\) & SiN\({}_{x}\) & YAG & oSap & eSap \\ \hline Core & 0.001 & 0.004 & 0.003 & 0.003 & 0.005 & 0.005 & 0.006 \\ vc & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ Main & 17.657 & 14.870 & 7.292 & 10.808 & 10.827 & 11.181 & 11.320 \\ Tail & 2.046 & 1.694 & 0.861 & 1.243 & 1.288 & 1.338 & 1.365 \\ Total & 19.710 & 8.156 & 16.568 & 12.054 & 12.120 & 12.524 & 12.691 \\ Ref. [15] & 19.15 & & & & & \\ \hline & & \multicolumn{6}{c}{**Na**} \\ \hline & Au & Si & SiO\({}_{2}\) & SiN\({}_{x}\) & YAG & oSap & eSap \\ Core & 0.077 & 0.048 & 0.034 & 0.036 & 0.054 & 0.059 & 0.064 \\ vc & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ Main & 22.610 & 19.081 & 9.308 & 13.846 & 13.801 & 14.242 & 14.401 \\ Tail & 1.762 & 1.464 & 0.738 & 1.072 & 1.103 & 1.144 & 1.165 \\ Total & 24.449 & 20.594 & 10.080 & 14.954 & 14.958 & 15.445 & 15.631 \\ Ref. [19] & 25.2 & & & & & \\ Ref. 
[15] & 22.48 & & & & & \\ \hline & & \multicolumn{6}{c}{**K**} \\ \hline & Au & Si & SiO\({}_{2}\) & SiN\({}_{x}\) & YAG & oSap & eSap \\ Core & 0.702 & 0.465 & 0.308 & 0.347 & 0.486 & 0.530 & 0.571 \\ vc & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ Main & 47.036 & 39.962 & 19.117 & 28.784 & 28.151 & 28.972 & 29.146 \\ Tail & 1.873 & 1.539 & 0.792 & 1.132 & 1.188 & 1.236 & 1.264 \\ Total & 49.611 & 41.967 & 20.217 & 30.263 & 29.825 & 30.739 & 30.981 \\ Ref. [15] & 47.48 & & & & & \\ \hline & & \multicolumn{6}{c}{**Rb**} \\ \hline & Au & Si & SiO\({}_{2}\) & SiN\({}_{x}\) & YAG & oSap & eSap \\ Core & 1.435 & 0.973 & 0.631 & 0.728 & 0.992 & 1.076 & 1.154 \\ vc & \(\sim 0\) & \(\sim 0\) & \(\sim 0\) & \(\sim 0\) & \(\sim 0\) & \(\sim 0\) \\ Main & 55.231 & 46.980 & 22.377 & 33.765 & 32.876 & 33.818 & 33.973 \\ Tail & 3.743 & 3.106 & 1.569 & 2.274 & 2.345 & 2.433 & 2.479 \\ Total & 60.410 & 51.059 & 24.577 & 36.766 & 36.213 & 37.327 & 37.606 \\ \hline & & \multicolumn{6}{c}{**Cs**} \\ \hline & Au & Si & SiO\({}_{2}\) & SiN\({}_{x}\) & YAG & oSap & eSap \\ Core & 3.277 & 2.287 & 1.442 & 1.712 & 2.252 & 2.431 & 2.591 \\ vc & \(\sim 0\) & \(\sim 0\) & \(\sim 0\) & \(\sim 0\) & \(\sim 0\) & \(\sim 0\) \\ Main & 73.938 & 62.855 & 29.809 & 45.015 & 43.596 & 44.834 & 44.941 \\ Tail & 9.070 & 7.605 & 3.761 & 5.538 & 5.594 & 5.785 & 5.867 \\ Total & 86.184 & 72.748 & 35.011 & 52.266 & 51.442 & 53.050 & 53.399 \\ Ref. [19] & 117 & & & & & \\ \hline \hline & & \multicolumn{6}{c}{**Fr**} \\ \hline & Au & Si & SiO\({}_{2}\) & SiN\({}_{x}\) & YAG & oSap & eSap \\ Core & 4.418 & 3.126 & 1.943 & 2.341 & 3.026 & 3.257 & 3.462 \\ vc & \(\sim 0\) & \(\sim 0\) & \(\sim 0\) & \(\sim 0\) & \(\sim 0\) & \(\sim 0\) \\ Main & 64.978 & 55.310 & 26.252 & 39.656 & 38.457 & 39.548 & 39.669 \\ Tail & 7.169 & 5.987 & 2.985 & 4.369 & 4.448 & 4.606 & 4.680 \\ Total & 76.566 & 64.423 & 31.181 & 46.365 & 45.931 & 47.411 & 47.812 \\ \hline \hline \end{tabular} \end{table} Table 5: Tabulated \(C_{5}\) coefficients for the alkali-metal atoms with different material walls. Final results are compared with the previously available theoretical values. Figure 2: The vdW potential curves for interactions of the alkali-metal atoms with SiN\({}_{x}\) for \(z=1-100\) nm. The insets of the graphs presents the same potential curves at \(z=1-10\) nm. material the multipole effects can be quite significant if the separation distance is very small. Also, the multipole effect is much more effective and can be realized over larger separation when the atom or molecule considered is profoundly polarized. Similar curves can be obtained for the other materials that have been considered in this work. ## V Conclusion We have investigated the quadrupole polarization effects of alkali atoms in the total atom-wall van der Waals interaction potentials. We probed the range of separation distance at which these quadrupole effects are dominant. For this, we considered both dipole and quadrupole induced interactions of atoms with various material walls within the framework of Lifshitz theory. The potential curves depict that quadrupole polarization effects of alkali atoms in total atom-wall potential are quite significant when the separation distance between atom and material wall is ranging from 1 - 10 nm. Beyond this range, the quadrupole contributions start declining, resulting in an attractive potential entirely due to the dipole polarization effects. 
Also, at significantly shorter distances, the attraction due to quadrupole polarization of the alkali atom increases with the size of the atom, suggesting that quadrupole effects can be dominant when the atom is more easily polarized. The obtained results could be useful in high-precision experiments studying van der Waals interactions at small distances very close to surfaces. ## VI Acknowledgement Research at Perimeter Institute is supported in part by the Government of Canada through the Department of Innovation, Science and Economic Development and by the Province of Ontario through the Ministry of Colleges and Universities.
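As a closing illustration, the sketch below evaluates the fitted dynamic quadrupole polarizability of Eq. (21) using the parameters listed in Table 4. It is a small Python snippet written for this text, not code from the original calculations, and it implements the fitting formula exactly as printed.

```python
# A minimal sketch (not the paper's numerical code): evaluating the fitted
# dynamic quadrupole polarizability alpha_q(i*omega) of Eq. (21) with the
# parameters A, B, C of Table 4.  Frequencies are in atomic units.
ALPHA_Q_FIT = {  # atom: (A, B, C) from Table 4
    "Li": (1425.04, 0.1296, 42.0983),
    "Na": (1879.5,  0.1193, 49.7813),
    "K":  (4980.82, 0.2097, 97.0513),
    "Rb": (6470.66, 0.3640, 115.692),
    "Cs": (10420.9, 1.1656, 163.194),
    "Fr": (8514.71, 0.9549, 138.994),
}

def alpha_q_fit(atom, omega):
    """Fitted alpha_q(i*omega) in a.u., following Eq. (21) exactly as printed."""
    A, B, C = ALPHA_Q_FIT[atom]
    return A / (1.0 + B * omega + C**2 * omega)

# The static limit omega -> 0 recovers the A parameter, i.e. the total
# static quadrupole polarizability quoted in Table 3.
print(alpha_q_fit("Cs", 0.0))   # 10420.9 a.u.
print(alpha_q_fit("Cs", 0.5))   # decays with increasing imaginary frequency
```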
2310.20301
Revolutionizing Global Food Security: Empowering Resilience through Integrated AI Foundation Models and Data-Driven Solutions
Food security, a global concern, necessitates precise and diverse data-driven solutions to address its multifaceted challenges. This paper explores the integration of AI foundation models across various food security applications, leveraging distinct data types, to overcome the limitations of current deep and machine learning methods. Specifically, we investigate their utilization in crop type mapping, cropland mapping, field delineation and crop yield prediction. By capitalizing on multispectral imagery, meteorological data, soil properties, historical records, and high-resolution satellite imagery, AI foundation models offer a versatile approach. The study demonstrates that AI foundation models enhance food security initiatives by providing accurate predictions, improving resource allocation, and supporting informed decision-making. These models serve as a transformative force in addressing global food security limitations, marking a significant leap toward a sustainable and secure food future.
Mohamed R. Shoaib, Heba M. Emara, Jun Zhao
2023-10-31T09:15:35Z
http://arxiv.org/abs/2310.20301v1
Revolutionizing Global Food Security: Empowering Resilience through Integrated AI Foundation Models and Data-Driven Solutions ###### Abstract Food security, a global concern, necessitates precise and diverse data-driven solutions to address its multifaceted challenges. This paper explores the integration of AI foundation models across various food security applications, leveraging distinct data types, to overcome the limitations of current deep and machine learning methods. Specifically, we investigate their utilization in crop type mapping, cropland mapping, field delineation and crop yield prediction. By capitalizing on multispectral imagery, meteorological data, soil properties, historical records, and high-resolution satellite imagery, AI foundation models offer a versatile approach. The study demonstrates that AI foundation models enhance food security initiatives by providing accurate predictions, improving resource allocation, and supporting informed decision-making. These models serve as a transformative force in addressing global food security limitations, marking a significant leap toward a sustainable and secure food future. keywords: Food Security, Foundation Models, Crop Type Classification, Field Delineation, Deep Learning, Cropland Mapping, Machine Learning, Remote Sensing, Crop Yield Prediction. + Footnote †: journal: ## 1 Introduction The main goal of ensuring food security is to maintain a sufficient provision of healthy food for global sustenance. This objective was emphasized by the United Nations (UN) in 2015 through the establishment of 17 Sustainable Development Goals (SDGs) to achieve by 2030, aiming to promote global well-being and preserve the Earth's ecosystems [1]. Among these goals, the second SDG seeks to eradicate hunger by enhancing food security and nutrition and promoting sustainable agricultural practices. Meeting this objective, however, presents a significant challenge because of the complex nature of food security phenomena. Addressing this complexity necessitates the integration of heterogeneous data spanning different themes, timeframes, and geographical regions. Consequently, the application of techniques that amalgamate these diverse variables becomes imperative to enhance forecasting accuracy. In recent times, there has been a notable increase in the accessibility of earth observation data, which includes information on various biophysical and climatic aspects. This influx of information, combined with ground-level data such as crop yields, cropland details, crop types, and field delineation, has catalyzed the adoption of Machine Learning (ML) models. These models are employed to automatically distill pertinent features from vast and disparate datasets, enabling high-frequency food security predictions on a global scale [2; 3]. Leveraging these pertinent features, ML models assume the role of monitoring data and forecasting key aspects of food security [4; 5; 6; 7]. In the realm of food security, ML models acquire expertise from diverse and intricate observational data [8; 9]. They meticulously sift through this data, retaining only the salient features that contribute to accurate predictions. In this context, powerful tools like Long Short-Term Memory networks (LSTM) and Convolutional Neural Networks (CNN) have gained prominence for their application in Deep Learning (DL) methodologies [10; 11]. 
These techniques excel at capturing high-dimensional features and appropriately modeling temporal dynamics, resulting in significantly improved prediction accuracy [12; 13]. In the rapidly evolving landscape of artificial intelligence and machine learning, a pivotal development in recent years has been the emergence of what is known as a "foundation model." Foundation models, at their core, represent a class of powerful and versatile machine learning models that serve as the bedrock for a wide array of applications across various domains [14]. These AI foundation models are typically pre-trained on massive and diverse datasets containing text, images, or both. They leverage deep neural network architectures and extensive computational resources to learn intricate patterns, structures, and semantic relationships present in the input data. The training process involves exposing the model to an extensive range of linguistic and visual contexts, allowing it to develop an understanding of natural language, visual content, and even the nuances of human expression [15]. One of the most notable exemplars of foundation models is the Generative Pre-trained Transformer (GPT) series, with GPT-3 being a prominent example. GPT-3, developed by OpenAI, has gained considerable recognition for its impressive capacity to produce coherent and contextually appropriate text, rendering it a versatile instrument for tasks related to natural language understanding and generation. Its proficiency extends from answering questions and language translation to content generation and text completion [16]. The significance of foundation models in the context of food security and related domains lies in their capacity to comprehend and process large volumes of heterogeneous data. By fine-tuning these foundation models on specialized datasets relevant to food security, researchers and organizations can harness their latent capabilities to extract valuable insights, predict food security trends, and offer data-driven policy recommendations. For instance, when applied to the realm of food security, foundation models can be utilized to analyze vast datasets that include textual reports on crop conditions, satellite images of agricultural regions, weather data, economic indicators, and more. These models excel at distilling actionable information from such data by recognizing patterns and correlations that might elude traditional analysis methods [17]. Furthermore, their natural language understanding capabilities enable them to parse and summarize reports, academic papers, and policy documents, thereby assisting experts in staying updated with the latest research and policy developments in the field of food security [18]. In summary, AI foundation models represent a transformative tool for enhancing the field of food security. Their innate ability to process, interpret, and generate information from diverse datasets holds the potential to revolutionize the way we address and mitigate food insecurity challenges, ultimately contributing to the achievement of the United Nations' SDGs in the domain of global prosperity and environmental protection. As such, the integration of foundation models into the food security domain exemplifies the convergence of cutting-edge AI technologies with crucial global priorities. Table 1 lists the abbreviations or acronyms used in this article. The remainder of this paper provides a comprehensive overview of machine learning methodologies applied in forecasting tasks related to food security.
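As a concrete, hedged illustration of the fine-tuning workflow described above, the sketch below adapts a generic pre-trained language model to label short food-security field reports. The model checkpoint, label set, and example report are placeholders chosen for illustration, not artifacts of this study.

```python
# Illustrative sketch (not the authors' pipeline): fine-tuning a pre-trained
# language model to classify short food-security reports.  The label set,
# checkpoint name, and example report are hypothetical placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = ["food secure", "stressed", "crisis", "emergency"]          # hypothetical classes
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels))

reports = ["Maize yields dropped sharply after the failed rains in the district."]
batch = tokenizer(reports, padding=True, truncation=True, return_tensors="pt")
targets = torch.tensor([2])                                          # hypothetical ground truth

outputs = model(**batch, labels=targets)   # forward pass returns cross-entropy loss and logits
outputs.loss.backward()                    # an optimizer step would follow during fine-tuning
print(outputs.logits.softmax(dim=-1))      # class probabilities for the report
```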
To ensure food security in alignment with the SDGs, a set of four essential subtasks is mandated [19]. These tasks involve the mapping of agricultural land [20], the classification of crop types [21], the prediction of crop yields [22], and the delineation of fields [23]. The schematic depiction of the food security process is elucidated in Figure 1. Despite the rapid evolution of ML techniques over recent decades, it remains imperative to comprehend the procedural intricacies underpinning these methods and rigorously assess their performance. Such scrutiny is vital for enhancing decision-making processes within the domain of food security. The datasets employed in this study exhibit differences in both spatial and temporal resolutions. To optimize the performance of the machine learning models, the relevant data were meticulously gathered via thorough ground-level observations. These observations serve a dual purpose: validating the resultant model predictions and furnishing essential input variables such as crop yield observations and cropland data. ### Food Security Tasks #### 2.1.1 Cropland Mapping Cropland classification is essential for global food security support, offering valuable insights for monitoring food security [4]. The abundance of earth observation data enables changes in crop mapping through machine learning (ML) classification techniques [24; 25]. These techniques utilize diverse data from various sensors, incorporating parameters like vegetation indices, soil characteristics, moisture content, and climate factors. ML techniques typically leverage these complex datasets along with data obtained from ground-based observations, as demonstrated in previous studies [26; 27]. \begin{table} \begin{tabular}{l l} \hline **Abbreviations** & **Description** \\ \hline UN & United Nations \\ SDGs & Sustainable Development Goals \\ ML & Machine Learning \\ DL & Deep Learning \\ CNN & Convolutional Neural Networks \\ GPT & Generative Pre-trained Transformer \\ EO & Earth observation \\ RF & Random Forest \\ SVM & Support Vector Machine \\ GDD & Growing Degree Days \\ NLP & Natural Language Processing \\ GRU & Gated Recurrent Unit \\ LSTM & Long Short-Term Memory networks \\ \hline \end{tabular} \end{table} Table 1: The Abbreviations and/or Acronyms #### 2.1.2 Crop Type Mapping Crop-type mapping is a critical task for ensuring food security monitoring. Precise and timely prediction of crop types greatly enhances the efficiency of agricultural decision support systems [28; 29]. ML techniques, which excel in handling heterogeneous and intricate data, are widely employed in this task, operating at regional and global scales [30; 31]. #### 2.1.3 Crop Yield Prediction In order to meet food security objectives and maintain an uninterrupted food supply, it is crucial to make precise predictions of crop yields, particularly on regional and national scales. These predictions can aid decision-makers in devising strategies for imports and support [32; 33]. Figure 1: Food Security Process. This task is inherently intricate, influenced by various data types, including soil properties and climate variables. The integration of multispectral and multi-temporal input data, along with ground-observed data, is instrumental in leveraging ML and DL techniques for accurate crop yield prediction [5; 6]. #### 2.1.4 Field Delineation The delineation of field borders is essential for improving the accuracy of crop yield forecasting and supporting initiatives for monitoring food security [34; 35].
Earth observation (EO) data, known for its extensive coverage and high spatiotemporal resolutions, proves particularly effective in this context for field delineation in agriculture. Supervised classification methods are frequently utilized to derive and establish spatial patterns at various scales, including local, regional, and global levels [34]. Moreover, several DL methods have exhibited exceptional proficiency in diverse classification and segmentation tasks, including the demarcation of agricultural fields. ### Recent Applications of ML and DL in Food Security In recent years, the fusion of ML and DL has significantly transformed the food security landscape, profoundly influencing decision-making processes [36; 37]. ML techniques have emerged as crucial tools in this area, necessitating substantial amounts of training data. Traditionally, food security assessments heavily relied on surveys conducted by skilled professionals [38; 39]. However, these conventional data collection methods are limited in scope and often involve significant expenses. Earth observation data, climate data, and land use and land cover data are commonly utilized in food security applications. The integration of these diverse data types within ML models has demonstrated promising results. Recently, DL models, including Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) networks, have seen significant use in the field of food security [40; 41; 42]. ML techniques encompass three primary learning directions: supervised, unsupervised, and semi-supervised learning. These approaches enable the automatic extraction of features that maximize model performance, allowing the direct training of raw data. However, it is essential to note that these models might be vulnerable to overfitting, especially when dealing with comparatively restricted training datasets. 1. Supervised Learning: In supervised learning, the training process depends on labeled data. This data consists of data points, each associated with a specific label or target data. The learning model's goal is to identify the relationship between the input data and the output labels on the unlabeled dataset [43]. This relationship allows the model to classify new, unlabeled input data into predefined classes determined during training. In the context of food security applications, the input data might include various types of information like earth observation data, climatic data, and biophysical data, while the output labels may correspond to survey food data and observed ground data, such as cropland and crop type [44]. 2. Unsupervised Learning: Unsupervised learning takes a different approach, as the training data comprises solely unlabeled data, devoid of specified outputs. Consequently, these techniques are tasked with uncovering hidden structures within the data [45]. The training process involves identifying concealed patterns that can categorize the data into subsets with shared characteristics. After identifying and training these subgroups in the training phase, new input data can be classified into implicitly defined categories during the learning process, usually in the testing phase [46]. 3. Semi-supervised Learning: becomes relevant in scenarios with scarce labeled training datasets and abundant unlabeled data [47]. In this scenario, a supervised model is initially trained using the labeled data, subsequently generating additional training data. 
An unsupervised model is subsequently trained using both the unlabeled data and the labeled data generated by the supervised model. Semi-supervised learning utilizes this combined strategy to address the constraints present in purely supervised or unsupervised methods [48]. ### ML and DL methods for ensuring food security Recent advancements in food security have seen the extensive use of both ML and DL techniques. ML methods, such as Support Vector Machines (SVMs), Random Forests (RF), and decision trees, have been widely employed for forecasting tasks in food security. At the same time, DL approaches have demonstrated their efficacy in handling diverse and complex datasets, encompassing both structured and unstructured data. The utilization of ML methods encompasses multiple steps, including data collection, pre-processing, feature selection, model selection, hyperparameter tuning, training, validation, and testing. This inclusive process streamlines the generation of extensive, varied, and data-rich datasets, providing abundant prospects for expediting research and implementing solutions driven by data [49]. Various studies in the literature have leveraged ML techniques to monitor food security, with each of the four primary food security tasks demonstrating significant viability when processed using ML methods. DL techniques have gained prominence due to their accuracy, especially when dealing with heterogeneous data. However, DL models require extensive computational resources, and their results can be challenging to interpret [50]. Various CNN architectures, including ResNet and R2U-Net, have played a crucial role in handling tasks related to food security. In conclusion, DL techniques are the preferred choice for effectively analyzing intricate and diverse data, enabling accurate forecasting of the four key food security tasks due to their high precision and ability to handle extensive, unorganized datasets. ## 3 Datasets Utilized for Food Security Tasks The area of food security relies on a range of datasets, such as earth observation data, climatic data, biophysical data, and observed ground data. These datasets form the basis for utilizing machine learning (ML) techniques in activities related to food security. In the realm of ML, various techniques have been developed, each possessing distinct strengths and weaknesses [91]. Random Forest (RF) is known for its swift learning, ease of implementation, and computational efficiency, making it particularly advantageous when dealing with limited labeled data. However, its conservative approach to loss optimization and potential for subpar performance due to underlying assumptions should be considered. Meanwhile, XGBoost offers simplicity in implementation, computational speed, and adaptability to smaller datasets, though it may incur longer computation times and be susceptible to higher error rates and overfitting. Support Vector Machine (SVM), with its ease of use, rapid learning, dataset size independence, and proficiency with moderate-sized datasets, provides a compelling option, yet it may involve lengthier computations, a risk of elevated error rates, and a propensity for overfitting. 
\begin{table} \begin{tabular}{|c|c|c|} \hline **Techniques** & **Pros** & **Cons** \\ \hline LSTM [4, 11] & Flexible, High accuracy, Large dataset processing & Difficult implementation, Sensitive to data amount, Long training time, High computational time \\ \hline XGBoost [36] & Easy implementation, Computational speed, Small data & Fast performance, knowledge dependency \\ \hline RF [42] & Fast learning, Limited labeled data & Low, Classic innovation \\ \hline SVM [32] & Simple implementation, Fast learning, Independent of data amount, Good with moderate-sized datasets & Long computational time, High error, Prone to overfitting \\ \hline CMNet [33] & Easy implementation, Predictive, Robust & Low computational time, Structured data \\ \hline K-means [36] & Simple implementation, Training with small data, Predictive & Structured data, Sensitive to data amount \\ MLP [36] & Simple implementation, Low computational time & Negative structured data, Prone to overfitting \\ \hline CNN [32, 1, 33] & Simple implementation, Training with small data, Predictive, Good performance & Complex, Sensitive to data amount, Difficult to interpret results \\ \hline CNN [32, 1, 33] & Efficient data handling, High performance & Large processing time, Consistency \\ \hline CNN [32, 10] & Structured/unstructured data, Efficient for multi-class problems, High accuracy & High computational time, Hard to interpret models \\ \hline K-NN [32] & Fast implementation, Robust & Sensitive to kernel parameters, Poor performance on big data \\ \hline \end{tabular} \end{table} Table 2: Pros and Cons of ML and DL Techniques Methods such as K-means, though straightforward to
It is organized into four main categories: "Remote Sensing Data," which includes data on vegetation health, temperature, and precipitation; "Climate Data," encompassing information on rainfall patterns, temperature records, and climate anomalies; "Biophysical Data," consisting of data related to crop types, health, soil properties, and yields; and "Ground-Observed Data," which covers on-the-ground surveys, market prices, food consumption, livestock information, and water resources [94]. Each variable within these categories serves as a critical component in evaluating and understanding the factors affecting food security in a region. This structured approach to data collection and analysis is instrumental in supporting decision-making processes aimed at improving food access, availability, and overall food security for vulnerable populations [95]. ## 4 Challenges and Prospects for the Future Although ML techniques have demonstrated promising outcomes in tackling food security issues, it is essential to address several challenges and future considerations in this regard [96]. \begin{table} \begin{tabular}{|p{85.4pt}|p{85.4pt}|p{85.4pt}|} \hline **Category** & **Variables** & **Description** \\ \hline \multicolumn{4}{|c|}{**Remote Sensing Data**} \\ \hline NDVI & Normalized Difference Vegetation Index & Measures vegetation health and crop growth. \\ EVI & Enhanced Vegetation Index & Provides improved sensitivity in dense vegetation areas. \\ LST & Land Surface Temperature & Monitor temperature variations affecting crops. \\ NDWI & Normalized Difference Water Index & Identifies water bodies and soil moisture levels. \\ Land Cover & Land Use Classification & Classifies land types (cropland, forests, etc.). \\ Rainfall & Rainfall Estimates & Monitors precipitation patterns. \\ Soil Moisture & Soil Moisture Content & Provide soil moisture levels for agriculture. \\ \hline \multicolumn{4}{|c|}{**Climate Data**} \\ \hline Rainfall Temperature & Rainfall Patterns & Historical and current rainfall data. \\ Temperature & Temperature Data & Historical and real-time temperature records. \\ Evaportranspirated & Evaportranspiration Data & Measures water loss from soil and plants. \\ Climate Anomalies & Abnormal Climate Patterns & detect climate anomalies affecting crops. \\ Growing Degree Days & Growing Degree Days (GDD) & Predicts crop development stages. \\ Climate Indices & Climate Indices (e.g., PDSI, SPI) & Assess drought severity and precipitation patterns. \\ \hline \multicolumn{4}{|c|}{**Biophysical Data**} \\ \hline Crop Type & Crop Type and Distribution & Information on types of crops planted. \\ Crop Health & Crop Health Data & Data on crop diseases, pests, and overall condition. \\ Soil & Soil Data & Properties such as texture, pH, and nutrient levels. \\ Crop Yield & Crop Yield Data & Historical and real-time crop yield information. \\ Vegetation Cover & Vegetation Cover & Percentage of land covered by vegetation. \\ \hline \multicolumn{4}{|c|}{**Ground-Observed Data**} \\ \hline Crop Surveys & Crop Surveys & Assess crop health, growth, and yield estimates. \\ Market Prices & Market Prices & Local prices for food items to gauge affordability. \\ Food Consumption & Food Consumption Surveys & Data on household food consumption and dietary diversity. \\ Livestock Data & Livestock Data & Information on livestock health and conditions. \\ Water Resources & Water Resources & Availability and quality of water sources for agriculture. 
\\ \hline \end{tabular} \end{table} Table 6: Input Variables for Food Security Monitoring ### Data Challenges #### 4.1.1 Data Complexities Food security data come in various formats, such as earth observations, climate data, biophysical changes, and ground-level information. Prior to employing ML techniques, data must undergo collection, cleaning, and validation [49]. The heterogeneous nature of these data, along with limitations in data processing methods, presents challenges. Integrating data from various sources and structures, such as yield production, crop types, and cropland mapping, remains a challenge. #### 4.1.2 Sample Size A sufficient number of samples are required to ensure generalizability and avoid overfitting. In food security applications, establishing an adequate training set, including yield production samples or cropland maps, is vital [49]. The sample size depends on the study's scope, and data collection's diversity and heterogeneity add to the challenge. Choosing an initial sample size that is sufficiently large to avoid overfitting and improve accuracy remains an ongoing challenge. #### 4.1.3 Data Availability ML approaches are heavily reliant on the accessibility and accuracy of data. Nevertheless, numerous studies concerning food security are constrained by small datasets, frequently confined to local or regional contexts, resulting in challenges such as absent or deficient records. This hampers accurate forecasting [96]. Food security data, including cropland mapping and yield estimation, often suffer from incomplete records. Expanding datasets globally and addressing missing data challenges through methods like data imputation is necessary. ### Analyzing the Challenges in Computation #### 4.2.1 Selecting the Optimal Machine Learning Model Determining the most appropriate ML model frequently requires a thorough assessment of the distinctive attributes of current models [96]. Simplicity and ease of use are preferred, especially when computational resources are limited. Models like Decision Trees and Random Forests are favored for their efficiency, ability to provide valuable feedback, and avoidance of overly complex designs. The choice of model should align with available computing resources. Model interpretability also holds a crucial position in the selection process, where deep learning (DL) models, such as neural networks, are specifically employed for tasks demanding comprehensive scrutiny of extensive datasets, such as remote sensing data captured at frequent intervals and daily climatic station data. #### 4.2.2 Feature Extraction and Selection Feature selection and extraction represent crucial stages in the implementation of ML algorithms. The availability of features is imperative for accurate forecasting [96]. Integration of feature selection with resampling techniques is common practice, especially when dealing with training datasets. Notably, certain features, such as vegetation and climate data, contain vital information relevant to applications in food security. The selection of specific data features markedly influences the outcomes of the forecasting process. #### 4.2.3 Hyperparameters Selection ML techniques are dependent on hyperparameters that require optimization for individual datasets to maximize precision [96]. End-users fine-tune these parameters to improve the predictive capabilities of the ML algorithm. 
Employing tools such as grid search facilitates the optimization of hyperparameters, while the integration of nested resampling methods is vital in preventing overfitting within the ML model. #### 4.2.4 Model Complexity ML techniques face challenges with big data due to computational complexity. As data scale increases, basic operations consume significant time and memory resources [96]. For instance, traditional ML techniques become computationally infeasible with large datasets, such as fine spatial resolution cropland mapping. The computational time required for these models grows exponentially with data size. Deep learning models prove more efficient than traditional ML as data volume expands. #### 4.2.5 Cutting-Edge Machine Learning Approaches To tackle the obstacles associated with data heterogeneity and limited sample size, it is imperative to delve into the latest developments in ML techniques. Multi-task learning, transfer learning, and active learning are strategies that can enhance ML methods [96]. These approaches promote knowledge sharing between tasks, leveraging unlabeled data, and involving experts in model building. They hold promise for improving forecasting accuracy in food security applications. ### Difficulties in Interpretability and Assessment #### 4.3.1 Evaluation Metrics and Uncertainty ML techniques leverage diverse parameters to minimize disparities between model predictions and training data. Assessing the effectiveness of complex models necessitates the careful consideration of evaluation metrics. The selection of suitable evaluation metrics is contingent upon the nature of the data, the problem statement, and the specific ML models employed. Given the inherent uncertainty in training data and labels, the application of statistical techniques becomes imperative in evaluating performance discrepancies under uncertainty [96]. Commonly used evaluation metrics in food security ML applications include R, R2, RMSE, MAE, and BIAS, each emphasizing different aspects of model performance. #### 4.3.2 Reproducibility and Replicability Ensuring reproducibility and replicability in data-driven food security research has garnered significant attention. Challenges arise when applying DL technology and large datasets across diverse geographic locations for tasks like yield production and cropland delineation. Ensuring consistency in datasets, methods, and workflows is essential for addressing these challenges [96]. ## 5 Large AI Models Various types of large AI models are revolutionizing the approach to food security challenges worldwide. These expansive AI systems, ranging from CNNs to RNNs and transformers, are finding application across the entire food supply chain. In agriculture, they're harnessed to forecast crop yields, analyze climate patterns, and aid farmers in making data-driven decisions [97][98]. Reinforcement learning models are optimizing resource allocation, and ensuring judicious use of water and fertilizers, thereby promoting sustainable and efficient farming practices. Meanwhile, generative adversarial networks and natural language processing models enhance food distribution and logistics, curbing spoilage and waste. The diverse capabilities of these large AI models collectively advance food security, facilitating better access to nourishing and dependable food sources for a larger global population. Foundation models, a pivotal class of large-scale AI models, represent a core element in the realm of NLP [98]. 
These models are characterized by their pre-training on extensive text corpora, which equips them with an understanding of the intricate dynamics of human language, context, and semantics. At their core, foundation models, exemplified by GPT-3 and BERT, rely on transformer architectures, featuring multi-layered encoders that enable them to capture intricate word, phrase, and sentence relationships [97]. The pre-trained models can subsequently undergo fine-tuning for specific NLP tasks, making them adaptable for tasks like machine translation, sentiment analysis, and question-answering. The mathematical foundation of these models often denoted as ELMo, GPT, or BERT, has revolutionized NLP, effectively enhancing natural language understanding and generation through their comprehensive representation learning capabilities. The central mathematical component of foundation models is the transformer architecture, which includes self-attention mechanisms. The concept of self-attention can be described as illustrated in Zhao et al. (2023) [99]: \[\text{Attention}(Q,K,V)=\text{softmax}\left(\frac{QK^{T}}{\sqrt{d_{k}}}\right)V \tag{1}\] Where \(Q\), \(K\), and \(V\) are the query matrix, the key matrix, and the value matrix, respectively, and \(d_{k}\) is the dimension of the key vectors. This mechanism permits the model to allocate distinct significance to various words within the input sequence, aiding context-aware learning. The model then uses these learned weights to generate contextualized embeddings. Pre-training on vast text corpora helps in learning the optimal weights for this self-attention mechanism. Subsequently, for fine-tuning, a task-specific loss function is applied to adapt the model to specific NLP tasks. This fine-tuning process can be described by [100]: \[\mathcal{L}(\theta)=\mathcal{L}_{\text{task-specific}}(f_{\text{task-specific }}(h,\theta)) \tag{2}\] Where \(\mathcal{L}(\theta)\) the loss function for the overall model with parameters \(\theta\), \(\mathcal{L}_{\text{task-specific}}\) the loss function specific to the NLP task. \(f_{\text{task-specific}}(h,\theta)\) the fine-tuned model for the task with parameters \(\theta\) In this way, foundation models are not only mathematically profound but also highly adaptable for a wide spectrum of NLP tasks, making them transformative in the field of artificial intelligence. ### Utilizing Foundation Models for Food Security Applications Food security is a pressing global concern, with far-reaching implications for human well-being and sustainable development. Land-use mapping, enabled by the integration of foundation models, has emerged as a vital tool in addressing food security challenges. Figure 2 illustrates the block diagram of the proposed foundational model for food security applications. Figure 2: Block diagram of the proposed foundation model for Food Security applications. ### Foundation Model Structures for Text Classification Foundation models for text classification are neural networks designed to process raw text data and produce classifications. The use of these models involves a blend of structural elements to attain cutting-edge outcomes in diverse NLP assignments. 1. **Input Layer**: The input to the model is raw text, which undergoes tokenization into subword or word-level tokens. Each token is represented as an embedding vector to create a dense, fixed-dimensional input representation. 2. 
**Embedding Layer**: The embedding layer transforms tokenized input into dense vectors, capturing semantic word meanings and contextual relationships. This layer employs advanced embedding techniques such as Word2Vec and subword embeddings. 3. **Transformer Architecture**: Foundation models rely on the transformer architecture, a fundamental structure for NLP. It comprises multiple transformer layers, typically stacked hierarchically. Each transformer layer comprises two sub-layers: 1. **Multi-Head Self-Attention Layer**: This sub-layer computes weighted representations of all words in the input text by applying the self-attention mechanism of Equation (1) to a given input sequence \(X\). 2. **Position-wise Feedforward Layer**: Following self-attention, a position-wise feedforward neural network is applied to each position independently to capture local text patterns. For an input sequence \(X\), this can be represented as [101]: \[\text{FeedForward}(X)=\sigma(W_{2}\text{ReLU}(W_{1}X+b_{1})+b_{2})\] (3) Where \(W_{1}\), \(W_{2}\), \(b_{1}\), and \(b_{2}\) are weight matrices and bias terms. 4. **Classification Head**: At the top of the model, a classification head is attached. It consists of fully connected layers that map transformer representations to class probabilities. The final output can be represented as [102]: \[P(Y|X)=\text{softmax}(WX+b)\] (4) where \(Y\) is the class label, \(X\) is the transformer output, and \(W\) and \(b\) are weight and bias parameters. 5. **Training Objective**: Foundation models for text classification are trained using a cross-entropy loss function, quantifying the difference between predicted class probabilities and ground truth labels. The loss can be represented as [102]: \[\mathcal{L}=-\sum_{i}y_{i}\log(p_{i})\] (5) Where \(y_{i}\) is the ground truth label, \(p_{i}\) is the predicted probability, and the sum is taken over all classes. Foundation models, exemplified by BERT and its derivatives, have revolutionized NLP tasks by capitalizing on the transformer's self-attention mechanism. Their ability to effectively capture contextual information renders them powerful tools for a spectrum of text classification applications. ### Foundation Model Structures for Structured Data Classification Foundation models, celebrated for their versatility in various applications, extend their prowess to structured data classification. This section outlines their architecture, highlighting the components and processes that make them particularly suitable for structured data. 1. **Input Data Representation**: Structured data classification revolves around tabular data, where rows represent instances, and columns denote attributes. These attributes span a diverse array of types, including numerical, categorical, and textual data. 2. **Model Architecture**: 1. **Embedding Layer**: Foundation models begin, much like their NLP counterparts, with an embedding layer. In this layer, categorical attributes are transformed into continuous vectors, while numerical features remain unaltered. Textual features can be embedded using techniques such as Word2Vec or pre-trained word embeddings. 2. **Transformer Layers**: Foundation models leverage the transformer architecture, customized for structured data: 1. **Multi-Head Self-Attention**: Self-attention mechanisms identify relationships between rows and features within tabular data. The model autonomously learns the relevance of specific features and their dynamic interactions. 2.
**Position-wise Feedforward Networks**: Following self-attention, position-wise feedforward networks capture localized patterns and interactions within structured data. 3. **Classification Head**: A classification head sits atop the model, orchestrating the prediction of class labels or target variables. This head typically incorporates fully connected layers, culminating in an output layer equipped with activation functions. 4. **Training and Objective**: Foundation models for structured data classification undergo rigorous training, with the choice of a loss function tailored to the classification task. Binary classification often employs cross-entropy loss, while multi-class problems necessitate categorical cross-entropy. For regression tasks, mean squared error may be adopted as the loss function. These models offer numerous advantages, including the automatic learning of features, interpretability, and the potential for transfer learning across diverse domains. By harnessing the powerful capabilities of transformers, they unveil intricate feature interactions within tabular data, providing an innovative approach to solving classification challenges across various domains. ### Foundation Model Structures for Image Classification Foundation models have demonstrated exceptional performance in image classification tasks, equipped with a structured architecture optimized for the analysis of visual data. This explanation offers a systematic breakdown of the intricate components and scientific foundations governing the operation of foundation models for image classification. 1. **Input Data Representation**: Image classification focuses on visual data in the form of multi-dimensional arrays of pixel values, with channels representing color information. 2. **Model Architecture**: Foundation models for image classification consist of fundamental architectural components: 1. **Convolutional Neural Networks (CNNs)**: Central to image classification models, CNNs comprise layers like convolutional, pooling, and fully connected layers, which hierarchically learn features from input images. 2. **Pre-trained Models**: Foundation models often integrate pre-trained architectures (e.g., VGG, ResNet, Inception) that have acquired rich image features from extensive datasets. These models serve as a foundational knowledge base and can be fine-tuned for specific classification tasks. 3. **Fine-Tuning**: Fine-tuning is a critical process to adapt pre-trained models to image classification tasks. It customizes the model's learned weights to align with the specific dataset and classification objectives. 4. **Transfer Learning**: Transfer learning is extensively used, enabling the transfer of knowledge learned from one task or dataset to another. Pretrained model representations often serve as valuable features for image classification. 3. **Training and Objective**: Image classification models are trained through supervised learning. The choice of loss function depends on the number of classes and the nature of the problem. For multi-class classification, categorical cross-entropy is commonly used to quantify the disparity between predicted class probabilities and true class labels. 4. **Prediction and Activation Functions**: Prediction in image classification models is facilitated by activation functions, such as softmax, applied to the final layer's output. The softmax function converts raw scores into class probabilities. 
For an image \(I\) and class \(j\), the probability \(P(j|I)\) is computed using the softmax function. Foundation models for image classification epitomize the forefront of computer vision, providing a versatile and robust solution for tasks that span object recognition, medical image analysis, and beyond. Table 7 illustrates the distinctive characteristics of foundation model structures when applied to text, structured data, and image \begin{table} \begin{tabular}{|c|c|c|c|} \hline **Aspect** & **Text** & **Structured Data** & **Image Data** \\ \hline **Input Data** & Raw text & Tabular data & Images \\ \hline **Embedding Layer** & Embeds tokens & Converts categorical attributes & Utilizes pre-trained CNNs \\ \hline **Transformer Layers** & Multi-head self-attention & Self-attention and feedforward & Pre-trained CNN layers \\ \hline **Pre-training** & On extensive text corpora & Pre-training not common & On large image datasets \\ \hline **Fine-Tuning** & Common for task-specific adaptation & Fine-tuning is typical & Fine-tuning for specific tasks \\ \hline **Training Objective** & Cross-entropy loss & Variable loss functions & Cross-entropy or MSE \\ \hline **Model Architecture** & Transformer-based & Utilizes pre-trained models & CNN-based \\ \hline **Activation Function** & Softmax for classification & Depends on the task & Softmax for classification \\ \hline **Data Types** & Textual data & Tabular data (numerical, categorical) & Image data (pixel values) \\ \hline \end{tabular} \end{table} Table 7: Comparison of Foundation Model Structures for Text, Structured Data, and Image Data data. In the context of input data, text models process raw text, structured data models handle tabular data with numerical and categorical attributes, while image models work with pixel values from images. These models utilize different embedding techniques: text models embed tokens, structured data models transform categorical attributes, and image models rely on pre-trained CNNs. Furthermore, the training goals differ, with text models employing cross-entropy, structured data models using different loss functions, and image models utilizing cross-entropy for classification along with Mean Squared Error (MSE) for regression. The differences in model architecture, fine-tuning practices, and activation functions highlight the adaptability of foundation models to diverse data types. ### Enhancing Efficiency in Fine-Tuning Fine-tuning involves customizing pre-existing foundation models for specific NLP tasks, including sentiment analysis, text classification, and question-answering [11]. Without fine-tuning, foundation models might not achieve optimal performance on task-specific data, but the process can be resource-intensive due to the need to update a large number of model parameters. Parameter efficiency is crucial in this context, as it involves achieving high task performance with a minimal increase in the number of model parameters. The goal of parameter-efficient fine-tuning is to make the most of as few model parameters as possible while maintaining high task performance. Several techniques support this approach: Gradual unfreezing entails freezing most pre-trained model layers and unfreezing only a small subset, such as the last few layers or task-specific layers, reducing the number of parameters requiring adjustment. 
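As a minimal illustrative sketch of gradual unfreezing (not drawn from the surveyed studies; the encoder name `bert-base-uncased`, the number of unfrozen layers, and the label count are placeholder assumptions), the idea can be expressed with a BERT-style classifier from the Hugging Face `transformers` library as follows:

```python
# Gradual unfreezing sketch: freeze the pre-trained encoder, then unfreeze only
# the task head and the last few encoder layers before fine-tuning.
import torch
from transformers import AutoModelForSequenceClassification

# Hypothetical setup: a generic BERT encoder and three illustrative output classes.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3
)

# 1) Freeze every pre-trained parameter.
for param in model.parameters():
    param.requires_grad = False

# 2) Unfreeze the task-specific classification head.
for param in model.classifier.parameters():
    param.requires_grad = True

# 3) Unfreeze only the last two transformer layers of the encoder.
for layer in model.bert.encoder.layer[-2:]:
    for param in layer.parameters():
        param.requires_grad = True

# 4) The optimizer updates only the small unfrozen subset of parameters.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=2e-5)
print(f"trainable parameters: {sum(p.numel() for p in trainable):,}")
```

In later stages of training, additional encoder layers could be unfrozen in the same way, so that the number of adjustable parameters grows only as far as the task requires.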
Knowledge distillation is the process of training a compact, specialized model to replicate the performance of a larger, pre-trained model, resulting in a reduction in the overall model size. Pruning techniques remove unimportant connections and parameters from the model using methods like magnitude-based pruning, preserving performance while decreasing parameter count. Architectural modifications, such as replacing self-attention layers with more efficient variants like sparse attention mechanisms, can optimize the foundation model's architecture for specific tasks. Quantization involves reducing the precision of model weights and activations, significantly decreasing model size and memory requirements while maintaining performance. The benefits of parameter-efficient fine-tuning are substantial. By fine-tuning only a fraction of model parameters, models can be run on less powerful hardware or deployed on edge devices. Smaller models fine-tuned with parameter-efficient methods typically offer faster inference times, making them suitable for real-time applications. Furthermore, these techniques contribute to reducing the energy consumption and carbon footprint associated with operating large models in data centers, aligning with environmental and efficiency goals. ## 6 Conclusions The application of foundation models marks a pivotal advancement in the sphere of food security, effectively tackling the intricate challenges of the world's food systems. The study has shed light on the adaptability of foundation models in diverse food security domains, spanning from crop type classification to cropland mapping, field delineation, and crop yield prediction. By leveraging various data types, such as remote sensing data, meteorological records, and historical databases, foundation models offer remarkable flexibility. They present a potent avenue to surmount the limitations associated with traditional deep learning and machine learning techniques, ensuring precise, granular insights that empower decision-makers. Built upon extensive datasets and pre-trained on various domains, foundation models have ushered in a new era of food security applications. They bring enhanced accuracy, optimized resource allocation, and streamlined data processing. Furthermore, their capacity to adapt to a multitude of data sources and predictive tasks leads to the generation of invaluable insights, pivotal for the sustainable future of food production and distribution. In the face of mounting global challenges in food security, encompassing factors like population growth and climate variations, foundation models are poised as a transformative catalyst. They equip policymakers, researchers, and farmers with the means to arrive at more informed decisions, manage resources effectively, and safeguard the world's food supply. Through fostering interdisciplinary collaboration, embracing emerging technologies, and persistently refining these models, a path is forged toward a future in which food security is a reality, not a concern. This paper stands as a crucial milestone in realizing the potential of foundation models in the collective pursuit of global food security.
2309.13870
New cases of the Strong Stanley Conjecture
We make progress towards understanding the structure of Littlewood-Richardson coefficients $g_{\lambda,\mu}^{\nu}$ for products of Jack symmetric functions. Building on recent results of the second author, we are able to prove new cases of a conjecture of Stanley in which certain families of these coefficients can be expressed as a product of upper or lower hook lengths for every box in each of the partitions. In particular, we prove that conjecture in the case of a rectangular union, i.e. for $g_{\mu,\bar \sigma}^{\mu \cup m^n}$ where $\bar \sigma$ is the complementary partition of $\sigma = \mu \cap m^n$ in the rectangular partition $m^n$. We give a formula for these coefficients through an explicit prescription of such choices of hooks. Lastly, we conjecture an analogue of this conjecture of Stanley holds in the case of Shifted Jack functions.
Per Alexandersson, Ryan Mickler
2023-09-25T04:45:46Z
http://arxiv.org/abs/2309.13870v3
# New cases of the strong Stanley conjecture ###### Abstract. We make progress towards understanding the structure of Littlewood-Richardson coefficients \(g_{\lambda,\mu}^{\nu}\) for products of Jack symmetric functions. Building on recent results of the second author, we are able to prove new cases of a conjecture of Stanley in which certain families of these coefficients can be expressed as a product of upper or lower hook lengths for every box in each of the partitions. In particular, we prove that conjecture in the case of a rectangular union, i.e. for \(g_{\mu,\bar{\sigma}}^{\mu\cup m^{n}}\) where \(\bar{\sigma}\) is the complementary partition of \(\sigma=\mu\cap m^{n}\) in the rectangular partition \(m^{n}\). We give a formula for these coefficients through an explicit prescription of such choices of hooks. Lastly, we conjecture an analogue of this conjecture of Stanley holds in the case of shifted Jack functions. ## 1. Preliminaries ### Young diagrams, hooks and the star product Given an integer partition \(\lambda=(\lambda_{1},\ldots,\lambda_{\ell})\), we associate to it the _Young diagram_ as the set of coordinates \((i,j)\) where \(0\leq j<\ell\) and \(0\leq i<\lambda_{j+1}\) (French notation). Each such coordinate \((i,j)\) is called a _box_, and is pictorially illustrated by the square with the four corners \(\{(i,j),(i+1,j),(i,j+1),(i+1,j+1)\}\), see (1). The bottom left hand corner of the diagram has coordinate \((0,0)\). Given any box \(s=(x,y)\), we let \(s^{+}\) and \(s^{-}\) denote the boxes (coordinates) \((x+1,y+1)\) and \((x-1,y-1)\), respectively. In the diagram below, we have marked the box \(s=(2,1)\). \[(7,4,2,2,1)\qquad\longleftrightarrow\qquad\text{[Young diagram with the box }s=(2,1)\text{ marked; figure omitted]}\]
[...] For example, with \(m=7\), \(n=6\) and \(\mu=75421\) we have \(\bar{\mu}=76532\), as seen below (figure omitted). _Given a box \(s\), we let \(R^{s}_{\lambda}\) denote the set of boxes in \(\lambda\) in the same row as \(s\). Similarly, \(C^{s}_{\lambda}\) is the set of boxes in the same column. Note that this definition makes sense even when \(s\notin\lambda\)._ Let \(\lambda\) and \(\mu\) be Young diagrams. Since these are sets of boxes, we may define their intersection \(\lambda\cap\mu\) and union \(\lambda\cup\mu\), and these are the Young diagrams of some integer partitions. ### Jack symmetric functions and structure constants There are two popular versions of the Jack symmetric functions: the Jack \(P\) functions, and the _integral form_ Jack \(J\) functions--the latter has integer coefficients in the monomial basis. We shall only use the latter functions. The book [10] is recommended as a reference on this topic.
For example, the \(J_{\lambda}(\mathbf{x};\alpha)\) for \(\lambda\,\vdash\,3\), expanded in the monomial basis, are as follows: \[J_{3}(\mathbf{x};\alpha) =(1+3\alpha+2\alpha^{2})m_{3}+(3+3\alpha)m_{21}+6m_{111}\] \[J_{21}(\mathbf{x};\alpha) =(2+\alpha)m_{21}+6m_{111}\] \[J_{111}(\mathbf{x};\alpha) =6m_{111}.\] The expansions in the power-sum basis are \[J_{3}(\mathbf{x};\alpha) =2\alpha^{2}p_{3}+3\alpha p_{21}+p_{111}\] \[J_{21}(\mathbf{x};\alpha) =-\alpha p_{3}+(\alpha-1)p_{21}+p_{111}\] \[J_{111}(\mathbf{x};\alpha) =2p_{3}-3p_{21}+p_{111}.\] Let \(g^{\nu}_{\lambda\mu}(\alpha)\in\mathbb{Q}(\alpha)\) be the Jack Littlewood-Richardson constants, determined via the relation \[J_{\lambda}J_{\mu}=\sum_{\nu}g^{\nu}_{\lambda\mu}(\alpha)J_{\nu}. \tag{3}\] We shall often omit the dependence on \(\alpha\), and just write \(g^{\nu}_{\lambda\mu}\). Observe that these are closely related to the classical Littlewood-Richardson coefficients for Schur functions, denoted \(c^{\nu}_{\lambda\mu}\). The Jack polynomials form an orthogonal basis with respect to the \(\alpha\)-deformed Hall inner product, \(\langle\,\cdot\,,\,\cdot\,\rangle\). This is the inner product on symmetric functions with the property that \[\langle p_{\lambda},p_{\mu}\rangle=\begin{cases}z_{\lambda}\alpha^{\ell( \lambda)}&\text{if }\lambda=\mu\\ 0&\text{otherwise,}\end{cases}\] and where \(n!/z_{\mu}\) is the number of permutations of size \(n\) with cycle type \(\mu\). **Theorem 1.2** (Jack norm formula, see [12, 5.8]).: \[\|J_{\lambda}\|^{2}=\langle J_{\lambda},J_{\lambda}\rangle=\ \prod_{b\in \lambda}h^{\textsc{U}}_{\lambda}(b)\,h^{\textsc{L}}_{\lambda}(b).\] (4) ### The Stanley Conjectures In [12], several remarkable conjectures were stated regarding the structure of the Jack Littlewood-Richardson coefficients. These conjectures are stated in terms of the quantities \(\langle J_{\mu}J_{\nu},J_{\lambda}\rangle=g^{\lambda}_{\mu\nu}\|J_{\lambda}\|^{2}\), herein referred to as _Stanley structure coefficients_, but can be easily restated in terms of the Littlewood-Richardson coefficients using (4). **Conjecture 1.3** (Stanley conjecture, see [12, 8.3]).: _The Stanley structure coefficients are non-negative integer polynomials in \(\alpha\), that is,_ \[\langle J_{\mu}J_{\nu},J_{\lambda}\rangle\in\mathbb{Z}_{\geq 0}[\alpha].\] In general, we use the terminology of a _strong_ form of the Stanley conjecture to refer to any conjecture that proposes an explicit form for \(\langle J_{\mu}J_{\nu},J_{\lambda}\rangle\) which is _manifestly_ a non-negative polynomial in \(\alpha\). Stanley conjectured such a form in the case \(c^{\lambda}_{\mu\nu}=1\). **Conjecture 1.4** (Strong Stanley Conjecture, see [12, 8.5]).: _If \(c_{\mu\nu}^{\lambda}=1\), then the corresponding Stanley structure coefficient has the form_ \[\langle J_{\mu}J_{\nu},J_{\lambda}\rangle=\left(\prod_{b\in\mu}\tilde{h}_{\mu}(b )\right)\left(\prod_{b\in\nu}\tilde{h}_{\nu}(b)\right)\left(\prod_{b\in\lambda} \tilde{h}_{\lambda}(b)\right),\] _where \(\tilde{h}_{\sigma}(b)\) is a choice of either \(h_{\sigma}^{\textsc{U}}(b)\) or \(h_{\sigma}^{\textsc{L}}(b)\) for each box \(b\). Moreover, one chooses \(h^{\textsc{L}}(b)\) and \(h^{\textsc{U}}(b)\) exactly \(|\lambda|\) times each in the above expression._ Stanley proved Conjecture 1.4 in an important special case.
**Theorem 1.5** (Jack Pieri rule, see [12, 6.1]).: _Let \(\lambda/\mu\) be a horizontal \(r\)-strip, then the Stanley structure coefficient is given by the following expression_ \[\langle J_{\mu}J_{(r)},J_{\lambda}\rangle=\left(\prod_{b\in\mu}A_{\mu\lambda} (b)\right)\left(\prod_{b\in(r)}h_{(r)}^{\textsc{U}}(b)\right)\left(\prod_{b \in\lambda}A_{\mu\lambda}^{\prime}(b)\right),\] _where_ \[A_{\mu\lambda}(b)\coloneqq\begin{cases}h_{\mu}^{\textsc{L}}(b)&\text{ if } \lambda/\mu\text{ does not contain a box in the same column as }b\\ h_{\mu}^{\textsc{U}}(b)&\text{ otherwise,}\end{cases}\] _and \(A^{\prime}\) is given by the same formula as \(A\), except with upper and lower looks exchanged._ In 2013, another case of Conjecture 1.4 was proven using a technique involving vertex operators. **Theorem 1.6** (Strong Stanley conjecture, rectangular case, see [1, 4.7]).: _Let \(\mu\subseteq m^{n}\). The Stanley structure coefficient in the rectangular case has the following expression as a product of lower/upper hook lengths:_ \[\langle J_{\mu}J_{\bar{\mu}},J_{m^{n}}\rangle=\left(\prod_{b\in\mu}h_{\mu}^{ \textsc{L}}(b)\right)\left(\prod_{b\in\bar{\mu}}h_{\bar{\mu}}^{\textsc{U}}(b )\right)\left(\prod_{b\in m^{n}}D_{\mu,m^{n}}(b)\right), \tag{5}\] _where, for \(b\in m^{n}\), we have defined_ \[D_{\mu,m^{n}}(b)\coloneqq\begin{cases}h_{m^{n}}^{\textsc{U}}(b)&\text{ if }(b_{1},n-1-b_{2})\in\mu\\ h_{m^{n}}^{\textsc{U}}(b)&\text{ otherwise.}\end{cases} \tag{6}\] For example, \(\langle J_{211}J_{221},J_{333}\rangle\) is computed using the following hook assignments In the paper [1], the authors provide formula (5) and note that the answer must be symmetric under the exchange of \(\mu\leftrightarrow\bar{\mu}\), however that symmetry is not explicitly demonstrated in this formula. In this paper, we provide an explicit demonstration for the symmetry property in formula (5), by re-proving it using a new technique. Furthermore, we prove a new case of Conjecture 1.4, the _rectangular union_ case, which generalises the rectangular case (Thm. 1.6), and provide an explicit prescription of the required hook choices. In particular, let \(\mu\) be a partition and \(m^{n}\) be _any_ rectangular partition. Let \(\sigma=\mu\cap m^{n}\), and let \(\bar{\sigma}\) be the complementary partition of \(\sigma\) in \(m^{n}\). Note that this reduces to the rectangular case when \(\mu\subset m^{n}\). **Theorem 1.7** (Strong Stanley conjecture, rectangular union case, see Cor. 3.4).: _The strong Stanley conjecture holds for \(\langle J_{\mu}J_{\bar{\sigma}},J_{\mu\cup m^{n}}\rangle\). The assignment of lower and upper hooks can be read off from Figure 17._ For example, \(\langle J_{42211}J_{211},J_{43331}\rangle\) is computed using the following hook assignments The recent paper by Matveev-Wei [14] proves Conjecture 1.4 in the special case when we have \(c^{\lambda}_{\mu\nu}=K_{\mu,\lambda-\mu}=1\), where the \(K_{\mu,\lambda-\mu}\) is a Kostka coefficient. This required equality between Kostka coefficients and Littlewood-Richardson coefficients holds in the rectangular case, i.e., the setup in Theorem 1.6 is covered by their work. We also understand the proof therein to not be constructive, that is, no explicit prescription for a choice of upper or lower hooks is presented for each box in the three partitions. However, our Theorem 1.7 is in general _not_ covered by the Matveev-Wei result, as we sometimes have the strict inequality \(c^{\lambda}_{\mu\nu}<K_{\mu,\lambda-\mu}\) in the rectangular union case. 
In the example above, we have \(c^{43331}_{42211,211}=1\) but \(K_{42211,43331-211}=K_{42211,22231}=3\), as evidenced by the three semistandard Young tableaux (figure omitted). ### A formula involving Jack structure constants For any multiset1 of boxes \(\Gamma\), we define the rational function2 \(T_{\Gamma}(u):\mathbb{R}\to\mathbb{R}\) by Footnote 1: A collection with multiplicities. Footnote 2: This also depends on \(\alpha\). \[T_{\Gamma}(u)\coloneqq\prod_{b\in\Gamma}T_{1}(u-[b])\text{ where }T_{1}(u) \coloneqq\frac{(u-[0,0])(u-[1,1])}{(u-[1,0])(u-[0,1])}. \tag{7}\] We give some examples of such functions in Example 1.9. For any two Young diagrams \(\lambda,\mu\), we define the _star product_ \(\lambda\star\mu\) as the multiset of boxes \[\lambda\star\mu\coloneqq\bigsqcup_{\begin{subarray}{c}s\in\lambda\\ t\in\mu\end{subarray}}s+t.\] For example, \(31\star 221\) is computed as a multiset of boxes (diagram omitted), where the number inside a box indicates its multiplicity (if \(>1\)). Lastly, we define \(\varpi_{\mu}\coloneqq\prod_{b\in\mu^{\times}}[b]\). With these tools introduced, we can now state the main motivating result that is used in this work. **Theorem 1.8** ([3]).: _Let \(\mu,\nu\) be partitions and \(n=|\mu|+|\nu|\), then the following 'sum-product' identity of rational functions (in a complex variable \(u\)) holds, involving the Jack Littlewood-Richardson coefficients defined earlier in (3):_ \[\sum_{\begin{subarray}{c}\gamma\vdash n\\ \mu,\nu\subseteq\gamma\end{subarray}}g_{\mu\nu}^{\gamma}(\alpha)\frac{\varpi _{\gamma}}{\varpi_{\mu}\varpi_{\nu}}\left(\sum_{s\in\gamma\setminus(\mu\cup\nu)}\frac{ 1}{u-[s]}\right)=T_{\mu\star\nu}(u)-1. \tag{8}\] For ease of notation, we set \[\hat{g}_{\mu\nu}^{\gamma}\coloneqq g_{\mu\nu}^{\gamma}(\alpha)\frac{\varpi_{ \gamma}}{\varpi_{\mu}\varpi_{\nu}}.\] ### Properties of \(T_{\mu\star\nu}\) To make use of (8), we will need to produce some structural results about the right hand side. By definition, we have \[T_{\mu\star\nu}(u)=\prod_{b\in\mu\star\nu}T_{1}(u-[b])=\prod_{ \begin{subarray}{c}s\in\mu\\ t\in\nu\end{subarray}}T_{1}(u-[s+t]).\] Following from Definition 7, we may view \(T_{1}(u)\) as the rational function with zeros at coordinates \((0,0)\) and \((1,1)\), and poles at \((1,0)\) and \((0,1)\).
These coordinates are the four vertices of the _box_\((0,0)\). This allows us to use Young diagrams to visualize the rational function \(T_{\mu}(u)\) or \(T_{\mu\star\nu}(u)\), as we see in the following example. **Example 1.9**.: _With \(\mu=331\), we have that_ \[T_{\mu}(u)=\frac{(u-[0,0])(u-[1,3])(u-[3,2])}{(u-[0,3])(u-[1,2])(u-[3,0])}.\] _We can illustrate such a rational function by marking the zeros and poles in the plane. Let us also set \(\nu=44311\). Then \(T_{\mu}(u)\), \(T_{\nu}(u)\) and \(T_{\mu\star\nu}(u)\) can be illustrated as_ _where \(\bullet\) represents a simple pole, and \(\times\) and \(\times^{2}\) represent a zero of order one and order \(2\), respectively. Note that the number of poles and the number of zeros (counting multiplicity) must be the same in every row (and every column) of the diagram since any \(T_{\Gamma}\) is a product of \(T_{1}(u-[b])\) over \(b\in\Gamma\)._ Note that the poles of \(T_{\mu}\) are precisely the elements in \(\mathcal{O}_{\mu}\), while the zeros are given by \(\mathcal{I}_{\mu}^{+}\cup\{(0,0)\}\). In other words, \[T_{\mu}(u)=u\cdot\frac{\prod_{s\in\mathcal{I}_{\mu}^{+}}(u-[s])}{\prod_{t\in \mathcal{O}_{\mu}}(u-[t])}. \tag{9}\] **Lemma 1.10**.: _For any partitions \(\mu\), \(\nu\), the order of the pole at \(u=[s]\) of the function \(T_{\mu\star\nu}(u)\) is given by the difference_ \[\operatorname*{ord}_{u=[s]}T_{\mu\star\nu}(u)=\left|\{b\in\nu:s-b\in\mathcal{O} _{\mu}\}\right|-\left|\{b\in\nu:s-b\in(\mathcal{I}_{\mu}^{+}\cup(0,0))\}\right|. \tag{10}\] Proof.: We note that \(T_{\mu\star\nu}(u)=\prod_{b\in\nu}T_{\mu}(u-[b])\). By (9), the poles of \(T_{\mu}(u-[b])\) are at \(t+b\) for \(t\in\mathcal{O}_{\mu}\) and similarly its zeros are at \(t^{*}+b\), for \(t^{*}\in\mathcal{I}_{\mu}^{+}\) and an extra zero at \(u=[b]\). From these observations we can deduce (10). **Lemma 1.11**.: _Fix \(\mu\) and some \(s\in\mathbb{Z}^{2}\). The orders of the poles of \(T_{\mu\star\nu}(u)\) are bounded by_ \[1-\max(|\mathcal{O}_{\mu}|,|\mathcal{O}_{\nu}|)\leq\operatorname*{ord}_{u=[s] }T_{\mu\star\nu}(u)\leq 1.\] _Furthermore, \([s^{\scalebox{0.6}{$-$}}]\) is a simple pole of \(T_{\mu\star\nu}(u)\) if and only if \(s^{\scalebox{0.6}{$-$}}\notin\nu\) and \(\lambda\subseteq\nu\), where \(\lambda^{R}\) is the shape between \(\mu\) and \(s\)._ Proof.: According to Lemma 1.10, in order for \(T_{\mu\star\nu}(u)\) to have a pole \([s^{\scalebox{0.6}{$-$}}]\), we must include an outer corner \(t\in\mathcal{O}_{\mu}\) in \(\nu^{R}\coloneqq s^{\scalebox{0.6}{$-$}}-\nu\). The smallest shape that \(\nu^{R}\) can be so that it contains \(t\) and \(s^{\scalebox{0.6}{$-$}}\) is the rectangle of boxes between \(t\) and \(s\), denoted \(\operatorname*{Rect}(t,s)\). However, \(\operatorname*{Rect}(t,s)\) also includes the two (shifted) inner corners \(m_{1},m_{2}\in\mathcal{I}_{\mu}^{+}\) adjacent to \(t\), as seen in Figure 5. Thus according to Lemma 1.10, \(T_{\mu\star\operatorname*{Rect}(0,s-t)}(u)\) will have a _zero_ of order \(1\) at \(u=[s^{\scalebox{0.6}{$-$}}]\). We repeat this inclusion of rectangles for each inner corner of \(\mu\) that is inside \(\operatorname*{Rect}(s)\). This process yields \(\lambda^{R}\coloneqq\cup_{t\in(\mathcal{O}_{\mu}\cap\operatorname*{Rect}(s)) }\operatorname*{Rect}(t,s)\), which is precisely the shape between \(\mu\) and \(s\). 
It is clear from Figure 6 that \(\lambda^{R}\) contains one more box from \(\mathcal{O}_{\mu}\) than from \(\mathcal{I}_{\mu}^{+}\), and thus \(T_{\mu\star\lambda}(u)\) has a pole at \([s^{\scalebox{0.6}{$-$}}]\). Thus, if \(\nu^{R}\supset\lambda^{R}\), we should have a pole of degree \(1\). However, this pole is annihilated if \(\nu^{R}\) also includes the zero coming from \((0,0)\in\mu\), that is, if \(s^{\scalebox{0.6}{$-$}}\in\nu\). To describe the corners to the left (or right) of a particular corner \(s=(s_{1},s_{2})\), we introduce the notation \[\mathcal{I}_{\sigma}^{<s}\coloneqq\{t=(t_{1},t_{2})\in\mathcal{I}_{\sigma}:t_{1 }<s_{1}\},\] and mutatis mutandis for \(\mathcal{I}_{\sigma}^{>s}\), \(\mathcal{O}_{\sigma}^{<s}\) and \(\mathcal{O}_{\sigma}^{>s}\). One can check directly the following reinterpretations of differences of corners as hook vectors. **Lemma 1.12**.: _For fixed \(s\in\mathcal{I}_{\sigma}\), the following hold:_ _For \(t\in\mathcal{I}_{\sigma}^{<s}\), we have \(s^{*}-t^{*}=\mathbf{h}_{\sigma\setminus s}^{\mathrm{U}}(s\curlyeq t)\)._ _For \(t\in\mathcal{O}_{\sigma}^{<s}\), we have \(s^{*}-t=\mathbf{h}_{\sigma}^{\mathrm{U}}(s\curlyeq t)\)._ _For \(t\in\mathcal{I}_{\sigma}^{>s}\), we have \(s^{*}-t^{*}=-\mathbf{h}_{\sigma\setminus s}^{\mathrm{U}}(s\curlyeq t)\)._ _For \(t\in\mathcal{O}_{\sigma}^{>s}\), we have \(s^{*}-t=-\mathbf{h}_{\sigma}^{\mathrm{L}}(s\curlyeq t)\)._ The proof of Lemma 1.12 is straighforward and we simply refer to Figure 7. **Lemma 1.13** (Expansion Lemma).: _Let \(\sigma\) be a partition, and pick \(s\in\mathcal{I}_{\sigma}\). Then \(T_{\sigma}(u)\) can be expanded3 Footnote 3: Alternatively, one may shift the argument to write the simpler expression, \[T_{\sigma}(u+[s^{*}])=u(u+[s^{*}])\cdot\frac{\prod_{b\in R^{s}_{\sigma^{1}\setminus s }}\left(u+h^{\mathrm{U}}_{\sigma^{1}\setminus s}(b)\right)}{\prod_{b\in R^{s }_{\sigma}}\left(u+h^{\mathrm{U}}_{\sigma^{1}\setminus s}(b)\right)}\cdot \frac{\prod_{b\in C^{s}_{\sigma^{1}\setminus s}}\left(u-h^{\mathrm{U}}_{\sigma ^{1}\setminus s}(b)\right)}{\prod_{b\in C^{s}_{\sigma}}\left(u-h^{\mathrm{U}}_{ \sigma^{1}\setminus s}(b)\right)}.\] as a row-column product relative to \(s\) as \[T_{\sigma}(u)=u\cdot(u-[s^{*}])\cdot\frac{\prod_{b\in R^{s}_{\sigma^{1} \setminus s}}\left(u-[s^{*}-\mathbf{h}^{\mathrm{U}}_{\sigma^{1}\setminus s} (b)]\right)}{\prod_{b\in R^{s}_{\sigma}}\left(u-[s^{*}-\mathbf{h}^{\mathrm{U} }_{\sigma}(b)]\right)}\cdot\frac{\prod_{b\in C^{s}_{\sigma^{1}\setminus s}} \left(u-[s^{*}+\mathbf{h}^{\mathrm{U}}_{\sigma^{1}\setminus s}(b)]\right)}{ \prod_{b\in C^{s}_{\sigma}}\left(u-[s^{*}+\mathbf{h}^{\mathrm{U}}_{\sigma}(b)] \right)}.\] _Each of the four factors in the product precisely captures the zeros and poles in each of the four quadrants with a corner in \(s\), see Figure 8._ Proof.: Recall from (9) that \[T_{\sigma}(u) =u\frac{\prod_{t\in\mathcal{I}_{\sigma}}(u-[t^{*}])}{\prod_{t\in \mathcal{O}_{\sigma}}(u-[t])}\] \[=u\frac{\prod_{t\in\mathcal{I}_{\sigma}}(u-[s^{*}-(s^{*}-t^{*})] )}{\prod_{t\in\mathcal{O}_{\sigma}}(u-[s^{*}-(s^{*}-t)])}.\] Using Lemma 1.12, we split this according to \(\mathcal{I}_{\sigma}=\mathcal{I}_{\sigma}^{<s}\cup\{s\}\cup\mathcal{I}_{ \sigma}^{>s}\), as \[=u(u-[s^{*}])\frac{\prod_{t\in\mathcal{I}_{\sigma}^{<s}}\left(u-[s^{*}- \mathbf{h}^{\mathrm{U}}_{\sigma^{1}\setminus s}(s\curlyeq\iota)]\right)}{ \prod_{t\in\mathcal{O}_{\sigma}^{<s}}\left(u-[s^{*}-\mathbf{h}^{\mathrm{U}}_{ \sigma^{1}\setminus s}(s\curlyeq\iota)]\right)}\frac{\prod_{t\in\mathcal{I}_{ 
\sigma}^{>s}}\left(u-[s^{*}+\mathbf{h}^{\mathrm{U}}_{\sigma^{1}\setminus s}(s \curlyeq\iota)]\right)}{\prod_{t\in\mathcal{O}_{\sigma}^{>s}}\left(u-[s^{*}+ \mathbf{h}^{\mathrm{U}}_{\sigma}(s\curlyeq\iota)]\right)}.\] Note that in the first fraction, \(s\curlyeq\iota\) is always a box in the same row as \(s\), while in the second fraction, \(s\curlyeq\iota\) is a box in the same column as \(s\). Next, we claim the that following holds for the first ratio of products: \[\frac{\prod_{t\in\mathcal{I}_{\sigma}^{<s}}(u-h^{\mathrm{U}}_{\sigma^{1} \setminus t}(s\curlyeq\iota))}{\prod_{t\in\mathcal{O}_{\sigma}^{<s}}(u-h^{ \mathrm{U}}_{\sigma}(s\curlyeq\iota))}=\frac{\prod_{b\in R^{s}_{\sigma^{1} \setminus s}}(u-h^{\mathrm{U}}_{\sigma^{1}\setminus s}(b))}{\prod_{b\in R^{s}_ {\sigma}}(u-h^{\mathrm{U}}_{\sigma}(b))}. \tag{11}\] That is, the ratio of products which only receives contributions from corners can be extended to include all boxes in the row of \(s\). This holds because for two adjacent boxes \(b,b+(1,0)\) in \(R^{s}_{\sigma}\), with \(b\) not in the same column as an inner corner, we have \[\mathbf{h}^{\mathrm{U}}_{\sigma^{1}\setminus s}(b)=\mathbf{h}^{\mathrm{U}}_{ \sigma}(b+(1,0)).\] Thus there is pairwise cancellation between pairs of boxes in the right hand side of (11), unless \(b\) shares a column with an inner corner, in which case \(b+(1,0)\) shares a column with the adjacent outer corner, see Figure 9. Figure 8: The four quadrants of \(T_{\mu}(u)\) expanded around \(u=[s]\) for \(s\in\mathcal{I}_{\mu}\). We find a similar result for the left hand ratio of products, this time with products over columns: \[\frac{\prod_{t\in\mathcal{I}_{\sigma}^{>}}(u-h_{\sigma\setminus t}^{\mathrm{L}}(s \curlyeq\lambda\ t))}{\prod_{t\in\mathcal{O}_{\sigma}^{>}}(u-h_{\sigma}^{ \mathrm{L}}(s\curlyeq\lambda\ t))}=\frac{\prod_{b\in C_{\sigma\setminus s}^{ \mathrm{s}}}(u-h_{\sigma\setminus s}^{\mathrm{L}}(b))}{\prod_{b\in C_{\sigma} ^{\mathrm{s}}}(u-h_{\sigma}^{\mathrm{L}}(b))}.\] Piecing these two results together this becomes \[=u(u-[s^{\star}])\frac{\prod_{b\in R_{\sigma\setminus s}^{\mathrm{s}}}(u-[s ^{\star}-\mathbf{h}_{\sigma\setminus s}^{\mathrm{U}}(b)])}{\prod_{b\in R_{ \sigma}^{\mathrm{s}}}(u-[s^{\star}-\mathbf{h}_{\sigma}^{\mathrm{U}}(b)])} \frac{\prod_{b\in C_{\sigma\setminus s}^{\mathrm{s}}}(u-[s^{\star}+\mathbf{h} _{\sigma\setminus s}^{\mathrm{L}}(b)])}{\prod_{b\in C_{\sigma}^{\mathrm{s}}}( u-[s^{\star}+\mathbf{h}_{\sigma}^{\mathrm{L}}(b)])}.\] ## 2. Rectangular Case Let \(c_{\mu\nu}^{\lambda}\) denote the Schur Littlewood-Richardson coefficients. One can easily show the following result. 
**Lemma 2.1**.: _If \(\lambda=m^{n}\) is a rectangular partition, then the (Schur) Littlewood-Richardson coefficient satisfies \(c_{\mu\nu}^{\lambda}\in\{0,1\}\), and takes the value \(1\) if and only if \(\mu\subset m^{n}\) and \(\nu=\bar{\mu}\)._ Among all integer partitions \(\lambda\vdash mn\), the rectangular partition \(m^{n}\) is the unique partition containing the box with coordinate \(v^{-}\). [...] **Lemma 2.4** (Flip rules).: _For \(t\in\mathcal{I}_{\sigma}\), we have_ \[\frac{\prod_{b\in C^{t}_{\sigma\setminus t}}(u-h^{\textsc{U}}_{\sigma\setminus t}(b))}{\prod_{b\in C^{t}_{\sigma}}(u-h^{\textsc{U}}_{\sigma\setminus t}(b))} =\frac{1}{(u-[(m,0)-t])}\frac{\prod_{b\in R^{t}_{\sigma}}(u-h^{\textsc{U}}_{\sigma\setminus t}(b))}{\prod_{b\in R^{t}_{\sigma}}(u-h^{\textsc{U}}_{\sigma}(b))}, \tag{12}\] \[\frac{\prod_{b\in R^{t}_{\sigma\setminus t}}(u-h^{\textsc{U}}_{\sigma\setminus t}(b))}{\prod_{b\in R^{t}_{\sigma}}(u-h^{\textsc{U}}_{\sigma}(b))} =\frac{1}{(u-[t-(0,n)])}\frac{\prod_{b\in C^{t}_{\sigma}}(u-h^{\textsc{U}}_{\sigma\setminus t}(b))}{\prod_{b\in C^{t}_{\sigma}}(u-h^{\textsc{U}}_{\sigma}(b))}. \tag{13}\] Proof.: From the proof of the expansion Lemma 1.13, we know that the ratio of column products can be expressed as \[\frac{\prod_{b\in C^{t}_{\sigma\setminus t}}(u-h^{\textsc{U}}_{\sigma\setminus t}(b))}{\prod_{b\in C^{t}_{\sigma}}(u-h^{\textsc{U}}_{\sigma\setminus t}(b))}=\frac{\prod_{s\in\mathcal{I}^{>t}_{\sigma}}(u-h^{\textsc{U}}_{\sigma\setminus t}(s\curlyeq t))}{\prod_{s\in\mathcal{O}^{>t}_{\sigma}}(u-h^{\textsc{U}}_{\sigma}(s\curlyeq t))}. \tag{14}\] We now use Lemma 2.3 to flip each of the terms in this product. All the inner corners to the right of \(t\) can be flipped this way, however if \((m,0)\in\mathcal{O}^{>t}_{\sigma}\), that is, if \(\sigma\) runs to the right edge of \(m^{n}\), then there is no corresponding element in \(\mathcal{I}_{\tilde{\sigma}}\). We treat this case first, and then consider the case when \((m,0)\notin\mathcal{O}^{>t}_{\sigma}\).
We claim that in both cases the following holds: \[\frac{\prod_{s\in\mathcal{I}^{>t}_{\sigma}}(u-h^{\textsc{\textsc{ U}}}_{\sigma\setminus t}(s\curlyeq t))}{\prod_{s\in\mathcal{O}^{>t}_{\sigma}}(u-h^{ \textsc{\textsc{ U}}}_{\sigma}(s\curlyeq t))}=\frac{1}{(u-[(m,0)-t])}\frac{\prod_{\tilde{s}\in \mathcal{O}^{<t}_{\sigma}}(u-h^{\textsc{\textsc{ U}}}_{\tilde{\sigma}\setminus t}(\tilde{s}\curlyeq t))}{\prod_{s\in\mathcal{I}^{<t}_{ \sigma}}(u-h^{\textsc{\textsc{ U}}}_{\tilde{\sigma}\setminus t}(\tilde{s}\curlyeq t))}. \tag{15}\] In the case \((m,0)\in\mathcal{O}^{>t}_{\sigma}\) we have contribution in the LHS denominator coming from \(t\curlyeq(m,0)=(t_{1},0)\). As this term cannot be flipped, it contributes an extra pole on the RHS at \[\mathbf{h}^{\textsc{\textsc{ U}}}_{\sigma}((t_{1},0))=(m-t_{1},-t_{2})=(m,0)-t,\] which appears in (15). In the case where \((m,0)\notin\mathcal{O}^{>t}_{\sigma}\), we must now have \((0,n)\in\mathcal{O}^{<\tilde{t}}_{\sigma}\) and that this corner of \(\tilde{\sigma}\) has no corresponding corner in \(\sigma\). In the RHS numerator of (15) we find a contribution from \(\bar{t}\curlyeq(0,n)=(0,\bar{t}_{2})\), for which \[\mathbf{h}^{\textsc{\textsc{ U}}}_{\tilde{\sigma}+\bar{t}}((0,\bar{t}_{2}))=(\bar{t}_{1}+1,-(n-1-\bar{t}_{2}))=\bar{t}^{ *}-(0,n)=(m,0)-t.\] As this zero in the numerator of the RHS of (15) does not appear as a flipped hook from the left hand side, this extra zero will have to be cancelled by the same pole factor that appeared in first case. Thus (15) holds in both cases, and we can then extend the products in the right hand side out over the full row \(R^{\tilde{t}}_{\tilde{\sigma}}\) (using a formula similar to equation (14)) to recover equation (12). By a similar technique, we find the other flip rule for row products. The extra pole factors entering into the formulas of the flip Lemma 2.4 have an alternate interpretation in terms of hooks in the rectangular partition \(m^{n}\). **Definition 2.5**.: _Recall that we have a fixed rectangular partition \(m^{n}\) in mind. For \(b=(b_{1},b_{2})\) we denote \(\tilde{b}\coloneqq(m-1-b_{1},b_{2})\) and \(\tilde{b}\coloneqq(b_{1},n-1-b_{2})\)._ **Lemma 2.6**.: _We have that_ \[(m,0)-b=\mathbf{h}_{m^{n}}^{\mathrm{\textsc{U}}}(\tilde{b})\quad\text{and} \quad b-(0,n)=\mathbf{h}_{m^{n}}^{\mathrm{\textsc{L}}}(\tilde{b}).\] Proof.: See Figure 11. We now arrive at the main structural result, which expresses the rational function \(T_{\mu*\bar{\mu}}(u)\) as a product involving hook lengths. **Theorem 2.7**.: _The function \(T_{\mu*\bar{\mu}}(u)\) is given by the following expression4,_ Footnote 4: As before, one may shift the argument to write a simplified expression, \[T_{\mu*\bar{\mu}}(u+[v^{\ast}])=\prod_{b\in\mu}\frac{u+[\tilde{b}]}{u+[b]} \prod_{b\in\bar{\mu}}\frac{u+h_{\mu}^{\mathrm{\textsc{L}}}(b)}{u+h_{m^{n}}^{ \mathrm{\textsc{L}}}(\tilde{b})}\prod_{b\in\mu}\frac{u-h_{\mu}^{\mathrm{ \textsc{U}}}(b)}{u-h_{m^{n}}^{\mathrm{\textsc{U}}}(\tilde{b})}.\] _Furthermore, each of the three products in the expression (16) is the component of the LHS explicitly decomposed with respect to quadrants around \(v^{\ast}\)-, as in Fig. 12. Thus, each of these factors, as a function of a partition \(\mu\subset m^{n}\), is manifestly invariant under \(\mu\to\bar{\mu}\)._ Proof.: We prove this by induction on the number of boxes in \(\mu\). 
We start with the base case \(\mu_{0}=\{1\}\) and so \(\bar{\mu}_{0}=m^{n}\setminus v^{\ast}\), where we find \[T_{\mu_{0}*\bar{\mu}_{0}}(u) = T_{1*(m^{n}\setminus v^{\ast})}(u)\] \[= \frac{u-[0,0]}{u-[m-1,n-1]}\frac{u-[m-1,n]}{u-[0,n]}\frac{u-[m,n- 1]}{u-[m,0]}.\] Of the three terms in this product, the first and last agree with the statement of the theorem, with the product over the the only box \(b=(0,0)\) of \(\mu_{0}=\{1\}\). For the middle factor, we need to check \[\prod_{b\in\bar{\mu}_{0}}\frac{u-[v^{\ast}-\mathbf{h}_{\bar{\mu}_{0}}^{ \mathrm{\textsc{L}}}(b)]}{u-[v^{\ast}-\mathbf{h}_{m^{n}}^{\mathrm{\textsc{L}} }(\tilde{b})]}=\frac{u-[m-1,n]}{u-[0,n]}.\] To show this holds, we first fill out the bottom product from \(\bar{\mu}_{0}\to m^{n}\) \[\prod_{b\in\bar{\mu}_{0}}\frac{u-[v^{\char 60}-\mathbf{h}_{\bar{\mu}_{0}}^{ \mathrm{L}}(b)]}{u-[v^{\char 60}-\mathbf{h}_{m^{n}}^{\mathrm{L}}(\widehat{v})]}=(u-[v^{ \char 60}-\mathbf{h}_{m^{n}}^{\mathrm{L}}(\widehat{v}^{\char 60})])\frac{\prod_{b\in\bar{ \mu}_{0}}u-[v^{\char 60}-\mathbf{h}_{\bar{\mu}_{0}}^{\mathrm{L}}(b)]}{\prod_{b\in m^{n} }u-[v^{\char 60}-\mathbf{h}_{m^{n}}^{\mathrm{L}}(b)]}.\] Next, we similarly fill out the numerator product by carefully expanding the row/columns containing \(v^{\char 60}\), which yields \[\prod_{b\in\bar{\mu}_{0}}u-[v^{\char 60}-\mathbf{h}_{\bar{\mu}_{0}}^{\mathrm{L}} (b)]=\frac{(u-[v^{\char 60}-\mathbf{h}_{m^{n}}^{\mathrm{L}}(v^{\char 60})]) \prod_{b\in m^{n}}(u-[v^{\char 60}-\mathbf{h}_{m^{n}}^{\mathrm{L}}(b)])}{(u-[v^{ \char 60}-\mathbf{h}_{m^{n}}^{\mathrm{L}}((0,n-1))])(u-[v^{\char 60}-\mathbf{h}_{m^{n}}^{ \mathrm{L}}((m-1,0))])}.\] Combining these previous two results, we find \[\prod_{b\in\bar{\mu}_{0}}\frac{u-[v^{\char 60}-\mathbf{h}_{\bar{\mu}}^{ \mathrm{L}}(b)]}{u-[v^{\char 60}-\mathbf{h}_{m^{n}}^{\mathrm{L}}(\widehat{b})]}= \frac{(u-[v^{\char 60}-\mathbf{h}_{m^{n}}^{\mathrm{L}}(v^{\char 60})])(u-[v^{ \char 60}-\mathbf{h}_{m^{n}}^{\mathrm{L}}(\widehat{v}^{\char 60})])}{(u-[v^{ \char 60}-\mathbf{h}_{m^{n}}^{\mathrm{L}}((0,n-1))])(u-[v^{ \char 60}-\mathbf{h}_{m^{n}}^{\mathrm{L}}((m-1,0))])}.\] Now using \(\widehat{v^{\char 60}}=(m-1,0)\), \(\mathbf{h}_{m^{n}}^{\mathrm{L}}(v^{\char 60})=(0,-1)\), and \(\mathbf{h}_{m^{n}}^{\mathrm{L}}((0,n-1))=(m-1,-1)\), we arrive at \[\prod_{b\in\bar{\mu}_{0}}\frac{u-[v^{\char 60}-\mathbf{h}_{\bar{\mu}}^{ \mathrm{L}}(b)]}{u-[v^{\char 60}-\mathbf{h}_{m^{n}}^{\mathrm{L}}(\widehat{b})]}= \frac{u-[v^{\char 60}-(0,-1)]}{u-[v^{\char 60}-(m-1,-1)]}=\frac{u-[m-1,n]}{u-[0,n]}.\] Thus we have shown the base case. Next, for the inductive step, we assume the theorem holds for \(\mu\), and we show it holds for \(\mu+s\). Here, we have \(\overline{\mu+s}=\bar{\mu}-\bar{s}\) where \(s+\bar{s}=v^{\char 60}\). 
We then find \[T_{\mu+ss\overline{\mu+s}}(u) = T_{\mu+ss\overline{(}\bar{\mu}-\bar{s})}(u)\] \[= \frac{T_{ss\overline{\mu}}(u)}{T_{(\mu+s)s\overline{(}u)}}T_{\mu \star\overline{\mu}}(u)\] \[= \frac{T_{\bar{\mu}}(u-[s])}{T_{\mu+s}(u-[\bar{s}])}T_{\mu\star \overline{\mu}}(u).\] We now simplify this factor in the last line by using the expansion in Lemma 1.13 to expand the numerator at \(\bar{s}\in\mathcal{I}_{\bar{\mu}}\) and the denominator at \(s\in\mathcal{I}_{\mu+s}\): \[T_{\bar{\mu}}(u-[s])=(u-[s])(u-[v])\frac{\prod_{b\in R_{\mu-\bar{s}}^{ \mathrm{L}}(u-[v-\mathbf{h}_{\bar{\mu}-\bar{s}}^{\mathrm{L}}(b)])}}{\prod_{b \in R_{\mu}^{\mathrm{L}}}(u-[v-\mathbf{h}_{\bar{\mu}-\bar{s}}^{\mathrm{L}}(b)] )}\frac{\prod_{b\in C_{\bar{\mu}-\bar{s}}^{\mathrm{L}}(u-[v+\mathbf{h}_{\bar{ \mu}-\bar{s}}^{\mathrm{L}}(b)])}}{\prod_{b\in C_{\bar{\mu}}^{\mathrm{L}}}(u-[v +\mathbf{h}_{\bar{\mu}}^{\mathrm{L}}(b)])}.\] Figure 12: The three quadrants of \(T_{\mu*\bar{\mu}}(u)\) around \(u=[v^{\char 60}]\). Note that each region contains an equal number of poles and zeros We use Lemma 2.4 to flip the blue factors to the following red factors, and use Equation 2, to arrive at the expressions \[T_{\bar{\mu}}(u-[s])=\frac{(u-[s])(u-[v])}{(u-[v+(m,0)-\bar{s}^{\prime}])}\frac{ \prod_{b\in R_{\bar{\mu}}^{2}}(u-[v-\mathbf{h}_{\bar{\mu}-b}^{\mathrm{U}}(b)])} {\prod_{b\in R_{\bar{\mu}}^{2}}(u-[v-\mathbf{h}_{\bar{\mu}-b}^{\mathrm{U}}(b)]) }\frac{\prod_{b\in R_{\bar{\mu}}^{2}}(u-[v+\mathbf{h}_{\mu+s}^{\mathrm{L}}(b)])} {\prod_{b\in R_{\bar{\mu}}^{2}}(u-[v+\mathbf{h}_{\mu+s}^{\mathrm{L}}(b)])},\] \[T_{\mu+s}(u-[\bar{s}])=\frac{(u-[\bar{s}])(u-[v])}{(u-[v-[s^{\prime}-(0,n)])]} \frac{\prod_{b\in C_{\bar{\mu}}^{2}}(u-[v-\mathbf{h}_{\bar{\mu}-b}^{\mathrm{U}} (b)])}{\prod_{b\in C_{\bar{\mu}}^{2}}(u-[v-\mathbf{h}_{\bar{\mu}-b}^{\mathrm{ U}}(b)])}\frac{\prod_{b\in C_{\bar{\mu}}^{2}}(u-[v+\mathbf{h}_{\mu}^{\mathrm{L}}(b)])} {\prod_{b\in C_{\bar{\mu}}^{2}}(u-[v+\mathbf{h}_{\mu+s}^{\mathrm{L}}(b)])}.\] The ratio of these terms then fills out to products over the entire partitions \(\mu\), \(\bar{\mu}\), as the \(s\)-row-column products are the only ones that are changed by adding in \(s\), and so we arrive at the following expression, \[\frac{T_{\bar{\mu}}(u-[s])}{T_{\mu+s}(u-[\bar{s}])} = \frac{(u-[s])}{(u-[\bar{s}])}\frac{(u-[v^{\prime}-(s-(0,n)))]}{(u- [v^{\prime}+(m,0)-\bar{s}])} \tag{17}\] \[\times\frac{\prod_{b\in\bar{\mu}-b}(u-[v-\mathbf{h}_{\bar{\mu}-b}^ {\mathrm{U}}(b)])}{\prod_{b\in\bar{\mu}}(u-[v-\mathbf{h}_{\bar{\mu}}^{\mathrm{ U}}(b)])}\frac{\prod_{b\in\mu+s}(u-[v+\mathbf{h}_{\mu+s}^{\mathrm{L}}(b)])}{ \prod_{b\in\mu}(u-[v+\mathbf{h}_{\mu}^{\mathrm{L}}(b)])}. \tag{18}\] We consider each of these four terms separately, and note their effect on the formula (16). For the first term of formula 17, it correctly updates the contribution from \(s\) in the first term in formula (16), i.e. \[\frac{(u-[s])}{(u-[\bar{s}])}\prod_{b\in\mu}\frac{u-[b]}{u-[\bar{b}]}=\prod_{b \in\mu\cup s}\frac{u-[b]}{u-[\bar{b}]}.\] The second term of formula (17) may be rewritten using Lemma 2.6, as \[\frac{u-[v^{\ast}-\mathbf{h}_{m}^{\mathrm{L}}(\bar{s})]}{u-[v^{\ast}+\mathbf{ h}_{m^{\ast}}^{\mathrm{U}}(\bar{s})]}.\] If we note that \(\hat{\bar{s}}=\check{s}\), we see that this term is precisely what is needed to include the \(s\) contribution (resp. remove the \(\bar{s}\) contribution) in the denominator of the third (resp. second) term in formula (16). 
The last two terms in formula (17) update the numerators of the last two terms in formula (16) respectively. Thus we see that the factor of \(\frac{T_{\bar{\mu}}(u-[s])}{T_{\mu+s}(u-[\bar{s}])}\) precisely is what is required for the inductive step to work. Note: As the function \(T_{\mu\ast\bar{\mu}}\) is manifestly symmetric under \(\mu\leftrightarrow\bar{\mu}\), it clear that formula (16) is also symmetric under this exchange. As a consequence, each of the three factors in formula (16) must also be symmetric, as they are expressed explicitly as products of poles/zeros (of the form \(u-[v^{\ast}\pm\mathbf{h}^{\mathrm{L}}(b)]\)) of the function \(T_{\mu\ast\bar{\mu}}(u)\) that lie in a distinct quadrant of the \(u\)-plane relative to \([v^{\ast}]\). With this we can provide a new proof of a result first shown by [1, 4.7]. **Corollary 2.8**.: _The strong Stanley conjecture holds in the rectangular case. In particular, the rectangular Littlewood-Richardson coefficient is given by_ \[g_{\mu,\bar{\mu}}^{m^{\ast}}=\frac{\prod_{b\in\mu}h_{\mu}^{\mathrm{L}}(b)\prod_{ b\in\bar{\mu}}h_{\bar{\mu}}^{\mathrm{U}}(b)}{\prod_{b\in m^{\ast}}D_{\mu}^{ \prime}(b)}, \tag{19}\] _where5_ Footnote 5: Note here that the function \(D^{\prime}\) differs from \(D\) given in formula (6) by flipping the hooks \(U\leftrightarrow L\), since we are writing an expression for the LR coefficients rather than the Stanley inner products. \[D_{\mu,m^{\ast}}^{\prime}(b)\coloneqq\begin{cases}h_{m^{\ast}}^{\mathrm{L}}(b )&\text{ if }\hat{b}\in\mu\\ h_{m^{\ast}}^{\mathrm{L}}(b)&\text{ otherwise.}\end{cases}\] Proof.: As stated in Corollary 2.2, we have \[g_{\mu,\bar{\mu}}^{m^{n}}=\frac{\varpi_{\mu}\varpi_{\bar{\mu}}}{\varpi_{m^{n}}} \operatorname{Res}_{u=[v^{\cdot}]}T_{\mu\star\bar{\mu}}(u).\] Using formula 16, we find \[\operatorname{Res}_{u=[v^{\cdot}]}T_{\mu\star\bar{\mu}}(u)=\frac{\prod_{b\in \mu}[\bar{b}]}{\prod_{b\in\mu^{\times}}[b]}\prod_{b\in\bar{\mu}}\frac{h_{\bar{ \mu}}^{\mathrm{L}}(b)}{h_{m^{\mathrm{L}}}^{\mathrm{L}}(\hat{b})}\prod_{b\in\mu }\frac{h_{\mu}^{\mathrm{U}}(b)}{h_{m^{\mathrm{L}}}^{\mathrm{U}}(\hat{b})}. \tag{20}\] As noted earlier, we know that each of the three factors in (20) remains invariant under the switch \(\mu\leftrightarrow\bar{\mu}\), so we are free to switch each of the last two factors. This provides a direct proof that this result is symmetric in \(\mu\) and \(\bar{\mu}\), and gives an alternate choices of hooks, \[\operatorname{Res}_{u=[v^{\cdot}]}T_{\mu\star\bar{\mu}}(u)=\frac{\prod_{b\in \mu}[\bar{b}]}{\prod_{b\in\mu^{\times}}[b]}\prod_{b\in\mu}\frac{h_{\mu}^{ \mathrm{L}}(b)}{h_{m^{\mathrm{L}}}^{\mathrm{L}}(\hat{b})}\prod_{b\in\bar{\mu} }\frac{h_{\bar{\mu}}^{\mathrm{U}}(b)}{h_{m^{\mathrm{-}}}^{\mathrm{U}}(\hat{b})}.\] We group these terms as follows, \[= \frac{\varpi_{m^{n}}/\varpi_{\bar{\mu}}}{\varpi_{\mu}}\frac{\prod_ {b\in\mu}h_{\mu}^{\mathrm{L}}(b)\prod_{b\in\bar{\mu}}h_{\bar{\mu}}^{\mathrm{ U}}(b)}{\left(\prod_{b\in\mu}h_{m^{\mathrm{L}}}^{\mathrm{L}}(\hat{b})\prod_{b\in \bar{\mu}}h_{m^{\mathrm{L}}}^{\mathrm{U}}(\hat{b})\right)}.\] The last thing we need to check is the following expression for the denominator \[\prod_{b\in\mu}h_{m^{\mathrm{L}}}^{\mathrm{L}}(\hat{b})\prod_{b\in\bar{\mu}}h _{m^{\mathrm{L}}}^{\mathrm{U}}(\tilde{b})=\prod_{b\in\mu}h_{m^{\mathrm{L}}}^{ \mathrm{L}}(\hat{b})\prod_{b\in m^{\mathrm{L}}/\mu}h_{m^{\mathrm{L}}}^{ \mathrm{U}}(\tilde{b})=\prod_{b\in m^{\mathrm{L}}}D^{\prime}_{\mu,m^{\mathrm{ L}}}(b),\] where we have used \(\tilde{b}=\hat{b}\). 
Note, that by swapping \(\mu\to\bar{\mu}\) in formula (20) we have the two representations of this Littlewood-Richardson coefficient (see Figure 14): \[g_{\mu,\bar{\mu}}^{m^{n}}=\frac{\prod_{b\in\mu}h_{\mu}^{\mathrm{L}}(b)\prod_{b \in\bar{\mu}}h_{\bar{\mu}}^{\mathrm{U}}(b)}{\prod_{b\in m^{n}}D^{\prime}_{\mu, m^{n}}(b)}=\frac{\prod_{b\in\mu}h_{\mu}^{\mathrm{U}}(b)\prod_{b\in\bar{\mu}}h_{ \bar{\mu}}^{\mathrm{L}}(b)}{\prod_{b\in m^{\mathrm{L}}}D^{\prime}_{\bar{\mu}, m^{n}}(b)}. \tag{21}\] ## 3. Rectangular Union Case In this section we prove a generalization of the previous results. Fix \(\mu\) and \(m^{n}\), without necessarily assuming that \(\mu\subset m^{n}\) as in the rectangular case. Let \(v=(n,m)\), and \(v^{\ast}=(n-1,m-1)\). Let \(\sigma=\mu\cap m^{n}\). Let \(\bar{\sigma}\) be the conjugation of \(\sigma\) taken with respect to \(m^{n}\), as illustrated in Figure 15. **Lemma 3.1**.: _We have_ \[g^{\mu\cup m^{n}}_{\mu,\bar{\sigma}}=g^{m^{n}}_{\sigma,\bar{\sigma}}\cdot \prod_{b\in\mu/\sigma}T_{\bar{\sigma}}\left([\bar{b}]\right). \tag{22}\] Proof.: First, by Lemma 1.11, we know \(T_{\mu,\bar{\sigma}}(u)\) has a pole at \(u=[v^{\ast}]\). By formula 8, we know that \[\sum_{\gamma}\hat{g}^{\gamma}_{\mu,\bar{\sigma}}\sum_{b\in\gamma}\frac{1}{u-[ b]}=T_{\mu\ast\bar{\sigma}}(u)-1.\] The residue of this equality at \(u=[v^{\ast}]\) is \[\sum_{\gamma:\mu\cup\bar{\sigma}\subseteq\gamma,v^{\ast}\in\gamma}\hat{g}^{ \gamma}_{\mu,\bar{\sigma}}=\operatorname{Res}_{u=[v^{\ast}]}T_{\mu\ast\bar{ \sigma}}(u).\] Because \(\mu\cup m^{n}\) is the only partition of its size that contains \(\mu\) and the box at \(v^{\ast}\), the sum on the LHS contains only the \(\gamma=\mu\cup m^{n}\) term and we conclude \[\hat{g}^{\mu\cup m^{n}}_{\mu,\bar{\sigma}}=\operatorname{Res}_{u=[v^{\ast}]}T _{\mu\ast\bar{\sigma}}(u).\] Next, we note that \[T_{\mu\ast\bar{\sigma}}(u)=T_{(\mu/\sigma)\ast\bar{\sigma}}(u)T_{\sigma\ast \bar{\sigma}}(u),\] and as the second factor has a pole \(u=[v^{\ast}]\), the first factor is regular at \(u=[v^{\ast}]\). So, \[\operatorname{Res}_{u=[v^{\ast}]}T_{\mu\ast\bar{\sigma}}(u)=T_{(\mu/\sigma) \ast\bar{\sigma}}([v^{\ast}])\operatorname{Res}_{u=[v^{\ast}]}T_{\sigma\ast \bar{\sigma}}(u).\] Figure 14. The two choices of hooks for the Littlewood–Richardson coefficients in the rectangular case. Each of the two fractions on either side of the equation are equal to the corresponding fraction on the other side. We color in blue the boxes that don’t have hooks assigned to them. Figure 15. The setup of the rectangular union case. We then use \[T_{\{s\}\preccurlyeq\bar{\sigma}}([v^{\cdot}])=T_{\bar{\sigma}}([v^{\cdot}-s]),\] to arrive at \[\hat{g}^{\mu\cup m^{n}}_{\mu,\bar{\sigma}}=\left(\prod_{b\in\mu/\sigma}T_{\bar{ \sigma}}([v^{\cdot}-b])\right)\cdot\hat{g}^{m^{n}}_{\sigma,\bar{\sigma}}. \tag{23}\] Finally, we notice that \[\frac{\varpi_{\mu}\varpi_{\bar{\sigma}}}{\varpi_{\mu\cup m^{n}}}=\frac{\varpi _{\sigma}\varpi_{\bar{\sigma}}}{\varpi_{m^{n}}},\] so we can drop the hats in equation 23. In order to expand the product term in (22), we produce a formula for the factors appearing therein. 
**Lemma 3.2**.: _For \(\mu\) as before and \(s\in\mathcal{O}_{\mu}\) with \(s\) outside of \(m^{n}\), we have_ \[T_{\bar{\sigma}}([\bar{s}])=\frac{\prod_{b\in R^{s}_{\mu\cup m^{n}}}h^{\mathrm{U}}_{\mu\cup m^{n}}(b)}{\prod_{b\in R^{s}_{(\mu+s)\cup m^{n}}}h^{\mathrm{U}}_{(\mu+s)\cup m^{n}}(b)}\times\frac{\prod_{b\in C^{s}_{\mu\cup m^{n}}}h^{\mathrm{L}}_{\mu\cup m^{n}}(b)}{\prod_{b\in C^{s}_{(\mu+s)\cup m^{n}}}h^{\mathrm{L}}_{(\mu+s)\cup m^{n}}(b)}\bigg{/}\frac{\prod_{b\in R^{s}_{\mu}}h^{\mathrm{U}}_{\mu}(b)}{\prod_{b\in R^{s}_{\mu+s}}h^{\mathrm{U}}_{\mu+s}(b)}\times\frac{\prod_{b\in C^{s}_{\mu}}h^{\mathrm{L}}_{\mu}(b)}{\prod_{b\in C^{s}_{\mu+s}}h^{\mathrm{L}}_{\mu+s}(b)}. \tag{24}\] Proof.: We start by noting that for any \(\mu\) such that \(\mu\cap m^{n}=\sigma\), we have \[T_{\bar{\sigma}}([n,m]-u)=\frac{T_{\mu\cup m^{n}}(u)}{T_{\mu}(u)}.\] So \[T_{\bar{\sigma}}([\bar{s}])=T_{\bar{\sigma}}([(n,m)-(s^{*})])=\frac{T^{\prime}_{(\mu+s)\cup m^{n}}([s^{*}])}{T^{\prime}_{(\mu+s)}([s^{*}])},\] where \(T^{\prime}\) indicates that we have dropped the zero at \(u=[s^{*}]\). We then use the expansion Lemma 1.13 to write the terms in this ratio as \[T^{\prime}_{(\mu+s)\cup m^{n}}([s^{*}])=-[s^{*}]\times\frac{\prod_{b\in R^{s}_{\mu\cup m^{n}}}h^{\mathrm{U}}_{\mu\cup m^{n}}(b)}{\prod_{b\in R^{s}_{(\mu+s)\cup m^{n}}}h^{\mathrm{U}}_{(\mu+s)\cup m^{n}}(b)}\times\frac{\prod_{b\in C^{s}_{\mu\cup m^{n}}}h^{\mathrm{L}}_{\mu\cup m^{n}}(b)}{\prod_{b\in C^{s}_{(\mu+s)\cup m^{n}}}h^{\mathrm{L}}_{(\mu+s)\cup m^{n}}(b)},\] \[T^{\prime}_{(\mu+s)}([s^{*}])=-[s^{*}]\times\frac{\prod_{b\in R^{s}_{\mu}}h^{\mathrm{U}}_{\mu}(b)}{\prod_{b\in R^{s}_{\mu+s}}h^{\mathrm{U}}_{\mu+s}(b)}\times\frac{\prod_{b\in C^{s}_{\mu}}h^{\mathrm{L}}_{\mu}(b)}{\prod_{b\in C^{s}_{\mu+s}}h^{\mathrm{L}}_{\mu+s}(b)}.\] Taking the ratio of these two equalities, we arrive at the result. To state the following results, we need the following decomposition of \(\mu=(\mu_{0},\mu_{1},\mu_{2},\mu_{3})\) w.r.t. a rectangle \(m^{n}\) as in Figure 16. **Theorem 3.3**.: _The Jack Littlewood-Richardson coefficient \(g^{\mu\cup m^{n}}_{\mu,\bar{\sigma}}\) in the rectangular union case is given by the fraction of hooks in Figure 17. That is, the figure gives a prescription for upper/lower hooks for each box in each of the three partitions._ Proof.: We refer to the decomposition in Fig. 16, according to which the result (22) becomes \[\hat{g}^{\mu\cup m^{n}}_{\mu,\bar{\sigma}}=\left(\prod_{b\in\mu_{3}}T_{\bar{\sigma}}([\bar{b}])\right)\left(\prod_{b\in\mu_{1}}T_{\bar{\sigma}}([\bar{b}])\right)\cdot\hat{g}^{m^{n}}_{\sigma,\bar{\sigma}}. \tag{25}\] We will proceed by beginning with \(\sigma=\mu\cap m^{n}=\mu_{0}\cup\mu_{2}\), and then first extend to include \(\mu_{1}\) by adding one box at a time, and then extending to \(\mu_{3}\) in a similar manner. One of these two extensions will be simpler, depending on which choice of hooks we take for \(g_{\sigma,\tilde{\sigma}}^{m^{n}}\). We start by choosing the hooks as given by Equation 19, represented by Fig. 18, noting that the technique works similarly if we begin with the alternate choice. Importantly, with this choice, every box in \(\mu_{0}\) that is beneath \(\mu_{1}\) has been assigned lower hooks, in both the \(\sigma\) factor in the numerator and the \(\sigma\cup m^{n}\) factor in the denominator. This will make including the boxes of \(\mu_{1}\) rather straightforward. We now proceed to add in each of the boxes \(s\in\mu_{1}\), one at a time.
Since \(\mu_{1}\) is in the top-left region above \(m^{n}\), for \(b\in\mu_{1}\) we have \[h_{\mu}^{\textsc{U}/L}(b)=h_{\mu\cup m^{n}}^{\textsc{U}/L}(b).\] Figure 16. The decomposition \(\mu=:\mu_{0}\cup\mu_{1}\cup\mu_{2}\cup\mu_{3}\) corresponding to the intersection with the rectangular partition \(m^{n}\), and the definition of \(\tilde{\sigma}\) as the (reverse of the) boxes that fill the remainder of the rectangle. Note that \(\mu_{0}\) is defined as the union of all boxes under \(\mu_{1}\) with those left of \(\mu_{3}\). Figure 17. The coefficient \(g_{\mu,\sigma}^{m^{n}\cup\mu}\) expressed as a fraction involving the assignment of upper or lower hooks for every box in each of the three partitions. Because of this, and because \(s\in\mu_{1}\) implies \(R^{s}_{\mu_{1}}\subset\mu_{1}\), the factor (formula (24)) according to equation (25) coming from adding of the box \(s\in\mu_{1}\) simplifies to \[T_{\vec{\sigma}}([\vec{s}])=\frac{\prod_{b\in C^{s}_{\mu\cup m^{n}}}h^{\textsc{ L}}_{\mu\cup m^{n}}(b)}{\prod_{b\in C^{s}_{(\mu\cup\cup m^{n}})\cup m^{n}}h^{ \textsc{L}}_{\mu\cup m^{n}}(b)}/\frac{\prod_{b\in C^{s}_{\mu}}h^{\textsc{L}}_{ \mu}(b)}{\prod_{b\in C^{s}_{\mu\cup s}}h^{\textsc{L}}_{\mu\cup s}(b)}.\] We represent this 'new factor' pictorially in the right hand side of Fig. 19. We color in orange those boxes that we are tasked with assigning hooks to, and color in blue those which aren't assigned hooks. Of the terms appearing in this new factor, two will remove the \(L\)-hooks for all \(b\in C^{s}_{\sigma}\) and \(b\in C^{s}_{\sigma\cup m^{n}}\), and the other two terms will add these hooks back in for the larger partitions \(\sigma\cup s\) and \((\sigma\cup m^{n})\cup s\), and thus extending the choice of a \(L\)-hook to the new box in the column, as seen in Fig. 20. By continuing in this way for each box \(b\in\mu_{1}\), each new box will always have all lower hooks beneath it, and thus these factors provide exactly for \(L\)-hooks for all the boxes in \(\mu_{1}\). Next, we proceed to adding the first box \(s\) of \(\mu_{3}\). Unlike the boxes we added in \(\mu_{1}\), \(s\in\mu_{3}\) does not have all \(U\)-hooks to the right of it, and so a more involved process will be required to add in such a box. We observe that for \(b\in\mu_{3}\) we have \[h^{\textsc{U}/L}_{\mu}(b)=h^{\textsc{U}/L}_{\mu\cup m^{n}}(b).\] and thus, similar to before, we find that a box \(s\in\mu_{3}\) implies that \(C^{s}_{\mu_{3}}\in\mu_{3}\), and so the new factor reduces to \[T_{\vec{\sigma}}([\vec{s}])=\frac{\prod_{b\in R^{s}_{\mu\cup m^{n}}}h^{ \textsc{U}}_{\mu\cup m^{n}}(b)}{\prod_{b\in R^{s}_{(\mu\cup\cup m^{n}})\cup m ^{n}}h^{\textsc{U}}_{(\mu\cup s)\cup m^{n}}(b)}/\frac{\prod_{b\in R^{s}_{\mu \cup s}}h^{\textsc{U}}_{\mu\cup s}(b)}{\prod_{b\in R^{s}_{\mu\cup s}}h^{ \textsc{U}}_{\mu\cup s}(b)}.\] This new factor is represented in Figure 21. Figure 19. The new orange box \(s\in\mu_{1}\) being added, and on the left we have the factor \(T_{\vec{\sigma}}([\vec{s}])\). The task is to assign hooks to the added orange box(es), using the hooks coming from the ‘new factor’ on the right, where blue indicates boxes that don’t have assigned hooks. The next step is to swap the row \(R_{\mu}^{s}\) of \(L\)-hooks in \(\mu\) that contains \(s\in\mu_{3}\), with the row of \(U\)-hooks in \(\mu\cup s\) from the new factor, as indicated in Figure 22. 
Then, we split up the two rows of hooks in the numerator of the new factor according to the staggered row profile of \(L\)-hooks in the denominator of the right hand side, as in Figure 23. We now show that this numerator of the new factor gives precisely the hooks required to flip the indicated hooks in the denominator of the right hand side from \(L\to U\). Let \(\ell_{\lambda}(b)\) be shorthand for \(\operatorname{leg}_{\lambda}(b)\). Let \(b\in\mu\) be such that the row \(R_{\mu}^{b}\) extends all the way to the right boundary of \(m^{n}\) (for example, in the case at hand \(b\) is in the bottom row of \(\mu\)). The claim is \[h_{\mu}^{\iota}(b)=h_{\mu\cup m^{n}}^{\iota}(b+(0,\ell_{\mu\cup m^{n}}(b)-\ell _{\mu}(b))).\] Figure 21. On the left, we show the new factor \(T_{\sigma}([\tilde{s}])\) corresponding to adding the box \(s\in\mu_{3}\) (\(s\) drawn in orange). Figure 22. The result of swapping the first term in the numerator of the new factor with the bottom row of the \(\mu\) factor. One can easily verify this by direct calculation or pictorially. Thus, we use the hooks on the numerator of the new factor to flip all the hooks on the staggered row in the denominator. After this, we are left with the two terms in the denominator of the new factor, as in Figure 24. Now that the entire bottom row of the denominator of the RHS is all upper hooks, we can see that these two remaining rows in the denominator of the new factor are precisely what is required to extend this row of upper hooks to include \(s\in\mu_{3}\). With this, all hooks in the new factor have been used, and all boxes in the triple of partitions have been assigned a choice of hooks, as in Figure 25. Now, we can use the above steps to include the the rest of the boxes inside \(\mu_{3}\). If the newly added box is to the right of a box added previously, the row just extends by \(U\)-hooks. If the new box in \(\mu_{3}\) is added to a new row, then repeat all of the above steps, starting in this new row. Proceeding in this way, we see that all boxes to the left of \(\mu_{3}\) will be assigned U-hooks, and the remaining rectangular region will be assigned hooks just as in Figure 17. As we mentioned at the start of Section 1.3, there is a direct correspondence between the Littlewood-Richardson coefficients and Stanley structure coefficients. Thus, our Theorem 3.3 immediately gives the following main result of the paper. **Corollary 3.4**.: _The strong Stanley conjecture holds in the rectangular union case._ We find that the Stanley structure coefficients are given by Figure 17, except with bringing the denominator up to the numerator with its hooks flipped \(L\leftrightarrow U\) Figure 23. Split up the two rows in the numerator of the new factor according to the profile of lower hooks in the denominator of the left hand side. Figure 24. With the staggered row now flipped to upper hooks, the bottom row of the denominator can now easily be extended. **Example 3.5**.: _We have_ \[\langle J_{42211}J_{211},J_{43331}\rangle=\raisebox{-14.226378pt}{\includegraphics[]{ jc1.eps}}\ polynomials in \(\alpha\) with non-negative integer coefficients. One can then ask if a shifted analog of Stanley's conjecture holds. 
Let us first set \[H^{U}_{\lambda}\coloneqq\prod_{b\in\lambda}h^{U}_{\lambda}(b),\quad H^{L}_{\lambda}\coloneqq\prod_{b\in\lambda}h^{L}_{\lambda}(b).\] The Jack Littlewood-Richardson coefficients \(g^{\lambda}_{\mu\nu}(\alpha)\) in the shifted setting are then defined via \[J^{\#}_{\mu}J^{\#}_{\nu}=\sum_{\lambda}g^{\lambda}_{\mu\nu}(\alpha)J^{\#}_{\lambda}\text{ or equivalently }P^{\#}_{\mu}P^{\#}_{\nu}=\sum_{\lambda}g^{\lambda}_{\mu\nu}(\alpha)\frac{H^{L}_{\lambda}}{H^{L}_{\mu}\cdot H^{L}_{\nu}}P^{\#}_{\lambda},\] where \(J^{\#}_{\mu}=H^{L}_{\mu}P^{\#}_{\mu}\). When \(|\mu|+|\nu|=|\lambda|\), we recover the "non-shifted" coefficients in (3), but we may have non-zero coefficients also when \(|\mu|+|\nu|>|\lambda|\). The quantity \(c^{\lambda}_{\mu\nu}(\alpha)\coloneqq g^{\lambda}_{\mu\nu}(\alpha)\frac{H^{L}_{\lambda}}{H^{L}_{\mu}\cdot H^{L}_{\nu}}\) is then a deformation of the classical Littlewood-Richardson coefficients for Schur functions, and their shifted generalization. The inner product for the shifted Jack polynomials is then given by \[\langle J^{\#}_{\mu}J^{\#}_{\nu},J^{\#}_{\lambda}\rangle=H^{L}_{\lambda}\cdot H^{U}_{\lambda}\cdot g^{\lambda}_{\mu\nu}(\alpha),\] as in Subsection 1.3. **Conjecture 4.1** (Shifted strong Stanley conjecture).: _If the shifted Littlewood-Richardson coefficient \(c^{\lambda}_{\mu\nu}(1)\) is equal to \(1\), then the (shifted) structure coefficient has the following form:_ \[\langle J^{\#}_{\mu}J^{\#}_{\nu},J^{\#}_{\lambda}\rangle=\left(\prod_{b\in\mu}\tilde{h}_{\mu}(b)\right)\left(\prod_{b\in\nu}\tilde{h}_{\nu}(b)\right)\left(\prod_{b\in\lambda}\tilde{h}_{\lambda}(b)\right),\] _where \(\tilde{h}_{\sigma}(b)\) is a choice of either \(h^{U}_{\sigma}(b)\) or \(h^{L}_{\sigma}(b)\) for each box \(b\). Moreover, one of the two types of hooks is chosen exactly \(|\lambda|\) times in the above expression, and the other is chosen \(|\mu|+|\nu|\) times._ **Example 4.2**.: _For \(\lambda=32211\), \(\mu=31\) and \(\nu=321\), we have that_ \[c^{\lambda}_{\mu\nu}(\alpha)=\frac{\alpha^{2}(\alpha+3)^{2}(\alpha+4)(2\alpha+1)^{2}(2\alpha+5)(3\alpha+1)(3\alpha+2)}{(\alpha+1)^{6}(\alpha+2)^{2}(2\alpha+3)^{2}(3\alpha+4)}\] _(which evaluates to \(1\) at \(\alpha=1\)) and_ \[\langle J^{\#}_{\mu}J^{\#}_{\nu},J^{\#}_{\lambda}\rangle=8\alpha^{5}(\alpha+3)^{2}(\alpha+4)(2\alpha+1)^{2}(2\alpha+5)(3\alpha+1)(3\alpha+2).\] Figure 26. We can re-write the result for rectangular union in schematic form. The blue sections are not assigned any hooks, and for the beige regions we can choose one of the two solutions to the rectangular case for that subregion. _We can see that this product is obtained from the assignments below:_ _Note that here the choices for all boxes that aren't inner corners are forced._ ## Acknowledgements The authors would like to thank the organizers of the conference _Open Problems in Algebraic Combinatorics 2022_ (OPAC), the occasion of which led to this collaboration. RM would like to thank Alexander Moll for first suggesting that there might be a path to make progress on the Stanley conjectures starting from a deeper understanding of the Nazarov-Sklyanin Lax operator, and this paper represents the culmination of that idea.
2309.07150
Clark measures on polydiscs associated to product functions and multiplicative embeddings
We study Clark measures on the unit polydisc, giving an overview of recent research and investigating the Clark measures of some new examples of multivariate inner functions. In particular, we study the relationship between Clark measures and multiplication; first by introducing compositions of inner functions and multiplicative embeddings, and then by studying products of one-variable inner functions.
Nell Jacobsson
2023-09-08T16:31:07Z
http://arxiv.org/abs/2309.07150v1
# Clark measures on polydiscs associated to product functions and multiplicative embeddings ###### Abstract. We study Clark measures on the unit polydisc, giving an overview of recent research and investigating the Clark measures of some new examples of multivariate inner functions. In particular, we study the relationship between Clark measures and multiplication; first by introducing compositions of inner functions and multiplicative embeddings, and then by studying products of one-variable inner functions. Key words and phrases: Clark measure, multiplicative embedding, product function 2020 Mathematics Subject Classification: 28A25, 28A35 (primary); 32A10, 30J05 (secondary) ## 1. Introduction Let \[\mathbb{D}^{d}:=\{(z_{1},z_{2},\ldots,z_{d})\in\mathbb{C}^{d}:|z_{j}|<1,\quad j=1,2,\ldots,d\}\] denote the unit polydisc in \(d\) variables, and \[\mathbb{T}^{d}:=\{(\zeta_{1},\zeta_{2},\ldots,\zeta_{d})\in\mathbb{C}^{d}:|\zeta_{j}|=1,\quad j=1,2,\ldots,d\}\] its distinguished boundary. Note that this is only a subset of the boundary \(\partial\mathbb{D}^{d}\). For \(d=2\), the set \(\mathbb{T}^{2}\) defines a two-dimensional torus. For \(z\in\mathbb{D}^{d}\) and \(\zeta\in\mathbb{T}^{d}\), we introduce the Poisson kernel on \(\mathbb{D}^{d}\) as a product of one-variable Poisson kernels: \[P_{z}(\zeta)=P(z,\zeta):=\prod_{j=1}^{d}P_{z_{j}}(\zeta_{j})\quad\text{where}\quad P_{z_{j}}(\zeta_{j}):=\frac{1-|z_{j}|^{2}}{|\zeta_{j}-z_{j}|^{2}}.\] Throughout, \(m_{d}\) denotes normalized Lebesgue measure on \(\mathbb{T}^{d}\), and we write \(m:=m_{1}\). Given a complex Borel measure \(\mu\) on \(\mathbb{T}^{d}\), we define its Poisson integral as \[P[d\mu](z):=\int_{\mathbb{T}^{d}}P(z,\zeta)d\mu(\zeta),\quad z\in\mathbb{D}^{d}.\] If \(\phi:\mathbb{D}^{d}\to\mathbb{D}\) is a bounded holomorphic function and \(\alpha\in\mathbb{T}\), then \[\Re\bigg{(}\frac{\alpha+\phi(z)}{\alpha-\phi(z)}\bigg{)}=\frac{1-|\phi(z)|^{2}}{|\alpha-\phi(z)|^{2}}\] is positive and pluriharmonic (i.e. locally the real part of an analytic function) on \(\mathbb{D}^{d}\). Hence, by Herglotz' theorem, there exists a unique positive Borel measure \(\sigma_{\alpha}\) on \(\mathbb{T}^{d}\) such that \[\frac{1-|\phi(z)|^{2}}{|\alpha-\phi(z)|^{2}}=P[d\sigma_{\alpha}](z)=\int_{\mathbb{T}^{d}}P(z,\zeta)d\sigma_{\alpha}(\zeta).\] We call \(\{\sigma_{\alpha}\}_{\alpha\in\mathbb{T}}\) the _Aleksandrov-Clark measures_ associated to \(\phi\). A measure whose Poisson integral is the real part of an analytic function is generally called an RP-measure or pluriharmonic measure. Recall that for a function \(f:\mathbb{D}\to\mathbb{C}\) and some point \(\zeta\in\mathbb{T}\), we say that \(f(z)\) approaches \(L\in\mathbb{C}\) non-tangentially, denoted \[L:=\angle\lim_{z\to\zeta}f(z),\] if \(f(z)\to L\) as \(z\) tends to \(\zeta\) within every non-tangential approach region (Stolz angle) at \(\zeta\). A bounded holomorphic function \(\phi\) on \(\mathbb{D}^{d}\) is called _inner_ if its non-tangential boundary values \(\phi^{*}(\zeta)\) are unimodular for \(m_{d}\)-almost every \(\zeta\in\mathbb{T}^{d}\). ### Overview First, in Section 2, we introduce some basic theory concerning Clark measures in \(\mathbb{T}^{d}\). Next, we survey some notable properties of Clark measures in one variable. In Section 3, we give an overview of recent progress concerning rational inner functions (RIFs). In particular, we present results from [3] -- for example, we will see that the Clark measures of bivariate RIFs are supported on finite unions of analytic curves, and that one can characterize their behavior along these curves. In Section 4, we consider the multiplicative embedding \[\Phi(z):=\phi(z_{1}z_{2}\cdots z_{d}),\quad z\in\mathbb{D}^{d}\] which produces a multivariate inner function given an inner function \(\phi\) in one variable.
First we investigate the case \(d=2\), where we characterize the unimodular level sets of \(\Phi\) and see that these can always be parameterized by -- potentially infinitely many -- antidiagonals in \(\mathbb{T}^{2}\). We conclude this section by presenting a concrete structure formula for the Clark measures associated to \(\Phi\) in \(d\) variables. In Section 5, we turn to product functions \[\Psi(z):=\phi(z_{1})\psi(z_{2}),\quad z\in\mathbb{D}^{2}\] for inner functions \(\phi\) and \(\psi\). We then prove a structure formula for the Clark measures of \(\Psi\) under certain assumptions on \(\phi\) and \(\psi\). Throughout the text, we present examples of bivariate inner functions with explicit characterizations of their Clark measures. Finally, in Section 6, we discuss possible further research tied to our results and raise some open questions. ## 2. Preliminaries ### Elementary Clark theory in polydiscs If \(\phi\) is an inner function with associated Clark measure \(\sigma_{\alpha}\), then \[P[d\sigma_{\alpha}](z)=\frac{1-|\phi(z)|^{2}}{|\alpha-\phi(z)|^{2}}=0\quad\text { $m_{d}$-almost everywhere on $\mathbb{T}^{d}$.}\] Clearly, the numerator goes to zero \(m_{d}\)-almost everywhere. As \(\phi-\alpha\) is bounded, it lies in the Hardy space \(H^{2}(\mathbb{T}^{d})\subset N(\mathbb{T}^{d})\); then Theorem 3.3.5 in [18] states that \(\log(\phi^{*}-\alpha)\) lies in \(L^{1}(\mathbb{T}^{d})\). This in turn implies that \(\phi^{*}-\alpha\) must be non-zero \(m_{d}\)-almost everywhere on \(\mathbb{T}^{d}\). Hence, \(P[d\sigma_{\alpha}](z)=0\)\(m_{d}\)-almost everywhere on \(\mathbb{T}^{d}\), as asserted. A notable consequence of this result is that if \(\phi\) is an inner function, then its Clark measures \(\{\sigma_{\alpha}\}_{\alpha\in\mathbb{T}}\) must be singular with respect to the Lebesgue measure. To see this, we decompose \(\sigma_{\alpha}\) into an absolutely continuous measure \(\tau_{\alpha}^{1}=f_{\alpha}m_{d}\), \(f_{\alpha}\in L^{1}(\mathbb{T}^{d})\), and a \(m_{d}\)-singular measure \(\tau_{\alpha}^{2}\) (see Theorem 6.10, [19]). Then Theorem 2.3.1 in [18] states that the function \[u(z):=P[d\sigma_{\alpha}](z)=P[f_{\alpha}dm_{d}+d\tau_{\alpha}^{2}](z)\] satisfies \(u^{*}(\zeta)=f_{\alpha}(\zeta)\) for \(m_{d}\)-almost every \(\zeta\in\mathbb{T}^{d}\). However, we saw already that \(P[d\sigma_{\alpha}]=0\)\(m_{d}\)-almost everywhere on \(\mathbb{T}^{d}\); hence \(f_{\alpha}=0\)\(m_{d}\)-almost everywhere on \(\mathbb{T}^{d}\). We can thus conclude that \(\sigma_{\alpha}\) is a \(m_{d}\)-singular measure for each \(\alpha\in\mathbb{T}\). Moreover, as asserted in [11], two Clark measures \(\sigma_{\alpha}\) and \(\sigma_{\beta}\) associated to an inner function \(\phi\) are mutually singular whenever \(\alpha\neq\beta\). Observe that since \[\int_{\mathbb{T}^{d}}d\sigma_{\alpha}=\int_{\mathbb{T}^{d}}P(0,\zeta)d\sigma_ {\alpha}=\frac{1-|\phi(0)|^{2}}{|\alpha-\phi(0)|^{2}}<\infty,\] the measure \(\sigma_{\alpha}\) is finite for each \(\alpha\in\mathbb{T}\). In particular, \(\sigma_{\alpha}\) is a probability measure if the associated inner function \(\phi\) satisfies \(\phi(0)=0\). For an inner function \(\phi\) and a constant \(\alpha\in\mathbb{T}\), we define the so called _unimodular level set_ \[\mathcal{C}_{\alpha}(\phi):=\text{\rm Clos}\Big{\{}\zeta\in\mathbb{T}^{d}: \lim_{r\to 1^{-}}\phi(r\zeta)=\alpha\Big{\}},\] where the closure is taken with respect to \(\mathbb{T}^{d}\). 
The following proposition is a generalization of Lemma 2.1 in [3], where it is proven for rational inner functions. **Proposition 2.1**.: _Let \(\phi:\mathbb{D}^{d}\to\mathbb{C}\) be an inner function, and let \(\alpha\) be a unimodular constant. Then \(\operatorname{supp}(\sigma_{\alpha})\subset\mathcal{C}_{\alpha}(\phi)\)._ With the exception of some details, the proof uses the same arguments as in [3]. We include it for the interested reader. Proof.: Let \(B\subset\mathbb{T}^{d}\) be an open ball such that \(\lim_{r\to 1^{-}}\phi(r\zeta)\neq\alpha\) for all \(\zeta\in B\). Our goal is to show that \(\sigma_{\alpha}(B)=0.\) Recall that the Poisson kernel is positive; hence, \[\int_{B}P(r\zeta,\eta)d\sigma_{\alpha}(\eta)\leq\int_{\mathbb{T}^{d}}P(r\zeta, \eta)d\sigma_{\alpha}(\eta)=\frac{1-|\phi(r\zeta)|^{2}}{|\alpha-\phi(r\zeta)|^ {2}}\] for all \(\zeta\in B\) and every \(0\leq r<1\). We make two observations now: first of all, we note that since \(\phi\) is inner, the right-hand side tends to zero for \(m_{d}\)-almost every \(\zeta\in B\) as \(r\to 1^{-}\). So \[\lim_{r\to 1^{-}}\int_{B}P(r\zeta,\eta)d\sigma_{\alpha}(\eta)=0\quad m_{d} \text{-almost everywhere in }B.\] Secondly, since \(\phi\) is bounded on the unit polydisc and \(\phi(r\zeta)\not\to\alpha\) on \(B\), we have that \[\limsup_{r\to 1^{-}}\int_{B}P(r\zeta,\eta)d\sigma_{\alpha}(\eta)\leq\limsup_{r \to 1^{-}}\frac{1-|\phi(r\zeta)|^{2}}{|\alpha-\phi(r\zeta)|^{2}}<\infty \tag{1}\] for _all_\(\zeta\in B\). Here, we take the limit superior instead of the limit, as the limit of the right-hand side need not exist for every point in \(B\). Now define \[D_{r}(\zeta):=\{\eta:|r\zeta_{j}-\eta_{j}|\leq 2(1-r):\quad j=1,\ldots,d\ \}.\] For \(\eta\in D_{r}(\zeta)\), one can show that \[\left(\frac{1+r}{4(1-r)}\right)^{d}\leq P(r\zeta,\eta)\] (see details on p. 4, [3]), and so \[\left(\frac{1+r}{4(1-r)}\right)^{d}\sigma_{\alpha}(B\cap D_{r}(\zeta))\leq \int_{B\cap D_{r}(\zeta)}P(r\zeta,\eta)d\sigma_{\alpha}(\eta)\leq\int_{B}P(r \zeta,\eta)d\sigma_{\alpha}(\eta),\] which in turn implies that \[\lim_{r\to 1^{-}}\frac{\sigma_{\alpha}(B\cap D_{r}(\zeta))}{(1-r)^{d}}=0\quad m _{d}\text{-almost everywhere in }B \tag{2}\] and, moreover, that the limit superior of this quotient is finite for all \(\zeta\in B\) by (1). In polar coordinates, we can express \(D_{r}(\zeta)\) as \[D_{r}(\zeta)=\left\{\zeta e^{i\theta}:|\theta_{j}|<\cos^{-1}\left(1-\frac{3(1- r)^{2}}{2r}\right)\!,\quad j=1,\ldots,d\right\}\] (details on p. 5, [3]). We observe that this is, as a subset of \(\mathbb{T}^{d}\), a product of \(d\) copies of the same interval. Hence, as \(r\to 1^{-}\), we may estimate the Lebesgue measure of this set as \[|D_{r}(\zeta)|=2^{d}\cos^{-1}\left(1-\frac{3(1-r)^{2}}{2r}\right)^{d}\geq c(d )\sqrt{\frac{3(1-r)^{2}}{2r}}^{d}\geq c^{\prime}(d)(1-r)^{d}\] for constants \(c(d),c^{\prime}(d)\) dependent on \(d\). Together with (2), this shows that \[\lim_{r\to 1^{-}}\frac{\sigma_{\alpha}(B\cap D_{r}(\zeta))}{|D_{r}(\zeta)|}=0 \quad m_{d}\text{-almost everywhere in }B,\] and that the limit superior must be finite for all \(\zeta\in B\). Note that per definition, \(D_{r}(\zeta)\) is a \(d\)-dimensional cube with volume tending to zero as \(r\to 1^{-}\) for every \(\zeta\in B\). We now claim that \[\limsup_{r\to 1^{-}}\frac{\sigma_{\alpha}(B\cap D_{r}(\zeta))}{|D_{r}(\zeta)|}=0 \quad\text{for every }\zeta\in B. \tag{3}\] To prove this, suppose there exists some \(z\in B\) such that the limit superior in (3) is nonzero. 
Since \(\sigma_{\alpha}\) is a finite measure, we have that \(\sigma_{\alpha}(B\cap D_{r}(z))<\infty\). Together with the fact that the denominator tends to zero, this would imply that the limit, and hence limit superior, is infinite for \(z\in B\), which is a contradiction by our previous arguments. Hence (3) holds. Since \(|D_{r}(\zeta)|\to 0\) as \(r\to 1^{-}\), the limit in (3) implies that the \(n\)-dimensional upper density of the restriction measure \((\sigma_{\alpha})_{|B}\), defined as \((\sigma_{\alpha})_{|B}(A):=\sigma_{\alpha}(B\cap A)\), is zero for every point in \(\mathbb{T}^{d}\) (see e.g. Proposition 2.2.2 in [16]). Thus, \((\sigma_{\alpha})_{|B}\) is equal to zero, which in turn implies that \(\sigma_{\alpha}(B)=0\). The inclusion in Proposition 2.1 is not necessarily strict. In fact, for the classes of functions studied in this text, it follows from our structure formulas (Theorem 3.4, 3.5 and Corollary 4.4.1) that \(\text{supp}(\sigma_{\alpha})=\mathcal{C}_{\alpha}\). Finally, we record a fact which will be used in several proofs down the line: **Lemma 2.2**.: _The linear span of Poisson kernels \(\mathcal{M}:=\text{span}\{P_{z}:z\in\mathbb{D}^{d}\}\) is dense in \(C(\mathbb{T}^{d})\)._ The proof is a straight-forward generalization of the proof of Proposition 1.17 in [13]. ### Clark measures in one variable Before getting into examples of Clark measures in higher dimensions, we quote some results from Clark theory in one variable. To formulate the main result, we must first introduce the concept of angular derivatives. **Theorem 2.3**.: _For an analytic function \(f\) on \(\mathbb{D}\) and \(\zeta_{0}\in\mathbb{T}\), the following are equivalent:_ 1. _The non-tangential limits_ \[f(\zeta_{0})=\angle\lim_{z\to\zeta_{0}}f(z)\quad\text{and}\quad\angle\lim_{z \to\zeta_{0}}\frac{f(z)-f(\zeta_{0})}{z-\zeta_{0}}\] _exist;_ 2. _The derivative function_ \(f^{\prime}\) _has a non-tangential limit at_ \(\zeta_{0}\)_._ _Under the equivalent conditions above,_ \[\angle\lim_{z\to\zeta_{0}}\frac{f(z)-f(\zeta_{0})}{z-\zeta_{0}}=\angle\lim_{z \to\zeta_{0}}f^{\prime}(z).\] Proof.: See Theorem 2.19 in [13]. **Definition 2.4**.: _Assuming the conditions of the theorem, we call_ \[f^{\prime}(\zeta_{0}):=\angle\lim_{z\to\zeta_{0}}\frac{f(z)-f(\zeta_{0})}{z- \zeta_{0}}=\angle\lim_{z\to\zeta_{0}}f^{\prime}(z)\] _the angular derivative of \(f\) at \(\zeta_{0}\). Furthermore, if \(f\) maps \(\mathbb{D}\) to itself, we say that \(f\) has an angular derivative in the sense of Caratheodory at \(\zeta_{0}\in\mathbb{T}\) if \(f\) has an angular derivative at \(\zeta_{0}\) and \(f(\zeta_{0})\in\mathbb{T}\)._ We now have the machinery needed to state the following proposition, which will be extremely useful to us in later sections: **Proposition 2.5**.: _Let \(\phi:\mathbb{D}\to\mathbb{C}\) be an inner function and let \(\alpha\in\mathbb{T}\). Then the associated Clark measure \(\sigma_{\alpha}\) has a point mass at \(\zeta\in\mathbb{T}\) if and only if_ \[\phi^{*}(\zeta)=\lim_{r\to 1^{-}}\phi(r\zeta)=\alpha\] _and \(\phi\) has a finite angular derivative in the sense of Caratheodory at \(\zeta\). In this case,_ \[\sigma_{\alpha}(\{\zeta\})=\frac{1}{|\phi^{\prime}(\zeta)|}<\infty\quad\text{ and}\quad\phi^{\prime}(\zeta)=\frac{\alpha\overline{\zeta}}{\sigma_{\alpha}(\{\zeta\})}.\] Proof.: See Proposition 11.2 in [13]. **Example 2.6**.: Let \(B(z)\) be a non-constant finite Blaschke product of order \(n\), and let \(\alpha\in\mathbb{T}\). 
Then \(B(z)\) is an inner function and analytic on \(\mathbb{T}\), and \(B(\zeta)=\alpha\) has precisely \(n\) distinct solutions; denote these by \(\eta_{1}^{\alpha},\ldots,\eta_{n}^{\alpha}\). Moreover, from properties of finite Blaschke products, its derivative is non-zero on \(\mathbb{T}\). By Proposition 2.5, the associated Clark measure then satisfies \[\sigma_{\alpha}=\sum_{k=1}^{n}\frac{1}{|B^{\prime}(\eta_{k}^{\alpha})|}\delta_{\eta_{k}^{\alpha}}.\] **Example 2.7**.: The function \[\phi(z):=\exp\biggl{(}-\frac{1+z}{1-z}\biggr{)},\quad z\in\mathbb{D},\] is inner, and \(\phi^{*}(\zeta)\) exists everywhere on \(\mathbb{T}\); this is because \[\biggl{|}\exp\biggl{(}-\frac{1+z}{1-z}\biggr{)}\biggr{|}=\exp\biggl{(}\Re\biggl{(}-\frac{1+z}{1-z}\biggr{)}\biggr{)}=\exp\biggl{(}-\frac{1-|z|^{2}}{|1-z|^{2}}\biggr{)},\] from which we can see that \(\phi^{*}(1)=0\). Observe that every point \(\zeta\neq 1\) on the unit circle solves the equation \(\phi^{*}=\alpha\) for some \(\alpha\in\mathbb{T}\), so \(\phi^{*}(\mathbb{T}\setminus\{1\})=\mathbb{T}\). Moreover, the solutions accumulate at the limit point \(\zeta=1\) for every \(\alpha\)-value. Since the unimodular level sets are closed by definition, this implies that \(1\in\mathcal{C}_{\alpha}(\phi)\) for all \(\alpha\in\mathbb{T}\). Now let \(\alpha=1\) for simplicity. As seen in Example 11.3(ii) in [13], the solutions to \(\phi^{*}(\zeta)=1\) are given by \[\eta_{k}=\frac{2\pi k-i}{2\pi k+i},\quad k\in\mathbb{Z},\] and a direct computation of the angular derivative, using \(\phi^{\prime}(z)=-\frac{2\phi(z)}{(1-z)^{2}}\) and \(|1-\eta_{k}|^{2}=\frac{4}{1+4\pi^{2}k^{2}}\), gives \[\frac{1}{|\phi^{\prime}(\eta_{k})|}=\frac{2}{1+4\pi^{2}k^{2}}.\] By Proposition 2.5, the Clark measure of \(\phi\) associated to \(\alpha=1\) may thus be expressed as \[\sigma_{1}=\sum_{k\in\mathbb{Z}}\frac{2}{1+4\pi^{2}k^{2}}\delta_{\eta_{k}}.\] We will revisit variations of this example in later sections. By Theorem 4 in [4], RP-measures on \(\mathbb{T}^{d}\), \(d\geq 2\), cannot be supported on sets of Hausdorff dimension less than one, and in particular, they cannot possess any point masses. One can therefore not hope for an analogous result to Proposition 2.5 for \(d\geq 2\). However, the proposition will still be useful in determining the density of certain Clark measures. ## 3. Rational inner functions There has been significant progress in Clark theory for multivariate rational inner functions, see [3] and [5]. We already saw a one-variable RIF in Example 2.6, where we could describe the Clark measures in a straight-forward manner -- largely because Blaschke products are in fact analytic on \(\mathbb{T}\). The main issue when studying RIFs in higher dimensions arises from dealing with potential singularities. However, it turns out that in two variables, the support of any associated Clark measure is actually a finite union of graphs, and that we can explicitly calculate its weights along these. We aim to give an overview of these results in this section. We will first need some terminology specific to rational inner functions. We say that a polynomial \(p\in\mathbb{C}[z_{1},\ldots,z_{d}]\) is _stable_ if it has no zeros in \(\mathbb{D}^{d}\), and that it has _polydegree_ \((n_{1},\ldots,n_{d})\in\mathbb{N}^{d}\) if \(p\) has degree \(n_{j}\) when viewed as a polynomial in \(z_{j}\).
By Theorem 5.2.5 in [18], any rational inner function in \(\mathbb{D}^{d}\) can be written as \[\phi(z)=e^{ia}\prod_{j=1}^{d}z_{j}^{k}\frac{\tilde{p}(z)}{p(z)}\] where \(a\in\mathbb{R}\), \(k_{1},\ldots,k_{d}\in\mathbb{N}\), \(p\) is a stable polynomial of polydegree \((n_{1},\ldots,n_{d})\), and \[\tilde{p}(z):=z_{1}^{n_{1}}\cdots z_{d}^{n_{d}}\ \overline{p}\bigg{(}\frac{1}{ \overline{z_{1}}},\ldots,\frac{1}{\overline{z_{2}}}\bigg{)}\] is its _reflection_. Note that any zero of \(p\) will be a zero of \(\tilde{p}\) and vice versa, and that \(p\) and \(\tilde{p}\) have the same polydegree. For simplicity, we will always assume that \(\phi(z)=\frac{\tilde{p}}{p}\), where \(p\) and \(\tilde{p}\) are so called _atoral_ -- a concept explored in [1] and [6]. In the context of this text, atoral simply means that \(p\) and \(\tilde{p}\) share no common factors, and that in two dimensions in particular, \(p\) and \(\tilde{p}\) have finitely many common zeros on \(\mathbb{T}^{2}\) (see Section 2.1, [6]). Hence, a rational inner function \(\phi\) in two variables will have at most finitely many singularities on \(\mathbb{T}^{2}\). Moreover, we define the polydegree of a rational function \(\phi=q/p\) as \((n_{1},\ldots,n_{d})\), where \(p\) and \(q\) have no common factors, and \(n_{j}\) is the maximum of the degrees of \(p\) and \(q\) when viewed as polynomials in variable \(z_{j}\). Thus, the polydegree of \(\phi=\tilde{p}/p\) as defined above agrees with the polydegrees of both its numerator and denominator. A notable fact about RIFs is that their non-tangential limits exist and are unimodular for every \(\zeta\in\mathbb{T}^{d}\) (Theorem C, [15]). In [8], the authors prove the following result, which gives us a straight-forward expression for the level sets of RIFs: **Theorem 3.1**.: _For fixed \(\alpha\in\mathbb{T}\), let_ \[\mathcal{L}_{\alpha}(\phi):=\{\zeta\in\mathbb{T}^{d}:\tilde{p}(\zeta)-\alpha p (\zeta)=0\}.\] _Then \(\mathcal{C}_{\alpha}(\phi)=\mathcal{L}_{\alpha}(\phi)\)._ Proof.: See Theorem 2.6 in [8]. Note that for any zero of \(p\), the equation \(\tilde{p}-\alpha p=0\) is trivially satisfied. This implies that all singularities of \(\phi\) on \(\mathbb{T}^{d}\) are contained in \(\mathcal{C}_{\alpha}(\phi)\). Moreover, observe that \(\{\phi^{*}=\alpha\}\) is in general not a closed set. However, by the theorem above, we may characterize \(\mathcal{C}_{\alpha}(\phi)\) as the zeros of a polynomial when \(\phi\) is a RIF. When \(d=2\), one can find an even nicer characterization of the unimodular level sets: **Lemma 3.2**.: _Let \(\phi=\frac{\tilde{p}}{p}\) be a RIF of bidegree \((m,n)\), and fix \(\alpha\in\mathbb{T}\). For any choice of \(\tau_{0}\in\mathbb{T}\), there exists a finite number of functions \(g_{1}^{\alpha},\ldots,g_{n}^{\alpha}\) defined on \(\mathbb{T}\) and analytic on \(\mathbb{T}\setminus\{\tau_{0}\}\) such that \(\mathcal{C}_{\alpha}(\phi)\) can be written as a union of curves_ \[\{(\zeta,g_{j}^{\alpha}(\zeta)):\zeta\in\mathbb{T}\},\quad j=1,\ldots,n,\] _potentially together with a finite number of vertical lines \(\zeta_{1}=\tau_{1},\ldots,\zeta_{1}=\tau_{k}\), where each \(\tau_{j}\in\mathbb{T}\)._ Proof.: See Lemma 2.5 in [3]. The proof is quite technical and will only be outlined here. Specifically, the authors fix a point \(\tau\in\mathbb{T}\) and construct a parameterization of \(\mathcal{C}_{\alpha}(\phi)\cap(I_{\tau}\times\mathbb{T})\) for a small interval \(I_{\tau}\ni\tau\) in \(\mathbb{T}\). 
When \(\tau\) is not the \(z_{1}\)-coordinate of a singularity of \(\phi\), one can use properties of RIFs to show that \(\phi(\tau,\cdot)\) satisfies the conditions of the Implicit Function Theorem. Hence the solutions to \(\phi^{*}=\alpha\) can be parameterized by smooth curves on the strip \(I_{\tau}\times\mathbb{T}\). The main issue arises from the fact that the curves \(g_{j}^{\alpha}\) might intersect at singularities of \(\phi\), in which case analyticity is not obvious. One must thus ensure that we can in some sense "pull apart" any crossed curves in \(\mathcal{C}_{\alpha}(\phi)\) and prove that they are each analytic when viewed separately. However, it is shown in [6] that near each singularity of \(\phi\), the level sets actually do consist of smooth curves. Hence, even in the case where \(\tau\) is the \(z_{1}\)-coordinate of a singularity, one obtains a smooth parameterization of \(I_{\tau}\times\mathbb{T}\). In the last step of the proof, the authors glue together the local parameterizations, which yields the final result. The analysis of Clark measures of RIFs must now be divided into two cases; when the unimodular constant \(\alpha\) is _generic_ versus _exceptional_ as defined below. **Definition 3.3**.: _We say that \(\alpha\in\mathbb{T}\) is an exceptional value if \(\phi(\tau,\zeta_{2})\equiv\alpha\) or \(\phi(\zeta_{1},\tau)\equiv\alpha\) for some \(\tau\in\mathbb{T}\). If \(\alpha\) is not exceptional, we say that it is generic._ The different cases stem from the characterization of \(\mathcal{C}_{\alpha}(\phi)\) in Lemma 3.2; if \(\alpha\) is an exceptional value, by the definition above, the level sets will contain lines of the form \(\{\zeta_{1}=\tau\}\) or \(\{\zeta_{2}=\tau\}\). If \(\alpha\) is generic, \(\mathcal{C}_{\alpha}(\phi)\) can be fully described by the graphs of the functions \(g_{1}^{\alpha},\dots,g_{n}^{\alpha}\). **Theorem 3.4**.: _Let \(\phi=\frac{\tilde{p}}{p}\) be a RIF of bidegree \((m,n)\) and \(\alpha\in\mathbb{T}\) a generic value for \(\phi\). Then the associated Clark measure \(\sigma_{\alpha}\) satisfies_ \[\int_{\mathbb{T}^{2}}f(\xi)d\sigma_{\alpha}(\xi)=\sum_{j=1}^{n}\int_{\mathbb{T }}f(\zeta,g_{j}^{\alpha}(\zeta))\frac{dm(\zeta)}{|\frac{\partial\phi}{ \partial z_{2}}(\zeta,g_{j}^{\alpha}(\zeta))|}\] _for all \(f\in C(\mathbb{T}^{2})\), where \(g_{1}^{\alpha},\dots,g_{n}^{\alpha}\) are the parameterizing functions from Lemma 3.2._ Proof.: See Theorem 3.3 in [3]. **Theorem 3.5**.: _Let \(\phi=\frac{\tilde{p}}{p}\) be a RIF of bidegree \((m,n)\) and \(\alpha\in\mathbb{T}\) an exceptional value for \(\phi\). Then, for \(f\in C(\mathbb{T}^{2})\), the associated Clark measure \(\sigma_{\alpha}\) satisfies_ \[\int_{\mathbb{T}^{2}}f(\xi)d\sigma_{\alpha}(\xi)=\sum_{j=1}^{n}\int_{\mathbb{T }}f(\zeta,g_{j}^{\alpha}(\zeta))\frac{dm(\zeta)}{|\frac{\partial\phi}{ \partial z_{2}}(\zeta,g_{j}^{\alpha}(\zeta))|}+\sum_{k=1}^{\ell}c_{k}^{\alpha} \int_{\mathbb{T}}f(\tau_{k},\zeta)dm(\zeta),\] _where \(g_{1}^{\alpha},\dots,g_{n}^{\alpha}\) are the parameterizing functions and \(\zeta_{1}=\tau_{1},\dots,\zeta_{1}=\tau_{\ell}\) the vertical lines in \(\mathcal{C}_{\alpha}(\phi)\) from Lemma 3.2, and \(c_{k}^{\alpha}:=1/|\frac{\partial\phi}{\partial z_{1}}(\tau_{k},z_{2})|>0\) are constants._ Proof.: See Theorem 3.8 in [3]. Note that in both the generic and exceptional case, the weights of the Clark measures along level curve components are generally given by one-variable functions. 
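To see Theorem 3.4 in action, the following sketch (our own toy example, not one of the examples treated in [3]) takes the RIF \(\phi=\tilde{p}/p\) with \(p(z)=2-z_{1}-z_{2}\), so that \(\tilde{p}(z)=2z_{1}z_{2}-z_{1}-z_{2}\). Its only singularity on \(\mathbb{T}^{2}\) is \((1,1)\), where the non-tangential value is \(-1\), so \(-1\) is the only exceptional value. Solving \(\tilde{p}-\alpha p=0\) for \(z_{2}\) gives the level curve \(g^{\alpha}(\zeta)=(2\alpha+(1-\alpha)\zeta)/(2\zeta-1+\alpha)\), and since \(\phi(0,0)=0\), the weight \(1/|\frac{\partial\phi}{\partial z_{2}}(\zeta,g^{\alpha}(\zeta))|\) should integrate to \(\sigma_{\alpha}(\mathbb{T}^{2})=1\) for generic \(\alpha\):

```python
# Numerical illustration of Theorem 3.4 for a simple RIF of our own choosing (it is not
# one of the paper's examples): phi = ptilde/p with p = 2 - z1 - z2, whose only
# singularity on T^2 is (1,1).  Since phi(0,0) = 0, sigma_alpha is a probability
# measure, so the weight 1/|d phi/d z2| along the level curve should integrate to 1.
import cmath
import math

alpha = 1j                        # a generic value (the only exceptional value here is -1)

def g(z1):                        # level curve: solve ptilde - alpha*p = 0 for z2
    return (2 * alpha + (1 - alpha) * z1) / (2 * z1 - 1 + alpha)

def phi(z1, z2):
    return (2 * z1 * z2 - z1 - z2) / (2 - z1 - z2)

def weight(z1):                   # 1/|d phi/d z2| on the curve; d phi/d z2 = -2(z1-1)^2/(2-z1-z2)^2
    z2 = g(z1)
    return abs(2 - z1 - z2) ** 2 / (2 * abs(z1 - 1) ** 2)

N = 20000
total = 0.0
for j in range(N):
    zeta = cmath.exp(2j * math.pi * (j + 0.5) / N)   # midpoint rule on T (normalized measure)
    assert abs(abs(g(zeta)) - 1) < 1e-6              # the level curve stays on the torus
    assert abs(phi(zeta, g(zeta)) - alpha) < 1e-6    # and phi really equals alpha along it
    total += weight(zeta) / N
print(total)                                         # ~ 1.0 = sigma_alpha(T^2)
```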
The fine structure of RIF weights is thoroughly analyzed in [3] -- we will only briefly touch upon this here. For \(\phi=\tilde{p}/p\), let \(W_{j}^{\alpha}(\zeta):=|\frac{\partial\phi}{\partial z_{2}}(\zeta,g_{j}^{ \alpha}(\zeta))|^{-1}\) denote the weights from Theorem 3.4 and Theorem 3.5. Then, by Lemma 5.1 in [3], these functions are in \(L^{1}(\mathbb{T})\) and may be expressed as \[W_{j}^{\alpha}(\zeta)=\frac{|p(\zeta,g_{j}^{\alpha}(\zeta)|}{|\frac{\partial \tilde{p}}{\partial z_{2}}(\zeta,g_{j}^{\alpha}(\zeta))-\alpha\frac{\partial p }{\partial z_{2}}(\zeta,g_{j}^{\alpha}(\zeta))|}.\] As we established earlier, if \((\tau,\gamma)\in\mathbb{T}^{2}\) is a singularity of \(\phi\), then \((\tau,\gamma)\in\mathcal{C}_{\alpha}(\phi)\) for every \(\alpha\in\mathbb{T}\). If \((\tau,\gamma)=(\tau,g_{j}^{\alpha}(\tau))\) for some level curve \(g_{j}^{\alpha}\), then \(p(\tau,g_{j}^{\alpha}(\tau))=0\) and one might expect \(W_{j}^{\alpha}\) to be zero at this point -- at least if there is no cancellation from the denominator. However, it could be that there are curve components \(g_{j}^{\alpha}\) in \(\mathcal{C}_{\alpha}(\phi)\) which do not satisfy \(g_{j}^{\alpha}(\tau)=\gamma\). In [3], the authors introduce the notion of _contact order_ and prove the following statement: _For all but finitely many \(\alpha\), if a branch \((\zeta,g_{j}^{\alpha}(\zeta))\) of \(\mathcal{C}_{\alpha}(\phi)\) passes through the singularity \((\tau,\gamma)\), the corresponding weight function \(W_{j}^{\alpha}\) has order of vanishing at \(\tau\) that corresponds to the contact order of the corresponding branch of \(\mathcal{Z}(\tilde{p})\) at \((\tau,\gamma)\)._ Here, \(\mathcal{Z}(\tilde{p})\) denotes the zero set of the polynomial \(\tilde{p}\). The precise statement can be found in Theorem 5.6 in [3]; the gist is that there are constants \(c,C\) such that one can bound \[0<c\leq\frac{W_{j}^{\alpha}(\zeta)}{|\zeta-\tau|^{K_{j}}}\leq C\] for all \(\zeta\) in a neighborhood of \(\tau\), where \(K_{j}\) is the contact order of \(\phi\) at \((\tau,\gamma)=(\tau,g_{j}^{\alpha}(\tau))\) associated with the branch \(g_{j}^{\alpha}\). Consequently, under these conditions, \(W_{j}^{\alpha}\) is a bounded function. The case of RIFs of bidegree \((n,1)\) specifically has been studied in great detail in [5]. For these functions, we obtain a more explicit version of Theorem 3.5. If \(\phi=\tilde{p}/p\) has bidegree \((n,1)\), we may write \[p(z)=p_{1}(z_{1})+z_{2}p_{2}(z_{1})\quad\text{and}\quad\tilde{p}(z)=z_{2} \tilde{p}_{1}(z_{1})+\tilde{p}_{2}(z_{1})\] for reflections \(\tilde{p}_{i}=z_{1}^{n}\overline{p}_{i}(1/\overline{z}_{i})\). In this case, solving \(\phi^{*}=\alpha\) for \(z_{2}\) yields \(z_{2}=\frac{1}{B_{\alpha(z_{1})}}\), where \[B_{\alpha}(z):=\frac{\tilde{p}_{1}(z)-\alpha p_{2}(z)}{\alpha p_{1}(z)-\tilde{ p}_{2}(z)}.\] Moreover, define \[W_{\alpha}(\zeta):=\frac{|p_{1}(\zeta)|^{2}-|p_{2}(\zeta)|^{2}}{|\tilde{p}_{1}( \zeta)-\alpha p_{2}(\zeta)|^{2}}.\] Then, by Theorem 1.2 in [5], we have \[\int_{\mathbb{T}^{2}}f(\xi)d\sigma_{\alpha}(\xi)=\int_{\mathbb{T}}f(\zeta, \overline{B_{\alpha}(\zeta)})W_{\alpha}(\zeta)dm(\zeta)+\sum_{k=1}^{\ell}c_{k} ^{\alpha}\int_{\mathbb{T}}f(\tau_{k},\zeta)dm(\zeta)\] with \(c_{k}^{\alpha}=1/|\frac{\partial\phi}{\partial z_{1}}(\tau_{k},z_{2})|\) is non-zero if and only if \(\alpha\) is an exceptional value. 
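The same toy RIF can be run through the bidegree-\((n,1)\) recipe above: with \(p(z)=2-z_{1}-z_{2}\) we get \(p_{1}=2-z_{1}\), \(p_{2}=-1\), \(\tilde{p}_{1}=2z_{1}-1\), \(\tilde{p}_{2}=-z_{1}\), so \(1/B_{\alpha}(\zeta)\) recovers the level curve \(g^{\alpha}(\zeta)\) from the previous sketch and \(W_{\alpha}(\zeta)=(|2-\zeta|^{2}-1)/|2\zeta-1+\alpha|^{2}\). The sketch below (again ours, purely illustrative) checks numerically that this closed form agrees with \(1/|\frac{\partial\phi}{\partial z_{2}}|\) and integrates to \(1\):

```python
# The same toy RIF run through the bidegree-(n,1) recipe (our own illustration):
# p1 = 2 - z1, p2 = -1, ptilde1 = 2*z1 - 1, ptilde2 = -z1.
import cmath
import math

alpha = 1j

def B(z):                                     # B_alpha = (ptilde1 - alpha*p2)/(alpha*p1 - ptilde2)
    return (2 * z - 1 + alpha) / (2 * alpha + (1 - alpha) * z)

def W(zeta):                                  # (|p1|^2 - |p2|^2)/|ptilde1 - alpha*p2|^2
    return (abs(2 - zeta) ** 2 - 1) / abs(2 * zeta - 1 + alpha) ** 2

def W_from_theorem_3_4(zeta):                 # 1/|d phi/d z2| along z2 = 1/B(zeta), as in the previous sketch
    z2 = 1 / B(zeta)
    return abs(2 - zeta - z2) ** 2 / (2 * abs(zeta - 1) ** 2)

N = 20000
mass = 0.0
for j in range(N):
    zeta = cmath.exp(2j * math.pi * (j + 0.5) / N)
    assert abs(W(zeta) - W_from_theorem_3_4(zeta)) < 1e-8   # the two weight formulas agree
    mass += W(zeta) / N
print(mass)                                   # ~ 1.0, again because phi(0,0) = 0
```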
It is worth noting that for any RIF \(\phi\) of bidegree \((n,1)\), a value \(\alpha\in\mathbb{T}\) is exceptional if and only if it is the non-tangential value of \(\phi\) at some singularity (see Section 3 of [5]). **Example 3.6**.: For an explicit example, we use Example 5.2 from [5]: let \(\phi=\frac{\tilde{p}}{p}\) for \[p(z)=4-z_{2}-3z_{1}-z_{1}z_{2}+z_{1}^{2}\quad\text{and}\quad\ \tilde{p}(z)=4z_{1}^{2}z_{2}-z_{1}^{2}-3z_{1}z_{2}-z_{1}+z_{2}.\] Observe that \(\phi\) has only one singularity, which occurs at \((1,1)\). For each \(\alpha\in\mathbb{T}\), the formulas above yield \[B_{\alpha}(z)=\frac{4z_{1}^{2}-3z_{1}+1+\alpha+\alpha z_{1}}{4\alpha-3z_{1} \alpha+z_{1}^{2}\alpha+z_{1}^{2}+z_{1}}\] and \[W_{\alpha}(\zeta)=\frac{4|\zeta-1|^{4}}{|4\zeta^{2}-3\zeta+1+ \alpha+\alpha\zeta|^{2}}.\] We see that \(\alpha=-1\) is an exceptional value, as \(\phi=-1\) is solved by \((1,z_{2})\) as well as \(\big{(}z_{1},\frac{1}{B_{-1}(z_{1})}\big{)}=(z_{1},1/z_{1})\). Since \(\phi\) only has one singularity, this point gives rise to the only exceptional value and \(\phi^{*}(1,1)=-1\). Hence, for \(\alpha\neq-1\), we have \[\int_{\mathbb{T}^{2}}f(\xi)d\sigma_{\alpha}(\xi)=\int_{\mathbb{T}}f(\zeta, \overline{B_{\alpha}(\zeta)})\frac{4|\zeta-1|^{4}}{|4\zeta^{2}-3\zeta+1+ \alpha+\alpha\zeta|^{2}}dm(\zeta).\] Moreover, we see that \(W_{-1}(\zeta)=\frac{1}{4}|\zeta-1|^{2}\) and \(\frac{\partial\phi}{\partial z_{1}}(1,z_{2})=-2\), which yields \[\int_{\mathbb{T}^{2}}f(\xi)d\sigma_{-1}(\xi)=\frac{1}{4}\int_{ \mathbb{T}}f(\zeta,\overline{\zeta})|\zeta-1|^{2}dm(\zeta)+\frac{1}{2}\int_{ \mathbb{T}}f(1,\zeta)dm(\zeta)\] for \(\alpha=-1\). In Figure 1, we have plotted the level curves corresponding to different \(\alpha\)-values. ## 4. Multiplicative embeddings Given an inner function \(\phi\) in one complex variable, we define the multiplicative embedding \[\Phi(z)=\Phi(z_{1},z_{2}):=\phi(z_{1}z_{2}),\quad z\in\mathbb{D}^{2}.\] The function defined by \((z_{1},z_{2})\mapsto z_{1}z_{2}\) maps \(\mathbb{D}^{2}\) to \(\mathbb{D}\), and so \(\phi\) being an inner function implies that \(\Phi\) is inner as well. In the following proposition, we characterize the support set of \(\Phi\) with the help of the original function \(\phi\). **Proposition 4.1**.: _Let \(\phi(z)\) be an inner function in one variable, and \(\alpha\in\mathbb{T}\). Define \(\Phi(z_{1},z_{2}):=\phi(z_{1}z_{2})\). Then_ \[\mathcal{C}_{\alpha}(\Phi)=\bigcup_{\zeta\in\mathcal{C}_{\alpha}( \phi)}\ \{(z,\zeta\overline{z}):z\in\mathbb{T}\}.\] Proof.: First, for ease of notation, define \[\mathcal{C}^{\prime}_{\alpha}(f):=\left\{\zeta\in\mathbb{T}^{d}:\ \lim_{r\to 1^{-}}f(r\zeta)= \alpha\right\}\] for any inner function \(f\), so that \(\mathcal{C}_{\alpha}(f)=\mathrm{Clos}(\mathcal{C}^{\prime}_{\alpha}(f))\). Let \(\zeta\in\mathcal{C}^{\prime}_{\alpha}(\phi)\). Then we know that \(\lim_{r\to 1^{-}}\phi(r\zeta)=\alpha.\) For every \(z\in\mathbb{T}\), \[\lim_{r\to 1^{-}}\Phi(r(z,\zeta\overline{z}))=\lim_{r\to 1^{-}}\phi(r^{2} \zeta z\overline{z})=\lim_{r\to 1^{-}}\phi(r\zeta)=\alpha,\] implying that \((z,\zeta\overline{z})\in\mathcal{C}_{\alpha}(\Phi)\). Thus \[\bigcup_{\zeta\in\mathcal{C}^{\prime}_{\alpha}(\phi)}\ \{(z,\zeta \overline{z}):z\in\mathbb{T}\}\subset\mathcal{C}_{\alpha}(\Phi).\] To extend this to a union over \(\mathcal{C}_{\alpha}(\phi)\), let \(\zeta\in\mathcal{C}_{\alpha}(\phi)\). Then there exists some sequence \((\zeta_{n})_{n\geq 1}\) in \(\mathcal{C}^{\prime}_{\alpha}(\phi)\) that converges to \(\zeta\) as \(n\) tends to infinity. 
This also implies that for any \(z\in\mathbb{T}\), \((z,\zeta_{n}\overline{z})\to(z,\zeta\overline{z})\in\mathcal{C}_{\alpha}(\Phi)\) as \(n\to\infty\). Hence, \[\bigcup_{\zeta\in\mathcal{C}_{\alpha}(\phi)}\ \{(z,\zeta\overline{z}):z\in \mathbb{T}\}\subset\mathcal{C}_{\alpha}(\Phi).\] Conversely, let \((z_{1},z_{2})\in\mathcal{C}^{\prime}_{\alpha}(\Phi)\), so \[\lim_{r\to 1^{-}}\Phi(r(z_{1},z_{2}))=\lim_{r\to 1^{-}}\phi(r^{2}z_{1}z_{2})=\alpha.\] Then \(\zeta:=z_{1}z_{2}\in\mathcal{C}_{\alpha}(\phi)\). Since \(z_{1},z_{2}\in\mathbb{T}\), we may write \[z_{2}=\frac{\zeta}{z_{1}}=\zeta\overline{z_{1}},\] so \((z_{1},z_{2})=(z_{1},\zeta\overline{z_{1}})\in\{(z,\zeta\overline{z}):z\in \mathbb{T}\}\). Hence, \[\mathcal{C}^{\prime}_{\alpha}(\Phi)\subset\bigcup_{\zeta\in\mathcal{C}_{\alpha }(\phi)}\ \{(z,\zeta\overline{z}):z\in\mathbb{T}\}.\] Now let \((z_{1},z_{2})\in\mathcal{C}_{\alpha}(\Phi)\). Then there is some sequence of \((z_{1,n},z_{2,n})\) in \(\mathcal{C}^{\prime}_{\alpha}(\Phi)\) converging to \((z_{1},z_{2})\) as \(n\to\infty\). But this implies that \(z_{1,n}z_{2,n}\to z_{1}z_{2}\in\mathcal{C}_{\alpha}(\phi)\), and the same argument as above then yields \[\mathcal{C}_{\alpha}(\Phi)\subset\bigcup_{\zeta\in\mathcal{C}_{\alpha}(\phi)} \{(z,\zeta\overline{z}):z\in\mathbb{T}\}.\] As in the RIF case, the unimodular level sets of this class of functions may be expressed as unions of curves. However, as opposed to in Lemma 3.2, the unions need not be finite -- or even countable -- here. **Remark 4.2**.: Observe that by Lemma 2.2 in [3], any positive, pluriharmonic, \(m_{d}\)-singular probability measure defines the Clark measure of some inner function. Hence, there exist Clark measures with significantly more intricate supports than what we have seen so far. A natural next step is to investigate whether we can characterize the density of a given Clark measure \(\tau_{\alpha}\) on the antidiagonals in \(\mathcal{C}_{\alpha}(\Phi)\). We do this in the next result and its subsequent corollary: **Theorem 4.3**.: _Let \(\phi(z)\) be an inner function in one variable, with Clark measure \(\sigma_{\alpha}\) for some unimodular constant \(\alpha\). Let \(\tau_{\alpha}\) be the corresponding Clark measure of \(\Phi(z_{1},z_{2}):=\phi(z_{1}z_{2})\). Then, for any function \(f\in C(\mathbb{T}^{2})\),_ \[\int_{\mathbb{T}^{2}}f(\xi)d\tau_{\alpha}(\xi)=\int_{\mathbb{T}}\bigg{(}\int_ {\mathbb{T}}f(\zeta,x\overline{\zeta})d\sigma_{\alpha}(x)\bigg{)}dm(\zeta).\] Proof.: We first prove this in the case when \(f\) is the product of one-variable Poisson kernels. Fixing \(z_{2}\in\mathbb{D}\), let \[u_{z_{2}}(z_{1}):=\frac{1-|\phi(z_{1}z_{2})|^{2}}{|\alpha-\phi(z_{1}z_{2})|^{2 }}=\int_{\mathbb{T}^{2}}P_{z_{1}}(\xi_{1})P_{z_{2}}(\xi_{2})d\tau_{\alpha}( \xi),\quad z_{1}\in\mathbb{D}.\] As the middle expression is pluriharmonic, \(u_{z_{2}}\) must be harmonic on \(\mathbb{D}\). Since \(z_{1}z_{2}\in\mathbb{D}\) for any \(z_{1}\in\overline{\mathbb{D}}\), and \(\phi\) is analytic (and hence continuous) on \(\mathbb{D}\), we see that \(\Phi(z_{1},z_{2})=\phi(z_{1}z_{2})\) as a function of \(z_{1}\) is continuous on \(\overline{\mathbb{D}}\). Moreover, by the maximum principle, \(|\phi|<1\) on the unit disc, which implies that the denominator will always be non-zero. We conclude that \(u_{z_{2}}\) is continuous in \(\overline{\mathbb{D}}\), and we may thus apply the Poisson integral formula: \[u_{z_{2}}(z_{1})=\int_{\mathbb{T}}\frac{1-|\phi(\zeta z_{2})|^{2}}{|\alpha- \phi(\zeta z_{2})|^{2}}P_{z_{1}}(\zeta)dm(\zeta). 
\tag{4}\] Moreover, for \(\zeta\in\mathbb{T}\), we see that \[\int_{\mathbb{T}}P_{z}(\zeta,x\overline{\zeta})d\sigma_{\alpha}(x) =\int_{\mathbb{T}}P_{z_{1}}(\zeta)P_{z_{2}}(x\overline{\zeta})d \sigma_{\alpha}(x)\] \[=P_{z_{1}}(\zeta)\int_{\mathbb{T}}\frac{1-|z_{2}|^{2}}{|x \overline{\zeta}-z_{2}|^{2}}d\sigma_{\alpha}(x)\] \[=P_{z_{1}}(\zeta)\int_{\mathbb{T}}\frac{1-|\zeta z_{2}|^{2}}{|x- \zeta z_{2}|^{2}}d\sigma_{\alpha}(x)\] \[=P_{z_{1}}(\zeta)\int_{\mathbb{T}}P_{\zeta z_{2}}(x)d\sigma_{ \alpha}(x)\] \[=P_{z_{1}}(\zeta)\frac{1-|\phi(\zeta z_{2})|^{2}}{|\alpha-\phi( \zeta z_{2})|^{2}},\] where we use the definition of the Clark measure \(\sigma_{\alpha}\) in the last step. By integrating the above and applying (4), we get \[\int_{\mathbb{T}}\bigg{(}\int_{\mathbb{T}}P_{z}(\zeta,x\overline{ \zeta})d\sigma_{\alpha}(x)\bigg{)}dm(\zeta) =\int_{\mathbb{T}}\frac{1-|z_{1}|^{2}}{|\zeta-z_{1}|^{2}}\frac{1- |\phi(\zeta z_{2})|^{2}}{|\alpha-\phi(\zeta z_{2})|^{2}}dm(\zeta)\] \[=\frac{1-|\phi(z_{1}z_{2})|^{2}}{|\alpha-\phi(z_{1}z_{2})|^{2}}\] \[=\int_{\mathbb{T}^{2}}P_{z_{1}}(\xi_{1})P_{z_{2}}(\xi_{2})d\tau_{ \alpha}(\xi).\] Now apply Lemma 2.2 to obtain the final result. **Remark 4.4**.: It is a priori not obvious that \(f(\zeta,x\overline{\zeta})\) is integrable with respect to \(\sigma_{\alpha}\). Integrability is ensured by the fact that \(f(\zeta,x\overline{\zeta})\) is continuous on \(\mathbb{T}\), as it is composed by two functions \(f\) and \(g_{x}(z):=(z,x\overline{z})\) which are continuous there. Since \(\sigma_{\alpha}\) is a finite, positive Borel measure on a compact space, all continuous functions on said space are integrable with respect to \(\sigma_{\alpha}\). In particular, when the Clark measures associated to \(\phi\) are discrete, one gets the following result: **Corollary 4.4.1**.: _Let \(\phi:\mathbb{D}\to\mathbb{C}\) be an inner function with Clark measure \(\sigma_{\alpha}\) for some unimodular constant \(\alpha\), and let \(\tau_{\alpha}\) be the Clark measure of \(\Phi(z_{1},z_{2}):=\phi(z_{1}z_{2})\). If \(\sigma_{\alpha}\) is supported on a countable collection of points \(\{\eta_{k}\}_{k\geq 1}\subset\mathcal{C}_{\alpha}(\phi)\), then_ \[\int_{\mathbb{T}^{2}}f(\xi)d\tau_{\alpha}(\xi)=\sum_{k\geq 1}\int_{\mathbb{T}}f (\zeta,\eta_{k}\overline{\zeta})\frac{dm(\zeta)}{|\phi^{\prime}(\eta_{k})|}\] _for all \(f\in C(\mathbb{T}^{2})\)._ Proof.: By Proposition 2.5, \(\sigma_{\alpha}\) having a point mass at some \(\eta_{k}\) implies that \(\sigma_{\alpha}(\{\eta_{k}\})=1/|\phi^{\prime}(\eta_{k})|\). 
Then, following the steps in the proof of Theorem 4.3, \[\int_{\mathbb{T}}P_{z}(\zeta,x\overline{\zeta})d\sigma_{\alpha}(x)=P_{z_{1}}(\zeta)\int_{\mathbb{T}}P_{\zeta z_{2}}(x)\,d\Bigg{(}\sum_{k\geq 1}\frac{1}{|\phi^{\prime}(\eta_{k})|}\delta_{\eta_{k}}\Bigg{)}(x).\] This then reduces to \[P_{z_{1}}(\zeta)\sum_{k\geq 1}\frac{1}{|\phi^{\prime}(\eta_{k})|}P_{\zeta z_{2}}(\eta_{k})=\sum_{k\geq 1}\frac{1}{|\phi^{\prime}(\eta_{k})|}P_{z_{1}}(\zeta)P_{z_{2}}(\eta_{k}\overline{\zeta}).\] Hence, \[\int_{\mathbb{T}}P_{z}(\zeta,x\overline{\zeta})d\sigma_{\alpha}(x)=\sum_{k\geq 1}\frac{1}{|\phi^{\prime}(\eta_{k})|}P_{z}(\zeta,\eta_{k}\overline{\zeta}).\] Integrating this identity over \(\mathbb{T}\) and applying Theorem 4.3 then shows that \[\int_{\mathbb{T}^{2}}P_{z}(\xi)d\tau_{\alpha}(\xi)=\int_{\mathbb{T}}\Bigg{(}\sum_{k\geq 1}\frac{1}{|\phi^{\prime}(\eta_{k})|}P_{z}(\zeta,\eta_{k}\overline{\zeta})\Bigg{)}dm(\zeta)=\sum_{k\geq 1}\int_{\mathbb{T}}P_{z}(\zeta,\eta_{k}\overline{\zeta})\frac{dm(\zeta)}{|\phi^{\prime}(\eta_{k})|},\] where we have used positivity of the summands in the last step. The result now follows from Lemma 2.2. It is interesting to compare the above result to the corresponding theorems, Theorem 3.4 and Theorem 3.5, for rational inner functions. In the RIF case, we saw that the weights of Clark measures along the curves in the unimodular level sets were one-variable functions. Corollary 4.4.1 shows that for the multiplicative embeddings, the weights are simpler than their RIF counterparts -- they are constant along each curve in the level sets. This implies that given any univariate inner function \(\phi\), regardless of its complexity, the associated Clark measures of \(\phi(z_{1}z_{2})\) will always be very "well-behaved", in the sense that they are supported on straight lines and -- when the Clark measures of \(\phi\) are discrete -- have constant density along each such line. **Example 4.5**.: Recall the function \[\phi(z):=\exp\biggl{(}-\frac{1+z}{1-z}\biggr{)},\quad z\in\mathbb{D},\] from Example 2.7. We saw there that the solutions to \(\phi^{*}(\zeta)=1\) are given by \[\eta_{k}=\frac{2\pi k-i}{2\pi k+i},\quad k\in\mathbb{Z},\] and \[\frac{1}{|\phi^{\prime}(\eta_{k})|}=\frac{2}{1+4\pi^{2}k^{2}}.\] Now consider \(\Phi(z_{1},z_{2}):=\phi(z_{1}z_{2})\), which appears in e.g. Example 13.1 in [7]. Applying Corollary 4.4.1 for the Clark measure \(\tau_{1}\) of \(\Phi\) then results in \[\int_{\mathbb{T}^{2}}f(\xi)d\tau_{1}(\xi)=\sum_{k\in\mathbb{Z}}\frac{2}{1+4\pi^{2}k^{2}}\int_{\mathbb{T}}f(\zeta,\eta_{k}\overline{\zeta})dm(\zeta)\] for \(f\in C(\mathbb{T}^{2})\). This marks our first example of a non-rational bivariate function for which we can explicitly characterize the Clark measures. Moreover, this is our first example of an inner function whose unimodular level sets consist of infinitely many curves, as opposed to the RIF case. We now extend this theory to \(d\) variables. For an inner function \(\phi\) in one variable, define the multiplicative embedding \[\Phi(z):=\phi(z_{1}z_{2}\cdots z_{d}),\quad z\in\mathbb{D}^{d}.\] By the same argument as for two variables, this is an inner function. In the next result, we prove a \(d\)-dimensional version of Theorem 4.3: **Theorem 4.6**.: _Let \(\phi(z):\mathbb{D}\to\mathbb{C}\) be an inner function with Clark measure \(\sigma_{\alpha}\) for some unimodular constant \(\alpha\), and let \(\tau_{\alpha}\) be the Clark measure of \(\Phi(z_{1},\dots,z_{d}):=\phi(z_{1}z_{2}\cdots z_{d})\). 
Then_ \[\int_{\mathbb{T}^{d}}f(\xi)d\tau_{\alpha}(\xi)=\int_{\mathbb{T}}\int_{ \mathbb{T}}\cdots\int_{\mathbb{T}}f(\zeta_{1},\dots,\zeta_{d-1},x\overline{ \zeta_{1}\cdots\zeta_{d-1}})d\sigma_{\alpha}(x)dm(\zeta_{d-1})\cdots dm(\zeta _{1}).\] Proof.: We prove the result by induction, where Theorem 4.3 marks our base case. As usual, we prove the formula for Poisson kernels first. We begin by introducing some notation: for \(n<m\), let \[\mathbf{z}_{n}^{m}:=z_{n}\cdots z_{m}\] for \(z_{j}\in\mathbb{D},n\leq j\leq m\). Note that \(\Phi(z)=\phi(\mathbf{z}_{1}^{d})\) per definition. Suppose the formula holds for \(\phi(z_{1}\cdots z_{d-1})=\phi(\mathbf{z}_{1}^{d-1})\), in which case \[\frac{1-|\phi(\mathbf{z}_{1}^{d-1})|^{2}}{|\alpha-\phi(\mathbf{z}_{1}^{d-1})|^ {2}}=\int_{\mathbb{T}}\int_{\mathbb{T}}\cdots\int_{\mathbb{T}}P_{z_{1}}(\zeta _{1})\cdots P_{z_{d-1}}(x\overline{\zeta_{1}\zeta_{2}\cdots\zeta_{d-2}})d \sigma_{\alpha}(x)dm(\zeta_{d-2})\cdots dm(\zeta_{1}).\] We now want to show the result for \(d\) variables. Fix \(z_{2},\dots,z_{d}\) and define the one-variable function \[u(z_{1}):=\frac{1-|\phi(\mathbf{z}_{1}^{d})|^{2}}{|\alpha-\phi(\mathbf{z}_{1}^ {d})|^{2}}=\frac{1-|\phi(z_{1}\cdot\mathbf{z}_{2}^{d})|^{2}}{|\alpha-\phi(z_{1 }\cdot\mathbf{z}_{2}^{d})|^{2}},\quad z_{1}\in\mathbb{D}.\] By the same argument as in the proof of Theorem 4.3, this function is harmonic on the unit disc and continuous on its closure. Hence, we may apply the Poisson integral formula: \[\frac{1-|\phi(\mathbf{z}_{1}^{d})|^{2}}{|\alpha-\phi(\mathbf{z}_{1}^{d})|^{2} }=\int_{\mathbb{T}^{d}}P_{z_{1}}(\zeta_{1})\frac{1-|\phi(\mathbf{z}_{2}^{d} \cdot\zeta_{1})|^{2}}{|\alpha-\phi(\mathbf{z}_{2}^{d}\cdot\zeta_{1})|^{2}} \tag{5}\] For fixed \(\zeta_{1}\in\mathbb{T}\), define \(\phi_{\zeta_{1}}(\mathbf{z}_{2}^{d}):=\phi(\mathbf{z}_{2}^{d}\cdot\zeta_{1})\). Then, by our induction assumption, it holds that \[\frac{1-|\phi_{\zeta_{1}}(\mathbf{z}_{2}^{d})|^{2}}{|\alpha-\phi_ {\zeta_{1}}(\mathbf{z}_{2}^{d})|^{2}} =\int_{\mathbb{T}}\int_{\mathbb{T}}\cdots\int_{\mathbb{T}}P_{z_{2 }}(\zeta_{2})\cdots P_{\zeta_{1}z_{d-1}}(x\overline{\zeta_{2}\cdots\zeta_{d-1} })d\sigma_{\alpha}(x)dm(\zeta_{d-1})\cdots dm(\zeta_{2})\] \[=\int_{\mathbb{T}}\int_{\mathbb{T}}\cdots\int_{\mathbb{T}}P_{z_{ 2}}(\zeta_{2})\cdots P_{z_{d-1}}(x\overline{\zeta_{1}\zeta_{2}\cdots\zeta_{d-1} })d\sigma_{\alpha}(x)dm(\zeta_{d-1})\cdots dm(\zeta_{2}).\] Finally, by (5), we arrive at \[\frac{1-|\phi(\mathbf{z}_{1}^{d})|^{2}}{|\alpha-\phi(\mathbf{z}_{1}^{d})|^{2}}= \int_{\mathbb{T}}\int_{\mathbb{T}}\cdots\int_{\mathbb{T}}P_{z_{1}}(\zeta_{1}) \cdots P_{z_{d-1}}(x\overline{\zeta_{1}\cdots\zeta_{d-1}})d\sigma_{\alpha}(x)dm (\zeta_{d-1})\cdots dm(\zeta_{1}),\] as desired. Application of Lemma 2.2 yields the final result. Similarly to in the two-variable case, this gives us a sense of the geometry of \(\operatorname{supp}\{\tau_{\alpha}\}\). For example, for \(d=3\) and \(x=e^{i\nu}\in\mathbb{T}\), the set \[\{(\zeta_{1},\zeta_{2},x\overline{\zeta_{1}\zeta_{2}}):\zeta_{1},\zeta_{2}\in \mathbb{T}\}=\{(e^{is},e^{it},e^{i(\nu-s-t)}):-\pi\leq s,t\leq\pi\}\] has logarithmic coordinates \((s,t,\nu-s-t)\), which defines a plane in \(\mathbb{T}^{3}\). 
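The integral formula above is easy to test numerically in the simplest situation. The following sketch is not part of the original argument; it assumes \(\phi(z)=z\), for which \(\sigma_{\alpha}\) is the unit point mass at \(\alpha\), and the normalized measure \(dm=d\theta/2\pi\), and it compares both sides of the two-variable identity (the \(d=2\) base case used in the induction) on Poisson kernels for a few randomly chosen \(z_{1},z_{2}\in\mathbb{D}\) and \(\alpha\in\mathbb{T}\):

```python
import numpy as np

# Sanity check of Theorem 4.3 for phi(z) = z, where sigma_alpha = delta_alpha.
# The claimed formula then reduces to
#   (1 - |z1 z2|^2) / |alpha - z1 z2|^2
#     = int_T P_{z1}(zeta) * P_{z2}(alpha * conj(zeta)) dm(zeta).
# Grid size and test points are arbitrary choices for this illustration.

def poisson(z, zeta):
    """One-variable Poisson kernel P_z(zeta) for |z| < 1, |zeta| = 1."""
    return (1.0 - abs(z) ** 2) / np.abs(zeta - z) ** 2

N = 200_000
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
zeta = np.exp(1j * theta)
dm = 1.0 / N                               # normalized Lebesgue measure on T

rng = np.random.default_rng(0)
for _ in range(3):
    z1, z2 = rng.uniform(0.1, 0.9, 2) * np.exp(2j * np.pi * rng.uniform(size=2))
    alpha = np.exp(2j * np.pi * rng.uniform())
    lhs = (1.0 - abs(z1 * z2) ** 2) / abs(alpha - z1 * z2) ** 2
    rhs = np.sum(poisson(z1, zeta) * poisson(z2, alpha * np.conj(zeta))) * dm
    print(f"lhs = {lhs:.8f}   rhs = {rhs:.8f}")
```

Since the span of Poisson kernels is dense in \(C(\mathbb{T}^{2})\), agreement on Poisson kernels is the essential content of the identity; the same comparison, with the inner integral replaced by the sum from Corollary 4.4.1, can be used to test discrete examples such as Example 4.5.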
As in the case of \(d=2\), we get the following consequence when \(\sigma_{\alpha}\) is discrete: **Corollary 4.6.1**.: _Let \(\phi(z):\mathbb{D}\to\mathbb{C}\) be an inner function with Clark measure \(\sigma_{\alpha}\) for some unimodular constant \(\alpha\), and let \(\tau_{\alpha}\) be the Clark measure of \(\Phi(z_{1},\dots,z_{d}):=\phi(z_{1}z_{2}\cdots z_{d})\). If \(\sigma_{\alpha}\) is supported on a countable collection of points \(\{\eta_{k}\}_{k\geq 1}\subset\mathcal{C}_{\alpha}(\phi)\), then_ \[\int_{\mathbb{T}^{d}}f(\xi)d\tau_{\alpha}(\xi)=\sum_{k\geq 1}\int_{\mathbb{T}} \cdots\int_{\mathbb{T}}f(\zeta_{1},\dots,\zeta_{d-1},\overline{\zeta_{1} \cdots\zeta_{d-1}}\eta_{k})dm(\zeta_{d-1})\cdots\frac{dm(\zeta_{1})}{|\phi^{ \prime}(\eta_{k})|}.\] ## 5. Product functions Given one-variable inner functions \(\phi\) and \(\psi\), define the product function \[\Psi(z):=\phi(z_{1})\psi(z_{2}),\quad z\in\mathbb{D}^{2}.\] Then \(\Psi\) is an inner function in \(\mathbb{D}^{2}\). The analysis of the Clark measures of \(\Psi\) is not as straightforward as for the multiplicative embeddings. A key argument in the proofs of Theorem 3.4, Theorem 3.5 and 4.3 is the Poisson integral formula. To use this for \(\Psi(z_{1},z_{2})\), we require that for fixed \(z_{2}\in\mathbb{D}\), the function \[u_{z_{2}}(z_{1}):=\frac{1-|\phi(z_{1})\psi(z_{2})|^{2}}{|\alpha-\phi(z_{1}) \psi(z_{2})|^{2}}\] is continuous on the closed unit disc. However, for a general inner function \(\phi\), its non-tangential limits need only exist \(m\)-almost everywhere on \(\mathbb{T}\). Even if they do exist on the entire unit circle, \(\phi^{*}\) need not be continuous. For this reason, we introduce the function \(\Psi_{r}(z):=\phi(rz_{1})\psi(z_{2})\) for \(0<r<1\). This is not an inner function, as \(|\phi(rz_{1})|<1\) on the unit circle. However, since \(\Psi_{r}\to\Psi\) as \(r\to 1^{-}\), we can investigate the Clark measures of \(\Psi\) via \(\Psi_{r}\). **Theorem 5.1**.: _Let \(\Psi(z_{1},z_{2})=\phi(z_{1})\psi(z_{2})\) for one-variable inner functions \(\phi\) and \(\psi\), such that A. \(\psi\) extends to be continuously differentiable on \(\mathbb{T}\) except at a finite set of points, B. the solutions to \(\psi^{*}=\beta\) for \(\beta\in\mathbb{T}\) can be parameterized by functions \(\{g_{k}(\beta)\}_{k\geq 1}\) which are continuous in \(\beta\) on \(\mathbb{T}\) except at a finite set of points, C. for every \(\beta\in\mathbb{T}\), there are no solutions to \(\psi^{*}=\beta\) with infinite multiplicity, and D. the Clark measures of \(\psi\) are all discrete. Then the Clark measures of \(\Psi\) satisfy_ \[\int_{\mathbb{T}^{2}}f(\xi)d\sigma_{\alpha}(\xi)=\sum_{k\geq 1}\int_{\mathbb{T}} f(\zeta,g_{k}(\alpha\overline{\phi^{*}(\zeta)}))\frac{dm(\zeta)}{|\psi^{ \prime}(g_{k}(\alpha\overline{\phi^{*}(\zeta)}))|}\] _for \(f\in C(\mathbb{T}^{2})\)._ **Remark 5.2**.: The assumptions A-D are most likely excessive, but we impose them here to get an easy guarantee that the right-hand side is finite and integrable. Nevertheless, we will see some interesting examples of product functions and their Clark measures for which Theorem 5.1 can be applied, e.g. when \(\phi(z_{1})=\exp\bigl{(}-\frac{1+z_{1}}{1-z_{1}}\bigr{)}\). **Remark 5.3**.: Observe that there exist examples of inner functions where the Clark measure \(\sigma_{\alpha}\) is discrete for one specific \(\alpha\)-value but \(\sigma_{\beta}\) is singular continuous for \(\beta\in\mathbb{T}\setminus\{\alpha\}\), and vice versa. See Example 1 and 2 in [10]. 
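As a quick numerical illustration of the statement (it plays no role in the proof that follows), one can take \(\psi(z_{2})=z_{2}\), so that there is a single parameterizing function \(g_{1}(\beta)=\beta\) and \(|\psi^{\prime}|\equiv 1\), and let \(\phi\) be the singular inner function of Example 2.7. On Poisson kernels, Theorem 5.1 then predicts \(\int_{\mathbb{T}}P_{z_{1}}(\zeta)P_{z_{2}}(\alpha\overline{\phi^{*}(\zeta)})\,dm(\zeta)=\frac{1-|\phi(z_{1})z_{2}|^{2}}{|\alpha-\phi(z_{1})z_{2}|^{2}}\), which the following sketch checks by direct quadrature; the boundary values \(\phi^{*}\) oscillate rapidly near \(\zeta=1\), so the agreement is only as good as the discretization, and the test points are arbitrary:

```python
import numpy as np

# Illustration of Theorem 5.1 with psi(z2) = z2 and
# phi(z) = exp(-(1 + z)/(1 - z)), the inner function of Example 2.7.
# Test point values are arbitrary choices for this sketch.

def phi(z):
    return np.exp(-(1.0 + z) / (1.0 - z))

def poisson(z, w):
    """One-variable Poisson kernel P_z(w) for |z| < 1, |w| = 1."""
    return (1.0 - abs(z) ** 2) / np.abs(w - z) ** 2

N = 2_000_000
theta = (np.arange(N) + 0.5) * (2.0 * np.pi / N)   # midpoint grid, avoids zeta = 1
zeta = np.exp(1j * theta)

z1, z2, alpha = -0.5 + 0.2j, 0.5j, np.exp(0.7j)
lhs = (1.0 - abs(phi(z1) * z2) ** 2) / abs(alpha - phi(z1) * z2) ** 2
rhs = np.sum(poisson(z1, zeta) * poisson(z2, alpha * np.conj(phi(zeta)))) / N
print(f"lhs = {lhs:.5f}   rhs = {rhs:.5f}")
```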
Proof.: Define \(\Psi_{r}(z_{1},z_{2}):=\phi(rz_{1})\psi(z_{2})\) for \(0<r<1\), and note that for each \(z\in\mathbb{D}^{2}\), \[\frac{1-|\Psi_{r}(z_{1},z_{2})|^{2}}{|\alpha-\Psi_{r}(z_{1},z_{2})|^{2}}\to \frac{1-|\Psi(z_{1},z_{2})|^{2}}{|\alpha-\Psi(z_{1},z_{2})|^{2}}=\int_{\mathbb{ T}^{2}}P_{z}(\xi)d\sigma_{\alpha}(\xi)\] as \(r\to 1^{-}\). Define, for fixed \(z_{2}\in\mathbb{D}\) and fixed \(0<r<1\), \[u^{r}_{z_{2}}(z_{1}):=\frac{1-|\psi(z_{2})\phi(rz_{1})|^{2}}{|\alpha-\psi(z_{2 })\phi(rz_{1})|^{2}},\quad z_{1}\in\mathbb{D}.\] As \(\phi(rz_{1})\) is continuous and satisfies \(|\phi(rz_{1})|<1\) on the unit circle, \(u^{r}_{z_{2}}\) is continuous on \(\overline{\mathbb{D}}\). Moreover, even though \(\Psi_{r}\) is not an inner function, it holds that \[\frac{1-|\Psi_{r}|^{2}}{|\alpha-\Psi_{r}|^{2}}=\Re\bigg{(}\frac{\alpha+\Psi_{ r}}{\alpha-\Psi_{r}}\bigg{)}\] where \((\alpha+\Psi_{r})/(\alpha-\Psi_{r})\) is analytic on \(\mathbb{D}^{2}\). Hence, the left-hand side is pluriharmonic in \(\mathbb{D}^{2}\), which in turn implies that \(u^{r}_{z_{2}}\) is harmonic in \(\mathbb{D}\). By the Poisson integral formula, \[\frac{1-|\psi(z_{2})\phi(z_{1})|^{2}}{|\alpha-\psi(z_{2})\phi(z_{1})|^{2}}= \lim_{r\to 1^{-}}u^{r}_{z_{2}}(z_{1})=\lim_{r\to 1^{-}}\int_{\mathbb{T}}u^{r}_{z_{ 2}}(\zeta)P_{z_{1}}(\zeta)dm(\zeta).\] Observe that \(\Re((\alpha+\Psi_{r}(z_{1},z_{2}))/(\alpha-\Psi_{r}(z_{1},z_{2})))\) is bounded for every \((z_{1},z_{2})\in\overline{\mathbb{D}}\times\mathbb{D}\) and every \(0<r<1\). The dominated convergence theorem then states that we can move the limit into the integral: so, for fixed \(z_{2}\in\mathbb{D}\), \[\frac{1-|\psi(z_{2})\phi(z_{1})|^{2}}{|\alpha-\psi(z_{2})\phi(z_{1})|^{2}}= \int_{\mathbb{T}}\lim_{r\to 1^{-}}u^{r}_{z_{2}}(\zeta)P_{z_{1}}(\zeta)dm( \zeta). \tag{6}\] Moreover, \[\lim_{r\to 1^{-}}u^{r}_{z_{2}}(\zeta)=\lim_{r\to 1^{-}}\frac{1-|\psi(z_{2}) \phi(r\zeta)|^{2}}{|\alpha-\psi(z_{2})\phi(r\zeta)|^{2}}=\frac{1-|\psi(z_{2}) \phi^{*}(\zeta)|^{2}}{|\alpha-\psi(z_{2})\phi^{*}(\zeta)|^{2}}\] for \(m\)-almost every \(\zeta\in\mathbb{T}\). Let \(E\) denote the set of points \(\zeta\) such that \(|\phi^{*}(\zeta)|=1\). By our assumptions, the solutions to \(\psi^{*}=\beta\) can be parameterized by functions \(g_{k}(\beta)\) continuous on \(\mathbb{T}\) except on a finite collection of points. Since we have also assumed that the Clark measures of \(\psi\) consist of point masses, by Proposition 2.5, the measure associated to any \(\beta\in\mathbb{T}\) is given by \(\sum_{k\geq 1}|\psi^{\prime}(g_{k}(\beta))|^{-1}\delta_{g_{k}(\beta)}\) where \(|\psi^{\prime}(g_{k}(\beta))|>0\) for each \(k\). For fixed \(\zeta\in E\), this holds for \(\beta=\alpha\overline{\phi^{*}(\zeta)}\). Hence, for \(\zeta\in E\), we have that \[\frac{1-|\psi(z_{2})\phi^{*}(\zeta)|^{2}}{|\alpha-\psi(z_{2})\phi^{*}(\zeta)|^{ 2}}=\sum_{k\geq 1}\frac{1}{|\psi^{\prime}(g_{k}(\alpha\overline{\phi^{*}(\zeta)})) |}P_{z_{2}}(g_{k}(\alpha\overline{\phi^{*}(\zeta)})). \tag{7}\] To apply the Poisson integral formula, we must first check that the product of the right-hand side with \(P_{z_{1}}(\zeta)\) is integrable. Recall that by Fatou's theorem, \(\phi(r\zeta)\) converges to \(\phi^{*}(\zeta)\) as \(r\to 1^{-}\)\(m\)-almost everywhere on \(\mathbb{T}\) and in \(L^{1}(\mathbb{T})\). Moreover, the curves \(\{g_{k}\}_{k\geq 1}\) are assumed to be continuous on the unit circle except at finitely many points. 
Hence, the composition \(P_{z}(\zeta,g_{k}(\alpha\overline{\phi^{*}(\zeta)}))\) must be measurable -- indeed, \(f(\zeta,g_{k}(\alpha\overline{\phi^{*}(\zeta)}))\) is measurable for any \(f\in C(\mathbb{T}^{2})\). Similarly, we see that the weights \(|\psi^{\prime}(g_{k}(\alpha\overline{\phi^{*}(\zeta)}))|\) are measurable, as \(\psi\) is assumed to be continuously differentiable on \(\mathbb{T}\) except at finitely many points. Since we are integrating over a compact space, this is enough to ensure integrability. Moreover, for fixed \(\zeta\in E\), the sum \(\sum_{k\geq 1}|\psi^{\prime}(g_{k}(\alpha\overline{\phi^{*}(\zeta)}))|^{-1}\) must be finite, since the Clark measure of \(\psi\) associated to the parameter value \(\alpha\overline{\phi^{*}(\zeta)}\) exists by assumption. As we have excluded the situation where infinitely many of the curves intersect, the weights cannot sum up to infinity as we integrate over \(\mathbb{T}\). The curves could still have infinite intersections at limit points of \(g_{k}(\alpha\overline{\phi^{*}(\zeta)})\), which per definition do not solve \(\psi^{*}=\alpha\overline{\phi^{*}(\zeta)}\). However, by Proposition 2.5, the weights of the Clark measures must be zero for these points. Since equation (7) holds for \(m\)-almost every \(\zeta\in\mathbb{T}\), the integrals of the left- and right-hand side will coincide. By combining this with (6), we see that \[\frac{1-|\psi(z_{2})\phi(z_{1})|^{2}}{|\alpha-\psi(z_{2})\phi(z_{ 1})|^{2}} =\int_{\mathbb{T}}\frac{1-|\psi(z_{2})\phi^{*}(\zeta)|^{2}}{| \alpha-\psi(z_{2})\phi^{*}(\zeta)|^{2}}P_{z_{1}}(\zeta)dm(\zeta)\] \[=\int_{\mathbb{T}}\sum_{k\geq 1}\frac{1}{|\psi^{\prime}(g_{k}( \alpha\overline{\phi^{*}(\zeta)}))|}P_{z}(\zeta,g_{k}(\alpha\overline{\phi^{ *}(\zeta)}))dm(\zeta).\] As the summands are all positive, we may interchange summation and integration. Thus, \[\frac{1-|\Psi(z_{1},z_{2})|^{2}}{|\alpha-\Psi(z_{1},z_{2})|^{2}} =\sum_{k\geq 1}\int_{\mathbb{T}}P_{z}(\zeta,g_{k}(\alpha\overline{ \phi^{*}(\zeta)}))\frac{dm(\zeta)}{|\psi^{\prime}(g_{k}(\alpha\overline{\phi^{ *}(\zeta)}))|},\] i.e. \[\int_{\mathbb{T}^{2}}P_{z}(\xi)d\sigma_{\alpha}(\xi) =\sum_{k\geq 1}\int_{\mathbb{T}}P_{z}(\zeta,g_{k}(\alpha\overline{ \phi^{*}(\zeta)}))\frac{dm(\zeta)}{|\psi^{\prime}(g_{k}(\alpha\overline{\phi^{ *}(\zeta)}))|}.\] Since the span of Poisson kernels is dense in \(C(\mathbb{T}^{2})\), we may conclude that \[\int_{\mathbb{T}^{2}}f(\xi)d\sigma_{\alpha}(\xi) =\sum_{k\geq 1}\int_{\mathbb{T}}f(\zeta,g_{k}(\alpha\overline{ \phi^{*}(\zeta)}))\frac{dm(\zeta)}{|\psi^{\prime}(g_{k}(\alpha\overline{\phi^ {*}(\zeta)}))|}\] for all \(f\in C(\mathbb{T}^{2})\). Note that the weights of these measures strongly resemble their RIF counterparts from Theorem 3.4. Moreover, as in the case of the multiplicative embeddings, Theorem 5.1 allows for infinite collections of parameterizing functions. **Remark 5.4**.: Let us convince ourselves that there actually exist inner functions \(\psi\) that meet the requirements of Theorem 5.1. For example, finite Blaschke products define one such class. Let \(\psi\) be a non-constant finite Blaschke product of order \(n\). As in Example 2.6, this implies that \(\psi\) is analytic on \(\mathbb{T}\) and \(\psi^{*}(\zeta)=\beta\) has precisely \(n\) distinct solutions for each \(\beta\in\mathbb{T}\), and \(\psi^{\prime}\neq 0\) on \(\mathbb{T}\). By the Implicit Function Theorem, we may thus parameterize the solutions with functions \(\{g_{k}(\beta)\}_{k=1}^{n}\) analytic on the unit circle. 
Additionally, we saw in Example 2.6 that the Clark measures of \(\psi\) are discrete for every \(\beta\in\mathbb{T}\). Hence, Theorem 5.1 works for any product function \(\Psi(z)=\phi(z_{1})\psi(z_{2})\) where \(\psi\) is a non-constant finite Blaschke product and \(\phi\) is an arbitrary inner function. In the case where both \(\phi\) and \(\psi\) are finite Blaschke products, the theorem reproduces what we know about RIFs, as \[\left|\frac{\partial\Psi}{\partial z_{2}}(\zeta,g_{k}^{\alpha}( \zeta))\right|=|\phi(\zeta)\psi^{\prime}(g_{k}^{\alpha}(\zeta))|=|\psi^{\prime }(g_{k}^{\alpha}(\zeta))|\] Then Theorem 5.1 reduces to Theorem 3.4. Observe that if \(\psi\in C(\mathbb{T})\), it must be a finite Blaschke product (Corollary 4.2, [14]). Similarly, if \(\psi^{\prime}\in H^{1}(\mathbb{T})\), then \(\psi\) is continuous on \(\mathbb{T}\) (Theorem 3.11, [12]) and thus a finite Blaschke product. Hence, to be able to construct varied examples, we need \(\psi^{*}\) to have some discontinuities on the unit circle (see e.g. Example 5.6). In what comes next, we let \(g_{k}^{\alpha}(\zeta):=g_{k}(\alpha\overline{\phi^{*}(\zeta)})\) for ease of notation. **Example 5.5**.: Let \[\Psi(z_{1},z_{2}):=\psi(z_{2})\phi(z_{1})=z_{2}\frac{\lambda-z_{2}}{1- \overline{\lambda}z_{2}}\exp\biggl{(}-\frac{1+z_{1}}{1-z_{1}}\biggr{)}\] for \(\phi\) as in Example 2.7 and some constant \(\lambda\in\mathbb{D}\). Note that \(\Psi^{*}=0\) for \(\zeta_{1}=1\). The equation \(\Psi^{*}=\alpha\) for \(\alpha\in\mathbb{T}\) can be rewritten as \[\zeta_{2}\frac{\lambda-\zeta_{2}}{1-\overline{\lambda}\zeta_{2}}=\alpha\exp \biggl{(}\frac{1+\zeta_{1}}{1-\zeta_{1}}\biggr{)}.\] For \(\alpha=e^{i\nu}\), the solutions to this are given by \(\zeta_{2}=g_{k}^{\alpha}(\zeta_{1})\), \(k=1,2\), where \[g_{k}^{\alpha}(\zeta_{1}) :=\frac{1}{2}\biggl{(}\lambda+\exp\biggl{(}i\nu+\frac{1+\zeta_{1} }{1-\zeta_{1}}\biggr{)}\overline{\lambda}\] \[\pm\sqrt{-4\exp\biggl{(}i\nu+\frac{1+\zeta_{1}}{1-\zeta_{1}} \biggr{)}+\biggl{(}-\lambda-\exp\biggl{(}i\nu+\frac{1+\zeta_{1}}{1-\zeta_{1}} \biggr{)}\overline{\lambda}\biggr{)}^{2}}\,\biggr{)}.\] In Figure 2, we have plotted the level curves for certain parameter values. Let us calculate the weights of the Clark measures. Observe that \[\psi^{\prime}(z_{2})=\frac{\lambda-2z_{2}+z_{2}^{2}\overline{\lambda}}{(1- \overline{\lambda}z_{2})^{2}}.\] Hence, by Theorem 5.1, \[\int_{\mathbb{T}^{2}}f(\xi)d\sigma_{\alpha}(\xi)=\sum_{k=1}^{2}\int_{\mathbb{T }}f(\zeta,g_{k}^{\alpha}(\zeta))\frac{|1-\overline{\lambda}g_{k}^{\alpha}( \zeta)|^{2}}{|\lambda-2g_{k}^{\alpha}(\zeta)+g_{k}^{\alpha}(\zeta)^{2} \overline{\lambda}|}dm(\zeta)\] for all \(f\in C(\mathbb{T}^{2})\). In Figure 3, we have plotted the weights \[W_{k}^{\alpha}(\zeta):=\frac{|1-\overline{\lambda}g_{k}^{\alpha}(\zeta)|^{2}} {|\lambda-2g_{k}^{\alpha}(\zeta)+g_{k}^{\alpha}(\zeta)^{2}\overline{\lambda}|}.\] **Example 5.6**.: Define \[\Psi(z_{1},z_{2}):=\phi(z_{1})\phi(z_{2})=\exp\biggl{(}-\frac{1+z_{1}}{1-z_{1} }\biggr{)}\exp\biggl{(}-\frac{1+z_{2}}{1-z_{2}}\biggr{)},\] where \(\phi\) again is as in Example 2.7. As \(\phi^{*}\) exists everywhere on \(\mathbb{T}\), we have \(\Psi^{*}(\zeta)=\phi^{*}(\zeta_{1})\phi^{*}(\zeta_{2})\). On the lines \(\{(1,\zeta_{2}):\zeta_{2}\in\mathbb{T}\}\) and \(\{(\zeta_{1},1):\zeta_{1}\in\mathbb{T}\}\) in \(\mathbb{T}^{2}\), we see that \(\Psi^{*}=0\). Otherwise, \(|\Psi^{*}|=1\). 
Since \(\Psi^{*}\) is well-defined and unimodular on \(\mathbb{T}^{2}\) except on the lines \(\{\zeta_{1}=1\}\cup\{\zeta_{2}=1\}\) where \(\Psi^{*}=0\), we need to solve the equation \(\Psi=\alpha\). We may view this as \[\exp\biggl{(}-\frac{1+\zeta_{1}}{1-\zeta_{1}}-\frac{1+\zeta_{2}}{1-\zeta_{2}} \biggr{)}=e^{i(\nu+2\pi k)},\quad k\in\mathbb{Z},\] i.e. \[-\frac{1+\zeta_{1}}{1-\zeta_{1}}-\frac{1+\zeta_{2}}{1-\zeta_{2}}=i(\nu+2\pi k),\quad k\in\mathbb{Z}.\] Solving for \(\zeta_{2}\) yields \[\zeta_{2}=g_{k}^{\alpha}(\zeta_{1}):=\frac{\nu(\zeta_{1}-1)+2\pi k(\zeta_{1}- 1)+2i}{\nu(\zeta_{1}-1)+2\pi k(\zeta_{1}-1)+2i\zeta_{1}},\quad k\in\mathbb{Z}.\] Note that functions \(g_{k}^{\alpha}\) are continuous on the unit circle; their only singularities occur at points \(\zeta_{1}=\frac{2\pi k+\nu}{\nu+2\pi k+2i}\), which do not have modulus one. Moreover, all \(g_{k}^{\alpha}\) pass through the point \((1,1)\in\mathbb{T}^{2}\), which does not solve \(\Psi^{*}=\alpha\) as \(\Psi^{*}(1,1)=0\). However, since \(\mathcal{C}_{\alpha}(\Psi)\) is closed, the point \((1,1)\) nevertheless lies in the unimodular level set. Hence, \[\mathcal{C}_{\alpha}(\Psi)=\bigcup_{k\in\mathbb{Z}}\{(\zeta,g_{k}^{\alpha}( \zeta)):\zeta\in\mathbb{T}\}\] where \(g_{k}^{\alpha}\) is analytic on \(\mathbb{T}\) for every \(k\). We have plotted some of these curves in Figure 4. Recall that by Lemma 3.2, the unimodular level sets of RIFs can be parameterized by graphs that are analytic on \(\mathbb{T}^{2}\) except possibly at a single point. One might then expect that the Clark measures of a product function which is rational in at least one variable, like in Example 5.5, would be supported on smoother curves than this \(\Psi\). However, we see that in this case, the unimodular level sets are actually parameterized by much more "well-behaved" curves than in our previous example. At first sight, \(\Psi\) does not seem to meet the requirements of Theorem 5.1; there is a point on \(\mathbb{T}^{2}\) where all \(g_{k}\) intersect, as \(g_{k}(1)=1\) for all \(k\in\mathbb{Z}\). However, as noted above, this value does not in fact solve the equation \(\Psi^{*}=\alpha\) since \(\phi^{*}(1)=0\). This point would cause a problem if the Clark measure of \(\phi\) had positive weight there. Fortunately, we are saved by Proposition 2.5; the measure associated to \(\alpha\) has a point mass at \(1\) if and only if \(\phi^{*}(1)=\alpha\), and so \(|\phi^{\prime}(1)|^{-1}=0\). Let us now calculate the weights of the Clark measures associated to \(\Psi\). First note that \[\phi^{\prime}(z_{2})=-\frac{2\exp\bigl{(}-\frac{1+z_{2}}{1-z_{2}}\bigr{)}}{(1-z_{ 2})^{2}}=-\frac{2\phi(z_{2})}{(1-z_{2})^{2}}.\] Then \[\phi^{\prime}(g_{k}^{\alpha}(\zeta_{1}))=-\frac{2\alpha}{\phi(\zeta_{1})(1-g_{ k}^{\alpha}(\zeta_{1}))^{2}}=\frac{2\alpha}{\phi(\zeta_{1})}\frac{(\nu(\zeta_{1}-1)+ 2\pi k(\zeta_{1}-1)+2i\zeta_{1})^{2}}{4(\zeta_{1}-1)^{2}}\] for \(\zeta_{1}\in\mathbb{T}\setminus\{1\}\). When taking moduli, we find \[|\phi^{\prime}(g_{k}^{\alpha}(\zeta_{1}))|=\biggl{|}\frac{\nu(\zeta_{1}-1)+2 \pi k(\zeta_{1}-1)+2i\zeta_{1}}{2(\zeta_{1}-1)}\biggr{|}^{2}\] for \(\zeta_{1}\in\mathbb{T}\setminus\{1\}\). Hence, Theorem 5.1 yields \[\int_{\mathbb{T}^{2}}f(\xi)d\sigma_{\alpha}(\xi)=\sum_{k\in\mathbb{Z}}\int_{ \mathbb{T}}f(\zeta,g_{k}^{\alpha}(\zeta))\biggl{|}\frac{2(\zeta-1)}{\nu( \zeta-1)+2\pi k(\zeta-1)+2i\zeta}\biggr{|}^{2}dm(\zeta)\] for all \(f\in C(\mathbb{T}^{2})\), where \(\alpha=e^{i\nu}\). 
Since \(\sum_{k\in\mathbb{Z}}\frac{1}{k^{2}}\) converges, we see that the right-hand side is finite. Note that the weights \[W_{k}^{\alpha}(\zeta):=\biggl{|}\frac{2(\zeta-1)}{\nu(\zeta-1)+2\pi k(\zeta-1 )+2i\zeta}\biggr{|}^{2}\] reduce to zero for \(\zeta=1\), as expected. Moreover, we established earlier that all the level curves pass through the singularity \((1,1)\). Based on this example, it seems that the weights "detect" the singularities of \(\Psi\) -- much like in the case of the rational inner functions in Section 3. Recall our brief discussion on the connection between the order of vanishing of weights at RIF singularities and contact order on page 3. It might be interesting to study if the singularities of general product functions are connected to the density of their Clark measures in some similar way. ## 6. Closing remarks It is important to note that Clark measures of general bivariate inner functions still remain unexplored. In one variable, any singular probability measure on \(\mathbb{T}\) defines the Clark measure of some inner function (pp. 234-235, [13]). In several variables, we need added requirements Figure 4. Level curves \(g_{k}^{1}\) in Example 5.6 for \(k=-1\) (red), \(k=0\) (orange) and \(k=2\) (gray). on a measure for it to be a Clark measure -- as discussed in Remark 4.2, any positive, pluriharmonic, singular probability measure defines the Clark measure of some inner function. The distinction arises from the fact that in several variables, it is not as easy to ensure that a given harmonic function is the real part of an analytic function. By Theorem 2.4.1 in [18], the Poisson integral of a real measure \(\mu\) on \(\mathbb{T}^{d}\) is given by the real part of an analytic function if and only if its Fourier coefficients satisfy \(\hat{\mu}(k)=0\) for every \(k\) outside the set \(-\mathbb{Z}_{+}^{d}\cup\mathbb{Z}_{+}^{d}\), where \(-\mathbb{Z}_{+}^{d}\) denotes the set of points \((k_{1},\ldots,k_{d})\) where every \(k_{j}\leq 0\). Furthermore, the kind of smooth curve-parameterizations that were obtained for the classes of inner functions in this text are certainly not applicable for general inner functions. What we do know is that RP-measures cannot be supported on sets of Hausdorff dimension less than one (Theorem 4, [4]). In two dimensions, we have seen examples of Clark measures supported on curves (i.e. sets of Hausdorff dimension one). In [17], the author constructs an RP-measure whose support has Hausdorff dimension two. However, it is not clear to the author how one would construct an RP-measure with support of Hausdorff dimension \(1<d<2\). For an in-depth discussion about the supports of RP-measures, see [4]. We end with a brief note on Clark embedding operators associated to the classes of inner functions introduced here. In Example 4.2 in [11], it is shown that all \(T_{\alpha}\) are unitary for the simple multiplicative embedding \(\phi(z_{1}z_{2})=z_{1}z_{2}\) where \(\phi(z)=z\), for which the Clark measure \(\sigma_{\alpha}\) satisfies \[\int_{\mathbb{T}^{2}}f(\xi)d\sigma_{\alpha}(\xi)=\int_{\mathbb{T}}f(\zeta, \alpha\overline{\zeta})dm(\zeta),\quad f\in C(\mathbb{T}^{2}).\] For holomorphic monomials \(f\), the functions \(f(\zeta,\alpha\overline{\zeta})\) are dense in \(L^{2}(m)\), which in turn implies that \(A(\mathbb{D}^{2})\) is dense in \(L^{2}(\sigma_{\alpha})\), as desired. 
It seems plausible that a similar argument can be applied to show that given any \(\Phi(z)\) satisfying the conditions of Corollary 4.6.1, the associated Clark embedding operators are all unitary. In the case of product functions, however, it is not so clear when the operators would be unitary and further analysis is required. ## Acknowledgements The author would like to express her deepest gratitude to Alan Sola, for insightful comments and expert advice. This material has been adapted from the author's Master's thesis in mathematics at Stockholm University in August 2023. Figure 5. Weight curves \(W^{1}_{k}\) in Example 5.6 for \(k=-1\) (red), \(k=0\) (orange) and \(k=2\) (gray).
2301.13707
Topological features in the ferromagnetic Weyl semimetal CeAlSi: Role of domain walls
In the ferromagnetic (FM) Weyl semimetal CeAlSi both space-inversion and time-reversal symmetries are broken. Our quantum oscillation (QO) data indicate that the FM ordering modifies the Fermi surface topology and also leads to an unusual drop in the QO amplitude. In the FM phase, we find a pressure-induced suppression of the anomalous and the loop Hall effects. This cannot be explained based on the electronic band structure or magnetic structure, both of which are nearly pressure independent. Instead, we show that a simplified model describing the scattering of Weyl fermions off FM domain walls can potentially explain the observed topological features. Our study highlights the importance of domain walls for understanding transport in FM Weyl semimetals.
M. M. Piva, J. C. Souza, V. Brousseau-Couture, Sopheak Sorn, K. R. Pakuszewski, Janas K. John, C. Adriano, M. Côté, P. G. Pagliuso, Arun Paramekanti, M. Nicklas
2023-01-31T15:34:21Z
http://arxiv.org/abs/2301.13707v1
# Topological features in the ferromagnetic Weyl semimetal CeAlSi: ###### Abstract In the ferromagnetic (FM) Weyl semimetal CeAlSi both space-inversion and time-reversal symmetries are broken. Our quantum oscillation (QO) data indicate that the FM ordering modifies the Fermi surface topology and also leads to an unusual drop in the QO amplitude. In the FM phase, we find a pressure-induced suppression of the anomalous and the loop Hall effects. This cannot be explained based on the electronic band structure or magnetic structure, both of which are nearly pressure independent. Instead, we show that a simplified model describing the scattering of Weyl fermions off FM domain walls can potentially explain the observed topological features. Our study highlights the importance of domain walls for understanding transport in FM Weyl semimetals. + Footnote †: preprint: APS/123-QED + Footnote †: preprint: APS/123-QED ## I Introduction Topological phases of matter have lately received considerable attention, due to the experimental realization of exotic types of charge carriers. One example is the massless Weyl fermions found in Weyl semimetals (WSMs) [1; 2; 3], which are characterized by remarkable electronic properties, such as surface Fermi arcs, a bulk chiral anomaly, axial-gravitational anomaly, an extremely large magnetoresistance (MR) and an anomalous Hall effect (AHE) [1; 3; 4; 5; 6]. Weyl fermions can be generated by either breaking space-inversion (SI) or time-reversal (TR) symmetry of materials with a Dirac or quadratic band touching points. So far most experimentally studied WSMs break SI symmetry [7; 8; 9; 10; 11; 12; 13; 14; 15]; fewer examples are known for WSMs with broken TR symmetry, i.e. _magnetic_ WSMs [16; 17; 18; 19; 20; 21]. Magnetic WSMs are of fundamental interest since they intertwine topology and strong correlations [22; 23]. They also offer the potential to manipulate the topological phase in a desired way, for instance using a magnetic field to tune the position of Weyl nodes or to control the chirality or geometry of magnetic domain walls, which is important for next-generation spintronics applications [24; 25]. The family of \(Ln\)Al\(Pn\) (\(Ln=\) lanthanides, \(Pn=\) Ge, Si) materials is ideal to host nontrivial topological properties due to their noncentrosymmetric crystalline structure (\(I4_{1}md\)), which is the same as in the TaAs family of WSMs [7; 9; 26; 27; 28; 29]. Multiple Weyl nodes and a large spin Hall effect were predicted to exist in LaAlGe and LaAlSi [30]. Weyl cones were experimentally observed for LaAlGe [31] and a \(\pi\) Berry phase was recently found in LaAlSi [32]. Remarkably, magnetic members of the family host rare-earth moments which can order and additionally break TR symmetry - many of them are predicted to feature Weyl nodes near the Fermi level [33; 34; 35; 36]. Experiments have discovered an anomalous Hall effect (AHE) in PrAlGe\({}_{1-x}\)Si\({}_{x}\)[33], chiral surface Fermi arcs in PrAlGe [37; 38], and a topological magnetic phase and singular angular MR in the semimetal CeAlGe [39; 40; 41]. In addition, Weyl fermions have been found to mediate magnetism in NdAlSi [42] and a \(\pi\) Berry phase was reported for quantum oscillations (QO) in SmAlSi [34]. In this Article, we focus on the ferromagnetic Weyl semimetal CeAlSi. CeAlSi, which hosts an in-plane noncollinear ferromagnetic (FM) order below the Curie temperature \(T_{C}\approx 8\) K with a large anisotropy, the \(c\)-axis being the magnetically hard axis [36]. 
Ce\({}^{3+}\) spins in adjacent FM planes display an angle of \(\approx 70^{\circ}\)[36]. Recent angle resolved photo emission spectroscopy experiments in the paramagnetic phase of CeAlSi above \(T_{C}\) revealed Fermi arcs and several Weyl nodes lying close to the Fermi energy which stem from the non-centrosymmetric structure [43]. Going below \(T_{C}\), into the FM state, a magnetic field applied parallel to the [100] direction reveals an AHE, while a [001] field leads to an unexplained hysteretic loop Hall effect (LHE) [36]. In addition, CeAlSi may exhibit nontrivial magnetic domain walls [44]; indeed, chiral domain walls were recently detected in this system [45]. Furthermore, magnetoelastic couplings give rise to picometer displacements in the unit cell due to the internal FM field, which can lead to different domain wall spin textures [46]. The presence of this magnetoelastic effect suggests that external pressure may lead to a strong tuning of magnetism and to associated large changes in the AHE and LHE [46]. Hydrostatic pressure has previously been shown to be an effective tool in tuning the electronic structure without introducing any additional disorder and was successfully used to tune Weyl points closer to the Fermi energy in certain topological materials [47; 48; 49; 50]. Furthermore, application of pressure is known to systematically modify the magnetic properties in Ce-based materials [51]. Here, we use hydrostatic pressure as a tool to investigate the origin of the features characteristic of the nontrivial topological behavior in CeAlSi, focusing on longitudinal and Hall transport experiments and on quantum oscillation measurements. We combine these with _ab initio_ density functional theory (DFT) calculations and phenomenological models for scattering of Weyl fermions off magnetic domain walls to shed light on our unusual observations. ## II Methods Single crystals of CeAlSi and LaAlSi were grown by the Al-flux technique similar to [52]. High purity elements with starting composition Ce [La] (99.99%) : Al (99.999+%) : Si (99.999+%), 1 : 20 : 1, were place into an alumina crucible and sealed in an evacuated quartz tube. The samples were heated to 1200\({}^{\circ}\)C, kept at this temperature for 15 hours and cooled down to 720\({}^{\circ}\)C at 2\({}^{\circ}\)C/h. The excess of Al was removed by spinning the tube upside down in a centrifuge. The crystal structure was confirmed by x-ray powder diffraction. Energy dispersive x-ray spectroscopy shows, within the experimental uncertainty, a Ce:Al:Si proportion of \(1:1:1\). Electrical transport experiments were carried out by a four-probe configuration using a low-frequency AC resistance bridge. Temperatures down to 1.8 K and magnetic fields up to 9 T were achieved in a physical property measurement system (PPMS, Quantum Design) and in a liquid helium cryostat (Janis). Magnetization measurements were conducted in a magnetic property measurement system (MPMS, Quantum Design). Pressures up to 2.7 GPa (electrical transport) and 1 GPa (magnetization) were generated using self-contained piston-cylinder-type pressure cells using silicon oil as pressure transmitting medium. A piece of lead (tin) served as manometer. Density functional theory (DFT) calculations were performed with the local density approximation functional (LDA) and projector-augmented wave (PAW) method as implemented in the Abinit software package [53], using Jollet-Torrent-Holzwarth (JTH) pseudopotentials [54]. 
Spin-orbit coupling (SOC) and non-collinear magnetism are taken into account. An on-site Coulomb interaction with \(U=6\) eV was added for the Ce \(f\) electrons within the LDA+U scheme. We use a \(16\times 16\times 16\) Monkhorst-Pack **k**-point grid and a plane-wave energy cutoff of 25 hartree. The lattice parameters and relevant internal atomic coordinates were optimized at respectively 0 GPa and 3 GPa until all forces on the atoms were below \(10^{-6}\) hartree/bohr\({}^{3}\). At 0 GPa (3 GPa), we obtain \(a=7.926\) bohr (\(a=7.832\) bohr) and \(c=27.397\) bohr (\(c=27.192\) bohr).

## III Results

### Temperature - pressure phase diagram

At ambient pressure, the FM ordering transition in CeAlSi is marked by a singular magnetization \(M(T)\) and sharp drop in electrical resistivity as a function of temperature \(\rho(T)\) at \(T_{C}\approx 8\) K. Figure 1 shows the effect of external pressure on the magnetic phase (see Appendix A for additional data). Application of pressure linearly enhances \(T_{C}(p)\) with a slope of 0.62(2) K/GPa, driving \(T_{C}\) from 7.8 K at ambient pressure to 9.4 K at 2.7 GPa (values taken from the resistivity data). More important is our finding that, for different pressures, the in-plane magnetization curves \(M(H)\) at 2 K as a function of the applied magnetic field along the [100] direction lie on top of each other. This result indicates a negligible pressure effect on the non-collinear planar magnetic structure found at ambient pressure [36].

Figure 1: (a) Magnetization (\(M\)) (left axis), obtained in an applied field of 50 mT along the [100] crystal axis, and electrical resistivity (right axis) as a function of temperature for selected pressures. (b) Temperature–pressure phase diagram. The solid lines are linear fits. (c) Magnetization measurements for several pressures.

### Electronic band structure

Our DFT calculations at ambient pressure and 3 GPa, which incorporate spin-orbit coupling and non-collinear magnetic order, reveal only a negligible effect of pressure on the electronic band structure and the electronic density of states (DOS) at the Fermi energy [see Fig. 2] as well as on the ordered moments and their orientation. The bands contributing to the hole pockets at the Fermi surface (FS) barely display any variation of their intercepts of the Fermi energy in \(\mathbf{k}\)-space, suggesting a negligible variation of the FS area (see Appendix D for further information). The only noticeable modification of the electronic structure is a small shift of the bands associated with Ce \(f\)-electrons to higher energies with respect to the Fermi energy. As these bands lie about 2 eV below the Fermi level, they most likely do not directly contribute to the transport properties.

### Quantum oscillations

Next we turn to the results of our QO measurements. Longitudinal conductivity data \(\sigma_{xx}\) well above \(T_{C}\) at \(T=15\) K and in the FM state at \(T=2\) K at several pressures are presented in the inset of Fig. 3(a), where we have subtracted a smooth background yielding \(\Delta\sigma_{xx}\). Within the investigated field range, the \(\Delta\sigma_{xx}\) data as well as its fast Fourier transform (FFT) analysis reveals a single QO frequency \(f\approx 20(5)\) T, which is found to be independent of pressure and temperature (see Appendix B). We notice two main features: i. the amplitude of the oscillations at 15 K is larger than that at 2 K and ii. the amplitude of the oscillations is suppressed by increasing pressure [see Fig. 3(a)].
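The background-subtraction-plus-FFT procedure behind these numbers is standard; the following schematic sketch uses synthetic data (it is not the analysis code behind Fig. 3) and follows the steps quoted here and in Appendix B (third-order polynomial background fitted between 5 and 9 T, resampling on a uniform \(1/(\mu_{0}H)\) grid, Hamming window before the FFT):

```python
import numpy as np

# Schematic Shubnikov-de Haas analysis on synthetic data.  The field range,
# the cubic background fit and the Hamming window follow the description in
# the text and in Appendix B; amplitudes and noise level are made up.

f_qo = 20.0                                   # QO frequency in tesla (value quoted in the text)
B = np.linspace(5.0, 9.0, 2000)               # field range of the background fit, in tesla
rng = np.random.default_rng(1)

sigma_xx = (3.0e5 - 1.0e4 * B + 4.0e2 * B**2          # smooth "background"
            + 2.0e3 * np.cos(2 * np.pi * f_qo / B)    # oscillation periodic in 1/B
            + 5.0e1 * rng.standard_normal(B.size))    # noise

# 1) subtract a third-order polynomial background fitted between 5 and 9 T
background = np.polynomial.Polynomial.fit(B, sigma_xx, deg=3)
dsigma = sigma_xx - background(B)

# 2) resample onto a uniform grid in 1/B (here ~0.11 ... 0.2 T^-1)
inv_B = np.linspace(1.0 / 9.0, 1.0 / 5.0, 1024)
dsigma_u = np.interp(inv_B, 1.0 / B[::-1], dsigma[::-1])   # 1/B must be increasing

# 3) Hamming window and zero-padded FFT; the variable conjugate to 1/B is a frequency in tesla
spectrum = np.abs(np.fft.rfft(dsigma_u * np.hamming(inv_B.size), n=2**15))
freq = np.fft.rfftfreq(2**15, d=inv_B[1] - inv_B[0])

band = (freq > 5.0) & (freq < 100.0)
print(f"dominant QO frequency ~ {freq[band][np.argmax(spectrum[band])]:.1f} T")
```

With only the \(\approx 0.09\) T\({}^{-1}\) window in \(1/(\mu_{0}H)\) available between 5 and 9 T, the intrinsic frequency resolution is of order 10 T, consistent with the sizeable uncertainty of the quoted \(f\approx 20(5)\) T.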
Generally, the thermal damping of the QO amplitude can be described by the Lifshitz-Kosevich (LK) formula [55]. However, our FFT signal follows the LK prediction only in the paramagnetic (PM) region above \(T_{C}\) [solid lines in Fig. 3(a)]. In the FM state we observe an unusual reduction of the QO amplitude upon cooling. This remarkable response of the oscillation amplitude as a function of temperature has not been observed in any other members of the \(Ln\)Al\(Pn\) family [32; 34; 36; 42; 56], and was previously reported in just a few materials [57; 58]. In SmSb, for instance, a sudden decrease of the Shubnikov-de Haas oscillations takes place once the material becomes antiferromagnetic, which was conjectured to be due to the presence of a non-trivial Berry phase [58].

Figure 3: (a) Fast Fourier transformation (FFT) amplitude as a function of temperature at different applied pressures. The solid lines are simulations considering the best fits using the Lifshitz-Kosevich formula. The inset shows longitudinal conductivity for \(H\parallel[001]\) after subtraction of a third order polynomial background \(\Delta\sigma_{xx}\) as a function of \(1/(\mu_{0}H)\) at 15 K (top) and 2 K (bottom) for selected pressures. The curves at 2 K were shifted by \(-10\) kS/m for clarity. (b) and (c) Landau fan diagrams for CeAlSi at 15 and 2 K, respectively.

Figure 2: Electronic bands and DOS at ambient pressure (blue) and 3 GPa (red). The hatched region of the right panel corresponds to the partial DOS associated with Ce \(f\) states.

To further analyze the QO, Landau fan diagrams are shown in Figs. 3(b) and 3(c). Our analysis indicates a change in the nature of the topological properties between the PM and FM phase. At 15 K in the PM phase the intercept is around \(-5/8\), which suggests the presence of topologically trivial charge carriers [59]. In contrast to that, at 2 K the intercept is \(-9/8\), which for 3D magnetic WSMs can be associated with linear dispersive charge carriers and a nontrivial Berry phase [59]. Our QO data suggest that the momentum space separation between nearby Weyl nodes with opposite topological charges gets enhanced in the FM state, leading to a change in FS topology - from one which encloses both Weyl nodes above \(T_{C}\) to a split FS enclosing isolated well-separated Weyl nodes below \(T_{C}\). An enhanced separation of the Weyl nodes in the FM phase has also previously been found in band-structure calculations (SI of Ref. [36]). This can explain the change in the intercept in our Landau fan plot. Such a change in the FS topology could nonetheless preserve the area of certain extremal orbits, so that the observed QO frequency can remain nearly unchanged (see Appendix E for details). If the topological Fermi pockets are only weakly split below \(T_{C}\), the large density of states due to proximity to the Lifshitz transition [60] can lead to an increased scattering rate for states on the extremal orbits, thus enhancing the Dingle temperature and suppressing the QO amplitude for \(T<T_{C}\). We note that our results cannot rule out other possible scenarios for the suppression of the amplitude of the quantum oscillations upon cooling. However, a Lifshitz transition in the FS of CeAlSi can explain both the suppression of the QO and the phase shift upon entering the FM phase revealed by our measurements.

### Hall effect

For magnetic field along the [100] direction and current along [010] [see sketch in Fig. 4(a)] we find a large AHE in CeAlSi in its ferromagnetic state.
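For completeness, the Landau-fan construction used above amounts to a straight-line fit of Landau indices \(n\) against \(1/(\mu_{0}H)\): the slope plays the role of the QO frequency and the intercept is the number compared with \(-5/8\) and \(-9/8\). A minimal sketch with synthetic extremum positions (index-assignment conventions differ between references and are not reproduced here):

```python
import numpy as np

# Minimal Landau-fan sketch with synthetic data (not the authors' analysis).
# Extrema are assumed to occur at 1/B_n = (n + gamma)/f; fitting n against
# 1/B_n returns the frequency f as the slope and -gamma as the intercept.

f_true, gamma = 20.0, 5.0 / 8.0            # illustrative values only
n = np.arange(1, 6)                         # Landau indices assigned to the extrema
inv_B = (n + gamma) / f_true                # extremum positions in 1/B (T^-1)

slope, intercept = np.polyfit(inv_B, n, 1)
print(f"frequency ~ {slope:.1f} T, intercept ~ {intercept:+.3f}")   # -> 20.0 T, -0.625
```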
The AHE signal has been extracted by fitting the Hall resistivity to the form \(\rho_{yz}(H)=R_{0}H+\rho_{\rm AHE}\), where \(R_{0}\) is the ordinary Hall effect coefficient and \(\rho_{\rm AHE}=R_{s}M_{x}\) with \(R_{s}\) being the anomalous Hall coefficient and \(M_{x}\) being the magnetization along [100] (see Appendix C). We confirm that an AHE is absent in the non-magnetic analog LaAlSi [Fig. 4(a)]. We have fitted the longitudinal and ordinary Hall conductivities to a simple two-band model to obtain information on the density of the electron- and hole-like charge carriers and their mobilities (see Appendix C). At low temperatures and ambient pressure we find \(5.9(1)\times 10^{19}\) holes/cm\({}^{3}\) and \(2.5(1)\times 10^{19}\) electrons/cm\({}^{3}\). The application of external pressure suppresses the extracted hole density only slightly, which reaches \(4.6(1)\times 10^{19}\) holes/cm\({}^{3}\) at 2 K and 2.6 GPa, whereas the electron density remains nearly unchanged. Moreover, the corresponding mobilities at 2 K and ambient pressure are about \(1.4(1)\times 10^{3}\) cm\({}^{2}\)/Vs (\(3.2(1)\times 10^{3}\) cm\({}^{2}\)/Vs) for holes (electrons). These values are nearly unaffected by application of external pressure and are on the same order of magnitude compared with other Weyl semimetals [61; 62].

Figure 4: (a) Anomalous Hall effect (AHE) at 2 K as a function of magnetic fields \(H\parallel[001]\) for different applied pressures. The top inset shows the AHE jump as a function of pressure and the bottom inset displays a scheme of the circuit used in this measurement. 1 and 2 denote samples 1 and 2, respectively. (b) Loop Hall effect (LHE) at 2 K as a function of magnetic field for \(H\parallel[001]\) for selected applied pressures. The left inset shows the Hall resistivity measured upon increasing (red) and decreasing (blue) magnetic field and the difference of both curves (green) at 0.1 GPa. The right inset displays a schematic drawing of the measurement circuit. Data of the nonmagnetic reference material LaAlSi at ambient pressure is shown as a gray line in both panels.

Application of external pressure suppresses the jump of the AHE (defined as dif
Previous work has shown that Weyl fermions can undergo skew scattering from magnetic domain walls which contain the axis of the average magnetization, leading to an extrinsic contribution to the AHE qualitatively consistent with our data [64]. An even more unusual Hall response is observed for field applied along [001] [see sketch in Fig. 4(b)]. We note that in this geometry the magnetic field is applied perpendicular to the ferromagnetically ordered moments in the \(\langle 001\rangle\) plane and this Hall response thus cannot arise from the bulk in-plane magnetization. This so-called loop Hall effect (LHE) is displayed in Fig. 4(b). It is only observed in the ferromagnetic regime, displays hysteresis even in the absence of any observable \(M_{z}\) magnetization hysteresis, and is absent in the non-magnetic analog LaAlSi. \(\rho_{\rm{LHE}}(H)\) is obtained by recording the Hall resistivity \(\rho_{xy}\) for \(H\parallel[001]\) upon increasing and decreasing magnetic field and taking the difference between both curves, as shown in the left inset of Fig. 4(b) for 0.1 GPa as an example. Similar to the AHE, the application of external pressure leads to a decrease in the LHE [see Fig. 4(b)]. While the existence of the LHE in CeAlSi has been argued to be tied to the presence of the Weyl nodes near the Fermi energy [36], no clear physical mechanism has been provided for its origin. ## IV Discussion In the following we present a simplified model for Weyl fermions in a non-centrosymmetric FM, and show that a domain wall scattering mechanism, similiar to that previously explored to understand the AHE [64], can also lead to the LHE for domain walls which are perpendicular to the average magnetization. Our key idea here is that the bulk magnetic domains in CeAlSi host an in-plane magnetization with hard axis along [001], so that the out-of-plane _bulk_ contribution to the field-induced magnetization \(M_{z}^{\rm{bulk}}\) is not expected to be hysteretic as we tune the magnetic field \(H_{z}\). We instead argue that the hysteretic LHE must be attributed to the hysteretic _domain wall_ magnetization \(M_{z}^{\rm{DW}}\) as we tune \(H_{z}\), as schematically depicted in Fig. 4(a) and 4(b). Our calculations show that the intra-node skew scattering of Weyl fermions as they cross a domain wall with nonzero \(M_{z}^{\rm{DW}}\) can explain the LHE. To illustrate this physics, we study a model with 4 pairs of Weyl nodes in the \(k_{z}=0\) plane (see Appendix E for details). These could be viewed as a caricature of the \(W_{3}^{\prime}\) Weyl nodes found to lie \(\sim 46\) meV above the Fermi level close to the \(k_{z}=0\) plane in CeAlSi [36; 43]. For a single Weyl node, with chirality \(+1\), we consider a simple linearized Hamiltonian: \[\mathcal{H}_{+}=v_{F}\sigma_{i}G_{ij}^{(+)}q_{j}+\sigma_{i}M_{i} \tag{1}\] where \(v_{F}\) is the nodal velocity, \(\sigma\) is the spin Pauli matrix, \(q\) denotes the momentum relative to the Weyl node position, and the tensor \(G_{ij}\) is chosen to yield an elliptical Fermi surface at fixed \(k_{z}\) with its major axis rotated away from the \(x,y\) axes. The Hamiltonian for the other Weyl nodes can be reconstructed using symmetries. The Weiss field \(\mathbf{M}\) is nonzero in the magnetically ordered phase and tunes the momentum of the Weyl node; we assume this leads to topologically nontrivial FS pockets enclosing single Weyl nodes. 
For a magnetic field \(H_{z}\) applied along the \(z\)-axis, the system will support domains of \(\mathbf{M}\), with magnetization aligned along different in-plane directions, which are separated by domain walls. As shown in [45], the domain wall magnetization in such cases supports an out-of-plane component \(M_{z}^{\rm{DW}}\). Fig. 5(c) shows the computed Hall conductance scaled by the longitudinal con Figure 5: (a) Domain wall between two bulk magnetic domains showing twisted magnetization configuration with \(M_{z}^{DW}(x)=M\sin\theta(x)\). (b) Skew scattering of Weyl fermions of hysteretic domain wall loops, with red regions having \(M_{z}^{DW}>0\) and blue having \(M_{z}^{DW}<0\), provides a mechanism for the loop Hall effect (LHE). The black arrows are classical representations of the Weyl fermions trajectories. (c) Calculated LHE shown as a ratio of Landauer conductances, versus the maximal out of plane tilt angle \(\theta\) of the domain wall magnetization. ductance showing that it is an odd function of \(M_{z}^{\rm DW}\) (see Appendix F for details). Crudely, this small Landauer conductance [65; 66] ratio is expected to be related to the ratio of loop Hall to longitudinal resistivity - our experiments show that \(\rho_{xy}^{\rm LHE}/\rho_{xx}\!\sim\!10^{-3}\), in reasonable agreement with the theoretical estimate in Fig. 5. We thus propose that the mechanism for the aptly named LHE is the skew scattering of Weyl fermions off hysteretic domain wall loops or surfaces. Since we expect the domain wall magnetization \(M_{z}^{\rm DW}\!\ll\!M_{z}^{\rm bulk}\), the hysteretic behavior of \(M_{z}^{\rm DW}\) cannot be resolved in bulk magnetization measurements. Our model might also help to understand the observation of a similar LHE reported previously in other compounds, in which Weyl fermions were predict to exist [67; 68; 69]. ## V Conclusions In summary, our study emphasizes, through a key tuning parameter (hydrostatic pressure), the importance of ferromagnetism for the low temperature topological features in CeAlSi. Our QO data show a difference in the Berry phase above and below \(T_{C}\), indicating that FM ordering shifts oppositely charged Weyl nodes away from each other in momentum space, leading to a change in the Fermi-surface topology. We have argued that this also leads to an increase in the scattering rate below \(T_{C}\), and thus to a drop in the amplitude of Shubnikov - de Haas oscillations in contrast to the conventional LK formula. This result calls for angular dependent Shubnikov - de Haas and de Haas - van Alphen experiments in an extended field range. We have also discovered pressure dependent changes in the AHE and LHE below \(T_{C}\). Since our DFT calculations indicate that the electronic band structure is robust against pressure, we argue that these changes in the AHE and LHE must arise from differences in domain wall defects and we have shown how Weyl fermions scattering off hysteretic domain walls can lead to the LHE. ## Data availability Data that underpin the findings of this study are available at Edmond - the open research data repository of the Max Planck Society at [70]. ###### Acknowledgements. We acknowledge fruitful discussions with A. P. Mackenzie. We also thank U. Burkhardt for carrying out energy dispersive x-ray analysis on the samples. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 101019024. 
This work was also supported by the Sao Paulo Research Foundation (FAPESP) grants 2017/10581-1, 2018/11364-7, 2020/12283-0, CNPq grants # 304496/2017-0, 310373/2019-0 and CAPES, Brazil. This research was financially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), under the Discovery Grants program grant No. RGPIN-2016-06666. Computations were made on the supercomputers Beluga and Narval managed by Calcul Quebec and the Digital Research Alliance of Canada. The operation of these supercomputers is funded by the Canada Foundation for Innovation, the Ministere de la Science, de l'Economie et de l'Innovation du Quebec, and the Fonds de recherche du Quebec - Nature et technologies. V.B.-C. and M.C are members of the Regroupement quebecois sur les materiaux de pointe (RQMP). ## Appendix A Electrical resistivity Figures 6(a) and 6(b) present the electrical resistivity (\(\rho\)) as a function of temperature at several pressures for two different samples of CeAlSi. At high temperatures \(\rho(T)\) exhibits a metallic behavior for both samples at all studied pressures. Moreover, a clear kink is observed at low-temperatures characterizing the ferromagnetic transition, in good agreement with previous reports at ambient pressure [36; 45; 46]. A broad shoulder is observed at around 80 K. It shows up as a maximum in the temperature derivative of the electrical resistivity [see insets of Figs. 6(a) and 6(b)]. The shape and the position of the maximum is nearly unaffected by the application of external pressure, suggesting that the gap between the ground state and the first excited crystalline electrical field state does not change with increasing pressure. ## Appendix B Quantum oscillations The left panels of Fig. 7 present the longitudinal conductivity measured with \(H\parallel[001]\) after subtraction of a third order polynomial (fit between 5 and 9 T) \(\Delta\sigma_{xx}\) as a function of \(1/(\mu_{0}H)\). We note that quantum oscillations are clearly seen up to 40 K at all studied pressures. Furthermore, the unusual behavior of the quantum oscillations amplitudes can be seen by the naked eye. The oscillations in the paramagnetic state at 15 K are more pronounced than the oscillations in the ferromagnetic state at 2 K. The panels on the right side of Fig. 7 present the Fast Fourier transformation (FFT) of \(\Delta\sigma_{xx}\), using a Hamming window from 0.2 to 0.11 T\({}^{-1}\), as a function of frequency (\(f\)) at several temperatures for selected pressures. Only one oscillation frequency \(f\approx 20(5)\) T is present. It is unaffected by changes in pressure and/or temperature. The effective mass (\(m^{*}\)) was estimated in the paramagnetic state of CeAlSi by fitting the FFT amplitude as a function of temperature by the Lifshitz-Kosevich (LK) formula [55]: \[R_{T}=\frac{\alpha Tm^{*}}{B\sinh(\alpha Tm^{*}/B)}, \tag{2}\] in which \(\alpha=2\pi^{2}k_{B}/e\hbar\approx 14.69\) T/K, \(T\) is the temperature, \(B\) is the magnetic field and \(m^{*}\) the effective mass. As shown in Fig. 8, the application of external pressure leads to a decrease in the value of \(m^{*}\). ## Appendix C Appendix C: Hall Effect ### Anomalous Hall Effect Figure 9 presents the Hall resistivity (\(\rho_{yz}\)) as a function of magnetic field at 2 K for several pressures. A linear background was determined by performing a linear fit in the range 0.2 T \(\leqslant H\leqslant\) 0.6 T. 
We obtained the anomalous Hall (\(\rho_{\text{AHE}}\)) effect by subtracting the linear background using \(\rho_{yz}=R_{0}H+\rho_{\text{AHE}}\). As we can see in Fig. 10, the linear dependence between the anomalous Hall coefficient (\(R_{S}\)) and the longitudinal resistivity (\(\rho_{xx}\)) characterizes the presence of a skew scattering contribution to the AHE in CeAlSi at ambient pressure [63]. The observation of this contribution in a good metal regime can be attributed to domain wall scattering of Weyl fermions (see Sec. IV), as this contribution should be the dominant one in highly conducting samples (\(\sigma_{xx}\geqslant 0.5\times 10^{6}\) (\(\Omega\)cm)\({}^{-1}\)) [63]. Furthermore, the application of external pressure suppresses the linear relation between \(R_{s}\) and \(\rho_{xx}\), which is better seen in the inset of Fig. 10, where the exponents obtained with allometric fits (\(R_{S}=a+b\rho_{xx}^{n}\)) are shown as a function of pressure. One can clearly see the increase of the exponent \(n\) as a function of increasing pressure, reaching 1.21(1) at 1.1 GPa, indicating that the skew scattering contribution of the AHE from the domain walls is being suppressed by application of pressure. The domains themselves (bulk) also contribute to the AHE. It is possible to differentiate both contributions, as analyzed in great detail in Ref. [64], by considering that the domain wall scattering contribution to the AHE is limited by the electron mean free path, whereas the bulk contribution is not. The total Hall resistivity is therefore an average between the bulk and domain wall contributions. Our results suggests that at low pressures (\(p\leqslant 1.5\) GPa) the AHE is dominated by the skew scattering contribution coming from the domain walls, while in the high-pressure range (\(p\geqslant 1.5\) GPa), where the AHE is not skew scattering type, it is dominated by the contribution of the domains. ### Two-band model fits To accurately estimate the carrier densities and mobilities of CeAlSi, we have simultaneously fit the longitudinal (\(\sigma_{xx}\)) and the Hall (\(\sigma_{xy}\)) conductivities considering a two-band model described by: \[\sigma_{xx} =e\left(\frac{n_{e}\mu_{e}}{1+\mu_{e}^{2}\left(\mu_{0}H\right)^{ 2}}+\frac{n_{h}\mu_{h}}{1+\mu_{h}^{2}\left(\mu_{0}H\right)^{2}}\right)\] \[\sigma_{xy} =e\left(\mu_{0}H\right)\left(\frac{n_{e}\mu_{e}^{2}}{1+\mu_{e}^{ 2}\left(\mu_{0}H\right)^{2}}-\frac{n_{h}\mu_{h}^{2}}{1+\mu_{h}^{2}\left(\mu_{ 0}H\right)^{2}}\right),\] where \(n\) denotes the electron (\(e\)) and hole (\(h\)) carrier densities, and \(\mu_{e}\) and \(\mu_{h}\) are the electron and hole mobilities, respectively. Figure 11(a) presents a representative plot of the fits at 2 K and 1.2 GPa, in which a good agreement between the experimental data and the fits is observed. Figure 11(b) displays the carrier densities as a function of Figure 6: (a) and (b) Electrical resistivity (\(\rho\)) as a function of temperature at several pressures for two different samples of CeAlSi. The insets displays a magnified view of the low-temperature range. The insets show the temperature derivative of \(\rho\) as a function of temperature at several pressures for two different samples of CeAlSi. Figure 7: (Left panels) Longitudinal conductivity after subtraction of a third order polynomial \(\Delta\sigma_{xx}\) as a function of \(1/(\mu_{0}H)\) at several temperatures for selected pressures. (Right panels) Fast Fourier transformation of \(\Delta\sigma_{xx}\) shown in the left. 
temperature at several pressures. Figure 11(c) shows the mobilities as a function of temperature at several pressures. ## Appendix D: Bandstructure calculations Figure 12 shows the electronic bands and DOS zoomed in the vicinity of the Fermi level, to emphasize the negligible effect of pressure on the bands contributing to the AHE and LHE. Note, also that no crossing feature nor electron pocket was found in our ambient pressure calculation along the \(\Gamma-X\) high symmetry path, in contrary to Fig. 3(a) of [36]. This discrepancy could be attributed to the different exchange-correlation functional or to our use of theoretically relaxed lattice parameters, while [36] used experimental values which, in the case of the PBE-GGA functional used in their paper, will be smaller than the theoretical one. Nevertheless, from the pressure dependence of the electronic bands relative to the Fermi level, an electron pocket could likely appear along this path upon further increasing the pressure. We further refine the analysis of the electronic structure by calculating the orbital decomposition of the electronic wavefunction inside the atom-centered PAW spheres for Ce \(5d\) states (left panels, red), as well as for Al (middle panels, green) and Si (right panels, blue) \(2p\) states, in the same energy range as Fig. 12. The relative weights of the different orbitals at ambient pressure (top panels) and 3 GPa (bottom panels) are essentially identical, thus confirming the negligible effect of pressure on the electronic bands. The calculated magnetic structure does not display any significant differences between 0 and 3 GPa either, in agreement with the experimental observations (see Fig. 1(b) of the main text). For 0 GPa (3 GPa), we find a magnetic moment of 0.880 \(\mu_{B}\) (0.878 \(\mu_{B}\)) inside the PAW spheres of the 2 inequivalent Ce atoms in the unit cell. Considering that one moment points mostly in the \(\hat{x}\) direction and the other in the \(\hat{y}\) direction with an angle of 87.3\({}^{\circ}\) (89.8\({}^{\circ}\)) between them, we find a total net magnetic moment of 1.311 \(\mu_{B}\) (1.284 \(\mu_{B}\)) oriented along [110] for the whole unit cell. Note that the net size of the magnetic moment depends strongly on the choice of \(U\). ## Appendix E: Simplified model for Weyl nodes We choose a simple model for CeAlSi with 4 pairs of Weyl nodes as shown in Fig. 14 (top left panel). We have chosen the Weyl nodes and Fermi pockets for \(T>T_{C}\) to be consistent with the \(C_{4v}\) and mirror \(M_{x},M_{y}\) crystal symmetries of CeAlSi, as well as time-reversal symmetry. These nodes crudely mimic the \(W_{3}^{\prime}\) nodes found slightly above the Fermi level in previous _ab initio_ elec Figure 8: Effective mass \(m^{*}\) as a function of pressure for magnetic fields parallel to [001]. Figure 10: Anomalous Hall coefficient (\(R_{S}\)) as a function of the longitudinal resistivity (\(\rho_{xx}\)) at several pressures. Figure 9: Hall resistivity (\(\rho_{yz}\)) as a function of magnetic field applied parallel to [100] at 2 K for several pressures. The solid orange line is an extrapolation of a linear fit performed in the range 0.2 T \(\leqslant\)\(H\)\(\leqslant\) 0.6 T, which yields the ordinary background of \(\rho_{yz}\). tronic structure calculations. With the onset of magnetism, the Weyl nodes get displaced with opposite chirality nodes being displaced in opposite directions. As shown in Fig. 
14(top right panel), this can lead to a topological transition of the Fermi surface where each pocket now encloses a single Weyl node. At the same time, for the type of Fermi surface sketched above, certain extremal orbits can remain unchanged in area (dashed lines), so that the QO frequency will be unaffected as observed. If the Weyl nodes are not widely separated even after the topological Fermi surface phase transition, proximity to a Lifshitz transition may lead to an enhancement of the electron scattering rate (in the presence of weak disorder) due to a large density of states, which can enhance the Dingle temperature and explain the strong observed deviation from the Lifshitz-Kosevich formula. Within the symmetry broken for \(T<T_{C}\), it is reasonable to consider the physics of isolated Weyl nodes. We model a single Weyl node as having an elliptical pocket with velocity tensor and a simple coupling to the Weiss field from the magnetization. \[\mathcal{H}_{+} =\sigma_{i}G_{ij}^{(+)}\tilde{q}_{j}+\sigma_{i}M_{i}, \tag{3}\] \[G_{ij}^{(+)} =\begin{pmatrix}|a|&\\ &\frac{1}{|a|}\\ &&1\end{pmatrix}\begin{pmatrix}\cos\beta&\sin\beta&0\\ -\sin\beta&\cos\beta&0\\ 0&0&1\end{pmatrix}, \tag{4}\] Here \(\tilde{q}_{i}=v_{F}q_{i}\) where \(\mathbf{q}\) denotes the momentum measured from the Weyl node location, and \(v_{F}\) is a velocity scale. The real matrix, \(G_{ij}\), defined this way results in an ellipsoidal Fermi surface, whose xy-plane cross section has an elliptical shape with the major and minor axis, \(|a|\) and \(1/|a|\) respectively for \(|a|>1\), and the major axis is rotated from the \(q_{y}\)-axis by the angle \(\beta\). For \(|a|<1\), the major axis is instead rotated from the \(q_{x}\) axis by \(\beta\). **Position of Weyl points:** When \(M_{i}=0\), the Weyl point resides at \(\mathbf{q}=0\). When \(M_{i}\neq 0\), the Weyl point shifts to the point satisfying the following equation \[\tilde{q}_{i}^{*}=-\left[G^{(+)}\right]_{ij}^{-1}M_{j} \tag{5}\] Figure 11: (a) Longitudinal (\(\sigma_{xx}\)) and Hall (\(\sigma_{xy}\)) conductivities at 2 K and 1.2 GPa. Carrier densities (b) and mobilities (c) obtained from the two-band fits as a function of temperature at several pressures. Figure 12: Electronic bands and DOS at ambient pressure (blue) and 3 GPa (red), zoomed in the vicinity of the Fermi level. Figure 13: Orbital decomposition of the electronic wavefunction at (a) ambient pressure and (b) 3 GPa, zoomed in the vicinity of the Fermi level. **Eigenspectrum:** The eigenvalues of \({\cal H}_{+}\) are given by \[E=\pm\sqrt{\tilde{q}_{i}[G^{(+)}]_{ij}^{T}[G^{(+)}]_{ji}\tilde{q}_{i}+2M_{i}[G^{(+ )}]_{ij}\tilde{q}_{j}+M_{i}M_{i}} \tag{6}\] Written in the form which is useful for numerics: \[0 = \tilde{q}_{x}^{2}\left(|a|^{2}c^{2}+\frac{1}{|a|^{2}}s^{2}\right) \tag{7}\] \[+\tilde{q}_{x}\left[2cs\left(|a|^{2}-\frac{1}{|a|^{2}}\right) \tilde{q}_{y}+2\left(|a|cM_{x}-\frac{1}{|a|}sM_{y}\right)\right]\] \[+\left[\tilde{q}_{z}^{2}+2M_{z}\tilde{q}_{z}+\left(|a|^{2}s^{2}+ \frac{1}{|a|^{2}}c^{2}\right)\tilde{q}_{y}^{2}\right.\] \[+\left.2\left(|a|sM_{x}+\frac{1}{|a|}cM_{y}\right)\tilde{q}_{y}+M ^{2}-E^{2}\right],\] where \(s\equiv\sin\beta\) and \(c\equiv\cos\beta\). The quadratic equation allows us to determine the mover modes given the Fermi energy and the Weiss field. If the \(\tilde{q}_{x}\) solutions are real-valued, we obtain travelling waves; the complex-valued solutions correspond to evanescent waves. 
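A minimal numerical sketch of this step is given below: it solves the quadratic of Eq. (7) for the \(\tilde{q}_{x}\) of the two movers, given the energy, the conserved parallel momenta and the Weiss field. The default anisotropy and rotation angle, and the numbers in the example call, are purely illustrative (energy units, e.g. meV).

```python
import numpy as np

def qx_movers(E, qy, qz, M, a=0.5, beta=np.pi/4):
    """Solve the quadratic in tilde-q_x of Eq. (7) for a single Weyl node.
    E, qy, qz and the Weiss field M = (Mx, My, Mz) are in the same energy
    units (tilde-q = v_F * q). Real roots correspond to travelling waves,
    complex roots to evanescent waves."""
    Mx, My, Mz = M
    s, c = np.sin(beta), np.cos(beta)
    A = a**2 * c**2 + s**2 / a**2
    B = 2*c*s*(a**2 - 1/a**2)*qy + 2*(a*c*Mx - s*My/a)
    C = (qz**2 + 2*Mz*qz
         + (a**2 * s**2 + c**2 / a**2) * qy**2
         + 2*(a*s*Mx + c*My/a)*qy
         + (Mx**2 + My**2 + Mz**2) - E**2)
    return np.roots([A, B, C])

# Illustrative call: E = -30, |M| = 7.5 with M in the xy plane.
roots = qx_movers(E=-30.0, qy=5.0, qz=0.0, M=(5.3, 5.3, 0.0))
print(roots)
```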
The eigenfunctions are merely the eigenfunctions of a usual 2-by-2 Hermitian matrix, generally expressed in terms of the Pauli matrices as \(d_{i}\sigma_{i}\), where \(d_{i}=G_{ij}^{(+)}\tilde{q}_{j}+M_{i}\). The wave functions are given by \[\psi=\begin{cases}\frac{1}{\sqrt{2d(d+d_{3})}}\left(\begin{matrix}d+d_{3}\\ d_{1}+\mathrm{i}d_{2}\end{matrix}\right),\text{ for }E=d>0,\\ \frac{1}{\sqrt{2d(d+d_{3})}}\left(\begin{matrix}\mathrm{i}d_{2}-d_{1}\\ d_{3}+d\end{matrix}\right),\text{ for }E=-d<0.\end{cases} \tag{8}\] The group velocity for a mover are given by \[v_{i}=\frac{\left[G^{(+)}\right]_{ij}^{T}\left(\left[G^{(+)}\right]_{jl} \tilde{q}_{l}+M_{j}\right)}{E} \tag{9}\] **Negative-chirality node:** With \({\bf M}=0\), we can use \(C_{4v}\), time-reversal, and mirror symmetries \({\cal M}_{x}\), \({\cal M}_{y}\) to write out the Hamiltonian for all 8 Weyl nodes. For instance a negative chirality node is obtained under a mirror operation, where we can relate the g-tensor part of the Hamiltonian \({\cal H}^{(+)}\) to the g-tensor part of \({\cal H}^{(-)}\). For the Weyl point related to the original one by a mirror \({\cal M}_{y}\), we have \[{\cal H}^{(-)} = \sigma_{i}G_{ij}^{(-)}\tilde{q}_{j}+\sigma_{i}M_{i}, \tag{10}\] \[G^{(-)} = -\begin{pmatrix}|a|&\\ &\frac{1}{|a|}\\ &&1\end{pmatrix}\begin{pmatrix}\cos\beta&-\sin\beta&0\\ \sin\beta&\cos\beta&0\\ 0&0&1\end{pmatrix}. \tag{11}\] The distinctions from \(G^{(+)}\) are (i) the prefactor -1 which leads to the negative determinant and (ii) the \(\sin\beta\) which used to be \(-\sin\beta\) in \(G^{(+)}\). The latter amounts to a rotation of the Fermi surface about \(q_{z}\)-axis by \(-\beta\) instead of \(\beta\) in \({\cal H}^{(+)}\). All the formulae derived in earlier in this section can be straightforwardly generalized for \({\cal H}^{(-)}\). **Choice of parameters:** As an illustrative example, we choose \(|a|=0.5\) and \(\beta=\pi/4\). This results in elliptical cross-sections (at any given \(k_{z}\)) for the Fermi surfaces near a Weyl point with major : minor axis ratio of \(4:1\). The major axis of the ellipse is rotated by \(\pi/4\), so that it points along the \(45^{\circ}\) direction in the \((k_{x},k_{y})\) plane. We also choose other parameters to be reasonable values in line with the _ab initio_ calculations, namely \(v_{F}=500\) meVAand chemical potential \(\mu=-30\) meV (below the Weyl node). This leads to Fermi pockets with a typical size \(k_{F}\sim 0.06\) A\({}^{-1}\). We fix the Weiss field to have a magnitude \(|M|=|E_{F}|/4\). We note that our results do not change qualitatively if we choose somewhat different parameters - however, it is important that the elliptical Fermi pockets are not aligned along the tetragonal \(x\) or \(y\) axes (see Fig. 14). ## Appendix F Modeling the domain wall We assume a domain wall width \(N\times w=40\) nm (corresponding to \(N=40\).) We consider a domain wall be Figure 14: Top: Illustrative example of 4 topologically trivial Fermi surface pockets for \(T>T_{c}\), each enclosing a pair of Weyl points (WP) with opposite topological charge, located at momenta \((K_{0},\pm K_{1},0)\), \((-K_{0},\pm K_{1},0)\), \((\pm K_{1},K_{0},0)\), \((\pm K_{1},-K_{0},0)\). For \(T\!<\!T_{c}\), the in-plane magnetization leads to a momentum space displacement of the Weyl points, leading to transition into 8 topologically nontrivial Fermi pockets. The area of certain maximal orbits (dashed lines) can remain unchanged across this transition. 
Bottom: Projected view of the elliptical cross sections of the topologically nontrivial Fermi surfaces for \(T\!<\!T_{c}\). tween a left and a right region (see illustration in Fig. 15) with the following Weiss fields, respectively, \[\mathbf{M}_{I} =M\left(\cos\gamma,\sin\gamma,0\right)^{T}, \tag{12}\] \[\mathbf{M}_{III} =M\left(\cos\gamma,-\sin\gamma,0\right)^{T}. \tag{13}\] The domain wall region, region II, has a width \(Nw\), which is partitioned into \(N\) intervals, each with width \(w\). The Weiss field in \(j\)-th interval is given by \[\mathbf{M}_{j} =M\left(\cos\theta_{j}\cos\gamma_{j},\cos\theta_{j}\sin\gamma_{j},\sin\theta_{j}\right)^{T}, \tag{14}\] \[\gamma_{j} =\gamma-2\gamma\frac{j-1/2}{N},\] (15) \[\theta_{j} =\left(1-\frac{2|j-\frac{1}{2}-\frac{N}{2}|}{N}\right)\theta, \tag{16}\] where \(\theta\) is the angle at the center of region II. \(\theta_{j}\) monotonically decreases away from the center of region II. A large \(N\) models a smooth variation of the Weiss field in region II. ### Transmission and reflection coefficients The domain wall will lead to a scattering between Weyl Fermi surfaces. For simplicity, we assume a smooth domain wall and only take into account the intra-node scattering as shown in Fig. 16. We now sketch the computation of the transmission coefficient (TC) and reflection coefficient (RC) at the domain wall, defined earlier. For concreteness, we show the calculation for \(\mathcal{H}^{(+)}\). The step-by-step summary is given below * For a given Fermi energy \(E\) and the parallel momenta \((q_{y},q_{z})\), we compute the x-momenta for all the regions. * Compute the eigenfunctions * Wave function in each region is a superposition of a left mover and a right mover, except in Region III, where the wave function consists of only a right mover \[\Psi_{I} =\chi_{R}e^{\mathbf{i}\mathbf{q}_{R}\cdot\mathbf{x}}+r\chi_{L}e^{ \mathbf{i}\mathbf{q}_{L}\cdot\mathbf{x}}\] (17) \[\Psi_{II,j} =c_{1}^{(j)}\eta_{1}^{(j)}e^{\mathbf{i}\mathbf{p}_{1}^{(j)}\cdot \mathbf{x}}+c_{2}^{(j)}\eta_{2}^{(j)}e^{\mathbf{i}\mathbf{p}_{2}^{(j)}\cdot \mathbf{x}},\] (18) \[\Psi_{III} =t\xi_{R}e^{\mathbf{i}\mathbf{k}_{R}\cdot\mathbf{x}},\] (19) where \(t\) and \(r\) are the transmission and reflection amplitude respectively. * We then match the wave function at each boundary at \(x_{j}=jw\) for \(j=0,1,\cdots,N\) to determine \(r,t\) and \(c_{1,2}^{(j)}\). This can be formulated in transfer matrix form. This can be seen below. Figure 16: Illustration of the Fermi surfaces and the domain wall induced intra-node scattering (dashed arrow). For simplicity, we have not shown the displacements of the Fermi pockets relative to each other in the two domains or their difference in spin textures, but this is taken into account in our calculations as given below. Figure 15: Evolution of the magnetization across a domain wall. Region I and region III indicate bulk domains, and region-II is the domain wall region. Going across the domain wall, the magnetization vector twists, with the perpendicular domain wall magnetization \(M_{x}^{DW}>0\) for the depicted configuration. We will denote \(\mathbf{M}(x)=M(\cos\theta(x)\cos\gamma(x),\cos\theta(x)\sin\gamma(x),\sin \theta(x))\). In region-I, we choose \(\theta=0,\gamma=\pi/4\), while we set \(\theta=0,\gamma=-\pi/4\) in region III. In region II, we assume a twisting magnetization profile, with the maximum out of plane component determined by \(\theta_{\max}\) which is achieved at the center of region II. 
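Before matching wave functions across the wall, the slice-by-slice Weiss field of Eqs. (14)-(16) can be tabulated as in the short sketch below. The maximal tilt angle theta_max is a free input; the default magnitude and in-plane angle follow the values used in the text (M = 0.015 a.u., gamma = pi/4), and are otherwise placeholders.

```python
import numpy as np

def weiss_profile(M=0.015, gamma=np.pi/4, theta_max=0.3, N=40):
    """Weiss field in the N slices of the domain-wall region II, following
    Eqs. (14)-(16): the in-plane angle interpolates from +gamma to -gamma,
    while the out-of-plane tilt peaks at theta_max in the middle of the wall.
    Returns an (N, 3) array of Weiss-field vectors."""
    j = np.arange(1, N + 1)
    gamma_j = gamma - 2.0 * gamma * (j - 0.5) / N
    theta_j = (1.0 - 2.0 * np.abs(j - 0.5 - N / 2.0) / N) * theta_max
    return M * np.stack([np.cos(theta_j) * np.cos(gamma_j),
                         np.cos(theta_j) * np.sin(gamma_j),
                         np.sin(theta_j)], axis=1)

M_wall = weiss_profile()
# first slice ~ M_I, central slice has the maximal tilt, last slice ~ M_III
print(M_wall[0], M_wall[len(M_wall) // 2], M_wall[-1])
```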
\[\chi_{R}+r\chi_{L} =c_{1}^{(1)}\eta_{1}^{(1)}+c_{2}^{(1)}\eta_{2}^{(1)}, \tag{20}\] \[c_{1}^{(1)}\eta_{1}^{(1)}e^{\mathrm{i}p_{1x}^{(1)}w}+c_{2}^{(1)} \eta_{2}^{(1)}e^{\mathrm{i}p_{2x}^{(1)}w} =c_{1}^{(2)}\eta_{1}^{(2)}e^{\mathrm{i}p_{1x}^{(2)}w}+c_{2}^{(2)} \eta_{2}^{(2)}e^{\mathrm{i}p_{2x}^{(2)}w}\] (21) \[c_{1}^{(2)}\eta_{1}^{(2)}e^{\mathrm{i}p_{1x}^{(2)}2w}+c_{2}^{(2) }\eta_{2}^{(2)}e^{\mathrm{i}p_{2x}^{(2)}2w} =c_{1}^{(3)}\eta_{1}^{(3)}e^{\mathrm{i}p_{1x}^{(3)}2w}+c_{2}^{(3)} \eta_{2}^{(3)}e^{\mathrm{i}p_{2x}^{(3)}2w}\] (22) \[c_{1}^{(j)}\eta_{1}^{(j)}e^{\mathrm{i}p_{1x}^{(j)}2w}+c_{2}^{(j) }\eta_{2}^{(j)}e^{\mathrm{i}p_{2x}^{(j)}2w} =c_{1}^{(j+1)}\eta_{1}^{(j+1)}e^{\mathrm{i}p_{1x}^{(j+1)}jw}+c_{2} ^{(j+1)}\eta_{2}^{(j+1)}e^{\mathrm{i}p_{2x}^{(j+1)}jw}\] (23) \[c_{1}^{(N)}\eta_{1}^{(N)}e^{\mathrm{i}p_{1x}^{(N)}Nw}+c_{2}^{(N) }\eta_{2}^{(N)}e^{\mathrm{i}p_{2x}^{(N)}Nw} =t\xi_{R}e^{\mathrm{i}k_{R}Nw}. \tag{24}\] Rewriting in matrix form \[\left(\chi_{R}\ \ \chi_{L}\right)\begin{pmatrix}1\\ r\end{pmatrix} =\begin{pmatrix}\eta_{1}^{(1)}&\eta_{2}^{(1)}\end{pmatrix}\begin{pmatrix}c _{1}^{(1)}\\ c_{2}^{(2)}\end{pmatrix}, \tag{25}\] \[\left(\eta_{1}^{(j)}e^{\mathrm{i}p_{1x}^{(j)}jw}\ \ \eta_{2}^{(j)}e^{ \mathrm{i}p_{2x}^{(j)}jw}\right)\begin{pmatrix}c_{1}^{(j)}\\ c_{2}^{(j)}\end{pmatrix} =\begin{pmatrix}\eta_{1}^{(j+1)}e^{\mathrm{i}p_{1x}^{(j+1)}jw}& \eta_{2}^{(j+1)}e^{\mathrm{i}p_{2x}^{(j+1)}jw}\end{pmatrix}\begin{pmatrix}c _{1}^{(j+1)}\\ c_{2}^{(j+1)}\end{pmatrix}\] (26) \[\left(\eta_{1}^{(N)}e^{\mathrm{i}p_{1x}^{(N)}Nw}\ \ \eta_{2}^{(N)}e^{ \mathrm{i}p_{2x}^{(N)}Nw}\right)\begin{pmatrix}c_{1}^{(N)}\\ c_{2}^{(N)}\end{pmatrix} =t\xi_{R}e^{\mathrm{i}k_{R}Nw}. \tag{27}\] From above, we can solve for \(r\) and \(t\) by the transfer matrix \(T^{(j)}\): \[\begin{pmatrix}1\\ r\end{pmatrix} =te^{\mathrm{i}k_{Rx}Nw}\left(\chi_{R}\ \ \chi_{L}\right)^{-1}T^{(1)} \cdots T^{(N)}\xi_{R}\equiv t\begin{pmatrix}u_{1}\\ u_{2}\end{pmatrix} \tag{28}\] \[T^{(j)} =\left(\eta_{1}^{(j)}e^{\mathrm{i}p_{1x}^{(j)}(j-1)w}\ \ \eta_{2}^{(j)}e^{ \mathrm{i}p_{2x}^{(j)}(j-1)w}\right)\left(\eta_{1}^{(j)}e^{\mathrm{i}p_{1x}^{( j)}jw}\ \ \eta_{2}^{(j)}e^{\mathrm{i}p_{2x}^{(j)}jw}\right)^{-1}, \tag{29}\] where, in the definition of the transfer matrix, the two matrices differ from each other, apart from the inverse operation, by the phase factors: one involves \((j-1)w\), whereas the other involves \(jw\). We finally obtain \[t =1/u_{1} \tag{30}\] \[r =u_{2}/u_{1}. \tag{31}\] TC and RC are given by \[TC =\frac{|v_{x,\mathrm{trans}}|}{|v_{x,\mathrm{inc}}|}|t|^{2}, \tag{32}\] \[RC =\frac{|v_{x,\mathrm{refl}}|}{|v_{x,\mathrm{inc}}|}|r|^{2}. \tag{33}\] The longitudinal conductance \(g_{xx}\) and the transverse conductance \(g_{yx}\) are then computed using TC and RC [64] (see also Refs. [71] and [72].) ### Results: case \(|a|=0.5,\beta=\pi/4\) #### iv.2.1 Parameters The \(xy\)-plane cross section of the Fermi surface near a Weyl point is an ellipse whose major axis is rotated by \(\pi/4\). The results corresponds to the following parameters: * Fermi velocity \(v_{F}=500\) meV.A * Fermi wave vector \(k_{F}\sim 0.06\) A\({}^{-1}\) * Fermi energy \(E_{F}=-30\) meV * Weiss field \(|M|=0.015\) a.u., which corresponds to \(|E_{F}|/4\). * Domain wall width \(N\times w=40\) nm for \(N=40\). 
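The boundary-matching procedure of Eqs. (28)-(33) can be organized as in the following schematic sketch. It is not a full solver: the per-slice mode spinors and x-momenta (eta1, eta2, p1x, p2x) are assumed to be supplied by the caller, e.g. obtained region by region from the quadratic of Eq. (7) and the spinor formula of Eq. (8), and the x group velocities entering the coefficients are those of Eq. (9).

```python
import numpy as np

def transmission_amplitudes(chi_R, chi_L, xi_R, kR_x, modes, w):
    """Assemble the transfer-matrix product of Eqs. (28)-(29) and return the
    transmission/reflection amplitudes t, r of Eqs. (30)-(31).

    chi_R, chi_L : incident right-/left-mover spinors in region I (2-vectors)
    xi_R, kR_x   : transmitted right-mover spinor and its x-momentum in region III
    modes        : list of N tuples (eta1, eta2, p1x, p2x) with the two mode
                   spinors and x-momenta of each domain-wall slice (assumed input)
    w            : slice width
    """
    N = len(modes)
    T = np.eye(2, dtype=complex)
    for j, (eta1, eta2, p1x, p2x) in enumerate(modes, start=1):
        M_left = np.column_stack([eta1 * np.exp(1j * p1x * (j - 1) * w),
                                  eta2 * np.exp(1j * p2x * (j - 1) * w)])
        M_right = np.column_stack([eta1 * np.exp(1j * p1x * j * w),
                                   eta2 * np.exp(1j * p2x * j * w)])
        T = T @ M_left @ np.linalg.inv(M_right)      # T^(j) of Eq. (29)
    u = np.exp(1j * kR_x * N * w) * (np.linalg.inv(np.column_stack([chi_R, chi_L])) @ T @ xi_R)
    t = 1.0 / u[0]                                    # Eq. (30)
    r = u[1] / u[0]                                   # Eq. (31)
    return t, r

def transmission_reflection(t, r, vx_inc, vx_trans, vx_refl):
    """TC and RC of Eqs. (32)-(33), built from the x group velocities of the
    incident, transmitted and reflected movers (Eq. (9))."""
    TC = abs(vx_trans) / abs(vx_inc) * abs(t) ** 2
    RC = abs(vx_refl) / abs(vx_inc) * abs(r) ** 2
    return TC, RC
```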
#### iv.2.2 Results We show results for the anomalous Hall contribution obtained by antisymmetrizing the off-diagonal conductance, \(g_{yx}^{A}(\theta)=\frac{1}{2}(g_{yx}(\theta)-g_{yx}(-\theta))\), i.e., antisymmetrizing with respect to reversing only the \(M_{z}\) component of the Weiss field. Figure 17 shows \(g_{yx}^{A}\) for the 4 pairs of WPs: (i) the Fermi surfaces in each pair are related by either an \(\mathcal{M}_{x}\) or an \(\mathcal{M}_{y}\) mirror operation at zero Weiss field (see Fig. 14), and (ii) different pairs are related by a \(C_{4v}\) rotation at zero Weiss field (see Fig. 14). The main results are summarized below: 1. The AHE contributions from the two Fermi surfaces in each pair have opposite signs, yet they do not cancel each other, so the AHE remains nonvanishing. 2. The AHE from the four WPs related by \(C_{4z}\) rotations has the same sign (see Fig. 17). 3. Interchanging region I and region III leads to the same \(\theta\)-dependence of \(g_{yx}^{A}\) (see Fig. 18). This suggests that as long as the total \(z\)-component of the Weiss field does not vanish, the AHE contribution from the domain wall is nonzero. 4. The ratio \(g_{yx}^{A}/g_{xx}\) is of the order of \(10^{-3}\) at small angles, see Fig. 5(c).
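As a small illustration of the antisymmetrization used above, the following toy sketch applies it to a fabricated conductance curve standing in for the computed \(g_{yx}(\theta)\) of a given Weyl pocket.

```python
import numpy as np

def antisymmetrized_hall(theta, g_yx):
    """g_yx^A(theta) = (g_yx(theta) - g_yx(-theta)) / 2, i.e. the part of the
    off-diagonal conductance odd under reversing M_z. The theta grid is
    assumed increasing and symmetric about zero."""
    g_reversed = np.interp(-theta, theta, g_yx)
    return 0.5 * (g_yx - g_reversed)

theta = np.linspace(-0.4, 0.4, 81)
g_yx = 0.02 * np.cos(theta) + 1e-3 * np.sin(theta)   # toy even + small odd part
g_A = antisymmetrized_hall(theta, g_yx)               # isolates the odd component
print(np.allclose(g_A, 1e-3 * np.sin(theta)))         # True for this toy curve
```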
2309.08263
Improving Voice Conversion for Dissimilar Speakers Using Perceptual Losses
The rising trend of using voice as a means of interacting with smart devices has sparked worries over the protection of users' privacy and data security. These concerns have become more pressing, especially after the European Union's adoption of the General Data Protection Regulation (GDPR). The information contained in an utterance encompasses critical personal details about the speaker, such as their age, gender, socio-cultural origins and more. If there is a security breach and the data is compromised, attackers may utilise the speech data to circumvent the speaker verification systems or imitate authorised users. Therefore, it is pertinent to anonymise the speech data before being shared across devices, such that the source speaker of the utterance cannot be traced. Voice conversion (VC) can be used to achieve speech anonymisation, which involves altering the speaker's characteristics while preserving the linguistic content.
Suhita Ghosh, Yamini Sinha, Ingo Siegert, Sebastian Stober
2023-09-15T09:18:38Z
http://arxiv.org/abs/2309.08263v2
# Improving Voice Conversion for Dissimilar Speakers Using Perceptual Losses ###### Abstract The rising trend of using voice as a means of interacting with smart devices has sparked worries over the protection of users' privacy and data security [1]. These concerns have become more pressing, especially after the European Union's adoption of the General Data Protection Regulation (GDPR). The information contained in an utterance encompasses critical personal details about the speaker, such as their age, gender, socio-cultural origins and more. If there is a security breach and the data is compromised, attackers may utilise the speech data to circumvent the speaker verification systems or imitate authorised users [2]. Therefore, it is pertinent to anonymise the speech data before being shared across devices, such that the source speaker of the utterance cannot be traced. Voice conversion (VC) can be used to achieve speech anonymisation, which involves altering the speaker's characteristics while preserving the linguistic content. Many voice conversion approaches have been proposed over the years, where the deep learning-based methods outperform the conventional ones [3]. Further, the generative adversarial network (GAN) based approaches produce natural-sounding conversions [3]. However, the quality is dependant on the selection of the target speaker. This is because GAN-based VC methods typically use non-parallel data, which prevents the computation of loss between the source utterance and the conversion conditioned on a speaker other than the source. The quality of conversion degrades when the acoustic properties between the source and target speakers are diverse. However, to achieve a successful anonymisation, the source and target speakers should not have very similar acoustic properties, such as pitch. In this work, we propose perceptual losses which are computed between the source and converted utterances. The losses facilitate the model to capture representations which are pertinent with respect to how humans perceive speech quality. The models trained with the proposed losses produce less robotic voices compared to the baseline, and improves the overall quality for all target speakers. ## Related Work In earlier VC approaches, parallel data was utilised, where utterances of the same linguistic content are available from both the source and target speakers. Many traditional statistical modelling-based parametric [4, 5] and non-parametric [6, 7] parallel VC methods were proposed. Compared to the conventional methods, the sequence-to-sequence deep neural network (DNN) based method [8] using parallel data produces less robotic voices. However, it does not preserve prosody and produces mispronunciations [9]. Further, the model learns one-to-one mapping, limiting its usage. The recent works focus more on non-parallel data [3], as it is much easier and cheaper to collect. A few VC methods [10] use phonetic posteriorgrams (PPGs) as input to the encoder-decoder framework, which produces the translated acoustic features, consequently used by a vocoder to generate the converted speech. The PPG-based conversions are generally not smooth resulting in degraded voice quality and naturalness [3]. Many non-parallel variational auto encoder (VAE) [3, 11] methods were proposed, which typically disentangle the content and speaker embeddings using a reconstruction loss. The VAE-based approaches are prone to spectrum smoothing, which leads to a buzzy-sounding voice [3]. 
A plethora of GAN-based VC approaches [12, 13, 14] have been proposed to overcome this over-smoothing effect. GAN-based VC approaches use cycle-consistency loss [15], which enable them to use non-parallel data. ## Method Our architecture is based on the GAN-based method StarGANv2-VC [14]. We describe the architecture and then the perceptual losses used in the work. ### StarGANv2-VC StarGANv2-VC [14] is a non-parallel many-to-many voice conversion GAN-based approach. The architecture is shown in Figure 1. In StarGANv2-VC, only one generator is required for conversion among many pairs. We describe the pertinent components of the framework, which Figure 1: StarGAN based Voice Conversion architecture are portrayed in Figure 1: * **Generator**: The generator G produces the converted mel-spectrogram \(\overline{X}\) using three inputs: log mel-spectrogram generated from the source utterance \(X_{s}\), fundamental frequency (F0) embeddings \(h_{f}\) from the source utterance and target speaker's style-code \(h_{s}\). The F0 embeddings are the convolutional outputs from a pre-trained joint detection and classification (JDC) network [16], which has convolutional layers followed by BLSTM units. The converted mel-spectrogram \(\overline{X}\) bears the style/timbre of the target speaker and the linguistic content of the source. * **Speaker Style Encoder**: The speaker style encoder S captures representations of the speaker's style. The style may represent accent, mannerism and other attributes which can be associated with the speaker independent of the content spoken. Provided a mel-spectrogram \(X_{r}\), which is generated from a reference utterance different from the source mel-spectrogram \(X_{s}\) and a speaker-code \(r\), the S generates the speaker style embeddings \(h_{s}\). The speaker-code is a one-hot encoding of the speaker labels. The embedding \(h_{s}\) acts as one of the inputs to the generator G, which contributes to the style of the converted mel-spectrogram \(\overline{X}\). S initially processes the input mel-spectrogram through multiple convolutional layers which are shared for all speakers, followed by a speaker-specific linear layer which maps the shared features into a speaker-specific style embedding. * **Discriminator and Speaker Classifier:** The architecture has a discriminator D, as present in any GAN model, which performs the quality check of the conversions by capturing the representations for the real and fake samples. The additional adversarial speaker classifier (C) has the same architecture as D. When the D is trained, keeping the weights of G fixed, the C classifies the source speaker, which encourages G to produce conversions having no trace of source speaker's attributes. When G is trained, the weights of D are fixed, the C classifies the target speaker, which facilitates providing feedback to G, such that it produces conversions sounding like the target speaker. ### Perceptual Losses Task specific perceptual losses facilitate models to capture pertinent representations, required to achieve the goal [17]. In our case, to improve the overall quality of voice conversions for all target speakers. * **Short Time Objective Intelligibility (STOI):** STOI [18] is an intrusive metric that compares the degraded signal with the high quality ground truth to measure the intelligibility of the noisy signal. The STOI score ranges from 0 to 1, with higher values indicating better intelligibility. 
In our case, \(X\) and \(\bar{X}\) act as the ground truth and noisy signals respectively. To calculate STOI, firstly speech signals are divided into short frames where each frame overlaps with the adjacent frames to capture the temporal context. For each frame, the short-time power spectrum is calculated using a Fourier transform. The modulation spectrum of both the signals are calculated by applying a perceptual auditory filter-bank (one-third octave band) to the short-time power spectra. The correlation coefficient between the modulation spectra of the original and degraded speech signals are calculated, which provides a similarity measure between two spectra. The STOI score at time frame \(m\) is calculated by taking average over all one-third octave bands as shown in Equation 1, where \(j\) is the index of the one-third octave band, \(x_{j,m}\) and \(\bar{x}_{j,m}\) denote the vectors representing the short-term temporal envelopes for time frame m and one-third octave band \(j\) of the clean and noisy signals respectively. \(\mu(\cdot)\) denotes sample mean and \(J\) is the total number of the one-third octave bands. \[f_{stoi}(X_{m},\bar{X}_{m})=\sum_{j=1}^{J}\frac{(x_{j,m}-\mu_{x_{j,m}})(\bar{ x}_{j,m}-\mu_{\bar{x}_{j,m}})}{\|x_{j,m}-\mu_{x_{j,m}}\|\bar{x}_{j,m}-\mu_{\bar{x}_{ j,m}}\|_{2}}\] (1) The loss is calculated as shown in Equation 4, as done in [19], where mean squared error (Equation 3) is also considered along with STOI score (Equation 2), as STOI calculates the discrepancy only for frequencies below 4.3 KHz. \(\lambda_{stoi}\) and \(\lambda_{mse}\) are hyper-parameters which weigh the contribution of \(L_{stoi_{s}}\) and \(L_{stoi_{m}}\) respectively. \[L_{stoi_{s}}=(1-f_{stoi}(X_{m},\bar{X}_{m}))\] (2) \[L_{stoi_{m}}=(\|X_{m}^{J}-\bar{X}_{m}^{J}\|_{1}/J)\] (3) \[L_{stoi}=\frac{1}{m}(\lambda_{stoi}*L_{stoi_{s}}+\lambda_{mse}*L_{stoi_{m}})\] (4) * **Predicted Mean Opinion Score (pMOS):** MOS is a subjective measure which is used to assess the naturalness of the converted voice in voice conversion [14]. The measure correlates well with human perception of audio quality and naturalness. However, it is arduous and expensive as many participants' involvement is needed. Therefore, a measure similar to MOS is desirable, which captures the intrinsic naturalness of the conversions. MOSNet proposed in [20] can be used as a proxy MOS score generator. MOSNet is a combination of a convolutional neural network (CNN) and bidirectional long short-term memory (BLSTM) architecture. The CNN layers extract the representations required to assess the quality of the frames. BLSTM can effectively incorporate prolonged temporal dependencies and sequential traits into representative features. At the end two fully-connected layers are used, which regresses the frame-wise features into a frame-level quality score, which is followed by a global averaging operation to obtain the utterance-level score. The loss is calculated as shown in Equation 5, where MOS(.) denotes MOSNet and \(\lambda_{mos}\) is a hyperparameter. The loss encourages G to produce conversions having naturalness similar to the original utterance. \[L_{mos}=\lambda_{mos}*\|MOS(X)-MOS(\bar{X})\|_{1}\] (5) * **Pitch correlation coefficient (PCC):** Pitch is the perceptual measure of F0. The pitch contour contributes to the intonation or prosody of an utterance [21]. PCC is the Pearson correlation between two normalised F0 contours, which provides the similarity between two utterances with respect to prosody [21]. 
The F0 contours for two utterances having same content and intonation will vary for two groups (age, gender, etc). However, there should not be a large difference between the normalised F0 contours, i.e. the change in F0 over time should not vary much. Therefore, a higher PCC is desirable. PCC Loss is represented in Equation 6, where Pearson(.) is the Pearson correlation operator. \[L_{pcc}=1-Pearson\Bigg{(}\frac{F(X)}{\|F(X)\|_{1}},\frac{\bar{F}(X)}{\|\bar{F}(X )\|_{1}}\Bigg{)}\] (6) ### Objective Function The generator G is trained with loss \(L_{G}\), where \(L_{adv}\) is the typical GAN adversarial loss, \(L_{aspk}\) is the adversarial speaker classification loss, \(L_{sty}\) is the style reconstruction loss and \(L_{cyc}\) is the cyclic consistency loss, as proposed in [14]. \(L_{p}\) denotes one of the proposed perceptual losses. \[L_{G}=\underset{G,S}{min}L_{adv}+\lambda_{aspk}L_{aspk}+\lambda_{sty}L_{sty}+ \lambda_{cyc}L_{cyc}+\lambda_{p}L_{p} \tag{7}\] The discriminator D and classifier C are trained using the objective function shown in Equation 8, where \(L_{spk}\) is the speaker classification loss [14]. \[L_{D}=\underset{D,C}{min}-L_{adv}+\lambda_{spk}L_{spk}+\lambda_{p}L_{p} \tag{8}\] The \(\lambda\) for the corresponding loss denotes the hyperparameter which weighs the loss's contribution. ### Experiment Details We train all the models using English utterances of the 20 speakers from VCTK [22] dataset, as done in [14]. The utterances are resampled to 24 kHz and randomly split as 80%/10%/10% (train/val/test). The models are trained for 150 epochs and with batch size of 64. The log-melspectrograms are derived from 2 second long utterances. AdamW optimizer is used with initial learning rate of 0.0001. The hyperparameters are set as: \(\lambda_{spk}=0.1,\lambda_{aspk}=0.5,\lambda_{sty}=1,\lambda_{cyc}=1,\lambda_ {stoi}=1,\lambda_{mse}=0.1\). STOI is computed using hyperparameters same as in [18]. The naturalness of the conversions is evaluated using pMOS. The intelligibility of the conversions is measured using character error rate (CER), using the transcriptions from Whisper [23] medium-English model. We use automatic speaker verification (ASV) to measure speaker similarity as done in [14]. We trained an AlexNet as done in [14] for speaker classification for the selected twenty speakers. The classification accuracy (Speaker CLS) serves as the objective metric to assess how close the conversions sound to the target speaker. ### Results and Discussion We randomly selected 5 male and 5 female speakers as the target speakers. For each source speaker, randomly 50 utterances are selected, which leads to 1000 conversions. The model trained using \(L_{pcc}\) produces the best results with respect to naturalness and intelligibility, followed by the model trained using pMOS loss. It is also observed the standard deviation for the baseline is much higher than the ones trained using target perceptual losses. Therefore, the proposed losses produce better quality conversions overall, and not just for specific target speakers. With respect to speaker similarity, all the models perform similarly, where PCC outperforms. model to disentangle the content and speaker representations, which leads to improved conversions not dependant on the target speaker selection. In this work, we focus only on the naturalness and intelligibility aspect of voice quality. As future work, we will perform listening tests to validate the results obtained through objective measures. 
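For concreteness, the following is a minimal sketch of the pitch-correlation loss of Equation 6, the best-performing objective above. It assumes the two per-frame F0 contours are already time-aligned and of equal length; the epsilon guard and tensor shapes are our assumptions, not details fixed by the model.

```python
import torch

def pcc_loss(f0_source, f0_converted, eps=1e-8):
    """L_pcc = 1 - Pearson(F(X)/||F(X)||_1, F(X_conv)/||F(X_conv)||_1), Eq. (6).
    Inputs are 1-D tensors of per-frame F0 values, e.g. taken from a pretrained
    F0 extractor such as the JDC network, assumed time-aligned."""
    x = f0_source / (f0_source.abs().sum() + eps)
    y = f0_converted / (f0_converted.abs().sum() + eps)
    xc, yc = x - x.mean(), y - y.mean()
    pearson = (xc * yc).sum() / (xc.norm() * yc.norm() + eps)
    return 1.0 - pearson

# toy check: a rescaled copy of the contour is perfectly correlated -> loss ~ 0
f0 = torch.tensor([110.0, 112.0, 115.0, 113.0, 118.0])
print(pcc_loss(f0, f0 * 1.1))
```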
Further, we intend to incorporate perceptual losses which capture the emotional content as well. This would be useful for intelligent speech devices whose responses are driven by the emotion of the end-user. ## Acknowledgements This research has been partly funded by the Federal Ministry of Education and Research of Germany in the project Emonymous (project number S21060A) and partly by the Volkswagen Foundation in the project AnonymPrevent (AI-based Improvement of Anonymity for Remote Assessment, Treatment and Prevention against Child Sexual Abuse).
2302.14538
Neutrinoless double beta decay in Left-Right symmetric model with double seesaw mechanism
We discuss a left-right (L-R) symmetric model with the double seesaw mechanism at the TeV scale generating Majorana masses for the active left-handed (LH) flavour neutrinos $\nu_{\alpha L}$ and the heavy right-handed (RH) neutrinos $N_{\beta R}$, $\alpha,\beta = e,\mu,\tau$, which in turn mediate lepton number violating processes, including neutrinoless double beta decay. The Higgs sector is composed of two Higgs doublets $H_L$, $H_R$, and a bi-doublet $\Phi$. The fermion sector has the usual for the L-R symmetric models quarks and leptons, along with three $SU(2)$ singlet fermion $S_{\gamma L}$. The choice of bare Majorana mass term for these sterile fermions induces large Majorana masses for the heavy RH neutrinos leading to two sets of heavy Majorana particles $N_j$ and $S_k$, $j,k=1,2,3$, with masses $m_{N_j} \ll m_{S_k}$. Working with a specific version of the model in which the $\nu_{\alpha L} - N_{\beta R}$ and the $N_{\beta R} - S_{\gamma L}$ Dirac mass terms are diagonal, and assuming that $m_{N_j} \sim (1 - 1000)$ GeV and ${\rm max}(m_{S_k}) \sim (1 - 10)$ TeV, $m_{N_j} \ll m_{S_k}$, we study in detail the new ``non-standard'' contributions to the $0\nu\beta\beta$ decay amplitude and half-life arising due to the exchange of virtual $N_j$ and $S_k$. We find that in both cases of NO and IO light neutrino mass spectra, these contributions are strongly enhanced and are dominant at relatively small values of the lightest neutrino mass $m_{1(3)} \sim (10^{-4} - 10^{-2})$ eV over the light Majorana neutrino exchange contribution. In large part of the parameter space, the predictions of the model for the $0\nu\beta\beta$ decay generalised effective Majorana mass and half-life are within the sensitivity range of the planned next generation of neutrinoless double beta decay experiments LEGEND-200 (LEGEND-1000), nEXO, KamlAND-Zen-II, CUPID, NEXT-HD.
Sudhanwa Patra, S. T. Petcov, Prativa Pritimita, Purushottam Sahu
2023-02-28T12:52:31Z
http://arxiv.org/abs/2302.14538v2
# Neutrinoless double beta decay in Left-Right symmetric model with double seesaw ###### Abstract We discuss a left-right (L-R) symmetric model with the double seesaw mechanism at the TeV scale generating Majorana masses for the active left-handed (LH) flavour neutrinos \(\nu_{\alpha L}\) and the heavy right-handed (RH) neutrinos \(N_{\beta R}\), \(\alpha,\beta=e,\mu,\tau\), which in turn mediate lepton number violating processes, including neutrinoless double beta decay. The Higgs sector is composed of two Higgs doublets \(H_{L}\), \(H_{R}\) and a bi-doublet \(\Phi\). The fermion sector has the usual for the L-R symmetric models quarks and leptons, along with three \(SU(2)\) singlet fermion \(S_{\gamma L}\). The choice of bare Majorana mass term for these sterile fermions induces large Majorana masses for the heavy RH neutrinos leading to two sets of heavy Majorana particles \(N_{j}\) and \(S_{k}\), \(j,k=1,2,3\), with masses \(m_{N_{j}}\ll m_{S_{k}}\). Working with a specific version of the model in which the \(\nu_{\alpha L}-N_{\beta R}\) and the \(N_{\beta R}-S_{\gamma L}\) Dirac mass terms are diagonal, and assuming that \(m_{N_{j}}\sim(1-1000)\) GeV and \(\max(m_{S_{k}})\sim(1-10)\) TeV, \(m_{N_{j}}\ll m_{S_{k}}\), we study in detail the new "non-standard" contributions to the \(0\nu\beta\beta\) decay amplitude and half-life arising due to the exchange of virtual \(N_{j}\) and \(S_{k}\). We find that in both cases of NO and IO light neutrino mass spectra, these contributions are strongly enhanced and are dominant at relatively small values of the lightest neutrino mass \(m_{1(3)}\sim(10^{-4}-10^{-2})\) eV over the light Majorana neutrino exchange contribution. In large part of the parameter space, the predictions of the model for the \(0\nu\beta\beta\) decay generalised effective Majorana mass and half-life are within the sensitivity range of the planned next generation of neutrinoless double beta decay experiments LEGEND-200 (LEGEND-1000), nEXO, KamlAND-Zen-II, CUPID, NEXT-HD. Keywords:Left-Right Theories, Seesaw Mechanism, Lepton Number Violation, Neutrinoless Double beta Decay ## 1 Introduction Neutrino mass and mixing, which was confirmed by oscillation experiments [1; 2; 3; 4; 5] can not be understood within the Standard Model (SM) of particle physics since it predicts massless neutrinos. So, there has to be a mechanism beyond the SM which generates nonzero mass for these tiny particles. The seesaw mechanism has become quite famous for explaining the same by extending the SM in the minimal possible way. Some of the variants of this mechanism are type-I [6; 7; 8; 9], type-II[10; 11; 12; 13; 14] and type-III [15; 16] seesaw which can be achieved by adding a right-handed neutrino, a scalar triplet and a fermion triplet to the SM respectively. However, a heavy right-handed scale associated with these seesaw mechanisms renders them unverifiable at the collider experiments. Thus arises the necessity of bringing down the seesaw scale to a verifiable TeV range. The seesaw mechanisms assume neutrinos are Majorana particles, which can be probed via the lepton number violating process of neutrinoless double beta decay [17]. Such a rare transition occurs when two neutrons simultaneously decay into two protons and two electrons without any neutrinos. It can be induced either by light left-handed neutrinos, called the standard mechanism or by exotic particles like heavy right-handed neutrinos or sterile neutrinos, called new physics contribution. 
In the standard mechanism case, the experimental limits on the half-life of the decay can only be saturated by quasi-degenerate [18] light neutrinos, which are disfavored by cosmological data sets [19; 20; 21; 22]. On the other hand, identifying the correct neutrino mass hierarchy, considering the sum of light neutrino masses, would require a multi-ton scale detector that is beyond feasible in the near future. Any comparative future experimental observation of \(0\nu\beta\beta\) decay would only be attributed to new physics contribution. The current lower limit on the decay half life of Ge\({}^{76}\) is \(T_{1/2}^{0\nu}>1.8\times 10^{26}\) yrs at 90% C.L. from GERDA [23]. Experiments using the isotope Xe\({}^{136}\) like EXO-200 [24] and KamLAND-Zen [25; 26] have derived the lower bounds on half-life as \(T_{1/2}^{0\nu}>3.5\times 10^{25}\) yrs and \(T_{1/2}^{0\nu}>1.07\times 10^{26}\) yrs respectively. With this motivation, we consider a Left-Right symmetric model with a double seesaw mechanism [27; 28] as new physics and study the new contributions to \(0\nu\beta\beta\) decay process. Left-Right Symmetric Model (LRSM) [29; 30; 31; 32] is a well-suited candidate for physics beyond SM for several reasons. To name a few, it can explain the theoretical origin of maximal parity violation in weak interaction, it can incorporate neutrino mass due to the presence of a right-handed neutrino state, it appears as a subgroup of SO(10) Grand Unified Theory, and it can be broken down to SM gauge symmetry at low energies. Moreover, it delivers rich phenomenology if the left-right symmetry breaking occurs at few TeV scale [33; 34; 35; 36; 37; 38; 39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50; 51; 52; 53; 54; 55; 56; 57; 58; 59; 60; 61; 62; 63; 64]. The spontaneous symmetry breaking of LRSM to SM plays a vital role in generating neutrino mass through the seesaw mechanism. The seesaw scheme varies with the choice of scalars considered in the left-right model and regulates the associated phenomenology. In general, symmetry breaking can be done with the help of Higgs doublets or Higgs triplets or with the combination of both doublets and triplets. In the case of Higgs doublets, neutrinos don't get Majorana mass, and thus the model forbids any signatures of lepton number violation or lepton flavour violation. In the case of Higgs triplets, neutrino mass is generated via the type-I plus type-II seesaw mechanism. Even though Majorana mass is generated for light and heavy neutrinos in this case, the seesaw can't be probed by experiments considering the high scale associated with it. The seesaw scale can be brought down to the TeV range in the case of a linear seesaw and inverse seesaw, some of which are discussed in ref.[65; 66; 67; 68; 69; 70; 71; 72; 73]. However in case of a linear seesaw and inverse seesaw the light neutrinos are Majorana. In contrast, the heavy neutrinos are pseudo-Dirac, due to which the heavy neutrinos do not play a dominant role in lepton number violation. To study dominant new contributions to LNV and LFV decays, the Higgs and fermion sectors of LRSM have been extended in various refs [74; 75; 76; 77; 78]. We explore here a double seesaw mechanism [27; 28] within a left-right symmetric model without Higgs triplets allowing significant lepton number violation and new physics contribution to neutrinoless double beta decay. We keep the scalar sector of the model minimal while adding only one sterile neutrino per generation in the fermion sector. 
Even though the Higgs and fermion sectors are the same as in the case of a linear and inverse seesaw, the choice of bare Majorana masses for sterile neutrinos can induce large Majorana masses for heavy RH neutrino as well. The non-zero masses for RH neutrinos are generated through the double seesaw mechanism by implementing seesaw approximation twice. In the first step, the Majorana mass matrix and masses of the RH neutrinos are generated via the type-I seesaw mechanism. In this case, light neutrino mass becomes linearly dependent on a heavy sterile neutrino mass scale. This is how the double seesaw mainly differs from the canonical seesaw mechanism, where the light neutrino masses are inversely proportional to heavy RH neutrino masses. Another essential feature of our model is that we express mass relations between light and heavy Majorana neutrinos in terms of oscillation parameters and the lightest neutrino mass. Thus, it enables us to derive meaningful information about the absolute scale of the lightest neutrino mass and mass hierarchy from the new contributions to the neutrinoless double beta decay process by saturating the current experimental limits. The plan of the paper can be summarized as follows. In Section 2, we give a brief description of the left-right symmetric model with the double seesaw mechanism. In Section 3 we explain the implementation of the double seesaw mechanism and the origin of Majorana masses for light and heavy right-handed (RH) and sterile neutrinos. The generation of the masses of gauge bosons associated with the \(SU(2)_{\rm R}\) gauge group as well the constraints on their masses and on their mixing with the Standard Model gauge bosons are also considered in this Section. The general expression for the neutrinoless double beta decay half-life, including the new physics (i.e., the "non-standard") contributions is given in Section 4, in which we also, present and discuss briefly the nuclear matrix elements of the process and their current uncertainties. Detailed phenomenological analysis of the non-standard contributions together with numerical estimates of their magnitude are presented in Section 5. We also give predictions of the considered model for the neutrinoless double beta decay "generalised" effective Majorana mass and half-life accounting, in particular, for the uncertainties of the relevant nuclear matrix elements. Section 6 contains brief comments on the potential lepton flavour violation and collider phenomenology of the considered model. Section 7 contains a summary of the results obtained in the present study. In Appendix 8, we give a detailed derivation of the the masses and mixing of the light and heavy Majorana neutrinos in the considered left-right symmetric model with a double seesaw mechanism of neutrino mass generation. ## 2 Left-Right Symmetric Model with Double Seesaw Left-Right symmetric models were proposed with the motivation of restoring parity (or left-right) symmetry at a high scale [29; 30; 31; 32]. Therefore, in the model the left- and right-handed fermion fields are assigned to \(SU(2)_{L}\) and \(SU(2)_{R}\) doublets, respectively, which are related by a discrete symmetry. The complete gauge group, which is an extension of the SM gauge group can be written as: \[\mathcal{G}_{LR}\equiv SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}\,, \tag{1}\] where \(SU(3)_{C}\) is omitted for simplicity. The electric charge for any particle in this model is defined as \[Q=T_{3L}+T_{3R}+\frac{B-L}{2}\,. 
\tag{2}\] where \(T_{3L}\) (\(T_{3R}\)) is the third component of the isospin associated with the \(SU(2)_{L}\) (\(SU(2)_{R}\)) gauge group. The model's fermion sector comprises all the Standard Model fermions plus a right-handed neutrino \(N_{R}\). The fermion fields with their respective quantum numbers can be written as follows: \[q_{L}=\begin{pmatrix}u_{L}\\ d_{L}\end{pmatrix}\equiv[2,1,1/3]\,,\;q_{R}=\begin{pmatrix}u_{R}\\ d_{R}\end{pmatrix}\equiv[1,2,1/3]\,,\] \[\ell_{L}=\begin{pmatrix}\nu_{L}\\ e_{L}\end{pmatrix}\equiv[2,1,-1]\,,\;\;\;\;\;\ell_{R}=\begin{pmatrix}N_{R}\\ e_{R}\end{pmatrix}\equiv[1,2,-1]\,.\] The scalar sector is responsible for the spontaneous symmetry breaking of LRSM to SM and plays a crucial role in deciding the type of seesaw mechanism through which neutrino masses can be generated. The left-right symmetry breaking can be done either with the help of doublets \(H_{L}\), \(H_{R}\) or triplets \(\Delta_{L}\), \(\Delta_{R}\), or with the combination of both doublets and triplets. The next step of symmetry breaking, i.e., the breaking of SM symmetry to \(U(1)_{em}\), is done with the help of the doublet \(\phi\) contained in the bidoublet \(\Phi\). The doublets \(H_{L}\), \(H_{R}\) and the bidoublet \(\Phi\) have the form, \[H_{L}=\begin{pmatrix}h_{L}^{+}\\ h_{L}^{0}\end{pmatrix}\equiv[2,1,1]\,,\] \[H_{R}=\begin{pmatrix}h_{R}^{+}\\ h_{R}^{0}\end{pmatrix}\equiv[1,2,1]\,,\] \[\Phi=\begin{pmatrix}\phi_{1}^{0}&\phi_{2}^{+}\\ \phi_{1}^{-}&\phi_{2}^{0}\end{pmatrix}\equiv[2,2,0]\,,\] The symmetry breaking steps can be sketched as follows: \[\begin{array}{c}\underline{\text{Spontaneous symmetry breaking of LRSM:}}\\ \begin{array}{c}\underline{\text{{\it SU(2)}}_{L}}\ \ \times\ \underline{\text{{\it SU(2)}}_{R}}\ \times\ \underline{\text{{\it U(1)}}_{B-L}}\\ \{T_{L},T_{3L}\}\ \ the generation of Majorana neutrino masses. However, we note that when \(v_{2}\ll v_{1}\) and \(|Y_{3}|\ll|Y_{4}|\) one can have small Dirac neutrino masses. In this scenario, the charged lepton and neutrino masses can be written as: \[M_{e} \approx Y_{4}v_{1}\,, \tag{8}\] \[M_{D}^{\nu} = v_{1}\left(Y_{3}+M_{e}\frac{v_{2}}{|v_{1}|^{2}}\right)\,. \tag{9}\] The gauge couplings of \(SU(2)_{L},SU(2)_{R}\), and \(U(1)_{B-L}\) are denoted as \(g_{L},g_{R}\), and \(g_{BL}\) respectively. When the gauge couplings of \(SU(2)_{L}\) and \(SU(2)_{R}\) gauge group become equal, i.e. \(g_{L}=g_{R}\), there exist two symmetry transformations between the left and right. This additional discrete left-right symmetry corresponds to either generalized parity \(\mathcal{P}\) or generalized charge conjugation \(\mathcal{C}\)[32, 79]. 
Under the parity symmetry operation, the fields change as follows ; \[\ell_{L}\stackrel{{\mathcal{P}}}{{\longleftrightarrow}}\ell_ {R},\qquad q_{L}\stackrel{{\mathcal{P}}}{{\longleftrightarrow}}q_ {R},\quad, \tag{10}\] \[\Phi\stackrel{{\mathcal{P}}}{{\longleftrightarrow}} \Phi^{\dagger},\quad H_{L}\stackrel{{\mathcal{P}}}{{ \longleftrightarrow}}H_{R},\quad\widetilde{\Phi}\stackrel{{ \mathcal{P}}}{{\longleftrightarrow}}\widetilde{\Phi}^{\dagger}\] whereas charge conjugation operation transforms the fields as \[\ell_{L}\stackrel{{\mathcal{C}}}{{\longleftrightarrow}}\ell_ {R}^{c},\qquad q_{L}\stackrel{{\mathcal{C}}}{{\longleftrightarrow}}q_ {R}^{c}, \tag{11}\] \[\Phi\stackrel{{\mathcal{C}}}{{\longleftrightarrow}} \Phi^{T},\quad H_{L}\stackrel{{\mathcal{C}}}{{\longleftrightarrow}}H_ {R}^{*},\quad\widetilde{\Phi}\stackrel{{\mathcal{C}}}{{ \longleftrightarrow}}\widetilde{\Phi}^{T}\] All the Left-Right symmetric models either have a \(\mathcal{P}\) or \(\mathcal{C}\) symmetry. It should be noted that the combination of the two symmetries, \(\mathcal{CP}\), does not switch the left and right-handed fields, and is not, therefore, a left-right symmetry. The Lagrangian in Eq. (4) becomes invariant by imposing left right symmetry with discrete \(\mathcal{P}\) symmetry and it leads to hermitian Yukawa matrices as follows \[Y_{1}=Y_{1}^{\dagger},\quad Y_{2}=Y_{2}^{\dagger},\quad Y_{3}=Y_{3}^{\dagger },\quad Y_{4}=Y_{4}^{\dagger} \tag{12}\] Therefore quark, charged lepton and Dirac mass matrices presented in Eq. (6), and (7) are hermitian matrices. On the other hand, if discrete \(\mathcal{C}\) symmetry is imposed on the Lagrangian in Eq. (4), it leads to symmetric Yukawa matrices, \[Y_{1}=Y_{1}^{T},\quad Y_{2}=Y_{2}^{T},\quad Y_{3}=Y_{3}^{T},\quad Y_{4}=Y_{4}^ {T} \tag{13}\] and the corresponding mass matrices of Eq. (6), and (7) become symmetric matrices. However, in our discussion, we consider a left-right model with discrete \(\mathcal{P}\) symmetry. ## 3 Neutrino Masses and Mixing In order to implement the double (or cascade) seesaw mechanism [27, 28] of neutrino mass generation within the manifest left-right symmetric model, we extend the fermion sector with the addition of one sterile neutrino \(S_{L}\equiv[1,1,0]\) (\(S_{L}\stackrel{{\mathcal{P}}}{{\longleftrightarrow}}(S^{c})_{R}\)) per generation. The relevant interaction Lagrangian \(\mathcal{L}_{LRDSM}\) is given by: \[-\mathcal{L}_{LRDSM} = \mathcal{L}_{M_{D}}+\mathcal{L}_{M_{RS}}+\mathcal{L}_{M_{S}}\,, \tag{14}\] where the individual terms can be expanded as follows. 
* \({\cal L}_{M_{D}}\) is the Dirac mass term connecting left-handed and right-handed neutrino fields \(\nu_{L}-N_{R}\): \[{\cal L}_{M_{D}} = \sum_{\alpha,\beta}\overline{\nu_{\alpha L}}[M_{D}]_{\alpha\beta}N _{\beta R}+\mbox{ h.c.}\] (10) \[\subset\sum_{\alpha,\beta}\overline{\ell_{\alpha L}}\left((Y_{ \ell})_{\alpha\beta}\Phi+(\widetilde{Y}_{\ell})_{\alpha\beta}\hat{\Phi}\right) \ell_{\beta R}+\mbox{ h.c.}\] * \({\cal L}_{M_{RS}}\) is another Dirac mass term connecting \(N_{R}\) and \(S_{L}\) and in the considered left-right symmetric theory it has the form: \[{\cal L}_{M_{RS}} = \sum_{\alpha,\beta}\overline{S_{\alpha L}}[M_{RS}]_{\alpha\beta}N _{\beta R}+\mbox{ h.c.}\] (11) \[\subset\sum_{\alpha,\beta}\overline{S_{\alpha L}}(Y_{RS})_{ \alpha\beta}\widetilde{H_{R}}^{\dagger}\ell_{\beta R}+\mbox{ h.c.}\] * The bare Majorana mass term \({\cal L}_{M_{S}}\) for sterile neutrinos \(S_{L}\) is given by: \[{\cal L}_{M_{S}} = \frac{1}{2}\sum_{\alpha,\beta}\overline{S^{c}_{\alpha R}}[M_{S}] _{\alpha\beta}S_{\beta L}+\mbox{ h.c.}\] (12) \[\subset\sum_{\alpha,\beta}\frac{1}{2}(M_{S})_{\alpha\beta} \overline{S^{c}_{\alpha R}}S_{\beta L}+\mbox{ h.c.}\,,\] where \(S^{c}_{\alpha R}\equiv C(\overline{S_{\alpha L}})^{T}\), \(C\) being the charge conjugation matrix (\(C^{-1}\gamma_{\mu}C=-\,\gamma_{\mu}^{T}\)). We have taken into account the scalar fields' VEVs as \(\langle H_{R}^{0}\rangle=v_{R}\) and \(\langle H_{L}^{0}\rangle=0\), which prevents the mass term from linking \(\nu_{L}-S^{c}_{R}\) through the interaction \(\sum_{\alpha,\beta}\overline{\ell_{\alpha L}}(Y_{LS})_{\alpha\beta}\widetilde {H_{L}}S^{c}_{\beta R}+\mbox{ h.c.}\) despite being permitted by gauge symmetry. ### The Double Seesaw Approximation After the spontaneous symmetry breaking, the complete \(9\times 9\) neutral fermion mass matrix in the flavor basis of \((\nu_{L},N^{c}_{R},S_{L})\) can be written as: \[{\cal M}_{LRDSM}=\left[\begin{array}{ccc}{\bf 0}&M_{D}&{\bf 0}\\ M_{D}^{T}&{\bf 0}&M_{RS}\\ {\bf 0}&M_{RS}^{T}&M_{S}\end{array}\right] \tag{13}\] We assume in what follows that \(|M_{D}|\ll|M_{RS}|\ll|M_{s}|\). This allows us to apply to the mass matrix \({\cal M}_{LRDSM}\) twice the seesaw approximate block diagonalization procedure for getting the mass matrices of light and heavy neutrinos, as discussed below. * **First Seesaw Approximation:** We implement the first seesaw block diagonalization procedure on the lower right \(6\times 6\) sub-matrix of \(\mathcal{M}_{LRDSM}\) as indicated below. \[\mathcal{M}_{LRDSM}=\left[\begin{array}{c|cc}\mathbf{0}&\omit\span \omit\span\omit\span M_{D}\ \mathbf{0}\\ \hline M_{D}^{T}&0&M_{RS}\\ \mathbf{0}&M_{RS}^{T}&M_{S}\end{array}\right]\] \[\xrightarrow[1\text{st seesaw}]{M_{S}\text{>}M_{RS}\gg M_{D}} \left[\begin{array}{c|cc}\mathbf{0}&\omit\span\omit\span M_{D}\ \mathbf{0}\\ \hline M_{D}^{T}&-M_{RS}M_{S}^{-1}M_{RS}^{T}&\mathbf{0}\\ \mathbf{0}&\mathbf{0}&M_{S}\end{array}\right]\] (11) * **Second Seesaw Approximation:-** Denoting \(-M_{RS}M_{S}^{-1}M_{RS}^{T}=M_{R}\), which is the expression for the mass matrix for right-handed neutrinos, we repeat the diagonalization procedure with seesaw condition, \(|M_{R}|\gg|M_{D}|\). 
We get the resultant matrix structure as \[\left[\begin{array}{c|c|c|c|c|c|c|c|}\mathbf{0}&\omit\span\omit\span M_{D} &\mathbf{0}\\ \hline M_{D}^{T}&M_{R}&\mathbf{0}\\ \hline\mathbf{0}&\mathbf{0}&M_{S}\end{array}\right]\xrightarrow[2\text{nd seesaw}]{M_{R}\gg M_{D}\over 2 \text{nd seesaw}}\left[\begin{array}{c|c|c|c|c|c|c|}-M_{D}M_{R}^{-1}M_{D}^{T}&0& 0\\ \hline\mathbf{0}&M_{R}&0\\ \hline\mathbf{0}&0&M_{S}\end{array}\right]\] (12) Using the above results, the light neutrino, the heavy neutrino and sterile fermions mass matrices \(m_{\nu}\), \(m_{N}\) and \(m_{S}\) can be expressed as: \[m_{\nu} \cong-M_{D}\left(-M_{RS}M_{S}^{-1}M_{RS}^{T}\right)^{-1}M_{D}^{T}\] \[=\frac{M_{D}}{M_{RS}^{T}}M_{S}\frac{M_{D}^{T}}{M_{RS}},\] \[m_{N} \equiv M_{R}\cong-M_{RS}M_{S}^{-1}M_{RS}^{T},\] \[m_{S} \cong M_{S}\,. \tag{13}\] In the double seesaw expression for the light neutrino Majorana mass matrix as given in Eq. (13), different choices of \(M_{D}\) and \(M_{RS}\) are possible. Following [80; 81; 82; 83; 84; 85], we have considered in the present article the case of \(M_{D}\) and \(M_{RS}\) being proportional to identity such that \(M_{D}=k_{d}I\) and \(M_{RS}=k_{rs}I\), where \(k_{d}\) and \(k_{rs}\) are real constants with \(|k_{d}|<|k_{rs}|\). This means, \(M_{D}M_{RS}^{-1}=\frac{k_{d}}{k_{rs}}I\). As discussed in [82; 70], the \begin{table} \begin{tabular}{|c|c|c||c|c|c||c|} \hline \hline \(M_{D}\) & \(M_{RS}\) & \(M_{S}\) & \(m_{\nu}\)(eV) & \(m_{N}\) & \(m_{S}\) & \(V^{\nu N}\) & \(V^{NS}\) \\ \hline \(10^{-4}\) & \(10^{3}\) & \(10^{4}\) & \(0.1\) & \(10^{2}\) & \(10^{4}\) & \(10^{-6}\) & \(0.1\) \\ \(10^{-5}\) & \(10^{2}\) & \(10^{3}\) & \(0.01\) & \(10\) & \(10^{3}\) & \(10^{-6}\) & \(0.1\) \\ \(10^{-5}\) & \(10^{1}\) & \(10^{2}\) & \(0.1\) & \(1\) & \(10^{2}\) & \(10^{-5}\) & \(0.1\) \\ \hline \hline \end{tabular} \end{table} Table 1: A representative set of model parameters in left-right symmetric models and the order of magnitude estimation of various neutrino masses within the double seesaw mechanism. All the masses are expressed in units of GeV except the light neutrino masses, which are in the eV scale. diagonal structures of \(M_{D}\) and \(M_{RS}\) may arise as a consequence of \(Z_{2}\times Z_{2}\) symmetry [83]. With the introduction of additional permutation symmetry in the diagonal elements of \(M_{D}\) and \(M_{RS}\), one can get equal diagonal elements. As we have indicated, these kinds of considerations have been reasoned for the double seesaw mechanism, e.g., in the references [80; 81; 82; 83; 84; 85]. With the choices for the forms of \(M_{D}\) and \(M_{RS}\) made above, the relation between light neutrino and sterile neutrino mass matrices \(m_{\nu}\) and \(m_{S}\) can be written as \(m_{\nu}=\frac{k_{d}^{2}}{k_{rs}^{2}}m_{S}\). The mass matrix \(m_{N}\) can also be determined from Eq. (10) and the relationship between light neutrino and heavy right-handed neutrino mass matrices \(m_{\nu}\) and \(m_{N}\) has the form \(m_{N}=-k_{d}^{2}\frac{1}{m_{\nu}}\). 
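A short numerical sketch of the two seesaw steps is given below, using the first row of Table 1 as illustrative inputs. Degenerate, purely diagonal matrices are assumed here only to check the orders of magnitude; the printed mixing estimates are the ratios \(M_{D}/m_{N}\) and \(M_{RS}/M_{S}\) quoted in Table 1, not the result of a full diagonalization.

```python
import numpy as np

def double_seesaw(M_D, M_RS, M_S):
    """Two-step block diagonalization of the 9x9 neutral-fermion mass matrix:
    m_N  = -M_RS M_S^{-1} M_RS^T   (first seesaw, heavy RH neutrinos),
    m_nu = -M_D m_N^{-1} M_D^T     (second seesaw, light neutrinos),
    valid for |M_D| << |M_RS| << |M_S|; inputs are 3x3 matrices in GeV."""
    m_N = -M_RS @ np.linalg.inv(M_S) @ M_RS.T
    m_nu = -M_D @ np.linalg.inv(m_N) @ M_D.T
    return m_nu, m_N

# Illustrative inputs from the first row of Table 1 (GeV), with M_D, M_RS ~ identity:
k_d, k_rs = 1e-4, 1e3
M_D, M_RS, M_S = k_d * np.eye(3), k_rs * np.eye(3), 1e4 * np.eye(3)
m_nu, m_N = double_seesaw(M_D, M_RS, M_S)
print(m_nu[0, 0] * 1e9, "eV")            # ~0.1 eV light-neutrino scale
print(abs(m_N[0, 0]), "GeV")             # ~1e2 GeV heavy RH-neutrino scale
print(k_d / abs(m_N[0, 0]), k_rs / 1e4)  # V^{nu N} ~ 1e-6, V^{N S} ~ 0.1
```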
In the basis in which the charged lepton mass matrix is diagonal we will work with in what follows, the light neutrino Majorana mass matrix is diagonalized with the help of a unitary mixing matrix - the Pontecorvo, Maki, Nakagawa, Sakata (PMNS) mixing matrix \(U_{\rm PMNS}\equiv U_{\nu}\)[86; 87; 88]: \[m_{\nu}^{\rm diag}=U_{\rm PMNS}^{\dagger}m_{\nu}U_{\rm PMNS}^{\ast}={\rm diag }\left(m_{1},m_{2},m_{3}\right)\,,\ \ m_{i}>0\,,\] so the physical masses \(m_{i}\) are related to the mass matrix \(m_{\nu}\) in the flavour basis as \[m_{\nu}=U_{\rm PMNS}m_{\nu}^{\rm diag}U_{\rm PMNS}^{T}\,.\] The right-handed neutrino Majorana mass matrix \(m_{N}\) is diagonalized as \(\widehat{m_{N}}={U_{N}}^{\dagger}m_{N}{U_{N}}^{\ast}\), \(\widehat{m_{N}}={\rm diag}(m_{N_{1}},m_{N_{2}},m_{N_{3}})\), \(m_{N_{j}}\) being the mass of the heavy RH Majorana neutrino \(N_{j}\), \(j=1,2,3\). It proves convenient to work with positive masses of \(N_{j}\), \(m_{N_{j}}>0\). Given the relation \(m_{N}=-k_{d}^{2}\frac{1}{m_{\nu}}\) and the positivity of the eigenvalues of \(m_{\nu}\), the requirement that the eigenvalues of \(m_{N}\) are also positive implies that the unitary transformation matrices diagonalizing the light neutrino and the heavy right-handed neutrino mass matrices \(m_{\nu}\) and \(m_{N}\) are related in the following way: \[U_{N}=i\,U_{\nu}^{\ast}\equiv i\,U_{PMNS}^{\ast}\,. \tag{11}\] Since \(m_{S}=(k_{rs}^{2}/k_{d}^{2})m_{\nu}\), the diagonalization of sterile neutrino Majorana mass matrix \(m_{S}\), \(\widehat{m_{S}}=U_{S}^{\dagger}m_{S}{U_{S}}^{\ast}\), where \(\widehat{m_{S}}={\rm diag}(m_{S_{1}},m_{S_{2}},m_{S_{3}})\), \(m_{S_{k}}>0\), \(k=1,2,3\), can be performed with the help of the same mixing matrix \(U_{PMNS}\): \[U_{S}=U_{\nu}\equiv U_{PMNS}\,. \tag{12}\] Thus, in the considered scenario, the light neutrino masses \(m_{i}\), the heavy RH neutrino masses \(m_{N_{j}}\) and the sterile neutrino masses (\(m_{S_{k}}\)) are related as follows: \[m_{i}=\frac{k_{d}^{2}}{m_{N_{i}}}=\frac{k_{d}^{2}}{k_{rs}^{2}}\,m_{S_{i}}\,,\ i=1,2,3\,. \tag{13}\] In what follows, we will use the standard parametrization of the PMNS matrix (see, e.g., [89]): \[U_{\rm PMNS}=\] \[\left(\begin{matrix}c_{13}c_{12}&c_{13}s_{12}&s_{13}e^{-i\delta}\\ -c_{23}s_{12}-c_{12}s_{13}s_{23}e^{i\delta}&c_{12}c_{23}-s_{12}s_{13}s_{23}e^ {i\delta}&s_{23}c_{13}\\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta}&-c_{12}s_{23}-s_{12}s_{13}c_{23}e^ {i\delta}&c_{13}c_{23}\end{matrix}\right){\rm P} \tag{14}\] where the mixing angles are denoted by \(s_{ij}=\sin\theta_{ij}\), \(c_{ij}=\cos\theta_{ij}\), \(0\leq\theta_{ij}\leq\pi/2\), \(\delta\) is the Dirac CP violation phase, \(0\leq\delta\leq 2\pi\), P is the diagonal phase matrix containing the two Majorana CP violation phases \(\alpha\) and \(\beta\)[90], P = diag\(\left(1,e^{i\alpha/2},e^{i\beta/2}\right)\). The Majorana phases take values in the interval \([0,\pi]\). The experimental values of different oscillation parameters for the light neutrino mass spectrum with normal ordering (NO) and inverted ordering (IO) (see, e.g., [89]) are taken from Ref. [91] and are presented in Table 2. * Masses of light neutrinos It proves convenient to express the masses of the two heavier neutrinos in terms of the lightest neutrino mass and the neutrino mass squared differences measured in neutrino oscillation experiments. 
In the case of NO light neutrino mass spectrum, \(m_{1}<m_{2}<m_{3}\), we have: \[m_{1} =\text{lightest neutrino mass}\,,\] \[m_{2} =\sqrt{m_{1}^{2}+\Delta m_{\text{sol}}^{2}}\,,\] \[m_{3} =\sqrt{m_{1}^{2}+\Delta m_{\text{atm}}^{2}}\,,\] (3.13) where \(\Delta m_{\text{sol}}^{2}=\Delta m_{21}^{2}\) and \(\Delta m_{\text{atm}}^{2}=\Delta m_{31}^{2}\). Similarly, for inverted mass ordering, \(m_{3}<m_{1}<m_{2}\), we get: \[m_{3} =\text{lightest neutrino mass}\,,\] \[m_{1} =\sqrt{m_{3}^{2}-\Delta m_{\text{sol}}^{2}-\Delta m_{\text{atm}} ^{2}}\,,\] \[m_{2} =\sqrt{m_{3}^{2}-\Delta m_{\text{atm}}^{2}}\;,\] (3.14) with \(\Delta m_{\text{sol}}^{2}=\Delta m_{21}^{2}\) and \(\Delta m_{\text{atm}}^{2}=\Delta m_{32}^{2}\). Depending on the value of the lightest neutrino mass the neutrino mass spectrum can also be normal hierarchical (NH) when \(m_{1}\ll m_{2}<m_{3}\), inverted hierarchical \begin{table} \begin{tabular}{|l|c|c|} \hline Parameter & Best fit values & \(3\sigma\) range \\ \hline \hline \(\Delta m_{21}^{2}\left[10^{-5}\,\text{eV}\right]\) & 7.34 & 6.92–7.90 \\ \hline \(|\Delta m_{31}^{2}|\left[10^{-3}\,\text{eV}\right]\) (NO) & 2.485 & 2.389–2.578 \\ \(|\Delta m_{32}^{2}|\left[10^{-3}\,\text{eV}\right]\) (IO) & 2.465 & 2.374–2.556 \\ \hline \(\sin^{2}\theta_{12}/10^{-1}\) (NO) & 3.05 & 2.65–3.47 \\ \(\sin^{2}\theta_{12}/10^{-1}\)(IO) & 3.03 & 2.64–3.45 \\ \hline \(\sin^{2}\theta_{23}/10^{-1}\) (NO) & 5.45 & 4.36–5.95 \\ \(\sin^{2}\theta_{23}/10^{-1}\) (I)) & 5.51 & 4.39–5.96 \\ \hline \(\sin^{2}\theta_{13}/10^{-2}\) (NO) & 2.22 & 2.01–2.41 \\ \(\sin^{2}\theta_{13}/10^{-2}\) (IO) & 2.23 & 2.03–2.43 \\ \hline \end{tabular} \end{table} Table 2: The current updated estimates of experimental values of Neutrino oscillation parameters for global best fits and \(3\sigma\) range taken from [91]. (IH) if \(m_{3}\ll m_{1}<m_{2}\), or else quasi-degenerate (QD) when \(m_{1}\cong m_{2}\cong m_{3}\), \(m_{1,2,3}^{2}>>\Delta m_{31(23)}^{2}\), i.e., \(m_{1,2,3}\ \raisebox{-2.15pt}{\hbox to 0.0pt{$\sim$}}\raisebox{2.15pt}{$>$}\ 0.1\) eV. All considered types of neutrino mass spectrum are compatible with the existing data. The best upper limit on the lightest neutrino mass \(m_{1(3)}\) has been obtained in the KATRIN experiment. It is in the range of the QD spectrum and effectively reads: \(m_{1,2,3}<0.80\) eV (90% C.L.). * Masses of heavy RH neutrinos Since \(m_{i}\) and \(m_{N_{j}}\) are inversely proportional to each other, for NO light neutrino mass spectrum, \(m_{1}<m_{2}<m_{3}\), \(m_{N_{1}}\) has to be the largest RH neutrino mass. We can express \(m_{N_{2}}\) and \(m_{N_{3}}\) in terms of \(m_{N_{1}}\) and the light neutrino masses: \[m_{N_{2}}=\frac{m_{1}}{m_{2}}m_{N_{1}}\,,\quad m_{N_{3}}=\frac{m_{1}}{m_{3}}m_ {N_{1}}\,,\quad m_{N_{3}}<m_{N_{2}}<m_{N_{1}}\,.\] (3.15) In the case of IO spectrum, \(m_{3}<m_{1}<m_{2}\), \(m_{N_{3}}\) is the largest mass. 
The mass relations in this case become:
\[m_{N_{1}}=\frac{m_{3}}{m_{1}}m_{N_{3}}\,,\quad m_{N_{2}}=\frac{m_{3}}{m_{2}}m_{N_{3}}\,,\quad m_{N_{1}}<m_{N_{2}}<m_{N_{3}}\,.\] (3.16)
* Masses of sterile neutrinos

Since \(m_{i}\) and \(m_{S_{k}}\) are directly proportional to each other, in the NO case \(m_{S_{3}}\) is the heaviest sterile neutrino mass and the analogous mass relations read:
\[m_{S_{1}}=\frac{m_{1}}{m_{3}}m_{S_{3}}\,,\quad m_{S_{2}}=\frac{m_{2}}{m_{3}}m_{S_{3}}\,,\quad m_{S_{1}}<m_{S_{2}}<m_{S_{3}}\,.\] (3.17)
For the IO spectrum we have \(m_{S_{3}}<m_{S_{1}}<m_{S_{2}}\) and
\[m_{S_{1}}=\frac{m_{1}}{m_{2}}m_{S_{2}}\,,\quad m_{S_{3}}=\frac{m_{3}}{m_{2}}m_{S_{2}}\,,\quad m_{S_{3}}<m_{S_{1}}<m_{S_{2}}\,.\] (3.18)
* Neutrino Mixing

The diagonalisation of \({\cal M}_{\rm LRDSM}\) leads to the following relation between the fields of the neutral fermions written in the flavour (weak interaction eigenstate) basis and in the mass eigenstate basis:
\[\begin{pmatrix}\nu_{\alpha L}\\ N_{\beta L}^{c}\\ S_{\gamma L}\end{pmatrix}=\begin{pmatrix}V_{\alpha i}^{\nu\nu}&V_{\alpha j}^{\nu N}&V_{\alpha k}^{\nu S}\\ V_{\beta i}^{N\nu}&V_{\beta j}^{NN}&V_{\beta k}^{NS}\\ V_{\gamma i}^{S\nu}&V_{\gamma j}^{SN}&V_{\gamma k}^{SS}\end{pmatrix}\begin{pmatrix}\nu_{iL}\\ N_{jL}^{c}\\ S_{kL}\end{pmatrix}\,. \tag{3.19}\]
Here \(N_{\beta L}^{c}\equiv C(\overline{N_{\beta R}})^{T}\), \(N_{jL}^{c}\equiv C(\overline{N_{jR}})^{T}=N_{jL}\) (\(N_{j}\) are Majorana fields), \(C\) being the charge conjugation matrix; the indices \(\alpha,\beta,\gamma\) run over the three generations of light left-handed neutrinos, heavy right-handed neutrinos and sterile neutrinos in the flavour basis, respectively, whereas the indices \(i,j,k\) run over the corresponding mass eigenstates. The mixing matrix elements in Eq. (3.19) are given in Eq. (8.27) in Appendix 8. The mixing between the right-handed neutrinos and sterile neutrinos (\(N_{L}^{c}-S_{L}\)) is given by the term
\[V^{NS}\propto M_{RS}M_{S}^{-1}\,, \tag{3.20}\]
while the mixing between the fields of the left-handed flavour neutrinos and the heavy right-handed neutrinos (\(\nu_{L}-N_{L}^{c}\)) is determined by
\[V^{\nu N}\propto M_{D}M_{R}^{-1}=-M_{D}{M_{RS}^{T}}^{-1}M_{S}M_{RS}^{-1}\,. \tag{3.21}\]
The mixing between sterile and light neutrinos (\(\nu_{L}-S_{L}\)) is vanishing, \(V_{\alpha k}^{\nu S}\cong 0\) and \(V_{\gamma i}^{S\nu}\cong 0\). The possible sets of numerical values of the mixing matrices and masses that can give rise to dominant contributions to LNV decays are listed in Table 1. By choosing one representative set of model parameters from the table, we get the mixing given below:
\[\begin{pmatrix}V_{\alpha i}^{\nu\nu}&V_{\alpha j}^{\nu N}&V_{\alpha k}^{\nu S}\\ V_{\beta i}^{N\nu}&V_{\beta j}^{NN}&V_{\beta k}^{NS}\\ V_{\gamma i}^{S\nu}&V_{\gamma j}^{SN}&V_{\gamma k}^{SS}\end{pmatrix}\simeq \begin{pmatrix}\mathcal{O}(1.0)&\mathcal{O}(10^{-6})&0\\ \mathcal{O}(10^{-6})&\mathcal{O}(1.0)&\mathcal{O}(0.1)\\ 0&\mathcal{O}(0.1)&\mathcal{O}(1.0)\end{pmatrix} \tag{3.22}\]
In the above matrix, the non-zero elements come from \(V_{\alpha i}^{\nu\nu}\), \(V_{\beta j}^{NN}\), \(V_{\beta k}^{NS}\), \(V_{\gamma j}^{SN}\) and \(V_{\gamma k}^{SS}\), while all other terms are negligibly small. These non-zero mixings would contribute sizeably to the predicted neutrinoless double beta decay rate. Thus, Eq.
(3.19) can be rewritten for the fields of flavour neutrinos \(\nu_{\alpha L}\) and the heavy RH neutrinos \(N_{\beta L}^{c}\) as: \[\nu_{\alpha L} \cong V_{\alpha i}^{\nu\nu}\nu_{iL}+V_{\alpha j}^{\nu N}N_{jL}\,,\] \[N_{\beta L}^{c} = V_{\beta i}^{N\nu}\nu_{iL}+V_{\beta j}^{NN}N_{jL}+V_{\beta k}^{ NS}S_{kL}\,. \tag{3.23}\] As we have indicated, in the considered model we have \(|V_{\alpha j}^{\nu N}|\sim 10^{-6}\). Correspondingly, the contribution to the \(0\nu\beta\beta\) decay amplitude arising from the coupling of \(N_{jL}\) to the electron in the LH (i.e., V-A) charged lepton current involves the factor \((V_{ej}^{\nu N})^{2}\) and is negligible. ### Gauge Boson Masses We briefly summarize here the gauge bosons masses and mixing in our model which will be used in estimating half-life of neutrinoless double beta decay process. Besides the SM gauge bosons \(W_{L}^{\pm}\) and \(Z\), there are right-handed gauge bosons \(W_{R}^{\pm}\) and \(Z^{\prime}\) which get their masses from left-right symmetry breaking. Following ref [32] and choosing VEVs of the Higgs fields as, \[\langle H_{R}^{0}\rangle=v_{R}\,,\quad\langle\phi_{1,2}^{0}\rangle=v_{1,2}\,, \tag{3.24}\] the mass matrix for charged gauge bosons, in the basis (\(W_{L}^{+}\)\(W_{R}^{+}\)) can be written as, \[\mathbb{M}_{CGB}=\begin{cases}\frac{g_{1}^{2}v^{2}}{2}&-g_{L}g_{R}v_{1}v_{2}\\ -g_{L}g_{R}v_{1}v_{2}&\frac{g_{R}^{2}}{2}(\frac{1}{2}v_{R}^{2}+v^{2})\end{cases}\,,\] where \(v^{2}=v_{1}^{2}+v_{2}^{2}\) and \(g_{R}=g_{L}\). The physical mass for extra charged gauge boson is given by \[M_{W_{R}} \simeq \frac{1}{2}g_{R}v_{R}\,. \tag{3.25}\] The mixing angle between \(W_{R}\) and \(W_{L}\) is defined as \[\tan 2\theta_{LR}\approx 8\frac{g_{L}}{g_{R}}\frac{v_{1}v_{2}}{v_{R}^{2}}\] The neutral gauge boson mass matrix is given by \[\mathbb{M}_{NGB}=\left\{\begin{array}{ccc}\frac{g_{L}^{2}v^{2}}{2}&-\frac{g_ {L}g_{R}}{2}v^{2}&0\\ -\frac{g_{L}g_{R}}{2}v^{2}&\frac{g_{R}^{2}}{2}(\frac{1}{2}v_{R}^{2}+v^{2})&- \frac{g_{R}g_{BL}}{4}v_{R}^{2}\\ 0&-\frac{g_{R}g_{BL}}{4}v_{R}^{2}&\frac{g_{BL}^{2}v_{R}^{2}}{4}\end{array} \right\}\,.\] As can be easily checked, this mass matrix has one zero eigenvalue corresponding to the photon \(A_{\mu}\). After few simplification, the mass eigenstates \(Z_{\mu}\), \(Z_{\mu}^{{}^{\prime}}\) and \(A_{\mu}\) are related to the weak eigenstates \((W_{L\mu}^{0},W_{R\mu}^{0},Z_{BL\mu})\) in the following way, \[W_{L\mu}^{0} = \cos\theta_{W}Z_{L\mu}+\sin\theta_{W}A_{\mu}\,,\] \[W_{R\mu}^{0} = \cos\theta_{R}Z_{R\mu}-\sin\theta_{W}\sin\theta_{R}Z_{L\mu}+\cos \theta_{W}\sin\theta_{R}A_{\mu}\,,\] \[Z_{BL\mu}^{0} = -\sin\theta_{R}Z_{R\mu}-\sin\theta_{W}\cos\theta_{R}Z_{L\mu}+ \cos\theta_{W}\cos\theta_{R}A_{\mu}\,,\] where \[Z_{L\mu} \equiv Z_{\mu}\cos\xi+Z_{\mu}^{{}^{\prime}}\sin\xi\,,\] \[Z_{R\mu} \equiv -Z_{\mu}\sin\xi+Z_{\mu}^{{}^{\prime}}\cos\xi\,. \tag{3.26}\] Here, the mixing angles are defined as \(\tan\theta_{R}=g_{BL}/g_{R}\), \(\tan\theta_{W}=g_{Y}/g_{L}\) with \(g_{Y}=g_{BL}g_{R}/\sqrt{g_{BL}^{2}+g_{R}^{2}}\), while the mixing angle between the \(Z\) and the heavy \(Z^{{}^{\prime}}\) reads: \[\tan 2\xi\approx\frac{v_{1}v_{2}}{v_{R}^{2}}\,\,\,\frac{-4g_{R}^{2}\sqrt{g_{L}^ {2}g_{R}^{2}+g_{BL}^{2}(g_{L}^{2}+g_{R}^{2})}}{(g_{BL}^{2}+g_{R}^{2})^{2}}\,. 
\tag{3.27}\] The physical mass for extra neutral gauge boson \(Z^{\prime}\) is given by: \[M_{Z^{{}^{\prime}}}^{2} \simeq \frac{1}{2}\left(g_{BL}^{2}+g_{R}^{2}\right)\left[v_{R}^{2}+\frac {g_{R}^{2}v^{2}}{g_{R}^{2}+g_{BL}^{2}}\right] \tag{3.28}\] The value of \(\tan 2\xi\) has to be smaller than \(10^{-3}\) in order to satisfy the electroweak precision constraints in the limit \(v_{R}^{2}\gg v_{1}^{2}+v_{2}^{2}\). With \(v^{2}=v_{1}^{2}+v_{2}^{2}\simeq(246\,\,{\rm GeV})^{2}\) and \(g_{R}\simeq g_{L}=0.653\), we have \(\tan\theta_{W}=\frac{g_{BL}}{\sqrt{g_{BL}^{2}+g_{R}^{2}}}\), which implies \(\frac{g_{BL}^{2}}{g_{L/R}^{2}}=\frac{\sin^{2}\theta_{W}}{1-2\sin^{2}\theta_{W} }\approx 0.43\), where \(sin^{2}\theta_{W}=0.231\). Using this result and \(g_{L}=g_{R}\), we get for the angle describing the \(Z-Z^{{}^{\prime}}\) mixing: \(|\tan 2\xi|\cong 2.67v_{1}v_{2}/v_{R}^{2}\). The upper limit \(|\tan 2\xi|<10^{-3}\) implies: \[\frac{v_{1}\,v_{2}}{v_{R}^{2}}<3.75\times 10^{-4}\,. \tag{3.29}\] This in turn leads to the following upper limit on the \(W_{L}-W_{R}\) mixing angle \(\theta_{LR}\): \[\theta_{LR}\cong 4\,\frac{v_{1}\,v_{2}}{v_{R}^{2}}<1.50\times 10^{-3}\,. \tag{3.30}\] The left-handed gauge boson masses are similar to those of the SM gauge bosons with \(g_{Y}=\left(g_{R}g_{BL}\right)/\left(g_{R}^{2}+g_{BL}^{2}\right)^{1/2}\), while the masses of the extra heavy gauge bosons are related as follows, \[M_{W_{R}} \simeq \frac{1}{2}g_{R}v_{R}\,, \tag{3.31}\] \[M_{Z^{\prime}} \simeq \frac{\sqrt{g_{BL}^{2}+g_{R}^{2}}}{g_{R}}M_{W_{R}}{\simeq}1.2\,M_ {W_{R}}\,. \tag{3.32}\] The current experimental bound on \(M_{W_{R}}>5\) TeV is obtained in high energy collider experiments at LHC [92; 93; 94], while the low energy precision measurements [95; 96] imply a lower bound on the \(Z^{{}^{\prime}}\) mass, i.e. \(M_{Z^{\prime}}>6\) TeV. ## 4 Neutrinoless double beta decay Neutrinoless double beta decay process can be induced by the exchange of light active Majorana neutrinos, which is usually referred to as "the standard mechanism", or by some other lepton number violating "non-standard mechanism" associated with BSM physics. In this section, we discuss the standard and the new physics contributions to \(0\nu\beta\beta\) decay amplitude and rate that arise in our model due to the exchange of the light Majorana neutrinos \(\nu_{i}\), heavy Majorana neutrinos \(N_{1,2,3}\) and sterile Majorana neutrinos \(S_{1,2,3}\). The charged current (CC) interaction Lagrangian for leptons and quarks, relevant for our further discussion, are given by: \[\mathcal{L}_{\rm CC}^{\ell} = \sum_{\alpha=e,\mu,\tau}\left[\frac{g_{L}}{\sqrt{2}}\overline{ \ell}_{\alpha L}\gamma_{\mu}\nu_{\alpha L}W_{L}^{\mu}+\frac{g_{R}}{\sqrt{2}} \overline{\ell}_{\alpha R}\gamma_{\mu}N_{\alpha R}W_{R}^{\mu}\right]+{\rm h.c.} \tag{4.1}\] \[= \frac{g_{L}}{\sqrt{2}}\overline{e}_{L}\gamma_{\mu}\nu_{eL}W_{L}^{ \mu}+\frac{g_{R}}{\sqrt{2}}\overline{e}_{R}\gamma_{\mu}N_{eR}W_{R}^{\mu}+{\rm h.c.}+\cdots\] \[\mathcal{L}_{\rm CC}^{\rm q} = \left[\frac{g_{L}}{\sqrt{2}}\,\overline{u}_{L}\,\gamma_{\mu}d_{L }\,W_{L}^{\mu}+\frac{g_{R}}{\sqrt{2}}\,\overline{u}_{R}\,\gamma_{\mu}d_{R}\,W _{R}^{\mu}\right]+{\rm h.c.} \tag{4.2}\] Using Eq. 
(3.23), \(\mathcal{L}_{\rm CC}^{\ell}\) be rewritten as: \[\mathcal{L}_{\rm CC}^{\ell} = \frac{g_{L}}{\sqrt{2}}\bigg{[}\overline{e}_{L}\gamma_{\mu}\{V_{ ei}^{\nu\nu}\nu_{i}+V_{ei}^{\nu N}N_{i}\}W_{L}^{\mu}\bigg{]}+{\rm h.c.} \tag{4.3}\] \[+\frac{g_{R}}{\sqrt{2}}\bigg{[}\overline{e}_{R}\gamma_{\mu}\{V_{ ei}^{N\nu}\nu_{i}+V_{ei}^{NN}N_{i}+V_{ei}^{NS}S_{i}\}W_{R}^{\mu}\bigg{]}+{\rm h.c.}\] In the present model, the heavy neutrino masses are around \(1-1000\) GeV, \(\nu_{L}-N_{L}^{c}\) mixing \(|V_{\nu N}|\leq 10^{-6}\) and \(\nu_{L}-S_{L}\) mixing is vanishing. Other contributions involving the light-heavy neutrino mixings and the \(W_{L}-W_{R}\) mixing are negligible. The lepton Lagrangian that is relevant for the dominant contributions to \(0\nu\beta\beta\) decay rate is: \[\mathcal{L}_{\rm CC}^{\ell} = \frac{g_{L}}{\sqrt{2}}\left[\overline{e}_{L}\,\gamma_{\mu}\{{\rm V} ^{\nu\nu}_{ei}\,\nu_{i}\}\,W_{L}^{\mu}\right]+{\rm h.c.} \tag{28}\] \[+\frac{g_{R}}{\sqrt{2}}\bigg{[}\overline{e}_{R}\gamma_{\mu}\{{\rm V }^{N\,N}_{ej}N_{j}+{\rm V}^{NS}_{ek}S_{k}\}W_{R}^{\mu}\bigg{]}+{\rm h.c.}\] Thus, in the considered model, the dominant contributions to the \(0\nu\beta\beta\) decay amplitude are given by: * the standard mechanism due to the exchange of light neutrino \(\nu_{i}\), mediated by left-handed gauge boson \(W_{L}\), i.e. due to purely left-handed (LH) CC interaction; * new contributions due to the exchange of heavy neutrinos \(N_{1,2,3}\) and sterile neutrinos \(S_{1,2,3}\), mediated by right-handed gauge boson \(W_{R}\), i.e., due to purely right-handed (RH) CC interaction. The contribution due to exchange of virtual \(S_{1,2,3}\) is possible due to the mixing between \(N_{L}^{c}\) and \(S_{L}\). The so-called \(<\lambda>\)- and \(<\eta>\)- mechanism contributions \(0\nu\beta\beta\) decay amplitude arising from the product of LH and RH lepton currents [97] are sub-dominant being strongly suppressed. The \(<\lambda>\)-mechanism contribution involves the factor \(|V_{ei}^{N\nu}|(M_{W_{L}}/M_{W_{R}})^{2}<2.6\times 10^{-10}\), where we have used \(|V_{ei}^{N\nu}|=10^{-6}\), \(M_{W_{L}}=80.38\) GeV and \(M_{W_{R}}>5\) TeV, while the \(<\eta>\)-mechanism contribution is suppressed by the factor \(|V_{ei}^{N\nu}\sin\theta_{LR}|<10^{-9}\). As a consequence, we neglect these contributions in the analysis which follows. The Feynman diagrams for the dominant contributions of interest to the \(0\nu\beta\beta\) decay amplitude are shown in Fig. 1, where the first diagram from the left corresponds to the standard mechanism, while the second and third diagrams correspond to the new contributions mediated by \(N_{1,2,3}\) and \(S_{1,2,3}\), respectively. Figure 1: Feynman diagrams for the process of neutrinoless double beta decay mediated by the exchange of virtual (a) light Majorana neutrinos \(\nu_{i}\) (the standard mechanism), (b) heavy neutrinos \(N_{R}\) (heavy Majorana neutrinos \(N_{1,2,3}\)) and (c) heavy sterile neutrinos \(S_{L}\) (heavy Majorana neutrinos \(S_{1,2,3}\)). 
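How strongly the \(\langle\lambda\rangle\)- and \(\langle\eta\rangle\)-mechanism contributions are suppressed can be seen from a one-line estimate. The sketch below (illustrative only) evaluates the two suppression factors quoted above at the boundary values \(|V^{N\nu}_{ei}|=10^{-6}\), \(M_{W_{R}}=5\) TeV and \(\theta_{LR}=1.5\times 10^{-3}\):

```python
# Rough check (illustration only) of the suppression of the <lambda>- and
# <eta>-mechanism contributions for the parameter values quoted in the text.
V_Nnu = 1e-6                 # nu_L - N mixing in the LH charged current
M_WL, M_WR = 80.38, 5000.0   # GeV
theta_LR = 1.5e-3            # upper bound on the W_L - W_R mixing angle

lambda_factor = V_Nnu * (M_WL / M_WR) ** 2
eta_factor = V_Nnu * theta_LR

print(f"<lambda> suppression factor: {lambda_factor:.2e}")   # ~2.6e-10
print(f"<eta>    suppression factor: {eta_factor:.2e}")      # ~1.5e-9
# Both are many orders of magnitude below the unsuppressed RH-current terms,
# which is why these contributions are neglected in the analysis.
```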
When \(0\nu\beta\beta\) decay is mediated by only light Majorana neutrinos \(\nu_{i}\), the inverse half-life for this process can be expressed as, \[\left[T_{1/2}^{0\nu}\right]^{-1} = g_{\rm A}^{4}\,G_{01}^{0\nu}\,\left|{\cal M}_{\nu}^{0\nu}\right|^ {2}\,\left|\eta_{\nu}\right|^{2} \tag{4.5}\] where \(g_{\rm A}\) is the axial coupling constant, \(G_{01}^{0\nu}\) is the phase space factor, \({\cal M}_{\nu}^{0\nu}\) is the Nuclear Matrix Elements (NME) for light neutrino exchange and \(\eta_{\nu}\) is a dimensionless particle physics parameter that is a measure of lepton number violation. Considering both the standard mechanism and the new contributions to this decay process in our model, the inverse half life can be written as: \[\left[T_{1/2}^{0\nu}\right]^{-1} = g_{\rm A}^{4}\,G_{01}^{0\nu}\bigg{[}|{\cal M}_{\nu}^{0\nu}\cdot \eta_{\nu}|^{2}+|{\cal M}_{N}^{0\nu}\cdot\left(\eta_{N}+\eta_{S}\right)|^{2} \bigg{]}, \tag{4.6}\] where \({\cal M}_{N}^{0\nu}\) is the Nuclear Matrix Elements (NME) for the heavy neutrino exchange and \(\eta_{N}\) and \(\eta_{S}\) are lepton number violating parameters associated with the exchange of the heavy neutrinos \(N_{1,2,3}\) and \(S_{1,2,3}\). In the analysis and the numerical estimates which follow, we will use a mildly quenched value of the axial coupling constant \(g_{\rm A}=1.00\), the unquenched value being, as is well known, \(g_{\rm A}=1.27\). If it turns out that \(g_{\rm A}\) is actually not quenched, that will reduce the estimates of the \(0\nu\beta\beta\) decay half-lives made in the present study by a factor of 2.60. The interference term of the light neutrino \(\nu_{1,2,3}\) and the heavy neutrinos \(N_{1,2,3}\) and \(S_{1,2,3}\) exchange contributions to the \(0\nu\beta\beta\) decay amplitude, which are generated respectively by LH \((V-A)\) CC and RH \((V+A)\) CC interactions, is strongly suppressed, being proportional to the electron mass [98] (see also [99]) and we have neglected it in Eq. (4.6). The values of \(G_{01}^{0\nu}\) and the NMEs for both light and heavy neutrino exchange mechanism are distinct for different isotopes and can be found, e.g., in [100]. We present in Table 3 the values obtained by six different groups of authors using different methods of NME calculation. Of particular importance for the estimates of the relative magnitude of the new non-standard contributions in the \(0\nu\beta\beta\) decay amplitude with respect to the contribution of the standard mechanism is the ratio \({\cal M}_{N}^{0\nu}/{\cal M}_{\nu}^{0\nu}\). As it follows from Table 3, the ratio \({\cal M}_{N}^{0\nu}/{\cal M}_{\nu}^{0\nu}\) predicted by each of the six cited groups using different methods of NME calculation is essentially the same for the four isotopes \({}^{76}\)Ge, \({}^{82}\)Se, \({}^{130}\)Te, \({}^{136}\)Xe - it varies with the isotope by not more than \(\sim 15\%\). At the same time, for a given isotope the ratio of interest obtained by the six different methods of NME calculation quoted in Table 3 varies by a factor of up to \(\sim 3.5\). In view of this we will use the values of the NMEs for \({}^{76}\)Ge as reference values in our numerical analysis. For the minimal and maximal values of the ratio \({\cal M}_{N}^{0\nu}/{\cal M}_{\nu}^{0\nu}\) for \({}^{76}\)Ge we get from Table 3: \[22.2\lesssim\frac{{\cal M}_{N}^{0\nu}}{{\cal M}_{\nu}^{0\nu}}\lesssim 76.3\,, \quad{}^{76}{\rm Ge}\,. \tag{4.7}\] They correspond respectively to \({\cal M}_{\nu}^{0\nu}=4.68\) and 5.26. 
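The range quoted in Eq. (4.7) follows directly from the \({}^{76}\)Ge entries of Table 3. As a simple cross-check (values copied from the table, for illustration):

```python
# Reproducing the range in Eq. (4.7) from the 76Ge column of Table 3.
nme_ge76 = {            # method: (M_nu, M_N)
    "dQRPA":   (3.12, 187.3),
    "QRPA-Tu": (5.16, 287.0),
    "QRPA-Jy": (5.26, 401.3),
    "IBM-2":   (4.68, 104.0),
    "CDFT":    (6.04, 209.1),
    "ISM":     (2.89, 130.0),
}
ratios = {method: m_n / m_nu for method, (m_nu, m_n) in nme_ge76.items()}
for method, r in sorted(ratios.items(), key=lambda kv: kv[1]):
    print(f"{method:8s}  M_N/M_nu = {r:5.1f}")
# Minimum ~22.2 (IBM-2, M_nu = 4.68) and maximum ~76.3 (QRPA-Jy, M_nu = 5.26),
# as quoted in Eq. (4.7).
```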
The dimensionless particle physics parameters \(\eta_{\nu}\), \(\eta_{N}\) and \(\eta_{s}\) in Eq. (4.6) are functions of neutrino masses, mixing parameters and CPV phases and can be expressed as: \[|\eta_{\nu}|=\sum_{i=1,2,3}\frac{{\rm V}_{ei}^{\nu\nu 2}\,m_{\nu_{i}} }{m_{e}}\,, \tag{4.8}\] \[|\eta_{N}|=m_{p}\left(\frac{M_{W_{L}}}{M_{W_{R}}}\right)^{4}\sum_ {j=1,2,3}\frac{{\rm V}_{ej}^{NN}{}^{2}}{m_{N_{j}}}\,,\] (4.9) \[|\eta_{S}|=m_{p}\left(\frac{M_{W_{L}}}{M_{W_{R}}}\right)^{4}\sum_ {k=1,2,3}\frac{{\rm V}_{ek}^{NS}{}^{2}}{m_{S_{k}}}\,. \tag{4.10}\] where \(m_{e}\) and \(m_{p}\) are the electron and proton masses. The quantity \(m_{e}|\eta_{\nu}|\equiv|m_{\beta\beta,L}^{\nu}|\) is the effective Majorana mass (EMM) associated with the standard mechanism of \(0\nu\beta\beta\) decay (see, e.g., [18, 111]). In [112] it was noticed that there exists a short distance (contact) contribution to the \(0\nu\beta\beta\) decay amplitude even in the case of light neutrino exchange. The magnitude of this contribution was investigated in a number of studies. Using the results of the estimates of the \(nn\to ppee\) amplitude derived in [113] the magnitude of this contribution relative to the standard light neutrino exchange one was calculated for the neutrinoless double beta decay of \({}^{48}\)Ca in [114] and for \({}^{76}\)Ge, \({}^{130}\)Te and \({}^{136}\)Xe in [115]. Both groups of authors find a positive contribution enhancing the standard one by about 43% and 30% respectively for \({}^{48}\)Ca and \({}^{76}\)Ge, \({}^{130}\)Te, \({}^{136}\)Xe. These effects are accounted for in our analysis by the much larger uncertainties in the NMEs included in the analysis. The mixing parameters in Eqs. (4.8) - (4.10) are given in Appendix 8. In the framework of our model we have: \(V^{\nu\nu}\approx U_{\nu}\), \(V^{NN}\approx U_{N}\) and \(V^{NS}\equiv M_{RS}M_{S}^{-1}U_{S}\). We recall that \(U_{N}=i\,U_{\nu}^{*}\), \(U_{S}=U_{\nu}\) and \(U_{\nu}\equiv U_{PMNS}\) (Eqs. (3.9) and (3.10)). \begin{table} \begin{tabular}{|l|c c|c c|c c|c c|} \hline \hline & \({}^{76}\)Ge & & \({}^{82}\)Se & & \({}^{130}\)Te & & \({}^{136}\)Xe & \\ methods & \({\cal M}_{\nu}^{0\nu}\) & \({\cal M}_{N}^{0\nu}\) & \({\cal M}_{\nu}^{0\nu}\) & \({\cal M}_{N}^{0\nu}\) & \({\cal M}_{\nu}^{0\nu}\) & \({\cal M}_{N}^{0\nu}\) & \({\cal M}_{\nu}^{0\nu}\) & \({\cal M}_{N}^{0\nu}\) \\ \hline dQRPA [101] & 3.12 & 187.3 & 2.86 & 175.9 & 2.90 & 191.4 & 1.11 & 66.9 \\ \hline QRPA-Tu [102, 103] & 5.16 & 287.0 & 4.64 & 262.0 & 3.89 & 264.0 & 2.18 & 152.0 \\ \hline QRPA-Jy [104] & 5.26 & 401.3 & 3.73 & 287.1 & 4.00 & 338.3 & 2.91 & 186.3 \\ \hline IBM-2 [105] & 4.68 & 104 & 3.73 & 82.9 & 3.70 & 91.8 & 3.05 & 72.6 \\ \hline CDFT [106, 107, 108] & 6.04 & 209.1 & 5.30 & 189.3 & 4.89 & 193.8 & 4.24 & 166.3 \\ \hline ISM [109] & 2.89 & 130 & 2.73 & 121 & 2.76 & 146 & 2.28 & 116 \\ \hline \hline \(G_{01}^{0\nu}\)\(\left[10^{-14}{\rm yrs}^{-1}\right]\)[110] & 0.22 & & 1 & 1.4 & & 1.5 & \\ \hline \hline \end{tabular} \end{table} Table 3: Values of Nuclear Matrix Elements for various isotopes calculated by different methods for light and heavy neutrino exchange. Here QRPA-Jy uses CD-Bonn short range correlations (SRC) and the rest use Argonne SRC, with minimally quenched \(g_{A}=1\). The last row shows the phase space factor \(G_{01}^{0\nu}\) for various isotopes [100, 110]. The expressions for \(|\eta_{N}|\) and \(|\eta_{S}|\) in Eqs. (4.9) and Eqs. 
(4.10) are obtained under the condition \(\langle p^{2}\rangle\ll M_{i}^{2}\), where \(\sqrt{\langle p^{2}\rangle}\) is the average momentum exchanged in the process of \(0\nu\beta\beta\) decay and \(M_{i}\) here is a generic notation for the masses of \(N_{1,2,3}\) and \(S_{1,2,3}\). The chiral structure of the matrix elements involving virtual \(N_{1,2,3}\) and \(S_{1,2,3}\) propagators in the case of the heavy neutrino exchange contribution is given by: \[P_{R}\frac{\not{p}+M_{i}}{p^{2}-M_{i}^{2}}P_{R}=\frac{M_{i}}{p^{2}-M_{i}^{2}}P_ {R}\,, \tag{4.11}\] where \(P_{R}=(1+\gamma_{5})/2\) is the RH projection operator. A typical value of the neutrino virtuality is \(\langle p^{2}\rangle\cong(190\,\mathrm{MeV})^{2}\) (see, e.g., [116]). Thus, in the case of interest, we have \(p^{2}\ll M_{i}^{2}\) and the heavy state propagators reduce to a good approximation to \(1/M_{i}\). It proves convenient for our further analysis to rewrite the inverse half-life in terms of one particle physics parameter - generalised effective Majorana mass (GEMM) - that contains the lepton number violating information in it: \[\left[T_{1/2}^{0\nu}\right]^{-1} = G_{01}^{0\nu}\Bigg{[}|\mathcal{M}_{\nu}^{0\nu}\eta_{\nu}|^{2}+ \mathcal{M}_{N}^{0\nu}\big{|}\eta_{N}+\eta_{S}\big{|}^{2}\Bigg{]} \tag{4.12}\] \[= G_{01}^{0\nu}\Bigg{|}\frac{\mathcal{M}_{\nu}^{0\nu}}{m_{e}} \Bigg{|}^{2}\bigg{[}\big{|}m_{\beta\beta,L}^{\nu}\big{|}^{2}+\big{|}m_{\beta \beta,R}^{N}+m_{\beta\beta,R}^{S}\big{|}^{2}\Bigg{]}\] \[= G_{01}^{0\nu}\Bigg{|}\frac{\mathcal{M}_{\nu}^{0\nu}}{m_{e}} \Bigg{|}^{2}\big{|}m_{\beta\beta,L,R}^{\mathrm{eff}}\big{|}^{2}\,,\] where [116] \[m_{\beta\beta,R}^{N} = \sum_{j}m_{p}m_{e}\frac{\mathcal{M}_{N}^{0\nu}}{\mathcal{M}_{\nu} ^{0\nu}}\frac{M_{W_{L}}^{4}}{M_{W_{R}}^{4}}\frac{{V_{ej}^{NN}}^{2}}{m_{N_{j}}}\,,\] \[m_{\beta\beta,R}^{S} = \sum_{k}m_{p}m_{e}\frac{\mathcal{M}_{N}^{0\nu}}{\mathcal{M}_{\nu} ^{0\nu}}\frac{M_{W_{L}}^{4}}{M_{W_{R}}^{4}}\frac{{V_{ek}^{NS}}^{2}}{m_{S_{k}}}\,. \tag{4.13}\] It follows from Eqs. (4.9), (4.10) and (4.13) that the new physics contributions to the \(0\nu\beta\beta\) decay amplitude are suppressed, in particular, by the factor \(M_{W_{L}}^{4}/M_{W_{R}}^{4}<(1.6\times 10^{-2})^{4}\), where \(M_{W_{L}}=80.38\) GeV is the SM \(W\)-bosons mass and we have used the lower bound \(M_{W_{R}}>5\) TeV [92; 93; 94; 95; 96]. Fixing \(M_{W_{R}}\) at, e.g., \(=5.5\) TeV, we have for the ratio \(\left(M_{W_{L}}^{4}/M_{W_{R}}^{4}\right)\sim\mathcal{O}(10^{-8})\). Taking further the masses of \(S_{k}\) and \(N_{j}\) in the ranges respectively of \((10^{2}-10^{4})\) GeV and \((1-10^{2})\) GeV, the mixing \(V_{ej}^{NN}\approx U_{N}\) and \(V_{ek}^{NS}\equiv M_{RS}M_{S}^{-1}U_{S}\) from Appendix 8, one finds that the new physics contributions can be in the \(0.01-0.1\) eV range (see Table 4), i.e., within the experimental search sensitivity. We note that, since the dominant contributions to \(0\nu\beta\beta\) decay arises from more than one contribution, it is also possible that there might be interference between them in the decay rate of the process. The interference of light neutrino (\(\nu_{i}\)) contribution due to purely \(V-A\) interaction involving LH currents with either of the heavy neutrino \(N_{j}\) and \(S_{k}\) contributions, which are generated by \((V+A)\) interaction with RH currents, is suppressed, as we have indicated earlier. However, the interference between the contributions of the heavy neutrinos \(N_{j}\) and \(S_{k}\) both involving RH currents, in general, can't be neglected. 
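Both the validity of the heavy-propagator approximation and the size of the \(W\)-mass suppression factor can be illustrated with two short estimates (a sketch, using \(\sqrt{\langle p^{2}\rangle}\cong 190\) MeV and the values of \(M_{W_{R}}\) discussed above):

```python
# (i) heavy-propagator approximation <p^2> << M_i^2 for generic heavy masses,
# (ii) the (M_WL/M_WR)^4 suppression of the right-handed current terms.
p2 = 0.190 ** 2                     # GeV^2, typical neutrino virtuality
for M in (1.0, 10.0, 100.0):        # GeV, generic heavy-neutrino masses
    print(f"M = {M:6.1f} GeV:  <p^2>/M^2 = {p2 / M**2:.1e}")

M_WL = 80.38                        # GeV
for M_WR in (5000.0, 5500.0):       # GeV
    print(f"M_WR = {M_WR/1e3:.1f} TeV:  (M_WL/M_WR)^4 = {(M_WL/M_WR)**4:.2e}")
# The propagator correction is at most a few percent for M_i >= 1 GeV, and the
# W-mass factor is ~4.6e-8 at the 5.5 TeV reference point, i.e. O(1e-8).
```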
In the case when this interference is not taken into consideration, the generalised effective Majorana mass is determined by the sum of individual contributions of the three types of neutrinos \(\nu_{i}\), \(N_{j}\) and \(S_{k}\): \[\left|m^{\rm eff}_{\beta\beta,L,R}\right|\equiv m^{\nu+N+S}_{ee}=\left(\left|m^{ \nu}_{\beta\beta,L}\right|^{2}+\left|m^{N}_{\beta\beta,R}\right|^{2}+\left|m^{ S}_{\beta\beta,R}\right|^{2}\right)^{\frac{1}{2}}\,. \tag{4.14}\] Accounting for the interference, the generalised effective Majorana mass can be written as: \[\left|m^{\rm eff}_{\beta\beta,L,R}\right|\equiv m^{\nu+\left|N+S \right|}_{ee} = \left(\left|m^{\nu}_{\beta\beta,L}\right|^{2}+\left|m^{N}_{\beta \beta,R}+m^{S}_{\beta\beta,R}\right|^{2}\right)^{\frac{1}{2}} \tag{4.15}\] \[= \left((m^{\nu+N+S}_{ee})^{2}+2{\rm Re}(m^{N}_{\beta\beta,R}\cdot m ^{S^{*}}_{\beta\beta,R})\right)^{\frac{1}{2}}\,.\] In order to assess the relevance of the interference term \(2{\rm Re}(m^{N}_{\beta\beta,R}\cdot m^{S^{*}}_{\beta\beta,R})\) in our study, we consider both the cases of neglecting it and of taking it into account. We express next the three terms in the generalised effective Majorana mass in terms of the PMNS mixing angles, Dirac and Majorana CPV phases present in the PMNS matrix, the three light neutrino masses and, in the case of the non-standard contributions, the masses of \(N_{1,2,3}\) and of \(S_{1,2,3}\). The effective Majorana mass term for standard mechanism can be written as (see, e.g., [18; 111]): \[\left|m^{\nu}_{\beta\beta,L}\right| = \left|\sum_{i=1}^{3}U^{2}_{ei}m_{i}\right|=\left|m_{1}c^{2}_{12} c^{2}_{13}+m_{2}s^{2}_{12}c^{2}_{13}e^{i\alpha}+m_{3}s^{2}_{13}e^{i(\beta-2 \delta)}\right| \tag{4.16a}\] where \(m_{1},m_{2},m_{3}\) are masses of the light Majorana neutrinos \(\nu_{1,2,3}\) and we have used the standard parametrization of the PMNS matrix. Defining \[C_{N}=m_{e}\,m_{p}\,\frac{M^{0\nu}_{N}}{M^{0\nu}_{\nu}}\frac{M^{4}_{W_{L}}}{M ^{4}_{W_{R}}}\,, \tag{4.17}\] the expression for \(m^{N}_{\beta\beta,R}\) can be cast in the form: \[\left|m^{N}_{\beta\beta,R}\right| = \frac{C_{N}}{m_{N_{1}}}\left|\left[U^{2}_{e1}+\frac{U^{2}_{e2}e^{ i\,\alpha}m_{N_{1}}}{m_{N_{2}}}+\frac{U^{2}_{e3}e^{i\,\beta}m_{N_{1}}}{m_{N_{3}}} \right]\right| \tag{4.18}\] \[= \frac{C_{N}}{m_{N_{1}}}\left|\left[U^{2}_{e1}+\frac{U^{2}_{e2}e^{ i\,\alpha}m_{2}}{m_{1}}+\frac{U^{2}_{e3}e^{i\,\beta}m_{3}}{m_{1}}\right]\right|\] \[= \frac{C_{N}}{m_{N_{1}}m_{1}}\left|m^{\nu}_{\beta\beta,L}\right|\,, \hskip 28.452756pt\mbox{NO case}\,,\] \[\left|m^{N}_{\beta\beta,R}\right| = \frac{C_{N}}{m_{N_{3}}}\left|\left[\frac{U^{2}_{e1}m_{N_{3}}}{m_{ N_{1}}}+\frac{U^{2}_{e2}e^{i\,\alpha}m_{N_{3}}}{m_{N_{2}}}+U^{2}_{e3}e^{i\,\beta} \right]\right| \tag{4.19}\] \[= \frac{C_{N}}{m_{N_{3}}}\left|\left[\frac{U^{2}_{e1}m_{1}}{m_{3}} +\frac{U^{2}_{e2}e^{i\,\alpha}m_{2}}{m_{3}}+U^{2}_{e3}e^{i\,\beta}\right]\right|\] \[= \frac{C_{N}}{m_{N_{3}}m_{3}}\left|m^{\nu}_{\beta\beta,L}\right|\,, \hskip 28.452756pt\mbox{IO case}\,,\] where we have used Eqs. (3.15) and (3.16). We see that in the considered setting the contribution due to exchange of the heavy Majorana neutrinos \(N_{1,2,3}\) is proportional to the standard contribution due to the light Majorana neutrino exchange, \(|m^{N}_{\beta\beta,R}|\propto|m^{\nu}_{\beta\beta,L}|\). We consider next \(m^{S}_{\beta\beta,R}\). It follows from Eq. (3.11) that \(m_{N_{i}}=\frac{k_{rs}^{2}}{m_{S_{i}}}\). As it is described in Appendix (8), the mixing \(V^{NS}_{ek}\equiv M_{RS}M_{S}^{-1}U_{S}\). 
We can diagonalize \(M_{S}\) as \(M_{S}=U_{S}M_{S}^{D}{U_{S}}^{T}\). Since \(M_{RS}=k_{rs}I\), the mixing \[V^{NS}_{ek} =k_{rs}(U_{S}M_{S}^{D}{U_{S}}^{T})^{-1}U_{S}\] \[=k_{rs}U^{*}_{S}\,{\rm diag}(1/m_{S_{1}},1/m_{S_{2}},1/m_{S_{3}}) \tag{4.20}\] Using the relation \(m_{N_{i}}=\frac{k_{rs}^{2}}{m_{S_{i}}}\) and Eqs. (3.15) - (3.18), the expression for \(m^{S}_{\beta\beta,R}\) can be written as: \[\left|m^{S}_{\beta\beta,R}\right| = C_{N}\,k_{rs}^{2}\left|\left[\frac{U_{e1}^{2}}{m_{S_{1}}^{3}}+ \frac{U_{e2}^{2}e^{i\,\alpha}}{m_{S_{2}}^{3}}+\frac{U_{e3}^{2}e^{i\,\beta}}{m_ {S_{3}}^{3}}\right]\right| \tag{4.21}\] \[= \left|C_{N}\bigg{[}\frac{U_{e1}^{2}\,m_{N_{1}}}{m_{S_{1}}^{2}}+ \frac{U_{e2}^{2}e^{i\,\alpha}\,m_{N_{2}}}{m_{S_{2}}^{2}}+\frac{U_{e3}^{2}e^{i \,\beta}\,m_{N_{2}}}{m_{S_{3}}^{2}}\bigg{]}\right|\] \[= \left|\frac{C_{N}\,m_{N_{1}}\,m_{1}\,m_{2}^{2}}{m_{S_{3}}^{2}\,m _{1}^{3}}\bigg{[}U_{e1}^{2}+U_{e2}^{2}e^{i\,\alpha}\,\frac{m_{1}^{3}}{m_{2}^{ 3}}+U_{e3}^{2}e^{i\,\beta}\,\frac{m_{1}^{3}}{m_{3}^{3}}\bigg{]}\right|\quad \text{NO case}\,,\] \[\left|m^{S}_{\beta\beta,R}\right| = \left|C_{N}\,m_{N_{3}}\bigg{[}\frac{U_{e1}^{2}}{m_{S_{1}}^{2}} \frac{m_{3}}{m_{1}}+\frac{U_{e2}^{2}e^{i\,\alpha}}{m_{S_{2}}^{2}}\frac{m_{3}}{ m_{2}}+\frac{U_{e3}^{2}e^{i\,\beta}}{m_{S_{3}}^{2}}\bigg{]}\right| \tag{4.22}\] \[= \left|\frac{C_{N}\,m_{N_{3}}\,m_{3}\,m_{2}^{2}}{m_{S_{2}}^{2}\,m _{3}^{3}}\bigg{[}U_{e1}^{2}\,\frac{m_{3}^{3}}{m_{1}^{3}}+U_{e2}^{2}e^{i\, \alpha}\,\frac{m_{3}^{3}}{m_{2}^{3}}+U_{e3}^{2}e^{i\,\beta}\bigg{]}\right|\,, \quad\text{IO case}\,.\] It follows from Eqs. (4.18) - (4.22) that \(|m^{N}_{\beta\beta,R}|\) and \(|m^{S}_{\beta\beta,R}|\) exhibit very unusual dependence on the lightest neutrino mass \(m_{1(3)}\): \(|m^{N}_{\beta\beta,R}|\propto 1/m_{1(3)}\) and \(|m^{S}_{\beta\beta,R}|\propto 1/m_{1(3)}^{2}\). Correspondingly, the new physics contributions to the \(0\nu\beta\beta\) decay amplitude are strongly enhanced at relatively small values of \(m_{1(3)}\). We will discuss this dependence in greater detail in the next Section. We will show, in particular, that in the considered scenario with \(m_{S_{k}}\sim(10^{2}-10^{4})\) GeV and \(m_{N_{j}}\sim(1-100)\) GeV of interest, the lightest neutrino mass \(m_{1(3)}\) cannot be smaller than \(\sim 10^{-4}\) eV. We will also show that due to the indicated enhancement the new contributions dominate over the standard mechanism contribution for \(m_{1(3)}\sim(10^{-4}-10^{-2})\) eV 1. Footnote 1: The effects of the heavy Majorana neutrino exchange in \(0\nu\beta\beta\) decay amplitude in a left-right symmetric model setting were studied recently J. de Vries et al., JHEP 11 (2022) 056 [117]. However, the version of the left-right symmetric model considered by us and in J. de Vries et al., JHEP 11 (2022) 056 [117], differ significantly and practically there is no overlap in what concerns the results on the contributions of interest of the heavy Majorana exchange to the \(0\nu\beta\beta\) decay amplitude. ## 5 Phenomenological analysis In the present Section, we will discuss the effects of the new physics contributions to the \(0\nu\beta\beta\) decay amplitude on the predictions for the effective Majorana mass and the \(0\nu\beta\beta\) decay half-life. We recall that if \(0\nu\beta\beta\) decay will be observed, the data on the half-life of \(0\nu\beta\beta\) decay generated by the standard mechanism can provide important information on the absolute scale of light neutrino masses and on the neutrino mass ordering [122; 123]. 
With additional input data about the values of the lightest neutrino mass \(m_{1(3)}\) (or the sum of the neutrino masses), it might be possible to get information about the values of the Majorana phases in the PMNS matrix as well [124; 18]. In what follows, we will investigate, in particular, how the quoted results are possibly modified by the new contributions to the \(0\nu\beta\beta\) decay amplitude. ### Mass parameter ranges We note first that there exist rather stringent constrains on coupling and masses of the heavy Majorana neutrinos associated with the low-scale type I seesaw mechanism of neutrino mass generation which have been comprehensively discussed in [125]. In the model studied by us the heavy Majorana neutrinos have masses greater than 1 GeV. The couplings of the heavy Majorana neutrino states in the left-handed (V-A) charged lepton current are suppressed, being smaller than \(\sim 10^{-6}\). Their couplings in the right-handed (V+A) charged current are not suppressed being \(\sim U_{PMNS}\), but the contribution of the (V+A) charged current interaction to the rates of experimentally measured observables is suppressed by the factor \((M_{W_{L}}/M_{W_{R}})^{4}<10^{-8}\), where \(M_{W_{L}}=80.38\) GeV in the mass of the Standard Model \(W^{\pm}\) boson, while \(M_{W_{R}}\) is the mass of its \(SU(2)_{R}\) counterpart, and we have used the constraint \(M_{W_{R}}>5\) TeV following from the LHC data. As a consequence, the low energy experimental constrains on the heavy Majorana neutrinos summarised in [125] are satisfied in the model considered by us and do not lead to additional restrictions on the couplings and/or masses of these states. The new non-standard contributions to the \(0\nu\beta\beta\) decay amplitude, \(|m^{N}_{\beta\beta,R}|\) and \(|m^{S}_{\beta\beta,R}|\), as it follows from Eqs. (4.18) - (4.22), have very peculiar dependence on the lightest neu \begin{table} \begin{tabular}{c c|c|c} \hline Isotope & \(T^{0\nu}_{1/2}\) yrs & \(m^{0\nu}_{\beta\beta}\) [eV] & Collaboration \\ \hline \({}^{76}\)Ge & \(>1.8\times 10^{26}\) & \(<(0.08-0.18)\) & GERDA [23] \\ \({}^{76}\)Ge & \(>2.7\times 10^{25}\) & \(<(0.2-0.433)\) & MAJORANA DEMONSTRATOR [118] \\ & \(>8.3\times 10^{25}\) & \(<(0.113-0.269)\) & [119] \\ \({}^{82}\)Se & \(>3.5\times 10^{24}\) & \(<(0.311-0.638)\) & CUPID-0 [120] \\ \({}^{130}\)Te & \(>2.2\times 10^{25}\) & \(<(0.09-0.305)\) & CUORE [121] \\ \({}^{136}\)Xe & \(>3.5\times 10^{25}\) & \(<(0.093-0.286)\) & EXO [24] \\ \({}^{136}\)Xe & \(>1.07\times 10^{26}\) & \(<(0.061-0.165)\) & KamLAND-Zen [25] \\ & \(>2.3\times 10^{26}\) & \(<(0.036-0.156)\) & [26] \\ \hline \end{tabular} \end{table} Table 4: The current lower limits on the half life \(T^{0\nu}_{1/2}\) and upper limits on the effective mass parameter \(m^{0\nu}_{\beta\beta}\) of neutrinoless double beta decay for different isotopes. The range for the effective Majorana mass parameter comes from uncertainties in the nuclear matrix element. trino mass \(m_{1(3)}\). They are strongly enhanced and, as we are going to show below, are considerably larger that the standard mechanism contribution \(|m_{\beta\beta,L}^{\nu}|\) at \(m_{1(3)}\mathrel{\hbox to 0.0pt{\raise 2.1973pt\hbox{$<$}}{\lower 2.1973pt\hbox{$ \sim$}}}10^{-3}\) eV, where \(|m_{\beta\beta,R}^{N}|>|m_{\beta\beta,L}^{\nu}|\), \(|m_{\beta\beta,R}^{S}|>>|m_{\beta\beta,L}^{\nu}|\) and \(|m_{\beta\beta,R}^{S}|>>|m_{\beta\beta,R}^{N}|\). 
At \(m_{1(3)}\mathrel{\hbox to 0.0pt{\raise 2.1973pt\hbox{$>$}}{\lower 2.1973pt\hbox{$ \sim$}}}5\times 10^{-2}\) eV, however, we have \(|m_{\beta\beta,R}^{N}|,|m_{\beta\beta,R}^{S}|\ll|m_{\beta\beta,L}^{\nu}|\). This implies that the most stringent conservative experimental upper limit on \(|m_{\beta\beta}^{\nu}|<0.156\) eV reported by the KamLAND-Zen collaboration [26] (see Table 4) applies to \(|m_{\beta\beta,L}^{\nu}|\) since it corresponds to light neutrino masses \(m_{1,2,3}\mathrel{\hbox to 0.0pt{\raise 2.1973pt\hbox{$>$}}{\lower 2.1973pt \hbox{$\sim$}}}0.1\) eV. Actually, it follows from the quoted upper limit that [89, 126]\(m_{1,2,3}\mathrel{\hbox to 0.0pt{\raise 2.1973pt\hbox{$<$}}{\lower 2.1973pt \hbox{$\sim$}}}0.156/(\cos 2\theta_{12}-\sin^{2}\theta_{13})\cong 0.55\) eV, where we have used the \(3\sigma\) allowed ranges of \(\sin^{2}\theta_{12}\) and \(\sin^{2}\theta_{13}\) given in Table 2 neglecting the minor differences in the ranges corresponding to NO and IO neutrino mass spectra. Thus, the largest light neutrino mass \(m_{3(2)}\) is allowed to vary approximately between \(\sqrt{\Delta m_{31(23)}^{2}}\cong 5\times 10^{-2}\) eV and \(0.55\) eV. The Cosmic Microwave Background (CMB) data of WMAP and PLANCK experiments, combined with supernovae and other cosmological and astrophysical data can be used to obtain information in the form of an upper limit on the sum of neutrinos masses and thus on \(m_{3(2)}\) (see e.g., ref. [127]). Depending on the model complexity and the input data used one typically finds [128] (see also [129]): \(\sum_{j}m_{j}<(0.12-0.54)\) eV (95% CL). The quoted conservative upper limit on \(\sum_{j}m_{j}\) implies \(m_{3(2)}\mathrel{\hbox to 0.0pt{\raise 2.1973pt\hbox{$<$}}{\lower 2.1973pt \hbox{$\sim$}}}0.18\) eV. In our phenomenological and numerical analysis, we will use somewhat larger values of \(m_{3(2)}\), keeping in mind the existence of more stringent limits. We recall further that in the model considered by us \(m_{N_{i}}=k_{d}^{2}/m_{i}\), \(m_{S_{i}}=(k_{rs}^{2}/k_{d}^{2})m_{i}\), where \(k_{d}\) and \(k_{rs}\) are real constant parameters. Correspondingly, in the case of NO light neutrino mass spectrum, \(m_{1}<m_{2}<m_{3}\), we have \(m_{N_{3}}<m_{N_{2}}<m_{N_{1}}\) and \(m_{S_{1}}<m_{S_{2}}<m_{S_{3}}\). For IO spectrum, \(m_{3}<m_{1}<m_{2}\), we have instead: \(m_{N_{2}}<m_{N_{1}}<m_{N_{3}}\) and \(m_{S_{3}}<m_{S_{1}}<m_{S_{2}}\). In the double seesaw model under discussion, we should always have in the NO (IO) case \(m_{N_{1(2)}}\ll m_{S_{3(2)}}\), i.e., \(m_{S_{3(2)}}\mathrel{\hbox to 0.0pt{\raise 2.1973pt\hbox{$>$}}{\lower 2.1973pt \hbox{$\sim$}}}10\,m_{N_{1(2)}}\). In what follows, we will consider the values of \(m_{S_{3(2)}}\) and \(m_{N_{1(3)}}\) in the intervals \((1-10)\) TeV and \((10^{2}-10^{3})\) GeV, respectively, while the mass of the lightest RH Majorana neutrino \(N_{3(2)}\) will be assumed to satisfy \(m_{N_{3(2)}}\geq 1\) GeV. The minimal value of \(m_{N_{3(2)}}\) of 1 GeV should correspond to the maximal allowed value of \(m_{3(2)}\cong 0.55\) eV considered by us. As a consequence, we have: \(k_{d}^{2}=\min(m_{N_{3(2)}})\max(m_{3(2)})=0.55\) eV GeV. We get similar value of \(k_{d}^{2}\) if we use \(m_{N_{3(2)}}=10\) GeV and \(m_{3(2)}\cong 0.05\) eV. 
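For orientation, the inverse scaling \(m_{N_{i}}=k_{d}^{2}/m_{i}\) can be made explicit numerically. The sketch below assumes the estimate \(k_{d}^{2}\cong 0.55\) eV GeV obtained above, the best-fit splittings of Table 2 and, purely for illustration, a NO spectrum with \(m_{1}=10^{-3}\) eV:

```python
# Illustration of m_N_i = k_d^2 / m_i for a NO spectrum, assuming
# k_d^2 ~ 0.55 eV*GeV and the best-fit splittings of Table 2.
import math

k_d2 = 0.55                        # eV * GeV (estimated above)
dm21, dm31 = 7.34e-5, 2.485e-3     # eV^2, best-fit values (NO)

m1 = 1e-3                          # eV, chosen for illustration
m2 = math.sqrt(m1**2 + dm21)
m3 = math.sqrt(m1**2 + dm31)

for i, mi in enumerate((m1, m2, m3), start=1):
    print(f"m_{i} = {mi:.2e} eV  ->  m_N{i} = {k_d2 / mi:7.1f} GeV")
# The heaviest N_1 corresponds to the lightest nu_1, as stated in the text,
# and all three m_N_i fall in the (1 - 1e3) GeV range considered here.
```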
Given the value of \(k_{d}^{2}\), the requirement that the mass of the heaviest RH Majorana neutrino \(m_{N_{1(3)}}\) should not exceed \(10^{3}\) GeV implies a lower limit on the mass of the lightest Majorana neutrino \(m_{1(3)}\): \(m_{1(3)}=k_{d}^{2}/m_{N_{1(3)}}\gtrsim 0.55\times 10^{-3}\) eV. Thus, for consistency with the chosen ranges of the heavy Majorana fermion masses in the model, the lightest neutrino mass should not be smaller than about \(5.5\times 10^{-4}\) eV. In the numerical analysis we will perform, we will exploit the range \(m_{1(3)}=(10^{-4}-1.0)\) eV.

In the analysis which follows, we will use the values of the neutrino oscillation parameters given in Table 2. We set the Dirac phase \(\delta=0\). The Majorana phases \(\alpha\) and \(\beta\) are varied in the interval \([0,\pi]\). For the parameters \(M_{W_{R}}\), \(m_{N_{1(3)}}\), \(m_{S_{3(2)}}\) and the ratio \(M_{N}^{0\nu}/M_{\nu}^{0\nu}\) the following reference values will be utilised: \(M_{W_{R}}=5.5\) TeV, \(m_{N_{1(3)}}=300\) GeV, \(m_{S_{3(2)}}=3\) TeV and \(M_{N}^{0\nu}/M_{\nu}^{0\nu}\cong 22.2-76.3\) (concerning \(M_{N}^{0\nu}/M_{\nu}^{0\nu}\), see Eq. (4.7) and the discussion related to it).

### Light neutrino contribution

The phenomenology of the light neutrino contribution to the \(0\nu\beta\beta\) decay half-life, including the properties of the corresponding effective Majorana mass \(|m^{\nu}_{\beta\beta,L}|\), has been extensively studied and is well known (see, e.g., [89]). In this subsection, we summarise the main features of \(|m^{\nu}_{\beta\beta,L}|\).

**Normal Ordering**

In this case \(|m^{\nu}_{\beta\beta,L}|\) (see Eq. (4.16)) can be rewritten in terms of the neutrino mass squared differences as
\[\left|m^{\nu}_{\beta\beta,L}\right|=\left|m_{1}c_{12}^{2}c_{13}^{2}+\sqrt{m_{1 }^{2}+\Delta m_{21}^{2}}s_{12}^{2}c_{13}^{2}e^{i\alpha}+\sqrt{m_{1}^{2}+\Delta m _{31}^{2}}s_{13}^{2}e^{i\beta}\right|\,. \tag{5.1}\]
The best fit values and the \(3\sigma\) allowed ranges of \(s_{12}^{2}\equiv\sin^{2}\theta_{12}\), \(s_{13}^{2}\equiv\sin^{2}\theta_{13}\) and of \(\Delta m_{21}^{2}\) and \(\Delta m_{31}^{2}\) are given in Table 2. The case of hierarchical light neutrino mass spectrum corresponds to \(m_{1}\ll m_{2}<m_{3}\). In this case \(m_{2}\approx\sqrt{\Delta m_{21}^{2}}\approx 8.57\times 10^{-3}\) eV and \(m_{3}\approx\sqrt{\Delta m_{31}^{2}}\approx 4.98\times 10^{-2}\) eV, and thus \(m_{1}\lesssim 8\times 10^{-4}\) eV. Depending on the values of the Majorana phases, \(|m^{\nu}_{\beta\beta,L}|\) can take values in the interval \((0.4-4.8)\times 10^{-3}\) eV, where we have used the \(3\sigma\) allowed ranges of the relevant oscillation parameters. At \(m_{1}=10^{-4}\) eV we have: \(0.91\times 10^{-3}\) eV \(\lesssim|m^{\nu}_{\beta\beta,L}|\lesssim 4.37\times 10^{-3}\) eV. The effective Majorana mass \(|m^{\nu}_{\beta\beta,L}|\) exhibits strong dependence on the values of the Majorana phases \(\alpha\) and \(\beta\) in the case of NO neutrino mass spectrum with partial hierarchy, corresponding to \(m_{1}=(10^{-3}-10^{-2})\) eV. Indeed, for \(\alpha=\pi\) and \(\beta=0\), \(|m^{\nu}_{\beta\beta,L}|\) is strongly suppressed for values of \(m_{1}\) lying in the interval \((1.3-9.0)\times 10^{-3}\) eV, where \(|m^{\nu}_{\beta\beta,L}|\lesssim 2\times 10^{-4}\) eV due to cancellations (partial or complete) between the three terms in the expression for \(|m^{\nu}_{\beta\beta,L}|\).
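This behaviour is easily reproduced numerically. The following sketch evaluates Eq. (5.1) at the best-fit oscillation parameters of Table 2 with \(\delta=0\) (an illustration; the intervals quoted above additionally scan the \(3\sigma\) ranges and both Majorana phases):

```python
# Minimal evaluation of the standard-mechanism effective Majorana mass
# |m_bb,L| for the NO spectrum, Eq. (5.1), at best-fit parameters (delta = 0).
import numpy as np

s12sq, s13sq = 0.305, 0.0222
dm21, dm31 = 7.34e-5, 2.485e-3          # eV^2
c12sq, c13sq = 1 - s12sq, 1 - s13sq

def m_bb_L(m1, alpha, beta):
    m2 = np.sqrt(m1**2 + dm21)
    m3 = np.sqrt(m1**2 + dm31)
    return abs(m1 * c12sq * c13sq
               + m2 * s12sq * c13sq * np.exp(1j * alpha)
               + m3 * s13sq * np.exp(1j * beta))

for m1 in (1e-4, 2e-3, 5e-3, 1e-2):
    print(f"m1 = {m1:.0e} eV:  alpha=0: {m_bb_L(m1, 0.0, 0.0):.2e} eV"
          f"   alpha=pi: {m_bb_L(m1, np.pi, 0.0):.2e} eV")
# For alpha = pi the three terms nearly cancel for m1 of a few 1e-3 eV,
# illustrating the suppression discussed above.
```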
Using the best fit values of the neutrino oscillation parameters, we find that a complete cancellation takes place and \(|m^{\nu}_{\beta\beta,L}|=0\) at \(m_{1}\cong 2.26\times 10^{-3}\) eV. At the same time, at \(m_{1}=2.26\)\((9.0)\times 10^{-3}\) eV, for example, \(|m^{\nu}_{\beta\beta,L}|\approx 5\)\((10)\times 10^{-3}\) eV if \(\alpha=0\) and \(\beta=0\). As \(m_{1}\) increases beyond \(10^{-2}\) eV, \(|m^{\nu}_{\beta\beta,L}|\) increases almost linearly with \(m_{1}\) and at \(m_{1}\cong 0.1\) eV enters the quasi-degenerate (QD) neutrino mass spectrum region where \(|m^{\nu}_{\beta\beta,L}|\gtrsim 0.05\) eV. **Inverted Ordering** \[\left|m^{\nu}_{\beta\beta,L}\right|=\left|\sqrt{m_{3}^{2}+\Delta m_{23}^{2}- \Delta m_{21}^{2}}\,c_{12}^{2}\,c_{13}^{2}+\sqrt{m_{3}^{2}+\Delta m_{23}^{2} }\,s_{12}^{2}\,c_{13}^{2}\,e^{i\alpha}+m_{3}\,s_{13}^{2}\,e^{i\beta}\right) \right|\,. \tag{5.2}\] The behavior of \(|m^{\nu}_{\beta\beta,L}|\) as a function of the lightest neutrino mass \(m_{3}\) is very different from the behavior in the NO case. Given the fact that \(m_{2}=\sqrt{m_{3}^{2}+\Delta m_{23}^{2}}\gtrsim 5\times 10^{-2}\) eV, \(m_{1}=\sqrt{m_{3}^{2}+\Delta m_{23}^{2}-\Delta m_{21}^{2}}\gtrsim 5\times 10^{-2}\) eV, \(s_{13}^{2}\cong 0.022\) and at \(3\sigma\) we have \((c_{12}^{2}-s_{12}^{2})\gtrsim 0.31\), complete cancellation between the three terms in Eq. (5.2) is not possible. Actually, at \(m_{3}^{2}\ll\Delta m_{23}^{2}\), or equivalently, at \(m_{3}\lesssim 1.6\times 10^{-2}\) eV, \(|m^{\nu}_{\beta\beta,L}|\) practically does not depend on \(m_{3}\). At these values of \(m_{3}\) we have \(\sqrt{\Delta m^{2}_{23}}\cos 2\theta_{12}\lesssim|m^{\nu}_{\beta\beta,L}|\lesssim \sqrt{\Delta m^{2}_{23}}\). Using the \(3\sigma\) allowed ranges of \(\sqrt{\Delta m^{2}_{23}}\) and \(\cos 2\theta_{12}\) from Table 2 we get: \(1.51\times 10^{-2}\) eV \(\lesssim|m^{\nu}_{\beta\beta,L}|\lesssim 5.06\times 10^{-2}\) eV. If the \(0\nu\beta\beta\) decay were generated by the standard mechanism only, the fact that in the case of hierarchical light neutrino mass spectrum the minimal value of \(|m^{\nu}_{\beta\beta,L}|\) for the IH spectrum is approximately by a factor of 3.4 larger than the maximal value of \(|m^{\nu}_{\beta\beta,L}|\) for the NH spectrum opens up the possibility of obtaining information about the type of neutrino mass spectrum from a measurement of \(|m^{\nu}_{\beta\beta,L}|\)[122]. As \(m_{3}\) increases beyond \(1.6\times 10^{-2}\) eV, \(|m^{\nu}_{\beta\beta,L}|\) also increases and at \(m_{3}\cong 0.1\) eV enters the QD region where \(|m^{\nu}_{\beta\beta,L}|\ \raisebox{-3.698858pt}{~{}\shortstack{$>$ \\ [-0.07cm] $\sim$}}\ 0.03\) eV growing linearly with \(m_{3}\). ### The contribution due to the exchange of \(N_{1.2.3}\) The contribution due to the exchange of virtual \(N_{1,2,3}\) in the NO and IO cases are given respectively in Eqs. (4.18) and (4.19) can be cast in the form: \[\left|m^{N}_{\beta\beta,R}\right| = \frac{C_{N}}{m_{N_{1(3)}}m_{1(3)}}\left|m^{\nu}_{\beta\beta,L} \right|\,,\hskip 28.452756pt\mbox{NO (IO) case}\,, \tag{5.3}\] where \(C_{N}\) is defined in Eq. (4.17) and \(|m^{\nu}_{\beta\beta,L}|\) is the effective Majorana mass associated with the standard mechanism discussed in the preceding sub-section. 
Using the \(M_{W_{L}}=80.38\) GeV, the reference values of \(M_{W_{R}}=5.5\) TeV we get: \[\frac{C_{N}}{m_{N_{1(3)}}\,m_{1(3)}}\cong 0.729\left(\frac{m_{N_{1(3)}}}{300 \mbox{ GeV}}\right)^{-1}\left(\frac{m_{1(3)}}{10^{-4}\mbox{ eV}}\right)^{-1} \frac{M_{N}^{0\nu}}{M_{\nu}^{0\nu}}\,. \tag{5.4}\] Taking into account the minimal and maximal reference values of \(M_{N}^{0\nu}/M^{0\nu}\) in Eq. (4.7) and fixing \(m_{N_{1(3)}}\) to the reference value of 300 GeV, the variation of the factor \(C_{N}/(m_{N_{1(3)}}m_{1(3)})\) with the change of lightest neutrino mass is displayed in Fig.2. We also estimate the possible range of values of the factor \(C_{N}/(m_{N_{1(3)}}m_{1(3)})\) for four values of the lightest neutrino mass \(m_{1(3)}\) as follows : \(C_{N}/(m_{N_{1(3)}}m_{1(3)})\cong(16.2-55.6)\) for \(m_{1(3)}=10^{-4}\) eV, \(C_{N}/(m_{N_{1(3)}}m_{1(3)})\cong(1.62-5.56)\) for \(m_{1(3)}=10^{-3}\) eV, \(C_{N}/(m_{N_{1(3)}}m_{1(3)})\cong(0.32-1.11)\) for \(m_{1(3)}=5\times 10^{-3}\) eV, \(C_{N}/(m_{N_{1(3)}}m_{1(3)})\cong(0.16-0.56)\) for \(m_{1(3)}=10^{-2}\) eV. It is clear from these estimates that for \(10^{-4}\)eV \(\leq m_{1(3)}\leq 10^{-3}\) eV, the contribution due to exchange of virtual \(N_{1,2,3}\) is larger than the standard mechanism contribution: \(|m^{N}_{\beta\beta,R}|>|m^{\nu}_{\beta\beta,L}|\). For \(m_{1(3)}\sim(10^{-4}-5\times 10^{-4})\) eV we have actually: \(|m^{N}_{\beta\beta,R}|\gg|m^{\nu}_{\beta\beta,L}|\). In this interval of values of \(m_{1}\) in the NO case, \(|m^{N}_{\beta\beta,R}|\) lies in the region corresponding to the IO neutrino mass spectrum if only the standard mechanism (i.e., only light Majorana neutrino exchange) were operative in \(0\nu\beta\beta\) decay. The predicted values of \(|m^{N}_{\beta\beta,R}|\) in the IO case are larger than the experimental limits on effective Majorana mass reported by the GERDA and KamLAND-Zen experiments (see Table 4). In the region \(m_{1(3)}\sim(10^{-3}-10^{-2})\) eV we have roughly \(|m^{N}_{\beta\beta,R}|\sim|m^{\nu}_{\beta\beta,L}|\) (see below), with \(|m^{N}_{\beta\beta,R}|\) decreasing as \(1/m_{1(3)}\). In the NO case, \(|m^{\nu}_{\beta\beta,L}|\) can be strongly suppressed, i.e., depending on the values of the Majorana phases it can have value \(|m^{\nu}_{\beta\beta,L}|\leq 10^{-4}\) eV, and in this case \(|m^{N}_{\beta\beta,R}|\) will also be suppressed. At \(m_{1}=10^{-3}\) eV though at which \(|m^{\nu}_{\beta\beta,L}|\cong 3\times 10^{-4}\) eV, \(|m^{N}_{\beta\beta,R}|\) can be somewhat larger than \(|m^{\nu}_{\beta\beta,L}|\) owing to the relevant NME element ratio and can have a value \(|m^{N}_{\beta\beta,R}|\cong 1.5\times 10^{-3}\) eV. In the IO case, \(|m^{N}_{\beta\beta,R}|\mathrel{\hbox to 0.0pt{\lower 4.0pt\hbox{ $\sim$}} \raise 1.0pt\hbox{$>$}}|m^{\nu}_{\beta\beta,L}|\) in the discussed region. It can be larger than \(|m^{\nu}_{\beta\beta,L}|\) by a factor of 2. At \(m_{1(3)}>10^{-2}\) eV, \(|m^{\nu}_{\beta\beta,L}|\mathrel{\hbox to 0.0pt{\lower 4.0pt\hbox{ $\sim$}} \raise 1.0pt\hbox{$>$}}|m^{N}_{\beta\beta,R}|\) and at \(m_{1(3)}\geq 5\times 10^{-2}\) eV, we have \(|m^{\nu}_{\beta\beta,L}|>>|m^{N}_{\beta\beta,R}|\) and the contribution due to the exchange of \(N_{1.2.3}\) is subleading and practically negligible in both NO and IO cases. For values of \(m_{N_{1(3)}}\) smaller (larger) than the considered 300 GeV, \(|m^{N}_{\beta\beta,R}|\) will have values which are larger (smaller) than those discussed above by the factor 300 GeV/\(m_{N_{1(3)}}\). 
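The prefactor in Eq. (5.4) and the ranges listed above can be verified directly (illustrative sketch; \(m_{e}\) and \(m_{p}\) expressed in eV, with the reference values \(m_{N_{1(3)}}=300\) GeV and \(M_{W_{R}}=5.5\) TeV):

```python
# Numerical check of C_N/(m_N1 m_1) = m_e m_p (M_N/M_nu) (M_WL/M_WR)^4 / (m_N1 m_1)
# at the reference point used in the text.
m_e = 0.511e6        # eV
m_p = 0.938272e9     # eV
M_WL, M_WR = 80.38, 5500.0   # GeV
GeV = 1e9            # eV per GeV

def cn_over(m_N1_GeV, m1_eV, nme_ratio):
    C_N = m_e * m_p * nme_ratio * (M_WL / M_WR) ** 4   # eV^2
    return C_N / (m_N1_GeV * GeV * m1_eV)

print(f"prefactor (NME ratio = 1): {cn_over(300.0, 1e-4, 1.0):.3f}")   # ~0.729
for m1 in (1e-4, 1e-3, 5e-3, 1e-2):
    lo, hi = cn_over(300.0, m1, 22.2), cn_over(300.0, m1, 76.3)
    print(f"m1 = {m1:.0e} eV:  C_N/(m_N1 m_1) = {lo:.2f} - {hi:.2f}")
# Reproduces the ranges (16.2 - 55.6), (1.62 - 5.56), (0.32 - 1.11), (0.16 - 0.56)
# quoted in the text for the four values of the lightest neutrino mass.
```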
Since in the considered scenario the mass of the lightest \(N_{j}\) is assumed to satisfy \(m_{N_{3(2)}}\geq 1\) GeV and is given by \(m_{N_{3(2)}}=(m_{1(3)}/m_{3(2)})m_{N_{1(3)}}\), \(m_{1(3)}\mathrel{\hbox to 0.0pt{\lower 4.0pt\hbox{ $\sim$}} \raise 1.0pt\hbox{$>$}}5.5\times 10^{-4}\) eV, and from the data it follows that \(m_{3(2)}\mathrel{\hbox to 0.0pt{\lower 4.0pt\hbox{ $\sim$}} \raise 1.0pt\hbox{$>$}}5\times 10^{-2}\) eV, for consistency one should have also \(m_{N_{1(3)}}\mathrel{\hbox to 0.0pt{\lower 4.0pt\hbox{ $\sim$}} \raise 1.0pt\hbox{$>$}}100\) GeV. ### The contribution due to the exchange of \(S_{1.2.3}\) The important parameter for the contribution due to the exchange of \(S_{1.2.3}\) in the NO (IO) case is the dimensionful factor \[C_{S}^{\rm NO(IO)}\equiv\frac{C_{N}\,m_{N_{1(3)}}\,m_{1(3)}\,m_{3(2)}^{2}}{m_ {S_{3(2)}}^{2}\,m_{1(3)}^{3}}\,,\qquad\quad{\rm NO\ (IO)}\,. \tag{5.5}\] Taking into account Eq. (5.4), \(C_{S}^{\rm NO(IO)}\) can be cast in the form: \[C_{S}^{\rm NO(IO)}=0.729\times 10^{-6}\ {\rm eV}\,\frac{m_{N_{1(3)}}}{300\ {\rm GeV}}\,\left(\frac{m_{S_{3(2)}}}{3\ {\rm TeV}}\right)^{-2}\,\left(1+\frac{\Delta m_{31(23)}^{2}}{m_{1(3)}^{2}} \right)\,\frac{{\rm M}_{\rm N}^{0\nu}}{{\rm M}_{\nu}^{0\nu}}\,,\ \ {\rm NO \ (IO)}\,. \tag{5.6}\] Setting \(m_{N_{1(3)}}\), \(m_{S_{3(2)}}\), \(M_{W_{R}}\) to the reference values of 300 GeV, 3 TeV, 5.5 TeV respectively and using the values of \(\Delta m_{31(23)}^{2}\cong 2.5\times 10^{-3}\ {\rm eV}^{2}\) (see Table 2) and \({\rm M}_{\rm N}^{0\nu}/{\rm M}^{0\nu}\) as given in Eq. (4.7), the variation of \(C_{S}^{\rm NO(IO)}\) with the change of lightest neutrino mass is shown in Fig.3. Using these reference model parameters we calculate the factor \(C_{S}^{\rm NO(IO)}\) for different values of lightest neutrino mass as given below : \(C_{S}^{\rm NO(IO)}\cong(4.05-13.90)\ {\rm eV}\) for \(m_{1(3)}=10^{-4}\ {\rm eV}\), \(C_{S}^{\rm NO(IO)}\cong(0.040-0.139)\ {\rm eV}\) for \(m_{1(3)}=10^{-3}\ {\rm eV}\), \(C_{S}^{\rm NO(IO)}\cong(4.21\times 10^{-4}-1.45\times 10^{-3})\ {\rm eV}\) for \(m_{1(3)}=10^{-2}\ {\rm eV}\), \(C_{S}^{\rm NO(IO)}\cong(3.24\times 10^{-5}-1.11\times 10^{-4})\ {\rm eV}\) for \(m_{1(3)}=5\times 10^{-2}\ {\rm eV}\), \(C_{S}^{\rm NO(IO)}\cong(2.0-6.9)\times 10^{-5})\ {\rm eV}\) for \(m_{1(3)}=10^{-1}\ {\rm eV}\). It follows from these numerical estimates that \(C_{S}^{\rm NO(IO)}\), and thus \(|m_{\beta\beta,R}^{S}|\), decreases rapidly with the increase of \(m_{1(3)}\) in the interval \((10^{-4}-5\times 10^{-2})\ {\rm eV}\). We recall that the contributions due to the exchange of \(S_{1.2.3}\) in the NO and IO cases are given by \[\left|m_{\beta\beta,R}^{S}\right| = C_{\rm S}^{\rm NO}\left|U_{e1}^{2}+U_{e2}^{2}e^{i\,\alpha}\, \frac{m_{1}^{3}}{m_{2}^{3}}+U_{e3}^{2}e^{i\,\beta}\,\frac{m_{1}^{3}}{m_{3}^{3 }}\right|\ \ \ \ {\rm NO\ case}\,, \tag{5.7}\] \[\left|m^{S}_{\beta\beta,R}\right| = C_{\rm S}^{\rm IO}\left|U_{e1}^{2}\frac{m_{3}^{3}}{m_{1}^{3}}+U_{ e2}^{2}e^{i\,\alpha}\,\frac{m_{3}^{3}}{m_{2}^{3}}+U_{e3}^{2}e^{i\,\beta}\,\right| \quad\mbox{ IO case}\,, \tag{102}\] and that \(|U_{e1}|^{2}\cong 0.7\) and \(|U_{e3}|^{2}\cong 0.022\). Consider the NO case. We note first that with the increasing of \(m_{1}\) beyond \(10^{-2}\) eV, the contribution \(|m^{S}_{\beta\beta,R}|\) to the \(0\nu\beta\beta\) decay amplitude becomes sub-dominant and negligible. For \(m_{1}=(10^{-4}-10^{-2})\) eV, the ratio \(m_{1}^{3}/m_{3}^{3}\ll 1\), while \(m_{1}^{3}/m_{2}^{3}\ll 1\) in the interval \(m_{1}=(10^{-4}-4.5\times 10^{-3})\) eV. 
This implies that for \(m_{1}\mathrel{\raise 1.29pt\hbox{$<$\kern-7.5pt\lower 4.3pt\hbox{$\sim$}}}4.5 \times 10^{-3}\) eV, the second and third terms in the expression (101) for \(|m^{S}_{\beta\beta,R}|\) are practically negligible and \(|m^{S}_{\beta\beta,R}|\cong C_{\rm S}^{\rm NO}|U_{e1}|^{2}\) with essentially no dependence on the Majorana phases. In the interval \(m_{1}=(4.5\times 10^{-3}-10^{-2})\) eV the ratio \(m_{1}^{3}/m_{2}^{3}\) increases with \(m_{1}\) and at \(m_{1}=10^{-2}\) eV we have \(m_{1}^{3}/m_{2}^{3}\cong 0.44\). Thus, even at this value the maximal effect of the Majorana phases is to change the value of \(|U_{e1}|^{2}\cong 0.7\) to \(|U_{e1}|^{2}\pm m_{1}^{3}/m_{2}^{3}|U_{e2}|^{2}\), or to \(0.70\pm 0.13\), i.e. by at most 18%, where we have used the best fit values of \(\sin^{2}\theta_{12}\) and \(\sin^{2}\theta_{13}\). Thus, in the interval of values of \(m_{1}\) of interest, where the contribution of \(|m^{S}_{\beta\beta,R}|\) is important, there can not be significant compensation between the three terms in the expression for \(|m^{S}_{\beta\beta,R}|\). From the numerical estimates of \(|m^{\nu}_{\beta\beta,L}|\), \(|m^{N}_{\beta\beta,R}|\) and \(|m^{S}_{\beta\beta,R}|\) in the preceding and current sub-sections, it follows that in the interval of interest \(m_{1}=(10^{-4}-10^{-2})\) eV, we have \(|m^{S}_{\beta\beta,R}|>(\gg)|m^{\nu}_{\beta\beta,L}|,|m^{N}_{\beta\beta,R}|\). This is particularly important in the interval \(10^{-3}\) eV \(<m_{1}<10^{-2}\) eV, where \(|m^{\nu}_{\beta\beta,L}|<3\times 10^{-4}\) eV, while \(|m^{S}_{\beta\beta,R}|\mathrel{\raise 1.29pt\hbox{$>$\kern-7.5pt\lower 4.3pt \hbox{$\sim$}}}3\times 10^{-4}\) eV and, depending on the NME, at \(m_{1}=10^{-3}\) eV can be as large as \(|m^{S}_{\beta\beta,R}|\cong 9.7\times 10^{-2}\) eV \(\gg|m^{\nu}_{\beta\beta,L}|,|m^{N}_{\beta\beta,R}|\). The situation is very different in the IO case. In the interval \(m_{3}=(10^{-4}-10^{-2})\) eV, where the factor \(C_{\rm S}^{\rm IO}\) has a relatively large value, we have \((m_{3}/m_{2(1)})^{3}\lesssim 7.5\times 10^{-3}\), which implies that actually \(|m^{S}_{\beta\beta,R}|\cong C_{\rm S}^{\rm IO}\,|U_{e3}|^{2}\cong 2.2\times 1 0^{-2}\,C_{\rm S}^{\rm IO}\). As a consequence of the suppression due to \(|U_{e3}|^{2}\) we have in the interval of values of \(m_{3}\) of interest \(|m^{S}_{\beta\beta,R}|\ll|m^{N}_{\beta\beta,R}|\). ### The contribution of the interference term The contribution of the interference term \(2{\rm Re}(m^{N}_{\beta\beta,R}\cdot m^{S^{*}}_{\beta\beta,R})\) in Eq. (101) in the \(0\nu\beta\beta\) decay rate may be non-negligible only in the interval of values of \(m_{1(3)}=(10^{-4}-10^{-2})\) eV, where the new non-standard contributions are significant. Using the analytical expressions for \(|m^{N}_{\beta\beta,R}|\) and \(|m^{S}_{\beta\beta,R}|\) in Eqs. (100) - (103) and the results reported in the preceding subsections, it is not difficult to estimate the relative magnitude of the contribution of this term. Our results show that it varies significantly with the type of neutrino mass spectrum, the values of the lightest neutrino mass \(m_{1(3)}\) and of the Majorana phases \(\alpha\) and \(\beta\). The relative contribution of the interference term of interest is determined by the ratio: \[R\equiv\frac{2{\rm Re}(m^{N}_{\beta\beta,R}\cdot m^{S^{*}}_{\beta\beta,R})}{|m^ {\nu}_{\beta\beta,L}|^{2}+|m^{N}_{\beta\beta,R}|^{2}+|m^{S}_{\beta\beta,R}|^{2}}\,. 
\tag{103}\] Using the ratio \(R\), the generalised effective Majorana mass defined in Eq. (101) can be written as: \[m^{\nu+|N+S|}_{ee}=m^{\nu+N+S}_{ee}\,\sqrt{1+R}\,. \tag{104}\] In the case of NO spectrum, the sign of the interference term of interest depends on the Majorana phases \(\alpha\) and \(\beta\). For \(\alpha=\beta=0\), the ratio \(R<0\) and thus, the interference terms give a negative contribution to the \(0\nu\beta\beta\) decay rate. The magnitude of this contribution increases quickly when \(m_{1}\) increases from \(10^{-4}\) eV to \(10^{-3}\) eV with \(R\) changing from (-0.044) to (-0.48). The effect of the interference term peaks at \(m_{1}\cong 2\times 10^{-3}\) eV where \(R\cong-0.85\). Thus, at this value of \(m_{1}\) we have the maximal suppression of \(m_{ee}^{\nu+N+S}\) by the factor \(\sqrt{1+R}\): \(m_{ee}^{\nu+|N+S|}\cong 0.39\,m_{ee}^{\nu+N+S}\). The ratio \(R\) decreases rapidly when \(m_{1}\) increases beyond \(5\times 10^{-3}\) eV at which \(R\cong-0.48\). The quoted values of \(R\) at \(m_{1}=10^{-4}\) eV and \(m_{1}=10^{-3}\) eV are essentially independent of the value of the ratio \(M_{N}^{0\nu}/M_{\nu}^{0\nu}\) lying in the reference interval (22.2 - 76.3). The value of \(R\) quoted at \(m_{1}=5\times 10^{-3}\) eV corresponds to \(M_{N}^{0\nu}/M_{\nu}^{0\nu}=76.3\); for \(M_{N}^{0\nu}/M_{\nu}^{0\nu}=22.2\) it is significantly smaller in magnitude: \(R\cong-0.089\). The effect of the interference term is quite different for \(\alpha=\pi\), \(\beta=0\). In this case, the interference terms give a positive contribution to the \(0\nu\beta\beta\) decay rate for \(m_{1}<2.26\times 10^{-3}\) eV where \(R>0\). At \(m_{1}\cong 2.26\times 10^{-3}\) eV it goes through zero (\(R=0\)) since at this value \(m_{\beta\beta,L}^{\nu}\cong 0\) and thus \(m_{\beta\beta,R}^{N}\cong 0\). Correspondingly, at \(m_{1}\cong 2.26\times 10^{-3}\) eV the generalised effective Majorana mass (Eq. (4.15)) \(m_{ee}^{\nu+|N+S|}\cong m_{\beta\beta,R}^{S}\cong C_{\rm S}^{\rm NO}U_{e1}^{2}\). Taking into account that \(U_{e1}^{2}\cong\cos^{2}\theta_{12}\cong 0.7\) and using Eq. (5.6), for the reference values of \(m_{N_{1}}=300\) GeV, \(m_{S_{3}}=3\) TeV, \(M_{W_{R}}=5.5\) TeV and \(M_{N}^{0\nu}/M_{\nu}^{0\nu}=22.2\) (76.3) we find \(m_{ee}^{\nu+|N+S|}\cong 5.3\,(18.8)\times 10^{-3}\) eV. At \(m_{1}>2.26\times 10^{-3}\) eV the interference term is negative (\(R<0\)). It increases in magnitude as \(m_{1}\) increases in the interval \(m_{1}=3.5\times 10^{-3}-10^{-2}\) eV and, e.g., at \(m_{1}=10^{-2}\) eV we have \(R\cong-\,0.02\) (\(-\,0.16\)) for \(M_{N}^{0\nu}/M_{\nu}^{0\nu}=22.2\) (76.3). At \(m_{1}>10^{-2}\) eV we have \(|R|\ll 1\) and the interference term has a sub-leading (practically negligible) contribution in the \(0\nu\beta\beta\) decay rate. The results for the ratio \(R\) of interest are very different in the IO case. It is maximal in magnitude at \(m_{3}=10^{-4}\) eV, where \(R\cong-\,0.54\). However, for \(m_{3}\sim 10^{-4}\) eV, the predicted values of the generalised effective Majorana mass \(|m_{ee}^{\nu+|N+S|}|\) (see Eq. (4.15), as we will show in the next Section, are strongly disfavored (practically ruled out) by the existing upper limits from the KamLAND-Zen and GERDA experiments (see Table 4 and Fig. 2). 
In the region of values of \(m_{3}\gtrsim 10^{-3}\) eV, where the predictions for the generalised effective Majorana mass are compatible with the current experimental upper limits, one has \(|R|<0.06\), with the value of \(|R|\) decreasing rapidly with the increasing of \(m_{3}\). Thus, in the IO case, the interference term under discussion has at most a sub-leading (practically negligible) effect on the \(0\nu\beta\beta\) half-life in the interval of values of \(m_{3}\) where the predictions of the model considered are compatible with the existing lower limits on the half-life. ### Numerical Results It follows from the analyses performed in the preceding four subsections, in particular, that in the NO case the contribution due to the \(S_{1,2,3}\) exchange, \(|m_{\beta\beta,R}^{S}|\), dominates over the light neutrino \(\nu_{i}\) and \(N_{1,2,3}\) exchange contributions for \(10^{-4}\) eV \(\leq m_{1}\lesssim 1.5\times 10^{-3}\) eV. As a consequence, in the indicated interval of values of \(m_{1}\) the generalised effective Majorana mass \(|m_{ee}^{\nu+|N+S|}|\) exhibits weak dependence on the Majorana phases \(\alpha\) and \(\beta\) since \(|m^{S}_{\beta\beta,R}|\) practically does not depend on these phases. At \(m_{1}\gtrsim 2\times 10^{-3}\) eV, for \(\alpha=\beta=0\), the \(S_{1,2,3}\) contribution is subleading and \[m^{\nu+|N+S|}_{ee}\cong\sqrt{|m^{\nu}_{\beta\beta,L}|^{2}+|m^{N}_{\beta\beta,R}|^{2}}\cong|m^{\nu}_{\beta\beta,L}|\,\left(1+\frac{C_{N}}{m_{N_{1}}m_{1}}\right)^{\frac{1}{2}}\,,\ \ {\rm NO}\,,\ \alpha=\beta=0\,, \tag{11}\] where we have used Eq. (10). For \(\alpha=\pi\), \(\beta=0\), however, \(|m^{\nu}_{\beta\beta,L}|\) is strongly suppressed in the interval \(m_{1}\cong(1.5\times 10^{-3}-9\times 10^{-3})\) eV and goes through zero at \(m_{1}\cong 2.26\times 10^{-3}\) eV, where the value of \(m_{1}\) is obtained using the best fit values of the neutrino oscillation parameters. Therefore \(|m^{S}_{\beta\beta,R}|\) gives a significant contribution to \(m^{\nu+|N+S|}_{ee}\) in the indicated interval and determines the minimal value of \(m^{\nu+|N+S|}_{ee}\). At \(m_{1}\cong 2.26\times 10^{-3}\) eV, e.g., we have \(|m^{\nu+|N+S|}_{ee}|\cong|m^{S}_{\beta\beta,R}|\cong C_{\rm S}^{\rm NO}|U_{e1}|^{2}\). In contrast, in the IO case the contributions due to the \(S_{1,2,3}\) exchange, \(|m^{S}_{\beta\beta,R}|\), and of the interference term \(2{\rm Re}(m^{N}_{\beta\beta,R}\cdot m^{S^{*}}_{\beta\beta,R})\) are practically negligible in the interval of values of \(m_{3}\) of interest. Thus, the generalised effective Majorana mass is given by: \[m^{\nu+|N+S|}_{ee}\cong\sqrt{|m^{\nu}_{\beta\beta,L}|^{2}+|m^{N}_{\beta\beta,R}|^{2}}\cong|m^{\nu}_{\beta\beta,L}|\,\left(1+\frac{C_{N}}{m_{N_{3}}m_{3}}\right)^{\frac{1}{2}}\,,\ \ {\rm IO}\,. \tag{12}\] In this case \(|m^{\nu+|N+S|}_{ee}|\) depends significantly on the Majorana phases. The conclusions regarding the new non-standard contributions due to the exchange of virtual heavy Majorana fermions \(N_{1,2,3}\) and \(S_{1,2,3}\) to the \(0\nu\beta\beta\) decay generalised effective Majorana mass and half-life reached in the phenomenological analysis are confirmed by our numerical results. These are illustrated in Fig. 4.
In the three upper panels of Fig. 4 we show i) \(m^{\nu}_{ee}\equiv|m^{\nu}_{\beta\beta,L}|\) (left panel), ii) \(m^{\nu+N+S}_{ee}\equiv\sqrt{|m^{\nu}_{\beta\beta,L}|^{2}+|m^{N}_{\beta\beta,R}|^{2}+|m^{S}_{\beta\beta,R}|^{2}}\) (middle panel), and iii) \(m^{\nu+|N+S|}_{ee}\equiv\sqrt{|m^{\nu}_{\beta\beta,L}|^{2}+|m^{N}_{\beta\beta,R}+m^{S}_{\beta\beta,R}|^{2}}\) (right panel), as functions of the lightest neutrino mass \(m_{1(3)}\) in the case of NO (IO) light neutrino mass spectrum. Thus, the upper left panel shows the dependence on \(m_{1(3)}\) of the standard mechanism effective Majorana mass, while the upper middle and right panels show the dependence of the generalised effective Majorana mass (GEMM) in which the contributions due to the exchange of the heavy Majorana fermions \(N_{1,2,3}\) and \(S_{1,2,3}\) are included without accounting for (middle panel) and accounting for (right panel) their interference. The brown and blue bands correspond respectively to the NO (NH) and IO (IH) types of light neutrino mass spectrum. The overlap of the two bands indicates the region of the QD spectrum. Following the discussion of NME in Section 4, the ratio of nuclear matrix elements \({\cal M}^{0\nu}_{N}/{\cal M}^{0\nu}_{\nu}\) is varied in the interval 22.2 - 76.3, as given in Eq. (7). The minimal (maximal) value in this interval, \({\cal M}^{0\nu}_{N}/{\cal M}^{0\nu}_{\nu}=22.2\) (76.3), corresponds to \({\cal M}^{0\nu}_{\nu}=4.68\) (5.26). The results for \({\cal M}^{0\nu}_{N}/{\cal M}^{0\nu}_{\nu}=22.2\) (76.3) are indicated with solid (dashed) lines. For the parameters \(M_{W_{R}}\), \(m_{N_{1(3)}}\) and \(m_{S_{3(2)}}\) the reference values of 5.5 TeV, 300 GeV, and 3 TeV, respectively, are used. All plots are obtained by varying the neutrino oscillation parameters in their respective \(3\sigma\) allowed ranges. The Majorana phases \(\alpha\) and \(\beta\) are varied in the interval \([0,\pi]\), while the Dirac phase \(\delta\) is set to zero. For both \({\cal M}^{0\nu}_{N}/{\cal M}^{0\nu}_{\nu}=22.2\) and 76.3, the curves showing the maximal (minimal) values of GEMM as functions of \(m_{1(3)}\) correspond to \(\alpha=\beta=0\) (\(\alpha=\pi\), \(\beta=0\)). The lower panels show the dependence on \(m_{1(3)}\) of the \(0\nu\beta\beta\) decay half-lives corresponding to the respective upper panels. The green horizontal band represents the current bound on the effective Majorana mass from the KamLAND-Zen and GERDA experiments as given in Table 4, whereas the vertical pink bands represent the bound corresponding to the upper limit on the sum of light neutrino masses of 0.12 eV reported by the Planck experiment [19] and the prospective bound of 0.20 eV that can be set by the KATRIN [130] experiment.

Figure 4: Plots showing the effective Majorana mass parameter (upper panels) and half-life (lower panels) of \(0\nu\beta\beta\) decay as functions of the lightest neutrino mass \(m_{1(3)}\) in the case of NO (IO) light neutrino mass spectrum. The left upper panel shows the standard mechanism effective Majorana mass, while the middle and right upper panels show the generalised effective Majorana mass in which the contributions due to the exchange of the heavy Majorana fermions \(N_{1,2,3}\) and \(S_{1,2,3}\) are included without accounting for (middle upper panel) and accounting for (upper right panel) their interference. The brown and blue bands correspond respectively to the NO (NH) and IO (IH) types of light neutrino mass spectrum; the overlap of the two bands indicates the region of the QD spectrum. The lower panels show the corresponding \(0\nu\beta\beta\) decay half-lives. The green horizontal band represents the current bound on the effective Majorana mass from the KamLAND-Zen and GERDA experiments as given in Table 4, whereas the vertical pink bands represent the bound corresponding to the upper limit on the sum of light neutrino masses of 0.12 eV reported by Planck [19] and the prospective bound of 0.20 eV that can be set by the KATRIN [130] experiment. See text for further details.

A comparison between the upper left and right panels in Fig. 4 shows that the presence of the new non-standard contributions changes drastically the dependence of the effective Majorana mass of the standard mechanism \(|m_{ee}^{\nu}|\) on the lightest neutrino mass \(m_{1(3)}\) at \(m_{1(3)}<10^{-2}\) eV, where the new contributions dominate over the standard contribution. At \(m_{1(3)}\gtrsim 10^{-2}\) eV the new contributions are strongly suppressed and practically negligible and we have \(|m_{ee}^{\nu+|N+S|}|\cong|m_{ee}^{\nu}|\), as is also clearly seen in Fig. 4. The non-standard contributions are so large at relatively small values of \(m_{1(3)}\) that for \(m_{1(3)}\lesssim 2\times 10^{-4}\) eV in the NO case they are ruled out even for the minimal value of \(M_{0\nu}^{N}/M_{0\nu}^{\nu}=22.2\) by the existing upper limits from the KamLAND-Zen and GERDA experiments. In the IO case the new contributions are also ruled out for \(M_{0\nu}^{N}/M_{0\nu}^{\nu}=76.3\); for \(M_{0\nu}^{N}/M_{0\nu}^{\nu}=22.2\) they are ruled out for \(\alpha=\beta=0\), while for \(\alpha=\pi\), \(\beta=0\), they are compatible with the KamLAND-Zen and GERDA upper limits. For NO spectrum and \(\alpha=\beta=0\), the inequality \(|m_{ee}^{\nu+|N+S|}|>(\gg)\,|m_{ee}^{\nu}|\) always holds in the interval of values of \(m_{1}\cong(3\times 10^{-4}-8\times 10^{-3})\) eV where the non-standard contributions are significant. In this interval and for \(M_{0\nu}^{N}/M_{0\nu}^{\nu}=76.3\) (22.2), \(|m_{ee}^{\nu+|N+S|}|\gtrsim 0.009\) (0.007) eV, and with the exception of a very narrow interval around \(m_{1}\cong 4.0\) (\(2.5\)) \(\times 10^{-3}\) eV at which the quoted minimum of \(|m_{ee}^{\nu+|N+S|}|\) takes place, we have \(|m_{ee}^{\nu+|N+S|}|\gtrsim 0.010\) eV. In most of the considered intervals of values of \(m_{1}\) the half-life \(T_{1/2}^{\nu+|N_{R}+S_{L}|}\lesssim 10^{28}\) yrs. In the case of NO spectrum, \(\alpha=\pi\), \(\beta=0\) and \(M_{0\nu}^{N}/M_{0\nu}^{\nu}=76.3\) (22.2), the value of \(|m_{ee}^{\nu+|N+S|}|\gtrsim 0.010\) eV, and \(T_{1/2}^{\nu+|N_{R}+S_{L}|}\lesssim 10^{28}\) yrs, at \(m_{1}\lesssim 2.0\) (\(1.5\)) \(\times 10^{-3}\) eV.
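As an aside, the \(S_{1,2,3}\)-exchange term that controls the small-\(m_{1}\) part of these bands can be made concrete with a short numerical sketch. The script below evaluates \(C_{S}^{\rm NO}\) of Eq. (5.6) and \(|m^{S}_{\beta\beta,R}|\) of Eq. (5.7) for the reference parameters used above (\(m_{N_{1}}=300\) GeV, \(m_{S_{3}}=3\) TeV, with \(M_{W_{R}}=5.5\) TeV absorbed in the numerical prefactor of Eq. (5.6)); the values of \(\Delta m^{2}_{21}\) and \(|U_{e2}|^{2}\) are assumed best-fit inputs that are not quoted explicitly in the text, and the Majorana phases are set to \(\alpha=\beta=0\). It is meant only as an illustration, not as a substitute for the full calculation behind Fig. 4.

```python
import numpy as np

# Sketch of the S_{1,2,3}-exchange term for the NO spectrum, following
# Eqs. (5.6) and (5.7). Reference parameters: m_N1 = 300 GeV, m_S3 = 3 TeV
# (M_WR = 5.5 TeV is absorbed in the 0.729e-6 eV prefactor of Eq. (5.6)).
# Delta m^2_21 and |U_e2|^2 are assumed best-fit inputs, not quoted in the text.
dm2_31 = 2.5e-3                    # eV^2 (Table 2)
dm2_21 = 7.4e-5                    # eV^2 (assumed best-fit value)
Ue1_sq, Ue2_sq, Ue3_sq = 0.70, 0.28, 0.022
alpha = beta = 0.0                 # Majorana phases

def C_S_NO(m1, nme_ratio, m_N1=300.0, m_S3=3.0):
    """Dimensionful factor of Eq. (5.6), in eV; nme_ratio = M_N^0nu / M_nu^0nu."""
    return 0.729e-6 * (m_N1 / 300.0) * (m_S3 / 3.0) ** -2 \
           * (1.0 + dm2_31 / m1 ** 2) * nme_ratio

def m_bb_S_NO(m1, nme_ratio):
    """|m^S_{beta beta,R}| of Eq. (5.7), in eV, for the NO spectrum."""
    m2, m3 = np.sqrt(m1 ** 2 + dm2_21), np.sqrt(m1 ** 2 + dm2_31)
    amp = (Ue1_sq
           + Ue2_sq * np.exp(1j * alpha) * (m1 / m2) ** 3
           + Ue3_sq * np.exp(1j * beta) * (m1 / m3) ** 3)
    return C_S_NO(m1, nme_ratio) * abs(amp)

for m1 in (1e-4, 1e-3, 1e-2, 5e-2, 1e-1):
    lo, hi = (C_S_NO(m1, r) for r in (22.2, 76.3))   # NME-ratio range of Eq. (4.7)
    print(f"m1 = {m1:7.1e} eV   C_S = {lo:.3g}-{hi:.3g} eV   "
          f"|m_S| = {m_bb_S_NO(m1, 22.2):.3g}-{m_bb_S_NO(m1, 76.3):.3g} eV")
```

For \(m_{1}=10^{-4}\) and \(10^{-3}\) eV this reproduces the intervals \(C_{S}^{\rm NO}\cong(4.05-13.90)\) eV and \((0.040-0.139)\) eV quoted above, and it makes explicit that in this region \(|m^{S}_{\beta\beta,R}|\cong C_{S}^{\rm NO}|U_{e1}|^{2}\), with essentially no dependence on the Majorana phases.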
For the minimal value of \(|m_{ee}^{\nu+|N+S|}|\) we find \(\min(|m_{ee}^{\nu+|N+S|}|)\cong 9\) (\(3\)) \(\times 10^{-4}\) eV. It takes place at \(m_{1}\cong 9.0\) (\(8.0\)) \(\times 10^{-3}\) eV. We recall that \(|m_{ee}^{\nu}|\) goes through zero at \(m_{1}\cong 2.26\times 10^{-3}\) eV, while \(|m_{ee}^{\nu+|N+S|}|\cong 18.3\) (\(5.3\)) \(\times 10^{-3}\) eV at this value of \(m_{1}\). For IO spectrum, we have \(|m_{ee}^{\nu+|N+S|}|\gtrsim 0.015\) eV for \(m_{3}<9\times 10^{-3}\) eV, where \(|m_{ee}^{\nu+|N+S|}|>|m_{ee}^{\nu}|\) for any of the considered values of \(M_{0\nu}^{N}/M_{0\nu}^{\nu}\) and of \(\alpha\) and \(\beta\). The approximate equality \(|m_{ee}^{\nu+|N+S|}|\cong|m_{ee}^{\nu}|\) holds at \(m_{3}\gtrsim 10^{-2}\) eV. For all considered values of \(m_{3}\), \(M_{0\nu}^{N}/M_{0\nu}^{\nu}\) and the Majorana phases, the predicted half-life \(T_{1/2}^{\nu+|N_{R}+S_{L}|}\lesssim 10^{28}\) yrs, while in the case of \(\alpha=\beta=0\), we have \(T_{1/2}^{\nu+|N_{R}+S_{L}|}\lesssim 2\times 10^{27}\) yrs. It follows from our numerical analysis that in most of the parameter space of the considered model the predictions for the \(0\nu\beta\beta\) decay generalised effective Majorana mass and half-life are within the sensitivity range of the planned next generation of neutrinoless double beta decay experiments LEGEND-200 (LEGEND-1000), nEXO, KamLAND-Zen-II, CUPID, NEXT-HD (see [131, 132] and references quoted therein). ## 6 Comments on LFV and LHC signatures The considered model has rich lepton flavour violating (LFV) and collider phenomenology. A detailed investigation of the model's phenomenology is beyond the scope of the present study. We limit ourselves here to a few brief comments. LFV processes such as the \(\mu\to e+\gamma\) and \(\mu\to 3e\) decays and \(\mu-e\) conversion in nuclei can be mediated by the heavy RH and sterile neutrinos \(N_{1,2,3}\) and \(S_{1,2,3}\). Although we expect the contributions due to \(N_{1,2,3}\) and especially due to \(S_{1,2,3}\) to be rather suppressed, there might be a relatively large region of the model's parameter space where they might still be in the range of sensitivity of the next generation of experiments MEG II, Mu3e, Mu2e, COMET and PRISM/PRIME (see, e.g., [133] and the references therein). At LHC, the main channel for the production of the heavy RH neutrinos \(N_{1,2,3}\) is via on-shell \(Z_{R}\) production and \(W_{R}\) fusion and can be expressed as \(p+p\to W_{R}^{\pm}\to l^{\pm}+N_{j}\), \(l=e,\mu,\tau\). This \(N_{j}\) further decays as \(N_{j}\to W_{R}^{*}\to l^{\prime\pm}+2j\), \(l^{\prime}=e,\mu,\tau\), which is considered the "smoking gun" signature of lepton number and lepton flavour violation at LHC. This rapid decay of \(N_{j}\) happens when its mass is sufficiently large. Our model satisfies this requirement as we have taken \(\max(M_{N_{j}})\sim 100\) GeV. We recall that the mass of \(W_{R}\) is constrained by the CMS and ATLAS experiments and by low-energy precision measurements as \(M_{W_{R}}\gtrsim 5\) TeV [92; 93; 94; 95; 96] and, considering the relation \(M_{Z_{R}}\simeq 1.2M_{W_{R}}\), the mass of \(Z_{R}\) can be constrained as \(M_{Z_{R}}\gtrsim 6\) TeV. If the mass of \(N_{j}\) lies in the range \(5-20\) GeV, then it takes some time to decay and travels some distance, resulting in a displaced vertex of leptons [134; 135].
So, the observable in this case would be a prompt charged lepton and a displaced leptonic vertex. The current status of displaced vertex searches at LHC can be found in refs. [136; 137; 138]. Another distinguishing feature in the signatures of small mass (\(<100\) GeV) and large mass (\(\sim 800\) GeV) RH neutrinos \(N_{j}\) is the angle between the produced charged leptons. In the former case, parallel tracks of charged leptons are expected, whereas in the latter case, back-to-back emissions are expected [139]. ## 7 Summary In the present article, we have derived predictions for the neutrinoless double beta (\(0\nu\beta\beta\)) decay generalised effective Majorana mass and half-life in a left-right (L-R) symmetric model with the double seesaw mechanism at the TeV scale. The gauge group of the model is the standard L-R symmetric extension of the Standard Model (SM) gauge group: \(\mathcal{G}_{LR}\equiv SU(2)_{L}\times SU(2)_{R}\times U(1)_{B-L}\). The fermion sector contains, as usual for L-R symmetric models, three families of left-handed (LH) and right-handed (RH) quark and lepton fields, including right-handed neutrino fields \(N_{\beta R}\), \(\beta=e,\mu,\tau\), assigned respectively to \(SU(2)_{L}\) and \(SU(2)_{R}\) doublets. It also includes three \(SU(2)_{L,R}\) singlet LH fermion fields \(S_{\gamma L}\). The Higgs sector is composed of two \(SU(2)_{L}\) and \(SU(2)_{R}\) Higgs doublets \(H_{L}\) and \(H_{R}\), and of a bi-doublet \(\Phi\). The vacuum expectation value (VEV) of the \(SU(2)_{R}\) Higgs doublet \(H_{R}\) breaks the \(\mathcal{G}_{LR}\) gauge symmetry to the SM gauge symmetry \(SU(2)_{L}\times U(1)_{Y}\), while the VEVs of the two neutral components of the bi-doublet \(\Phi\) break \(SU(2)_{L}\times U(1)_{Y}\) to \(U(1)_{\rm em}\). The Yukawa couplings of the LH and RH fermion doublets to the bi-doublet \(\Phi\) generate (via the VEVs of the neutral components of \(\Phi\)) Dirac mass terms for the quarks and charged leptons, as well as a \(\nu_{\alpha L}-N_{\beta R}\) Dirac mass term \(M_{\rm D}^{\nu}\) involving the LH active flavor neutrino fields \(\nu_{\alpha L}\) and the RH fields \(N_{\beta R}\), \(\alpha,\beta=e,\mu,\tau\). The singlet LH fermion fields \(S_{\gamma L}\) are assumed to have a Majorana mass term \(M_{\rm S}\) and a Yukawa coupling, involving \(H_{R}\), with the RH doublets containing \(N_{\beta R}\). This Yukawa coupling produces a \(S_{\gamma L}-N_{\beta R}\) Dirac mass term \(M_{\rm RS}\) when \(H_{R}\) develops a non-zero VEV. Under the condition \(|M_{\rm RS}|\ll|M_{\rm S}|\), the RH neutrinos \(N_{\beta R}\) get a Majorana mass term \(M_{\rm R}\cong-\,M_{\rm RS}M_{\rm S}^{-1}M_{\rm RS}^{T}\) via a seesaw-like mechanism. This in turn generates a Majorana mass term for the LH flavour neutrinos \(m_{\nu}\cong-\,M_{\rm D}^{\nu}M_{\rm R}^{-1}(M_{\rm D}^{\nu})^{T}\) via a second seesaw mechanism. In such a way, the model contains, in addition to the three light Majorana neutrinos \(\nu_{i}\) having masses \(m_{i}\), two sets of heavy Majorana particles \(N_{j}\) and \(S_{k}\), \(j,k=1,2,3\), with masses \(m_{N_{j}}\ll m_{S_{k}}\). The double seesaw scenario allows the RH neutrinos \(N_{j}\) to have masses naturally at the GeV-TeV scale. In our analysis of the \(0\nu\beta\beta\) decay predictions of the model, we have considered the case of \(m_{N_{j}}\sim(1-1000)\) GeV and \(\max(m_{S_{k}})\sim(1-10)\) TeV, \(m_{N_{j}}\ll m_{S_{k}}\).
Working with a specific version of the model which can be obtained by employing symmetry arguments and in which the Dirac mass terms \(M_{\rm D}^{\nu}\) and \(M_{\rm RS}\) are diagonal, \(M_{\rm D}^{\nu}=k_{d}\,{\bf I}\), \(M_{\rm RS}=k_{rs}\,{\bf I}\), \(k_{d}\) and \(k_{RS}\) being constant mass parameters and \({\bf I}\) the \(3\times 3\) unit matrix, we have studied in detail the new "non-standard" contributions to the \(0\nu\beta\beta\) decay amplitude and half-life arising from diagrams with an exchange of virtual \(N_{j}\) and \(S_{k}\). The self-consistency of the considered set-up requires that the lightest neutrino mass for the neutrino mass spectrum with normal ordering (NO), \(m_{1}\), or with inverted ordering (IO), \(m_{3}\), has to be not smaller that approximately \(10^{-4}\) eV. Moreover, the RH neutrino (\(N_{\beta R}\)) and sterile fermion (\(S_{\gamma L}\)) mixings are determined by the light neutrino PMNS mixing matrix. In the analysis of the new non-standard contributions to the \(0\nu\beta\beta\) decay amplitude we took into account the values of the nuclear matrix elements (NMEs) \({\cal M}_{N}^{0\nu}\) and \({\cal M}_{\nu}^{0\nu}\) associated with, respectively, the light and heavy Majorana neutrino exchange contributions, calculated for the four isotopes \({}^{76}\)Ge, \({}^{82}\)Se, \({}^{130}\)Te, \({}^{136}\)Xe by six different groups of authors using different methods of NME calculation (Table 3). We made use of the fact that the ratio \({\cal M}_{N}^{0\nu}/{\cal M}_{\nu}^{0\nu}\) reported by each of the six cited groups is essentially the same for the considered four isotopes - it varies with the isotope by not more than \(\sim 15\%\). For a given isotope the ratio of interest obtained by the six different methods of NME calculation varies by a factor of up to \(\sim 3.5\). In view of this, we took into account the uncertainties in the NME calculations by using the following reference range of the ratio \({\cal M}_{N}^{0\nu}/{\cal M}_{\nu}^{0\nu}=22.2-76.3\), which corresponds to \({}^{76}\)Ge. We analyzed in detail the properties of the new non-standard contributions to the \(0\nu\beta\beta\) decay amplitude arising due to the exchange of virtual heavy Majorana fermions \(N_{j}\) and \(S_{k}\), parametrized as effective Majorana masses \(m_{\beta\beta,R}^{N}\) (Eq. 5.3) and \(m_{\beta\beta,R}^{S}\) (Eq. 5.7), respectively. These analyses showed that both \(|m_{\beta\beta,R}^{N}|\) and \(|m_{\beta\beta,R}^{S}|\) are strongly enhanced at relatively small values of the lightest neutrino mass \(m_{1(3)}\sim(10^{-4}-8\times 10^{-3})\) eV. The effect of this enhancement is particularly important in the case of NO neutrino mass spectrum. The non-standard contributions are so large at the indicated small values of \(m_{1(3)}\) that for \(m_{1}\lesssim 2\times 10^{-4}\) eV in the NO case, they are strongly disfavored (if not ruled out) even for the minimal value of \(M_{0\nu}^{N}/M_{0\nu}^{\nu}=22.2\) by the existing upper limits from the KamLAND-Zen and GERDA experiments. In the IO case, the new contributions are also strongly disfavored for \(M_{0\nu}^{N}/M_{0\nu}^{\nu}=76.3\); for \(M_{0\nu}^{N}/M_{0\nu}^{\nu}=22.2\) they are disfavored for \(\alpha=\beta=0\), while for \(\alpha=\pi\), \(\beta=0\), they are still compatible with the KamLAND-Zen and GERDA conservative upper limits. 
We find, in general, that in both NO and IO cases the new non-standard contributions due to \(N_{j}\) and \(S_{k}\) exchange are dominant over the standard light neutrino exchange contribution at values of the lightest neutrino mass \(m_{1(3)}\sim(10^{-4}-10^{-2})\) eV: \(|m_{ee}^{\nu+|N+S|}|>(>>)|m_{ee}^{\nu}|\), where \(m_{ee}^{\nu+|N+S|}\) is the generalised effective Majorana mass (GEMM) which accounts for all contributions to the \(0\nu\beta\beta\) decay amplitude (Eqs. (4.12), (4.13) and (4.15)), and \(m_{ee}^{\nu}\) is the effective Majorana mass associated with the standard light neutrino exchange contribution (Eq. (4.16)). The effective Majorana mass \(|m_{\beta\beta,R}^{S}|\) associated with \(S_{k}\) exchange contribution was shown to be practically independent of the Majorana phases \(\alpha\) and \(\beta\), while that due to exchange of \(N_{j}\), \(|m_{\beta\beta,R}^{N}|\), exhibits strong dependence on \(\alpha\) and \(\beta\) similar to \(|m_{ee}^{\nu}|\). For NO spectrum and \(\alpha=\beta=0\), the inequality \(|m_{ee}^{\nu+|N+S|}|>(>>)|m_{ee}^{\nu}|\) always holds in the interval of values of \(10^{-4}\) eV \(\lesssim m_{1}\lesssim 8\times 10^{-3}\) eV where the non-standard contributions are significant. In this interval and for \(M_{0\nu}^{N}/M_{0\nu}^{\nu}=76.3\) (22.2), \(|m_{ee}^{\nu+|N+S|}|\gtrsim 0.009\) (0.007) eV. With the exception of a very narrow interval around \(m_{1}\cong 4.0\) (\(2.5\)) \(\times 10^{-3}\) eV at which the quoted minimum of \(|m_{ee}^{\nu+|N+S|}|\) takes place, we have \(|m_{ee}^{\nu+|N+S|}|\gtrsim 0.010\) eV. In most of the considered intervals of values of \(m_{1}\) the \(0\nu\beta\beta\) decay half-life \(T_{1/2}^{\nu+|N_{R}+S_{L}|}\lesssim 10^{28}\) yrs. In the case of NO spectrum, \(\alpha=\pi\), \(\beta=0\) and \(M_{0\nu}^{N}/M_{0\nu}^{\nu}=76.3\) (22.2), we find that \(|m_{ee}^{\nu+|N+S|}|\gtrsim 0.010\) eV, and \(T_{1/2}^{\nu+|N_{R}+S_{L}|}\lesssim 10^{28}\) yrs, at \(m_{1}\lesssim 2.0\) (\(1.5\)) \(\times 10^{-3}\) eV. For the minimal value of \(|m_{ee}^{\nu+|N+S|}|\) we get \(\min(|m_{ee}^{\nu+|N+S|}|)\cong 9\) (\(3\)) \(\times 10^{-4}\) eV. It takes place at \(m_{1}\cong 9.0\) (\(8.0\)) \(\times 10^{-3}\) eV. We note that \(|m_{ee}^{\nu}|\) goes through zero at \(m_{1}\cong 2.26\times 10^{-3}\) eV, while \(|m_{ee}^{\nu+|N+S|}|\cong 18.8\) (\(5.3\)) \(\times 10^{-3}\) eV at this value of \(m_{1}\). Thus, the strong suppression of the \(0\nu\beta\beta\) decay rate at \(m_{1}\sim 2.26\times 10^{-3}\) eV and in the interval \(m_{1}\cong(1.5-8.0)\times 10^{-3}\) eV in the case of only standard contribution due to the exchange of light Majorana neutrinos \(\nu_{i}\) (Fig. 4, upper left panel) is avoided due to the new non-standard contributions. For IO spectrum, we find that \(|m_{ee}^{\nu+|N+S|}|\gtrsim 0.015\) eV for \(m_{3}<9\times 10^{-3}\) eV, where \(|m_{ee}^{\nu+|N+S|}|>|m_{ee}^{\nu}|\) for any of the considered values of \(M_{0\nu}^{N}/M_{0\nu}^{\nu}\) and of \(\alpha\) and \(\beta\). The approximate equality \(|m_{ee}^{\nu+|N+S|}|\cong|m_{ee}^{\nu}|\) holds at \(m_{3}\gtrsim 10^{-2}\) eV. For all considered values of the parameters the predicted half-life \(T_{1/2}^{\nu+|N_{R}+S_{L}|}\lesssim 10^{28}\) yrs, while in the case of \(\alpha=\beta=0\), we have \(T_{1/2}^{\nu+|N_{R}+S_{L}|}\lesssim 2\times 10^{27}\) yrs. 
It follows from our results that in most of the parameter space of the considered model, the predictions for the \(0\nu\beta\beta\) decay generalised effective Majorana mass and half-life are within the sensitivity range of the planned next generation of neutrinoless double beta decay experiments LEGEND-200 (LEGEND-1000), nEXO, KamlAND-Zen-II, CUPID, NEXT-HD (see [131, 132] and references quoted therein). ## Acknowledgements Purushottam Sahu would like to acknowledge the Ministry of Education, Government of India for financial support. PS also acknowledges the support from the Abdus Salam International Centre for Theoretical Physics (ICTP) under the "ICTP Sandwich Training Educational Programme (STEP)" SMR.3676 and SMR.3799. The work of S. T. P. was supported in part by the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No. 860881-HIDDeN, by the Italian INFN program on Theoretical Astroparticle Physics and by the World Premier International Research Center Initiative (WPI Initiative, MEXT), Japan. STP would like to thank Kavli IPMU, University of Tokyo, where part of this study was performed for the kind hospitality. Appendix : Derivation of neutrino masses and mixings in left-right double seesaw model (LRDSM) ### LRDSM mass matrix We discuss here the implementation and derivation of double seesaw mechanism in the considered left-right symmetric model. The neutral fermions needed for LRDSM are active left-handed neutrinos, \(\nu_{L}\), active right-handed neutrinos, \(N_{R}\) and sterile neutrinos, \(S_{L}\). The relevant mass terms are given by \[\mathcal{L}_{\text{LRDSM}} = \mathcal{L}_{M_{D}}+\mathcal{L}_{M_{RS}}+\mathcal{L}_{M_{S}}\] \[\mathcal{L}_{M_{D}} = -\sum_{\alpha,\beta}\overline{\nu_{\alpha L}}[M_{D}]_{\alpha\beta} N_{\beta R}+\text{ h.c.}\] \[\mathcal{L}_{M_{RS}} = \sum_{\alpha,\beta}\overline{S_{\alpha L}}[M_{RS}]_{\alpha\beta} N_{\beta R}+\text{ h.c.}\] \[\mathcal{L}_{M_{S}} = \frac{1}{2}\sum_{\alpha,\beta}\overline{S_{\alpha L}^{c}}[M_{S}] _{\alpha\beta}S_{\beta L}+\text{ h.c.} \tag{110}\] The flavour states for active left-handed neutrinos \(\nu_{\alpha L}\), right-handed neutrinos \(N_{\beta R}\) and sterile neutrinos \(S_{\gamma L}\) are defined as follows \[\nu_{\alpha L}=\begin{pmatrix}\nu_{eL}\\ \nu_{\mu L}\\ \nu_{\tau L}\end{pmatrix}\,,\ N_{\beta R}=\begin{pmatrix}N_{1R}\\ N_{2R}\\ N_{3R}\end{pmatrix}\,,\ S_{\gamma L}=\begin{pmatrix}S_{1L}\\ S_{2L}\\ S_{3L}\end{pmatrix} \tag{111}\] Similarly, their mass states can be written as, \[\nu_{iL}=\begin{pmatrix}\nu_{1L}\\ \nu_{2L}\\ \nu_{3L}\end{pmatrix}\,,\ N_{jR}^{c}=\begin{pmatrix}N_{1R}^{c}\\ N_{2R}^{c}\\ N_{3R}^{c}\end{pmatrix}\,,\ S_{kL}=\begin{pmatrix}S_{1L}\\ S_{2L}\\ S_{3L}\end{pmatrix} \tag{112}\] The \(9\times 9\) neutral lepton mass matrix in the basis \((\nu_{L},N_{R}^{c},S_{L})\) is given by \[\mathcal{M}_{LRDSM}=\left[\begin{array}{ccc|c}\mathbf{0}&M_{D}&0\\ M_{D}^{T}&\mathbf{0}&M_{RS}\\ \hline 0&M_{RS}^{T}&M_{S}\end{array}\right] \tag{113}\] where each elements of the matrix is a \(3\times 3\) matrix. Here \(M_{D}\) is the Dirac neutrino mass matrix connecting \(\nu_{L}-N_{R}\), \(M_{RS}\) is the mixing matrix in the \(N_{R}-S_{L}\) sector, \(M_{S}\) is the Majorana mass matrix for sterile neutrino \(S_{L}\). In order to diagonalize the above mass matrix we have used the following mass hierarchy, \[M_{D}<M_{RS}<M_{S}. 
\tag{114}\] The diagonalisation of \(\mathcal{M}_{\text{LRDSM}}\) after changing it from flavour basis to mass basis is done by a generalized unitary transformation as, \[|\;\Psi\rangle_{\text{flavor}} =V\;|\;\Psi\rangle_{\text{mass}} \tag{112}\] \[\text{or,}\;\begin{pmatrix}\nu_{\alpha L}\\ N^{S}_{\beta R}\\ S_{\gamma L}\end{pmatrix} =\begin{pmatrix}V^{\nu\nu}_{\alpha i}&V^{\nu N}_{\alpha j}&V^{NS}_{ \alpha k}\\ V^{N\nu}_{\beta i}&V^{NN}_{\beta j}&V^{NS}_{\beta k}\\ V^{Sv}_{\gamma i}&V^{SN}_{\gamma j}&V^{SS}_{\gamma k}\end{pmatrix}\begin{pmatrix} \nu_{i}\\ N^{c}_{j}\\ S_{k}\end{pmatrix}\] (113) \[V^{\dagger}\mathcal{M}_{\text{LRDSM}}V^{*} =\bar{\mathcal{M}}_{\text{LRDSM}}\] \[=\text{diag}\left(m_{i},m_{N_{j}},m_{S_{k}}\right)\] \[=\text{diag}\left(m_{1},m_{2},m_{3},m_{N_{1}},m_{N_{2}},m_{N_{3} },m_{S_{1}},m_{S_{2}},m_{S_{3}}\right)\] Here the indices \(\alpha,\beta,\gamma\) run over three generations of light left-handed neutrinos, heavy right-handed neutrinos and sterile neutrinos respectively, whereas the indices \(i,j,k\) run over corresponding mass states. ### Block Diagonalization of double seesaw mass matrix in LRDSM Let's write the matrix in eq.111 as, \[\mathcal{M}_{LRDSM}=\mathcal{M}_{\nu}=\begin{pmatrix}\mathcal{M}_ {L}&\mathcal{M}_{D}\\ \mathcal{M}_{D}^{T}&\mathcal{M}_{S}\end{pmatrix}\,,\text{ where,}\] \[\mathcal{M}_{L}=\begin{pmatrix}0&M_{D}\\ M_{D}^{T}&0\end{pmatrix},\mathcal{M}_{D}=\begin{pmatrix}0\\ M_{RS}\end{pmatrix},\mathcal{M}_{S}=M_{S} \tag{114}\] The complete block diagonalization is achieved in two steps by recursively integrating out the heavier modes as \[\mathcal{W}_{1}^{\dagger}\mathcal{M}_{\nu}\mathcal{W}_{1}^{*}=\hat{\mathcal{ M}}_{\nu}^{\prime}\text{ and }\mathcal{W}_{2}^{\dagger}\hat{\mathcal{M}}_{\nu}^{\prime}\mathcal{W}_{2}^{*}= \hat{\mathcal{M}}_{\nu} \tag{115}\] where \(\hat{\mathcal{M}}_{\nu}^{\prime}\) is block diagonalised \(9\times 9\) matrix after integrating out the heaviest mode and \(\hat{\mathcal{M}}_{\nu}\) is the block diagonalised \(9\times 9\) matrix after integrating out the next heaviest mode. The transformation matrix \(\mathcal{W}_{1}\) can be written as a general unitary matrix in the form \[\mathcal{W}_{1}^{*}=\begin{pmatrix}\sqrt{1-\mathcal{B}\mathcal{B}^{\dagger}}& \mathcal{B}\\ -\mathcal{B}^{\dagger}&\sqrt{1-\mathcal{B}^{\dagger}\mathcal{B}}\end{pmatrix} \tag{116}\] where \(\mathcal{B}\) is a \(6\times 3\) dimensional matrix. 
\[\sqrt{1-\mathcal{B}\mathcal{B}^{\dagger}} =1-\frac{1}{2}\mathcal{B}\mathcal{B}^{\dagger}-\frac{1}{8}\left( \mathcal{B}\mathcal{B}^{\dagger}\right)^{2}+\cdots\] \[\mathcal{B}=\sum\mathcal{B}_{i} \tag{117}\] At leading order it looks like \[\sqrt{1-\mathcal{B}\mathcal{B}^{\dagger}} \simeq 1-\frac{1}{2}\mathcal{B}\mathcal{B}^{\dagger}-\frac{1}{8} \left(\mathcal{B}_{1}\mathcal{B}_{2}^{\dagger}+\mathcal{B}_{2}\mathcal{B}_{ 1}^{\dagger}\right) \tag{118}\] The form of mixing matrix \({\cal B}_{1}^{\dagger}\) and \({\cal B}_{1}\) is given by \[{\cal B}_{1}^{\dagger}={\cal M}_{S}^{-1}\cdot{\cal M}_{\cal D}^{T} = M_{S}^{-1}\cdot\left(0\ M_{RS}^{T}\right)=\left(0\ M_{S}^{-1}\cdot M _{RS}^{T}\right)\] \[{\cal B}_{1} = \left(\begin{matrix}0\\ M_{RS}M_{S}^{-1}\end{matrix}\right)\] \[\sqrt{1-{\cal B}{\cal B}^{\dagger}} \simeq \left(\begin{matrix}1&0\\ 0&1-\frac{1}{2}M_{RS}M_{S}^{-1}\cdot M_{S}^{-1}M_{RS}^{T}\end{matrix}\right)\] \[\sqrt{1-{\cal B}^{\dagger}{\cal B}} \simeq 1-\frac{1}{2}M_{S}^{-1}M_{RS}^{T}\cdot M_{RS}M_{S}^{-1} \tag{101}\] Thus, the first block diagonalised mixing matrix \({\cal W}_{1}\) becomes, \[{\cal W}_{1}=\left(\begin{matrix}1&0&0\\ 0&1-\frac{1}{2}M_{RS}M_{S}^{-1}\cdot M_{S}^{-1}M_{RS}^{T}&M_{RS}M_{S}^{-1}\\ 0&-M_{S}^{-1}M_{RS}^{T}&1-\frac{1}{2}M_{S}^{-1}M_{RS}^{T}\cdot M_{RS}M_{S}^{-1 }\end{matrix}\right) \tag{102}\] After this diagonalization, \(\hat{\cal M}_{\nu}^{\prime}\) has the following form. \[\hat{\cal M}_{\nu}^{\prime}=\left(\begin{matrix}{\cal M}_{eff}&0\\ 0&{\cal M}_{S}\end{matrix}\right) \tag{103}\] where, \[{\cal M}_{eff} = {\cal M}_{L}-{\cal M}_{D}{\cal M}_{S}^{-1}{\cal M}_{D}^{T} \tag{104}\] \[= \left(\begin{matrix}0&M_{D}\\ M_{D}^{T}&0\end{matrix}\right)-\left(\begin{matrix}0\\ M_{RS}\end{matrix}\right)M_{S}^{-1}\left(0\ M_{RS}^{T}\right)\] \[= \left(\begin{matrix}0&M_{D}\\ M_{D}^{T}&-M_{RS}M_{S}^{-1}M_{RS}^{T}\end{matrix}\right)\] \({\cal M}_{eff}\) can be further diagonalised by \({\cal W}_{2}\) as, \[{\cal S}^{\dagger}{\cal M}_{eff}{\cal S}^{*}=\left(\begin{matrix}m_{\nu}&0\\ 0&M_{R}\end{matrix}\right), \tag{105}\] where, \[m_{\nu} = -M_{D}\left(-M_{RS}M_{S}^{-1}M_{RS}^{T}\right)^{-1}M_{D}^{T},\] \[M_{R}=-M_{RS}M_{S}^{-1}M_{RS}^{T}. \tag{106}\] The transformation matrix \({\cal S}\) is \[{\cal S}^{*} = \left(\begin{matrix}\sqrt{1-{\cal A}{\cal A}^{\dagger}}&{\cal A}\\ -{\cal A}^{\dagger}&\sqrt{1-{\cal A}^{\dagger}{\cal A}}\end{matrix}\right) \tag{107}\] such that, \[{\cal A}^{\dagger} = \left(-M_{RS}M_{S}^{-1}M_{RS}^{T}\right)^{-1}M_{D} \tag{108}\] \[=-M_{RS}^{-1^{T}}M_{S}M_{RS}^{-1}M_{D}=X^{\dagger} \tag{109}\] Thus, \(\mathcal{W}_{2}\) = \(\begin{pmatrix}\mathcal{S}&0\\ 0&1\end{pmatrix}\) (8.23) \[= \begin{pmatrix}1-\frac{1}{2}XX^{\dagger}&X&0\\ -X^{\dagger}&1-\frac{1}{2}X^{\dagger}X&0\\ 0&0&1\end{pmatrix}\] ### Complete diagonalization and physical neutrino masses After block diagonalization, the mass matrix for the three types of neutrinos are further diagonalized by respective unitary mixing matrices \(U_{\nu}\), \(U_{N}\), \(U_{S}\) resulting in physical masses for the neutrinos as follows. 
\[U_{9\times 9}=\begin{pmatrix}U_{\nu 3\times 3}&0_{3\times 3}&0_{3\times 3}\\ 0_{3\times 3}&U_{N3\times 3}&0_{3\times 3}\\ 0_{3\times 3}&0_{3\times 3}&U_{S3\times 3}\end{pmatrix} \tag{8.25}\] \[U_{\nu}^{\dagger}m_{\nu}U_{\nu}^{*}=\hat{m}_{\nu}=\text{diag} \left(m_{\nu_{1}},m_{\nu_{2}},m_{\nu_{3}}\right)\] \[U_{N}^{\dagger}M_{N}U_{N}^{*}=\hat{M}_{N}=\text{diag}\left(M_{N _{1}},M_{N_{2}},M_{N_{3}}\right)\] \[U_{S}^{\dagger}M_{S}U_{S}^{*}=\hat{M}_{S}=\text{diag}\left(M_{S _{1}},M_{S_{2}},M_{S_{3}}\right) \tag{8.26}\] The complete mixing matrix now becomes, \[V = \mathcal{W}_{1}\cdot\mathcal{W}_{2}\cdot\mathcal{U} \tag{8.27}\] \[= \begin{pmatrix}1&0&0\\ 0&1-\frac{1}{2}YY^{\dagger}&Y\\ 0&-Y^{\dagger}&1-\frac{1}{2}Y^{\dagger}Y\end{pmatrix}\cdot\begin{pmatrix}1- \frac{1}{2}XX^{\dagger}&X&0\\ -X^{\dagger}&1-\frac{1}{2}X^{\dagger}X&0\\ 0&0&1\end{pmatrix}\cdot\begin{pmatrix}U_{\nu}&0&0\\ 0&U_{N}&0\\ 0&0&U_{S}\end{pmatrix}\] \[= \begin{pmatrix}U_{\nu}\left(1-\frac{1}{2}XX^{\dagger}\right)&U_{N }X&0\\ -U\nu X^{\dagger}\left(1-\frac{1}{2}YY^{\dagger}\right)&U_{N}\left(1-\frac{1} {2}X^{\dagger}X\right)\left(1-\frac{1}{2}YY^{\dagger}\right)&U_{S}Y\\ U\nu X^{\dagger}Y^{\dagger}&-U_{N}Y^{\dagger}&U_{S}\left(1-\frac{1}{2}Y^{ \dagger}Y\right)\end{pmatrix}\] where \(X^{\dagger}=-M_{RS}^{-1}M_{S}M_{RS}^{-1}M_{D}\), \(Y=M_{RS}M_{S}^{-1}\) and fixing the typical magnitudes for \(M_{D}\simeq 0.1\,\text{MeV}\), \(M_{RS}\simeq 1\,\text{TeV}\), \(M_{S}\simeq 10\,\text{TeV}\) we get \(X\simeq 10^{-6}\), \(Y\simeq 0.1\). Since \(U_{\nu}\), \(U_{N}\) and \(U_{S}\) are of \(\mathcal{O}(1)\), the matrix elements of \(\mathcal{V}\) are approximated to be \[\begin{pmatrix}\mathcal{V}_{\alpha i}^{\nu\nu}&\mathcal{V}_{\alpha j}^{\nu N}& \mathcal{V}_{\alpha k}^{\nu S}\\ \mathcal{V}_{\beta i}^{N\nu}&\mathcal{V}_{\beta j}^{NN}&\mathcal{V}_{\beta k}^ {NS}\\ \mathcal{V}_{\gamma i}^{S\nu}&\mathcal{V}_{\gamma j}^{SN}&\mathcal{V}_{\gamma k }^{SS}\end{pmatrix}\simeq\begin{pmatrix}\mathcal{O}(1.0)&\mathcal{O}(10^{-6} )&0\\ \mathcal{O}(10^{-6})&\mathcal{O}(1.0)&\mathcal{O}(0.1)\\ \mathcal{O}(10^{-7})&\mathcal{O}(0.1)&\mathcal{O}(1.0)\end{pmatrix} \tag{8.28}\] which generates sizable contribution to neutrinoless double beta decay.
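The orders of magnitude quoted above are easy to verify independently. The following short script is a minimal numerical sketch, not part of the original derivation: it builds the \(9\times 9\) neutral-fermion mass matrix in the \((\nu_{L},N_{R}^{c},S_{L})\) basis with \(M_{D}\), \(M_{RS}\) and \(M_{S}\) taken proportional to the unit matrix at the benchmark magnitudes \(M_{D}\simeq 0.1\) MeV, \(M_{RS}\simeq 1\) TeV, \(M_{S}\simeq 10\) TeV, diagonalizes it exactly, and compares the resulting mass scales with the double-seesaw expressions \(M_{R}\cong-M_{RS}M_{S}^{-1}M_{RS}^{T}\) and \(m_{\nu}\cong-M_{D}M_{R}^{-1}M_{D}^{T}\).

```python
import numpy as np

# Minimal numerical sketch (not part of the original derivation): the 9x9
# LRDSM neutral-fermion mass matrix in the (nu_L, N_R^c, S_L) basis, with the
# 3x3 blocks proportional to the unit matrix at the benchmark magnitudes
# M_D ~ 0.1 MeV, M_RS ~ 1 TeV, M_S ~ 10 TeV quoted in the text (all in GeV).
I3 = np.eye(3)
MD, MRS, MS = 1e-4 * I3, 1e3 * I3, 1e4 * I3
Z = np.zeros((3, 3))

M = np.block([[Z,    MD,    Z],
              [MD.T, Z,     MRS],
              [Z,    MRS.T, MS]])

# Physical Majorana masses are the absolute values of the eigenvalues of the
# real symmetric matrix M (a Takagi factorization would be needed in the
# general complex case).
masses = np.sort(np.abs(np.linalg.eigvalsh(M)))
print("exact mass spectrum (GeV):", masses)

# Double-seesaw approximations: M_R ~ -M_RS M_S^-1 M_RS^T, m_nu ~ -M_D M_R^-1 M_D^T.
MR = -MRS @ np.linalg.inv(MS) @ MRS.T          # ~ -100 GeV
mnu = -MD @ np.linalg.inv(MR) @ MD.T           # ~ 1e-10 GeV = 0.1 eV
print("seesaw estimates (GeV): m_nu ~", mnu[0, 0], ", M_R ~", MR[0, 0])

# Mixing blocks controlling the active-heavy mixing, as in the appendix.
X = np.linalg.inv(MRS.T) @ MS @ np.linalg.inv(MRS) @ MD   # magnitude ~ 1e-6
Y = MRS @ np.linalg.inv(MS)                                # magnitude ~ 0.1
print("mixing magnitudes: |X| ~", abs(X[0, 0]), ", |Y| ~", abs(Y[0, 0]))
```

With these inputs the exact spectrum consists of three states at \(\sim 10^{-10}\) GeV \(=0.1\) eV, three at \(\sim 100\) GeV and three at \(\sim 10\) TeV, the seesaw formulas reproduce the exact eigenvalues at the per-cent level, and the mixing blocks come out at the orders \(X\sim 10^{-6}\) and \(Y\sim 0.1\) quoted above.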
2308.16374
Accused: How students respond to allegations of using ChatGPT on assessments
This study explores student responses to allegations of cheating using ChatGPT, a popular software platform that can be used to generate grammatical and broadly correct text on virtually any topic. Forty-nine posts and the ensuing discussions were collected from Reddit, an online discussion forum, in which students shared their experiences of being accused (the majority falsely) and discussed how to navigate their situations. A thematic analysis was conducted with this material, and five themes were discerned: a legalistic stance, involving argument strategy and evidence gathering; the societal role of higher education as a high-stakes gatekeeper; the vicissitudes of trust in students vs. technology; questions of what constitutes cheating; and the need to rethink assessment. The findings from this study will help instructors and institutions to create more meaningful assessments in the age of AI and develop guidelines for student use of ChatGPT and other AI tools.
Tim Gorichanaz
2023-08-31T00:23:44Z
http://arxiv.org/abs/2308.16374v1
# Accused: How students respond to allegations of using ChatGPT on assessments ###### Abstract This study explores student responses to allegations of cheating using ChatGPT, a popular software platform that can be used to generate grammatical and broadly correct text on virtually any topic. Forty-nine posts and the ensuing discussions were collected from Reddit, an online discussion forum, in which students shared their experiences of being accused (the majority falsely) and discussed how to navigate their situations. A thematic analysis was conducted with this material, and five themes were discerned: a legalistic stance, involving argument strategy and evidence gathering; the societal role of higher education as a high-stakes gatekeeper; the vicissitudes of trust in students vs. technology; questions of what constitutes cheating; and the need to rethink assessment. The findings from this study will help instructors and institutions to create more meaningful assessments in the age of AI and develop guidelines for student use of ChatGPT and other AI tools. ## Introduction The public launch of ChatGPT by the company OpenAI on November 30, 2022, took the world by storm. News of its capabilities spread through word of mouth and viral social media posts, and ChatGPT soon became the fastest-growing consumer web application in history, seeing 13 million unique visitors each day in January (Hu, 2023). In technical terms, ChatGPT provides a conversational user interface for a large language model (LLM), in this case one called GPT (Generative Pre-trained Transformer), which was created through exposure to dozens of gigabytes of text (e.g., all of Wikipedia, millions of books, many websites such as Reddit) with human feedback during the training process. Not long after its emergence, students and educators became aware of the capacities of ChatGPT for cheating. With plain-language input called a "prompt," ChatGPT responds with grammatical and often broadly correct plain-language output. _Write a five-paragraph essay on the history of child labor in the United States at a college freshman level_, for example. Besides generating the essay all in one go, ChatGPT can generate outlines and thesis statements, rewrite text to provide more detail or strike a different tone, and cite sources. ChatGPT can also translate text, write code and more. It sounds too good to be true, and it is. Though ChatGPT's output is often broadly correct, it is prone to generating misinformation--including making false statements, citing sources that do not exist and creating code that doesn't work--a propensity termed "hallucination" in the literature. Though OpenAI is working to address these hallucinations, some researchers argue that they are unavoidably inherent to LLM technology (Smith, 2023). Moreover, the very existence of systems such as ChatGPT is morally problematic, as they require enormous quantities of human-generated text as input, generally used without notice, consent or compensation, and they rely on underpaid human labor during the training process (Gorichanaz, 2023). Despite all this, ChatGPT "kicked off an AI arms race," in the words of a _New York Times_ headline (Roose, 2023), one that for now shows no signs of stopping. Competitors to OpenAI have released similar products and more are on the way. That term refers to the arms race between companies, but another arms race is unfolding between students and educators as both groups navigate what all this means for higher education.
The anxieties around ChatGPT in higher education center on academic integrity, learning and skill development, limitations of LLMs, policy and social concerns, and workforce challenges (Li et al., 2023; Sullivan et al., 2023). Meanwhile, ChatGPT is already prevalent among students. A survey by Study.com conducted in January 2023 found that about 90% of students used ChatGPT to help with homework and more than 50% used it to write an essay (Ward, 2023). Social media platforms such as TikTok are proving useful to students for sharing information about tactics for using ChatGPT to write essays and code and subvert detection methods (Haensch et al., 2023). An early and predominant response has been the creation of so-called "AI detectors," systems meant to reveal whether a given text was LLM-generated. GPTZero, created by a college student, was publicized in early January 2023 (Bowman, 2023), later going on to raise millions of dollars in funding. A slew of competing AI detectors have emerged since then, including Originality AI, ZeroGPT, Writer AI Content Detector, OpenAI's own AI text classifier, and many others. Of particular note, Turnitin, makers of widely used plagiarism-detection software, released its AI detector in April 2023 (Knox, 2023). But AI detectors, like LLM-generated content itself, are not reliable. They are prone to both false positives (human-generated text flagged as AI-generated) and false negatives (AI-generated text not flagged). And they can be gamed by using more precise prompts or by asking ChatGPT to rewrite the text, such as by specifying that it should use less expected language (Sadasivan et al., 2023; Wiggers, 2023). Purveyors caution that AI detection should not be used as a standalone solution but only as one datapoint among many. The trouble is that bad information can be worse than no information. Spring 2023 brought many news articles about instructors using AI detectors to punish students, sometimes wrongly. For instance, one professor failed his entire class after being told by ChatGPT that his students' work was AI-generated (Agomuoh, 2023). By June, even Turnitin admitted its AI detector wasn't reliable and should be used with caution (Fowler, 2023). The present study seeks to help inform our understanding of how AI detectors are being used and interpreted, in this case by focusing on the student perspective, which Sullivan et al. (2023) found to be largely absent from news media discourse on ChatGPT and AI detection. In particular, the study examines how students experience and respond to accusations of using AI products in their written assignments. What is it like for students to be accused of cheating in this way? How do students make sense of the situation, and help each other do so? What strategies and tactics are shared for navigating the situation? These questions are addressed through a thematic analysis of public posts made by anonymous students on Reddit, a discussion forum and one of the most visited websites on the internet. Dozens of such posts, with lengthy follow-up discussions, have been made in the months since ChatGPT's release, including both students who admitted to getting away with cheating and those who said they were falsely accused of doing so. The findings from this study first and foremost help instructors, administrators and parents understand the student perspective in this situation. 
In doing so, the findings will help guide instructors in creating assignment instructions and guidelines for students and further conversation about the "AI arms race," particularly the one unfolding between students and faculty. ### Literature Review To frame this study, the literature is reviewed on how educators respond to new technology, as well as cheating and strategies for addressing it. #### How Educators Respond to New Technology One stereotype says that teachers don't want to adapt to new technologies, whether out of technophobia or inertia. Indeed, research findings about teachers' resistance to new technologies go back at least to 1920 (Hannafin & Savenye, 1993). Though some educators may seem tech-averse from the outside, Howard and Mozejko (2015) propose a different explanation. New technology should not be implemented for its own sake, they argue, and widespread calls for just that lead to teacher disengagement from these technologies, creating the impression that these teachers are anti-technology. Resonant with this argument, a study by Martin et al. (2020) showed that the most important factor that educators consider when deciding whether to adopt a new technology is its potential benefit to student learning. Artificial intelligence is the emerging technology du jour, and calls are coming from all corners of society about leveraging it and limiting it, often in revolutionary and doomsaying tones. Teachers may feel forced to respond, yet the blurry and fast-moving picture makes it difficult to assess how these technologies will affect student learning. Risks to this situation have been voiced for several years, and they are encapsulated in a philosophical article by Guilherme (2019), who argues that in adopting new technologies, the relationship between students and teachers should not be overlooked. Emerging technologies, particularly AI-based ones, create a situation where teachers relate to their students more like objects than people. The justifications for implementing these technologies usually refer to efficiency, but the end goal of learning is not simply to learn more efficiently. Guilherme argues that technologies should be adopted in education only if they foster humane relationships rather than objectifying ones. When it comes to some technologies, educators' hands may simply be forced. The dawn of the internet age--with widespread information access, distributed expertise and global connection--has unsettled the institution of higher education in many ways, cutting down to its very core. Laurillard (1999) warned that universities must either integrate these new technologies and evolve (e.g., to offer meaningful online learning experiences and flexible curricula) or fall back into small-scale elitism of their medieval form. "After some 25 years of fragmented experimentation with what computers might contribute to the teaching process, we still have rather little progress, or understanding of how to drive, rather than be driven by, the new learning technologies," Laurillard (1999, p. 135) wrote. "This would be nothing worse than a lost opportunity were it not for the fact that the new technology is also driving the world outside universities." Many authors have echoed this sentiment. For example, Watty et al. (2016) position the uptake of new technology as the great challenge for higher education in the 21st century. 
McBride (2010) emphasizes that teachers need not act alone, but rather that the integration of new technology should be shepherded by university leadership and strategic planning, such as by investing in instructional designers and continuing training for instructors. Looking more granularly at how specific technologies have been introduced in teaching and learning, Hartley (2007) provides a literature review and conceptual framework. According to Hartley, new technologies may be integrated into teaching in the following ways: direct instruction (in which technology replaces the human teacher); adjunct instruction (in which the teacher and technology work side by side); facilitating skills of learning (giving students practice with metacognitive skills); facilitating social skills (in which students work with each other alongside technology); and widening horizons (which allows learning to happen in new places and ways). There are many examples in the literature of new technology successfully being implemented in the classroom to benefit students, including those reviewed by Hartley and others published more recently. Antonioli et al. (2014), for instance, report on introducing augmented reality in the classroom, showing that the technology taught students metacognitive skills, encouraged students to take responsibility for their learning, and was fun and engaging. But Antonioli et al. also observed challenges with budget and keeping the lessons learning-centered. Williams and Beam (2019) review the literature on new technologies in K-12 writing instruction, showing that new technologies improve students' writing while also supporting their higher-order thinking skills and also offer additional motivation to students that further drives learning--especially for reluctant writers. However, Williams and Beam point out that these new technologies suggest that the curriculum must adapt in certain ways, moving toward a process approach to writing (with iteration, feedback and collaboration, rather than the one-time submission of a finished product) and also engaging multimedia rather than only text. Some work has focused specifically on chatbots and large language models (LLMs), which are especially relevant to ChatGPT. A recent meta-analysis suggests that AI chatbots can improve student learning outcomes, particularly in higher education (Wu & Yu, 2023); and a systematic review of LLMs in education shows that they can be useful for teachers in several tasks (generating questions, providing feedback, grading essays), but that these capabilities all raise practical and ethical concerns (Yan et al., 2023). Focusing specifically on ChatGPT for education, Su and Yang (2023) describe a number of benefits the tool may have, such as offering personalized and engaging learning experiences for students, and offering suggestions and efficient question-answering for teachers. However, Su and Yang acknowledge several barriers to these ends such as data quality and accuracy, cost, opacity of LLMs, and the inchoate state of LLM technology. Similarly, Kasneci et al. 
(2023) discuss numerous possibilities for LLMs to enhance learning, but raise additional challenges inherent in LLMs such as copyright issues regarding training data and output, the possibility of LLMs becoming a crutch for students or teachers, stakeholders' lack of expertise in the underlying technology, difficulties distinguishing the contributions of LLMs and students in submitted work, data privacy and security, sustainability and misinformation. In the background of anxieties around new technologies are often concerns around student cheating. New technologies often mean new opportunities to cheat. Taylor (2003) relayed a story from the turn of the century, in which a teacher failed a large percentage of a class for plagiarism--resonant with a 2023 report mentioned in the introduction (see Agomuoh, 2023). Taylor writes that such reactions from instructors are a missed opportunity to teach students about what is and is not acceptable with new technology and to deepen their learning. More broadly, this work is a reminder that next technologies often stoke anxieties around cheating. Building upon this literature, Rudolph et al. (2023) discuss the implications of ChatGPT for assessments in higher education, specifically writing assignments. As they write, faculty responses may include: going low-tech (requiring students to hand-write their work), using AI-detection tools (which are of dubious reliability), and creating assignments that AI cannot complete (such as responding to specific classroom discussions or referring to recent events or niche topics). But Rudolph et al. caution that these responses may only work in the short term and do not prepare students well for 21st-century society and employment. Better approaches, they write, include doing certain assessments in class, asking students to do oral presentations, moving from text-only essays to multimedia documents, letting students choose topics of genuine interest, using authentic assessments (that is, tasks that emulate realistic professional situations), and using teach-backs and peer evaluations. ## Understanding and Addressing Cheating In an educational context, cheating is a category of behavior in which students get academic credit in a dishonest or deceptive way, such as copying another student's work, secretly using notes during an exam or plagiarizing text in an essay. The proliferation of digital technology has introduced new ways of cheating, and cheating is considered to be a bigger problem by educators, as assessments move online, where strategies to prevent cheating developed in face-to-face contexts are no longer relevant (Mellar et al., 2018; Moten et al., 2013; Williams, 2001). The major methods for cheating in today's world include plagiarism, collusion, file sharing, text spinning (using software to rewrite text), exam cheating, security breaching, and contract cheating (enlisting a third party to do an assessment) (Lancaster, 2022a). Of these, contract cheating is perhaps the most relevant in a discussion of ChatGPT. Contract cheating is the purchase of custom-made work, typically from an "essay mill," to submit for an assessment. The research on contract cheating suggests that it is growing in prevalence; a systematic review by Newton (2018) reports that since 1978, 3.52% of students on average reported using contract cheating, but looking only since 2014 that number was 15.7%. 
A study of solicitations on Twitter found that students were willing to pay $33.32 per 1,000 words on average (Amigud & Lancaster, 2020). With the advent of ChatGPT and other readily available systems for generating text in response to a prompt, contract cheating becomes near-instantaneous and free. How essay mills and other institutions for contract cheating will respond will remain to be seen. (Note also that beyond replacing contract cheating, ChatGPT can be used for cheating in other ways, such as text spinning.) Unfortunately, cheating appears to be common, with the majority of students reporting having cheated--many studies report over 75% (Baird, 1980; Crown & Spiller, 1998; Curtis, 2022). While there is evidence that cheating, particularly plagiarism, decreased over the period 1990-2020 (Curtis, 2022), other research suggests that cheating increased since the beginning of the Covid-19 pandemic (Jenkins et al., 2022). Before going on, it's worth reflecting on why cheating is considered to be wrong. In a philosophical essay, Bouville (2009) addresses this question, finding that the oft-cited reasons for why cheating is wrong (it is unfair, it hinders learning) are not convincing. They betray inconsistencies and an underlying philosophy that school is about enforcing competition among students rather than education. In light of this, Bouville suggests that teachers concerned about cheating look deeper than the cheating behavior: "What hinders education is not cheating but the underlying lack of motivation: fighting cheating may only address a superficial symptom" (Bouville, 2009, p. 75). ### The Student Side of Cheating To respond well to cheating, educators and institutions must understand its causes as deeply as possible. A large literature has investigated why students cheat. The reasons are numerous, including seating arrangements, knowledge of peer performance, high stakes for a given assessment (either reward or punishment), having failed before, having cheated before, having low or middling expectations for success, perceiving social norms that support cheating, perceiving assessments as unfair or irrelevant, low instructor vigilance, not having studied well, and dependence of financial support and long-term goals on good grades (Ahsan et al., 2022; Baird, 1980; Genereux & McLeod, 1995; Whitley, 1998). Regarding contract cheating specifically, the evidence suggests further that additional risk factors include speaking English as a learned language and being dissatisfied with the learning environment (Bretag et al., 2019). The research suggests that men are more likely to cheat than women, as are students with lower grades and those who believe the prevalence of cheating is high (Baird, 1980; Genereux & McLeod, 1995)--though a recent study suggests that women are more likely than men to cheat using digital technology (Krienert et al., 2021). When students who cheated were asked what would have stopped them from doing so, responses included more time, more resources, more skills to achieve the desired result, better time management, and less impact of mistakes on grades (Beasley, 2014). The literature on student cheating also suggests that students do not always know what constitutes cheating (Beasley, 2014; Burrus et al., 2007; Raines et al., 2011). For example, Beasley (2014) reported that the majority of the students who cheated said they would not have done so if they knew what they were doing was cheating. 
A typical respondent in Beasley's study said, "If I knew what I was doing was wrong I wouldn't have done it plain and simple.... I was unaware that my behavior was wrong" (Beasley, 2014, p. 235). Clarification here, according to Beasley's findings, would include clearer instructions, explanations of what constitutes plagiarism and how to avoid it (e.g., what "paraphrasing" means), a delineation of what behaviors are and are not acceptable (e.g., regarding collaboration on work), and explanation of the consequences for cheating. All this is even more important in settings that bring together students from multiple cultural backgrounds (Beasley, 2014). Instructors should also bear in mind that what constitutes cheating may differ from course to course (as learning goals differ) and across formats (norms for online learning are different from in-person learning) (Raines et al., 2011). Regarding how students respond to being suspected of cheating, Pitt et al. (2020) report on student experiences of undergoing the formal disciplinary process after being suspected of contract cheating. In this study, some students did cheat and others did not. Across the board, the process was experienced as among the most challenging in the student's life, it created stress and vigilance around future assignments, it was kept as a secret to the extent that was possible, and it caused reputation damage with peers and faculty. All that said, these students were also able to guide other students toward practicing academic integrity. #### 3.2.1 Instructor Responses to Cheating Faced with cheating, instructors may address it or ignore it. Some studies suggest that upwards of 40% of instructors have ignored cheating when it was detected (Coren, 2011). The reasons for ignoring cheating include difficulties gathering convincing evidence, the time and effort it takes to follow the process, fear of retaliation or legal challenge, and the perceived triviality of the offense (Coren, 2011; Keith-Spiegel et al., 1998). Coren (2011) reports that instructors who had previous bad experiences with the student disciplinary process were more likely to ignore cheating. Addressing cheating may be responsive or proleptic. Much of the literature on addressing cheating focuses on the proleptic--that is, taking measures to prevent future cheating. There are a number of approaches instructors may use for this, including education, technology, assessment design, sanctions, policy, and surveillance (Mellar et al., 2018). Given that new technologies have enabled new forms of cheating, many are turning to technology for solutions. The use of anti-plagiarism software has had some success (Ma et al., 2008) and may be partly responsible for a decrease in plagiarism in recent decades (Curtis, 2022). There is also evidence that contract cheating may be at least somewhat detectible with technology (Dawson & Sutherland-Smith, 2018; Lancaster, 2022b). But Mellar et al. (2018) emphasize that technology is not the primary means of addressing cheating but only one element among others. There are numerous non-technological strategies for addressing cheating in the literature. In a review from the turn of the century, Williams (2001) discerned four key strategies: development of a culture of honesty, continual observation of student work, ongoing review of intermediate drafts, and face-to-face discussion about the work (Williams, 2001). 
In his book _Cheating Lessons_, Lang (2013) suggests emphasizing process over end product, implementing lower-stakes assessments, better preparing students for assessments, and fostering intrinsic motivation. In a recent systematic review, Ahsan et al. (2022) echo these suggestions, also adding that it is important to show students the institutional support that is available for issues they may be facing, and encouraging them to take initiative. Specific to preventing contract cheating, Ahsan et al. suggest crafting clear policies, providing support to students, and crafting assessments that dissuade students from using contract cheating. As technology changes, the academic environment needs to continually adapt its response to cheating; teachers play a key role here, but not the only one (Lancaster, 2022a). In the book _Cheating in College: Why Students Do It and What Educators Can Do About It_, the authors McCabe et al. (2012) particularly emphasize policy and culture. "We have a moral obligation to teach our students that it is possible and preferable to live and operate in an environment of trust and integrity where cheating is simply unacceptable" (McCabe et al., 2012, p. 165). This involves an educational component; sometimes students do not know or understand what constitutes cheating, especially when they receive mixed messages (e.g., about when collaborative work is allowed), so clear policies are vital. More broadly, the authors argue that students must perceive alignment between the formal systems and informal systems when it comes to academic integrity--from leadership and authority structure to mythos, norms and classroom mechanics. Doing so will require higher educational institutions to continue to reflect on the value and purpose of higher education (particularly as broader economic and political contexts change) and ensure that these are reflected in everything the institution does, from its academic integrity policies and how they are communicated, to the way online education is implemented (Rettinger & Gallant, 2022). ## Methodology To analyze student experiences of and responses to accusations of using ChatGPT for cheating, data was collected from the online discussion website Reddit, one of the most visited websites on the internet and one often used for research of this nature. Reddit hosts several million discussion forums, called "subreddits," for countless topics, including college life and digital technology. Most Reddit users are pseudonymous, without any personally identifying information on their profile. In this anonymous forum, authentic conversations can proceed on sensitive issues, making Reddit an excellent source for data collection on this topic. Data collection proceeded with searches on Google restricted to the domain "reddit.com" including terms such as "accused," "chatgpt," "essay," "professor," "AI," etc. An example of one of the complete queries used was: "site:reddit.com accused chatgpt professor." Searches were conducted periodically between May 21 and June 5, 2023. Resulting threads were included in the corpus if they were written by a college student (any posts by other students were excluded) directly involved in a case of AI writing allegation (rather than reporting on a friend's experience or commenting in general). In total, 49 threads were retrieved, details on which are given at the beginning of the Findings section below. 
Analysis of these threads began with completing a spreadsheet to capture the high-level characteristics of each thread; the columns in this spreadsheet included: the date of the original post (the first post in a thread), the headline, the subreddit to which the thread belonged, a brief synopsis, whether the original poster (the user who started the thread) used AI in their work, the type of assignment, the emotional valence of the post, the AI detector mentioned, and the URL. After that, the contents of each thread were analyzed through reflexive thematic analysis (Braun & Clarke, 2012), an inductive form of qualitative analysis used to discern the major themes in a corpus. This analysis began with the open coding of salient quotes and proceeded through several rounds in which these codes gradually coalesced into themes. Because the data used in this study were public and no personally identifiable information was collected or discernible in the corpus, this project was not deemed to be human subjects research by the researcher's institutional review board. Still, to protect the confidentiality of the people whose words contributed to the corpus in this study, details such as usernames, post headlines, URLs, etc., are not reported here; further, direct quotes given in this paper have been lightly edited ("disguised") to make them more difficult to locate via search as a means to protect their authors from harm, according to internet research best practices (Bruckman, 2002). ## Findings The corpus for this study included 49 Reddit threads with posts ranging from December 21, 2022, to June 4, 2023, when data collection concluded. Of these threads, more than half appeared after May 1. The majority of the threads were from r/ChatGPT (n=25), followed by r/college (n=5), as well as several other subreddits with two or fewer posts each, including those for specific educational institutions. The number of upvotes and comments on these threads ranged from fewer than 10 to over 2,000. The majority of original posts in the corpus had fewer than 50 upvotes and comments. Five had over 1,000 upvotes. The most upvoted thread saw over 45,000 upvotes and over 3,000 comments and was posted in r/mildlyinfuriating; following that, the next most upvoted thread had 15,000 upvotes and 2,500 comments and was posted in r/ChatGPT. Within the corpus, 11 original posters said they had used ChatGPT on an assignment, while 38 said they were falsely accused of doing so. Of these 38, two mentioned they did use Grammarly. Most of the posts (n=43) described the type of assignment; most were essays (n=29), followed by discussion board posts (n=4). In all cases, the original posters were seeking advice for navigating their situation. Typically, an AI detector was used to ground the accusation. In 25 posts, a specific detector was mentioned; these included Turnitin (n=15), ChatGPT itself (n=6), and GPTZero (n=4). Five posts said the instructor had used multiple detectors. In most cases, the instructor requested a meeting with the student to discuss a high AI detection score; generally, students interpreted this request as an accusation. In some cases, instructors immediately reported a student to the student conduct office. In terms of emotional valence, about half the posts were neutral in tone, with the rest expressing a strong emotion. These emotions included hostility, anxiety, fear and defeat. 
Respondents often offered support and condolences, particularly in the earliest threads; later on, some respondents mentioned that these sorts of posts were "getting old." Besides offering advice and commentary, as will be discussed below, the content of many of these threads included technical discussions of how GPT works and why AI detectors are unreliable. A common refrain was that AI detectors are random number generators. In these discussions, certain misunderstandings about these issues were also evident. For example, several students and faculty used ChatGPT itself as an AI detector, asking the chatbot if it wrote a particular text. As one student shared, "One of my professors told us that a student used ChatGPT to write their essay and was promptly suspended. And all he had to do was ask ChatGPT if it wrote the essay. As a freshman that's TERRIFYING to me." But as other users pointed out, ChatGPT is not able to do the analysis required to make such an assessment. Throughout these discussions, several themes were evident, which will be discussed below in turn. ### Legalistic Stance Whether students were directly accused of cheating with ChatGPT or asked to an exploratory meeting to discuss the results of an AI detector, they seemed to experience the situation as a legal proceeding, and commentators further advised they treat it as such. A few threads discussed filing lawsuits, but in most cases the legalistic stance was more metaphorical. Along these lines, the following terms were common in the corpus: convince, evidence, logic, burden of proof, process, procedure, formal complaint, and disclaimer. Much of the discussion centered on constructing arguments that would establish a student's innocence. One common tactic suggested was "deny, deny, deny," whether guilty or not, because the burden of proof is on the instructor and it is impossible to prove AI involvement in text generation except in very few cases (such as submitting work that includes phrases such as "As an AI language model," which ChatGPT uses in response to some prompts). Somewhat contrary to this, many felt that in the university setting a student is "guilty until proven innocent." Another common refrain in the corpus was "escalate," meaning to bring the issue to a higher authority than the instructor, such as the department head or dean. "Remember, you are the customer," one student wrote. "Escalate the situation until you get what you want." Students were advised to "get everything in writing," meaning to document every conversation and milestone as their case progressed. The existence of AI detectors made argument construction all the more fraught. Some students asked why AI detectors flagged their work as AI-generated when they wrote it themselves. Some suggested ways to demonstrate that AI detectors are unreliable, while others felt that their very knowledge of AI detectors implicated them. "How do I explain why I put it through an AI detector if I'm not guilty?" one student wrote. To ground these arguments, gathering evidence was suggested. Across the board, the best evidence was considered to be process materials from working on the assignment, such as notes, an outline and references, preferably written by hand. However, many students mentioned not having these materials. In such cases, the discussion often turned toward a "cover your ass" lesson for other students working on future assignments. 
To this end, enabling version history in Google Docs or Microsoft Word was also a common suggestion; the version history would show that the text was not pasted in one step from ChatGPT. Some also counseled students to use a screen recorder when working and to share that recording should they be suspected of cheating. Several acknowledged that these tactics verge on privacy invasion and should not be necessary, but conceded to using them because they had no other options. Others acknowledged that these tactics, in the end, are also limited. "It will work until they accuse you of using a second computer or your phone. There's no way to completely solve the problem." Others mentioned that with new features soon to be implemented by Google and Microsoft, word processing software will have ChatGPT-like features built in. "So the solution is to work with these tools, rather than trying to prove or disprove use. We have a lot of work to do to get educators, at all levels, up to speed." Another stream of evidence related to establishing the unreliability of AI detectors. A common suggestion in this regard was to input some of the instructor's own writing into the detector, on the assumption that some of it will be flagged as likely AI-generated. Many shared that at least one detector claims that the Declaration of Independence is AI-generated. Some students suggested sharing news articles or a research paper about the accuracy of AI detectors. It seems that some instructors, too, suggested that students check their work with AI detectors before submitting it. As one student said, "The professor has been trying to say if you run your work through the software and it gives a false positive, rewrite it until it does not say it's AI-generated." Reflecting on this, another student wrote, "Getting accused is seriously my worst fear. I've been pasting all my work into an AI detector. My own writing comes up as 'Likely AI.' It's stupid." While some students accepted this as the new status quo, others resisted: "No, don't let it go. AI detectors are a scam, and it's not our responsibility to adjust our writing to make them happy." ### The Societal Role of Higher Education The next theme centered on the role of higher education as a gatekeeper for one's livelihood. At least in the United States, higher education is a major monetary investment, and many students perceive a college degree to be a prerequisite for gainful employment as an adult. As such, the stakes are perceived as very high in accusations of cheating. One student wrote, for example, that if universities are to use AI detectors, they need to be completely accurate and reliable. "There's no margin for error because the stakes are too high." Inevitably, a student's grades and GPA were wrapped up in these discussions. Related to the central role of higher education, some students felt professors to be overly self-important in matters of cheating allegations. Comments along these lines were often sarcastic. For example: "They see themselves as The Authority. They aren't to be questioned, especially when they could be proven wrong." Another student wrote, "The idea of sucking up to these tyrants is sickening." Given all this, several commentators expressed that they were glad to have graduated (or retired, in the case of instructors) before ChatGPT's release and therefore don't have to deal with this situation. One graduate wrote, "No one's gonna know what to do for a few years. I'm just glad I graduated already. 
I imagine there will be some things that are fair, others unfair. Some will skate by and graduate without lifting a finger during this time. Others will be expelled even though they didn't cheat." In this situation, some saw a possibility for higher education to be unseated from its central gatekeeper role. Some remarked that higher education was now "irrelevant." The centrality of higher education to society has another sense as well. Other students voiced that the issues unfolding in higher education are a harbinger of issues to come for society more broadly. "AI papers and people like you getting falsely punished for them, is just the tip of the iceberg. Society is screwed. A catastrophe is waiting to hit in the next few years." Issues such as political disinformation, polarization, the scientific enterprise, etc., are all susceptible to the same questions of authenticity as student assessments--and if the stakes for student work are high, the stakes for these things are much higher. ### Trust in Students and Technology Another major theme in the corpus was trust--that between human and technology as well as among humans. The discussions in several threads reflect how trust can be built or damaged through new technology. The central relationship in this theme was between instructor and student. Some students expressed that having built rapport with their instructor earlier in the term encouraged the instructor to give them the benefit of the doubt in a suspected case of AI cheating. Other students expressed surprise that even though they had felt a trusting relationship with their instructor, they still were accused. "I never would have expected to get accused by him, out of all my professors," one student wrote. In either case, students said that an accusation of cheating damaged the relationship. For some, this meant they would have to work on future assignments defensively, expecting that they may be suspected or accused of cheating. "Screen recording is a good idea, since the teacher probably won't have as much trust from now on," one commentator wrote. Trust was also evident in discussions of how instructors use AI detectors. "Of course she trusts the AI detector more than she trusts us," one student wrote. Beyond the student-instructor relationship, trust was also discussed in student-student relationships, such as in work on group projects. A handful of threads shared experiences of a group submission being flagged as AI-generated, with students suspecting their groupmates of cheating. "I know I sure as hell didn't plagiarize but unfortunately you can't always trust others," one student wrote. ### What Constitutes Cheating? The next theme reflects on the nature of cheating, which usages of AI constitute cheating (e.g., using AI to generate text vs. as a rephrasing tool), and whether AI technologies are already covered by existing intellectual integrity policies. First, several students questioned why using AI was considered cheating. In these discussions, plagiarism was generally the concept invoked. For example, arguing that submitting ChatGPT's output could not be considered plagiarism, one user wrote, "Show your professor the ChatGPT terms of use. All rights to the generated content are assigned to you, to do with as you wish. You can't plagiarize what you own." Others pointed out that such a viewpoint overlooks some aspects of plagiarism, such as the possibility to self-plagiarize. 
Others suggested that even if using ChatGPT's output is not plagiarism, it is still academic dishonesty. But some usages of ChatGPT were less clear. For example, one user asked, "Is using ChatGPT to rephrase parts of an essay with your own researched content considered cheating?" Considerations in this area were not just whether such usages would be considered cheating by human judgment, but also by a technological tool such as Turnitin. As one graduate student shared, "In some essays, I typed my own ideas out and used ChatGPT to refine and paraphrase them for me. Would this be considered plagiarism by Turnitin? I am freaking out." Others noted confusion and contradiction in instructions they had previously received from faculty, especially when it comes to grammar-assistance tools like Grammarly. "They used to recommend you use things like Grammarly, which use AI to correct your writing, and same with the grammar tools in Google Docs and Word, but now using those tools will get you flagged for AI-generated text." Some users mentioned they were now "scared" to use Grammarly. Relatedly, some students noted hypocrisy in students being prohibited from using AI tools but not instructors. Several pointed out that AI detectors are themselves AI tools. In one case, a student even thought their instructor was hypocritical with regard to plagiarism: "I had an online Zoom class where the teacher gave a 30-min speech about plagiarism, and then 2 weeks later said he had a video for us. He went on to play a screen recording from a different college's online Zoom class for the same subject. It's laziness and bluffing and I don't see it getting any better." ### Rethinking Assessment In the final theme, conversations reflected a need to revise models for evaluating student learning. Some suggested that instructors would have to move toward oral or handwritten assignments done in the classroom to ensure that ChatGPT was not used; others remarked that certain types of assignments, such as online discussion posts, were overused and should be retired in favor of more authentic assessments. Still, creating those assessments and operationalizing classroom policies was seen as a challenge. One student wrote of a professor who announced a policy that ChatGPT could be used for idea generation, rephrasing, etc., so long as the resulting Turnitin AI detector score was acceptably low. But that professor then penalized the student for submitting an annotated bibliography that scored too highly for AI content. The student felt betrayed. "I even ran her assignment descriptions through the OpenAI detector, and a few of them scored higher than even my assignments," that student wrote. For some students, the impending change was welcome. "I can't wait for the entire college system to burn down over the summer because of AI," one student wrote. "Hopefully by fall, all of this will be cleared up." In the meantime, ChatGPT allowed some students to spend less time on their peripheral courses and focus on the work they feel is more important. For example, one student wrote, "I'm a junior, and this was Sociology 101 where the teacher basically wanted us to echo her opinions about society and have no opinions. If I'm being told to lie, of course I'm going to find ways to make the work easier so I can focus on the 300- and 400-level courses that will matter more for my life and career." ## Discussion and Conclusion The world is grappling with advances in AI technology, and higher education is no exception. 
This study has explored one aspect of this, how students respond to accusations of using ChatGPT to write essays, exam responses and other assessments. As discussed above, students take a legalistic stance to these accusations, gathering evidence and constructing arguments. Students' responses to these accusations demonstrate the high stakes of the situation and, more broadly, the societal role of higher education, as well as how new technologies impact trust. The shifting landscape of AI capabilities raises questions of what behaviors qualify as cheating and how to educate students and enforce policies with regard to cheating with AI; these questions ultimately point to the need to rethink assessment for the age of AI. The corpus used in this study demonstrates how being accused of cheating with AI can be harrowing for students, whether they cheated or not, just as Pitt et al. (2020) found regarding accusations of cheating generally. And it is notable that in the corpus examined here, the majority--78 percent--of the accused students were falsely accused. At the heart of these accusations are AI detectors, suggesting hopefulness for a technological solution to the problems posed by generative AI for higher education. Given the proportion of false accusations in the corpus of this study, as well as recent press, it is clear that AI detectors are not the solution. Even if an AI detector were highly accurate and reliable, it would be insufficient for its purpose. Consider the Copyleaks AI Content Detector, which claims a 0.02% false positive rate. In a university of 20,000 students, assuming four courses per student and five assignments per course, this very low false positive rate still suggests that 80 students per year will be falsely accused of cheating with AI (20,000 students × 4 courses × 5 assignments = 400,000 submissions, of which 0.02% is 80). Moreover, the perceived infallibility of such tools may mean that innocent students are left unable to defend themselves. Reliance on AI detectors as the primary means of addressing AI cheating also creates an environment where savvy cheaters can game the systems in place with relative ease. Already in the corpus examined here, techniques are being circulated to thwart the AI detectors on the market, such as using AI to rewrite its generated text with more syntactical change, randomized sentence length, and even inserted typos. But beyond all this, AI detectors will only ever be able to detect AI-generated text, not AI-generated ideas about which students have written in their own words. Indeed, _The Chronicle of Higher Education_ published an article in May 2023 by an undergraduate student with the headline "You Have No Idea How Much We're Using ChatGPT" (Terry, 2023), which made the point that ChatGPT can be used for intellectual tasks besides generating the text itself (e.g., generating a topic, thesis statement and supporting points), none of which can be found by an AI detector. So AI detectors are not a full solution for cheating with ChatGPT, perhaps not even a partial solution. All this points to a need for better answers. In this study, the themes of the Societal Role of Higher Education and Rethinking Assessment offer some ideas in that direction. Clearly, a single term paper or exam submitted at the end of a course is no longer a valid assessment of a student's learning during that course. 
Rather than attempting to use AI detectors to evaluate whether these assessments are genuine, instructors may be better off designing different kinds of assessments: those that emphasize process over product, or more frequent, lower-stakes assessments. To this end, suggestions in the literature regarding teaching to dissuade and prevent cheating are just as relevant in the age of AI (e.g., Ahsan, 2022; Lancaster, 2022a; Lang, 2013; Williams, 2001). Besides rethinking assessment, it is clear that instructors (or institutions) must establish clear policies on what usages of AI constitute cheating and why. While some usages of ChatGPT and similar tools could be considered forms of contract cheating or plagiarism, other usages are less clear. Implicitness and ambiguity are not helpful here. Further, instructors have an opportunity to educate students on the potential for wise use of these emerging AI tools, as they are likely to play a role in the professions for the foreseeable future. "Wise use" here may entail how the tools may be utilized, their limitations, and ethical considerations about their very existence. This study has offered a look at how students experience and respond to allegations of cheating with ChatGPT. As a thematic analysis, the findings here should be taken as illustrative and indicative, not exhaustive or statistically generalizable. Moreover, the corpus in this study was limited to English-language text discussions on Reddit, which may not be fully generalizable to the college student population. Further research using other methods may be used to validate, extend or challenge these findings and provide a more comprehensive view.
2309.11732
Differentiable Phylogenetics via Hyperbolic Embeddings with Dodonaphy
Motivation: Navigating the high dimensional space of discrete trees for phylogenetics presents a challenging problem for tree optimisation. To address this, hyperbolic embeddings of trees offer a promising approach to encoding trees efficiently in continuous spaces. However, they require a differentiable tree decoder to optimise the phylogenetic likelihood. We present soft-NJ, a differentiable version of neighbour-joining that enables gradient-based optimisation over the space of trees. Results: We illustrate the potential for differentiable optimisation over tree space for maximum likelihood inference. We then perform variational Bayesian phylogenetics by optimising embedding distributions in hyperbolic space. We compare the performance of this approximation technique on eight benchmark datasets to state-of-the-art methods. However, geometric frustrations of the embedding locations produce local optima that pose a challenge for optimisation. Availability: Dodonaphy is freely available on the web at https://github.com/mattapow/dodonaphy. It includes an implementation of soft-NJ.
Matthew Macaulay, Mathieu Fourment
2023-09-21T02:06:53Z
http://arxiv.org/abs/2309.11732v1
# Differentiable Phylogenetics via Hyperbolic Embeddings with Dodonaphy ###### Abstract **Motivation:** Navigating the high dimensional space of discrete trees for phylogenetics presents a challenging problem for tree optimisation. To address this, hyperbolic embeddings of trees offer a promising approach to encoding trees efficiently in continuous spaces. However, they require a differentiable tree decoder to optimise the phylogenetic likelihood. We present soft-NJ, a differentiable version of neighbour-joining that enables gradient-based optimisation over the space of trees. **Results:** We illustrate the potential for differentiable optimisation over tree space for maximum likelihood inference. We then perform variational Bayesian phylogenetics by optimising embedding distributions in hyperbolic space. We compare the performance of this approximation technique on eight benchmark datasets to state-of-the-art methods. However, geometric frustrations of the embedding locations produce local optima that pose a challenge for optimisation. **Availability:** Dodonaphy is freely available on the web at [https://github.com/mattapow/dodonaphy](https://github.com/mattapow/dodonaphy). It includes an implementation of soft-NJ. **Contact:** [email protected] ## 1 Introduction Phylogenetics provides us with the evolutionary history of a set of taxa given their genetic sequences, which is usually a bifurcating tree. However, fast optimisation relies on gradients, which are not well defined between discrete trees. Thus most tree optimisation techniques consider manual changes to the tree topology before optimising the continuous parameters (branch lengths) of each tree considered [25, 37]. Knowing which of the super-exponential number of trees to manually try is a challenging task [14, 19]. Providing a differentiable way to move between tree topologies would allow well-developed continuous optimisation techniques to work in the space of phylogenetic trees. In this paper, we propose a novel technique to continuously move through the space of bifurcating trees with gradients. Our approach hinges on two ideas: a) an embedding of the genetic sequences into a continuous space and b) an algorithm we propose called soft-NJ, which passes gradients through the neighbour joining algorithm. With these preliminaries, we can embed the tip nodes of a tree in the continuous embedding space and then optimise the locations of these nodes based on the neighbour joining tree that they decode to via soft-NJ. We use hyperbolic embeddings to represent trees in a continuous manner. This is similar to embedding points in Euclidean space, where each tip node of the tree is positioned in the space with a certain location [22]. However, the metric between two points is modified to give negative curvature between points (as opposed to the positive curvature between points on a sphere). Hyperbolic data embeddings offer low dimensional, efficient, and precise ways to embed hierarchically clustered data [31, 5, 26, 29, 6] or tree-like data in phylogenetics [20, 43, 8, 28, 23, 16]. Alternative continuous tree embedding methods are high dimensional, growing significantly with increasing taxa; BHV space grows double factorially [2], flattenings of sequence alignments grow exponentially [1], sub-flattenings increase quadratically [39], as with tropical space [36]. In these spaces, each point corresponds to a single tree, making them high dimensional. 
Additionally, they have non-differentiable boundaries between trees, making them difficult to optimise in [9]. Whereas with hyperbolic embeddings, each taxon has an embedding location and together the set of taxa locations decode to a tree. This keeps the embedding space low dimensional and the number of optimisation parameters linear in the number of taxa. The goal of our approach is to optimise the embedding locations with gradient-based optimisation, which requires a differentiable loss function (i.e. the likelihood or unnormalized posterior probability). This is easily achieved in other applications with carefully designed loss functions. However, in phylogenetics, there are well accepted Markov models of evolution (such as GTR or JC69), which rely on having a tree structure to compute their likelihood. To maximise the likelihood by changing the embedding locations, we developed soft-NJ -- a differentiable version of the neighbour joining algorithm using automatic differentiation. It allows gradients to pass from the embedding locations into a decoded tree and the likelihood function. We implemented soft-NJ in Dodonaphy, a software for likelihood-based phylogenetics using hyperbolic space. We demonstrate this newfound ability for phylogenetic optimisation with two modes of gradient based inference: maximum likelihood (ML) and Bayesian variational inference (VI). Variational inference is a Bayesian technique for approximating the posterior distribution with simple and tractable distributions, as reviewed in [3]. It indirectly finds the variational distribution that minimises the KL divergence between the unnormalised posterior and the variational distribution. This avoids the need to compute the normalising constant in Bayes theorem or to resort to time consuming Markov chain Monte Carlo sampling, potentially offering significant computational speed ups. Recently, phylogenetic variational inference has garnered increasing attention [46, 45, 19, 20] as a promising way to cope with high dimensionality inherent to Bayesian phylogenetics. Concurrently, variational approximations have extended to general manifolds, such as hyperbolic space, where the variational density sits on the manifold [44, 41, 31]. We combine these two paradigms to perform variational Bayesian phylogenetic inference on hyperbolic manifolds. To perform variational inference on the space of phylogenies, we equip each of \(n\) embedded taxon locations with a variational distribution (a projected multivariate-Normal) in hyperbolic space \(\mathbb{H}^{d}\). We optimise the set of \(n\) probability distributions in hyperbolic space. We can quickly draw samples from these distributions and compute their neighbour joining tree of the sample. This yields a distribution of phylogenetic trees that approximate the posterior distribution. It's worth noting that soft-NJ is not limited to phylogenetics; this contribution opens up a wide range of continuous gradient-based inference methods to any hierarchically structured data. Recent advances in machine learning have also pushed for learning embeddings for hierarchical data such as in natural language processing [5, 26, 29]. Recent machine learning problems also attempt to optimise tree structures. Soft-NJ provides an alternative algorithm to search through the space of trees in a differentiable manner and need not be constrained to phylogenetic problems. 
Additionally, a similar approach to soft-NJ could be applied to the UPGMA algorithm, which is widely used outside of phylogenetics. ## 2 System and methods In this section, we provide the necessary background for our proposed phylogenetic embedding technique. First, we recap how phylogenetic models are used for tree inference in maximum likelihood and Bayesian approaches, in particular, variational Bayesian inference. We then introduce hyperbolic space and how phylogenies can be embedded in this space. ### Phylogenetic Inference Phylogenetic models compute the likelihood of an aligned set of genetic sequences \(D\), which are observed at the tips given a bifurcating tree \(T\)[10]. Let \(T=T(\tau,\ell_{\tau})\) denote an unrooted bifurcating tree with topology \(\tau\) and continuous branch lengths \(\ell_{\tau}\). A phylogenetic model (denoted \(\mathcal{M}\)) is a Markov model between the four nucleotide states \(A,C,G,T/U\) along the tree at each site in the alignment [40]. It has six substitution rates which sum to one and four equilibrium frequencies which also sum to one. We use the GTR model and a simplified version of it called JC69 [17] to compute the likelihood of the alignment data \(D\) given a tree \(p(D|T,\mathcal{M})\). ### Bayesian Phylogenetic Models Bayesian phylogenetics includes prior knowledge of each parameter and seeks the posterior distribution over phylogenetic trees given a multiple sequence alignment. The posterior is \(p(T,\mathcal{M}|D)\propto p(D|T,\mathcal{M})p(T)p(\mathcal{M})\), with, in general, an unknown normalising constant. We specify the prior probability of an unrooted tree \(p(T)\) using a Gamma-Dirichlet model [33]. The Gamma-Dirichlet prior invokes a Gamma distribution (shape 1, rate 0.1) over the total tree length before dividing this length into the branches with an equally weighted Dirichlet distribution [33]. The GTR model's prior \(p(\mathcal{M})\) is a flat Dirichlet for the six substitution rates and a flat Dirichlet on the four equilibrium frequencies. ### Variational Inference Variational inference minimises some measure of divergence between an approximating function \(q\) from a family of distributions \(q\in\mathcal{Q}\) and the posterior target \(p(T,\mathcal{M}|D)\). We use the standard KL-divergence between the two distributions, which, after dropping \(\mathcal{M}\) for brevity and working in log space, is: \[\mathrm{KL}\big(q(T)\,||\,p(T|D)\big) = \mathbb{E}[\log q(T)]-\mathbb{E}[\log p(T|D)]\] \[= \mathbb{E}[\log q(T)]-\mathbb{E}[\log p(T,D)]+\log p(D)\] where the expectations are taken with respect to \(q(T)\). The marginal likelihood of the data \(\log p(D)\) is intractable to compute; however, since it is constant with respect to \(q\), we can simply drop this term and optimise to the same optimum. As a result, the so-called evidence lower bound (ELBO) becomes the objective to maximise: \[\mathcal{L}_{\mathrm{ELBO}}=\mathbb{E}[\log p(T,D)]-\mathbb{E}[\log q(T)]\] Maximising the ELBO is equivalent to minimising the KL-divergence between the target \(p(T|D)\) and variational distributions \(q(T)\) for any given data set. 
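To make the objective concrete, the following is a minimal sketch of a Monte Carlo estimate of the ELBO in PyTorch; `log_joint`, `q_sample`, and `q_log_prob` are placeholder callables for illustration and are not Dodonaphy's API. Assuming the sampling is reparameterised so that gradients flow through `q_sample`, maximising this estimate with a stochastic optimiser such as Adam minimises the KL divergence above in expectation.

```python
import torch

def elbo_estimate(log_joint, q_sample, q_log_prob, n_samples=32):
    """Monte Carlo estimate of the ELBO, E_q[log p(T, D)] - E_q[log q(T)].

    log_joint(T):  unnormalised log posterior, log p(D | T) + log p(T)
    q_sample(n):   draws n trees (or embedding samples) from q
    q_log_prob(T): log density of q at those samples
    """
    samples = q_sample(n_samples)
    # Average the log ratio between the joint and the variational density.
    return (log_joint(samples) - q_log_prob(samples)).mean()
```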
#### 2.3.1 Improved VI The chosen variational distribution \(q(T)\) may be too simple to capture the true posterior distribution, so to allow for more expressive variational distributions, they can be _boosted_ with a mixture model. Boosting is the process of attaining stratified samples over multiple variational distributions \(q_{k}(T)\), each with weight \(\alpha_{k},\;k\in 1,2,\ldots,K\). Each sample can be computed with \(M\) importance samples as done in the stratified importance weighted auto-encoder (SIWAE) [27]: \[\mathcal{L}_{\mathrm{SIWAE}}=\mathbb{E}_{q}\Big[\log\frac{1}{M}\sum_{m=1}^{M}\sum_{k=1}^{K}\alpha_{k}\frac{p(T,D)}{q_{k}(T)}\Big]\] Compared to other objectives, this version of the ELBO has improved expressivity and encourages the mixtures not to collapse onto each other [27, 4]. We optimise the parameters of the variational distribution to maximise the SIWAE. Unless otherwise stated, we selected the hyper-parameters \(M=1\) importance samples, \(K=1\) boosts (mixtures) with equal initial weights \(\alpha_{k}=1/K\). We use PyTorch's Adam optimiser with a learning rate of \(0.1\). The learning rate decayed according to \((t+1)^{-0.5}\), where \(t\) is the iteration number. ### Hyperbolic Space We model \(d\)-dimensional hyperbolic space by a hyperboloid: \[\mathbb{H}^{d}=\{u\in\mathbb{R}^{d+1}:\langle u,u\rangle=-1\},\] where the Lorentz inner product is \[\langle u,v\rangle=-u_{0}v_{0}+u_{1}v_{1}+\cdots+u_{d}v_{d}.\] This is a sheet sitting in the ambient space \(\mathbb{R}^{d+1}\). The distance between two points on the sheet is \[d_{\kappa}(u,v)=\frac{1}{\sqrt{-\kappa}}\mathrm{arcosh}(-\langle u,v\rangle),\] where \(\kappa<0\) is the curvature of the manifold. Based on previous work, we select three dimensions \(d=3\) [23]. ### Encoding Trees in \(\mathbb{H}^{d}\) To initialise an embedding in hyperbolic space, we take a tip-tip distance matrix from a given phylogenetic tree: \(D_{T}\). Dodonaphy then uses Hydra\(+\) to embed each taxon with a location \(\vec{z}_{i}\) in hyperbolic space with \(d\) dimensions \(\vec{z}_{i}\in\mathbb{H}^{d}\). Hydra\(+\) is a recent adaptation of multi-dimensional scaling to hyperbolic space [18]. It is an optimisation algorithm that minimises the stress of the embedding, that is, it minimises the difference between the given distance matrix \(D_{T}\) and the pairwise distances in hyperbolic space \(D_{ij}=d_{\kappa}(\vec{z}_{i},\vec{z}_{j})\). The result is a set of embedding locations \(\vec{z}_{i}\in\mathbb{H}^{d}\), one for each tip \(i\) in the phylogenetic tree. Note that this is an approximate embedding technique, so an encoded tree may not decode back to the originally given tree. ### Encoding Tree Distributions in \(\mathbb{H}^{d}\) To encode a variational distribution over trees in hyperbolic space, each taxon requires a variational distribution in \(\mathbb{H}^{d}\). To initialise an embedding, for each taxon, we centred a distribution around the point \(\vec{z}_{i}\) as in the previous section. We set the covariance to be diagonal, i.e. mean-field, using a coefficient of variation of 20 compared to the smallest tip-tip distance. Each variational distribution is a multivariate Normal \(\mathcal{N}(\mu,\Sigma)\) projected from the tangent space at \((1,0,0,...)^{\intercal}\), which is Euclidean space \(\mathbb{R}^{d}\). Points \(z\in\mathbb{R}^{d}\) are projected onto the hyperboloid by modifying the first coordinate: \[z_{0}\mapsto\sqrt{1+\sum_{i=1}^{d}z_{i}^{2}} \tag{1}\] and the remaining coordinates \(z_{1},...,z_{d}\) remain the same. The technique is computationally cheap and previously produced similar results to wrapping using an exponential transformation [23, 28]. 
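As a concrete illustration of the projection in Eq. 1 and of the hyperboloid distance \(d_{\kappa}\), a minimal PyTorch sketch might look as follows; the function names are ours, not Dodonaphy's.

```python
import torch

def project_to_hyperboloid(z):
    """Lift z in R^d (tangent space at (1, 0, ..., 0)) onto the hyperboloid
    by setting the first coordinate to sqrt(1 + sum_i z_i^2), as in Eq. 1."""
    z0 = torch.sqrt(1.0 + (z ** 2).sum(dim=-1, keepdim=True))
    return torch.cat([z0, z], dim=-1)

def hyperbolic_distance(u, v, curvature=-1.0):
    """d_k(u, v) = arcosh(-<u, v>_L) / sqrt(-k), where <u, v>_L is the
    Lorentz inner product -u_0 v_0 + u_1 v_1 + ... + u_d v_d."""
    lorentz = -u[..., 0] * v[..., 0] + (u[..., 1:] * v[..., 1:]).sum(dim=-1)
    # On the manifold -<u, v>_L >= 1; clamping guards against rounding error.
    return torch.acosh(torch.clamp(-lorentz, min=1.0 + 1e-7)) / (-curvature) ** 0.5
```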
## 3 Algorithm We are now set up to describe our algorithm. First, we embed genetic sequences as points (or continuous distributions for VI) in hyperbolic space using Hydra+. Then we work with the embedded data to optimise the tree (or tree distribution). From a set of embedded points, we compute the neighbour joining tree and compute the cost function \(C\) (e.g. the phylogenetic likelihood or SIWAE) on that tree. The overall goal is to maximise the cost function by optimising the embedding parameters (locations or variational distributions). ### Differentiable Optimisation in Tree Space We compute the gradient of the cost function \(C\) with respect to the embedding parameters using automatic differentiation. Automatic differentiation tracks every arithmetic operation in a numerical procedure to provide the analytical derivative of the procedure. From the \(n\) embedding locations \(\vec{z}_{i}\in\mathbb{H}^{d}\) we compute the pairwise distances \(D\), then the neighbour joining tree in the space of trees \(T\in\mathcal{T}^{n}\), which has branch lengths that feed into the objective function \(C\in\mathbb{R}\): \[(\mathbb{H}^{d})^{n}\xrightarrow{d_{\kappa}}\mathbb{R}^{\binom{n}{2}}\xrightarrow{\text{soft-NJ}}\mathcal{T}^{n}\xrightarrow{C}\mathbb{R}. \tag{2}\] Automatic differentiation computes the chain rule through this series of procedures to guide the optimiser. The impasse is that neighbour-joining is not a differentiable algorithm since it selects taxa recursively. Below we present a differentiable version of neighbour joining based on the soft-sort algorithm. ### Soft-NJ From a set of \(n\) leaf locations \(\{u_{i}\}_{i=1}^{n}\) on the hyperboloid, we decode a tree using soft neighbour-joining -- passing gradients from leaf locations into branch lengths on the tree. Neighbour-joining proceeds by recursively connecting the _closest_ two taxa according to the arg-min of [35] \[Q_{ij}=(n-2)d(u_{i},u_{j})-\sum_{k}d(u_{i},u_{k})-\sum_{k}d(u_{j},u_{k}).\] To select this minimum in a differentiable manner, we make use of the soft-sort algorithm [32]. Soft-sort is a continuous relaxation of the arg-sort operator on a vector with a temperature parameter \(\tau\) that controls the degree of approximation and impacts the gradient flow throughout the optimisation. A colder temperature, closer to zero, reverts the soft-NJ algorithm back to the discrete (hard) version. We use Soft-sort to create a relaxed permutation matrix of the flattened upper-triangle component \(\vec{Q}\) of the \(Q\) matrix as follows: \[P=\text{softmax}\Big(\frac{-|\text{sort}(\vec{Q})\mathbb{1}^{T}-\mathbb{1}\vec{Q}^{T}|}{\tau}\Big)\] where \(\mathbb{1}\) is a vector of ones. To extract the arg-min of \(\vec{Q}\) we simply multiply the last column of the permutation matrix \(P\) by the vector \([1,2,3,...]^{T}\). This leads to a one-hot vector indexing the arg-min of \(\vec{Q}\), which is easily unravelled into row and column one-hot vectors to use in neighbour joining. Each of these steps is differentiable, allowing gradients to pass from \(Q\) into the branch lengths on the decoded tree \(T\). In a small extension to the algorithm, we break any possible ties in \(P\) by performing Soft-Sort twice. We break ties differentiably by selecting the first minimum element of \(\vec{Q}\) using the cumulative sum function. After obtaining the permutation matrix \(P\), we extract its last column denoted \(P^{l}\). We then apply soft-sort to \(P^{l}C\), where \(C\) is the cumulative sum \(C_{i}=\sum_{k=1}^{i}P_{k}^{l}\). This modification ensures that the first minimum element in \(P^{*}\) is selected, guaranteeing a well-defined output. 
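The core of soft-NJ is the relaxed selection of the minimal entry of \(\vec{Q}\). A minimal sketch of such a soft arg-min in PyTorch, in the spirit of soft-sort but not Dodonaphy's exact implementation (and without the tie-breaking extension), is shown below. The resulting weights can be unravelled into the row and column of the chosen pair and used to form branch lengths exactly as in classical neighbour joining.

```python
import torch

def soft_argmin(q_flat, tau=1e-4):
    """Differentiable relaxation of one-hot arg-min selection via soft-sort.

    q_flat: flattened upper triangle of the neighbour-joining Q matrix.
    Returns a vector summing to one that concentrates on the arg-min as tau -> 0.
    """
    sorted_q, _ = torch.sort(q_flat, descending=True)
    # |sort(q) 1^T - 1 q^T|: rows index sorted positions, columns original entries.
    pairwise = torch.abs(sorted_q.unsqueeze(-1) - q_flat.unsqueeze(0))
    perm = torch.softmax(-pairwise / tau, dim=-1)  # relaxed permutation matrix
    return perm[-1]  # last row ~ one-hot at the minimum entry

# Gradients flow from the soft selection back into the Q values:
q = torch.tensor([3.0, -1.2, 0.5], requires_grad=True)
(soft_argmin(q) * q).sum().backward()  # soft min(q); q.grad is now populated
```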
### Change of Variables Jacobian In light of Eq. 2, we are sampling trees by changing variables from \(\mathbb{H}^{d\times n}\) to \(\mathcal{T}^{n}\). To account for density changes, we must include the determinant of each transformation before \(\mathcal{T}^{n}\). These changes are for sampling in \(\mathbb{H}^{d\times n}\) (which is a projection from Euclidean Space as in [23, 7]), transforming by \(d_{\kappa}\) (which has no associated Jacobian), and transforming by soft-NJ. The Jacobian of neighbour-joining is analytically non-trivial because of the recursive nature of the algorithm. However, the Jacobian of this series of transformations with soft-NJ is easily computed using automatic differentiation. ## 4 Implementation This algorithm is implemented in Dodonaphy, a software for phylogenetic inference via hyperbolic embeddings. It uses several Python packages, notably PyTorch for automatic differentiation [30] and DendroPy for some tree handling [38]. Dodonaphy is freely available at [https://github.com/mattapow/dodonaphy](https://github.com/mattapow/dodonaphy). It has an easy to use command line interface and example input data for analysis. The second release of Dodonaphy, which focuses on gradient-based inference, is available on Zenodo at: [https://doi.org/10.5281/zenodo.8357888](https://doi.org/10.5281/zenodo.8357888). Additionally, the results and figures can be reproduced using the scripts available at: [https://github.com/mattapow/vi-fig-scripts](https://github.com/mattapow/vi-fig-scripts). ## 5 Discussion In this section we will demonstrate the empirical performance of gradient-based tree inference using soft-NJ. We will evaluate its performance for both maximum likelihood and variational inference. We have selected eight standard benchmark datasets in phylogenetics taken from [21, 42]. These datasets are DNA and RNA multiple sequence alignments with between 27 and 64 tip nodes. ### Maximum Likelihood Optimisation We compared the performance of our proposed hyperbolic embedding technique against two state-of-the-art maximum likelihood phylogenetic programs: IQ-TREE and RAxML-NG. We initialise an embedding in \(\mathbb{H}^{3}\) with curvature \(\kappa=-100\) by embedding the BioNJ tree distances [13]. We did this by following the hyperbolic multi-dimensional scaling approach of Hydra+ [18]. We then optimise the embedding locations, the curvature, and the parameters of the GTR Markov model for \(2\,000\) epochs. Figure 1 compares the final tree found for DS1 to IQ-TREE. Although the resulting tree is generally similar to IQ-TREE, there are notable differences. Both the topology and, on close inspection, branch lengths are slightly different. It's possible that the continuous parameters are not fully optimised by Dodonaphy because it is simultaneously dealing with optimising over tree topologies in the embedding space. To address this we propose a hybrid approach called Dodonaphy+ where we take the tree that Dodonaphy produces and optimise its continuous parameters using the BFGS optimiser available in IQ-TREE. To summarise these differences for all datasets we present the log likelihood under the model in table 2. Dodonaphy consistently outperformed BioNJ, demonstrating Dodonaphy's ability to improve the likelihood. Note that the (negative) log-scale on the vertical axis downplays the significantly poorer performance of BioNJ. Figure 1: Maximum likelihood tree found by IQ-TREE compared to Dodonaphy for data set 1. Dodonaphy+ improves the maximum likelihood compared to the original Dodonaphy to varying degrees. 
In DS5 the improvement is slight (0.7) but the change is significant for DS7 (518.8). We note that after setting the curvature at \(\kappa=-100\), the final curvatures across all data sets ranged from \(-58.28\) (DS1) to \(-75.6\) (DS3). Previous works have quantified the tree-likeness of phylogenetic data [15] as well as the relationship between curvature and the error on the four-point condition [43]. These values all fall in the acceptable range previously found on these datasets [23]. Allowing the curvature to freely change in the optimisation process avoids imposing an arbitrary value. ### Geometric Frustration In practice, the state-of-the-art methods still attain better maximum likelihood estimates, indicating that the optimisation process attains a non-global optimum. To overcome this issue, stochastic algorithms like Adam and stochastic gradient descent are commonly employed. In this case, non-global optima can be interpreted as geometrically frustrated embedding sets, where the path to the global optimum is not along a monotone path. However, because the tree structure is associated with the embedding, whole sets of taxa could be rearranged whilst leaving the decoded tree unchanged. The appeal of doing this is the potentially altered neighbourhood of trees after rearrangement, providing a way out of the local optima. One way to escape such optima would be to re-embed the tree in a new configuration in a way that preserves the outputted tree. This is the pre-image of a given tree under neighbour joining. For example, isometries of hyperbolic space itself are generated by the Lorentz group and (by definition) will lead to points decoding to identical trees. However, these do not alleviate the embedding frustration. Figure 2: Difference in maximum log likelihood estimates compared to RAxML across all datasets DS1-8. The vertical axis is negative logarithmic below \(-1\) and linear above it. Exploring any other embeddings in the pre-image of a tree could produce less geometrically frustrated embeddings that can then continue to be optimised. For example, swapping the locations of two cherries could decode to the same tree. Algebraic structures on trees [12] may shed some light on this; however, determining the full pre-image of neighbour joining from the embedding space is, to our knowledge, an open question. ### Variational Bayesian Inference Next, we use embedded distributions of trees to perform variational inference over the space of phylogenies. We take the tip-tip distances from the IQ-TREE and embed each taxon using Hydra+. We then associate each taxon location with a variational distribution centred at this point. The distributions are multivariate Normals in the tangent space of the origin projected by Eq. 1. We optimise the parameters of these variational distributions and a point estimate of the GTR model parameters to maximise the SIWAE. After optimising the SIWAE for 200 epochs, we drew \(10^{4}\) tree samples from the final variational distribution. #### 5.3.1 Parameter Estimation We compared our results to the state-of-the-art Metropolis Coupled Markov Chain Monte Carlo (MC\({}^{3}\)) phylogenetic software MrBayes [34]. We ran MrBayes with one cold chain and three heated chains for \(10^{7}\) iterations. We sampled \(10^{4}\) trees evenly throughout this run as an approximation of the posterior and discarded the first 10%. We use the same prior and likelihood models as in MrBayes for a fair comparison between posterior probabilities. 
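For illustration, one reparameterised tree sample from the per-taxon variational distributions described above could be drawn along the following lines, reusing the illustrative `project_to_hyperboloid` and `hyperbolic_distance` helpers sketched earlier; parameter names are ours and not Dodonaphy's API.

```python
import torch

def sample_tip_distances(mu, log_sigma, curvature=-1.0):
    """Draw one set of taxon embeddings from mean-field Normals in the tangent
    space at the origin, project them onto the hyperboloid (Eq. 1), and return
    the pairwise hyperbolic distance matrix that (soft-)NJ decodes into a tree.

    mu, log_sigma: tensors of shape (n_taxa, d) parameterising the variational
    distribution of each taxon.
    """
    eps = torch.randn_like(mu)
    z = mu + eps * log_sigma.exp()        # reparameterised sample in R^d
    x = project_to_hyperboloid(z)         # lift onto the hyperboloid
    return hyperbolic_distance(x.unsqueeze(1), x.unsqueeze(0), curvature)
```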
Figure 3: Variational approximation in \(\mathbb{H}^{3}\) compared to MCMC. Comparison of the split lengths (left), showing internal splits: red diamonds, and leaf splits: blue circles. Marker opacity is set by the frequency of the split in MrBayes’ estimate of the posterior. Total tree length (kernel density) estimates (right) in the final samples. The results show moderate agreement between the branch lengths of the posterior, figure 3. The estimated split frequencies and total tree lengths compare reasonably to MrBayes when considering the standard errors shown. An exact match is not expected since VI is an approximating algorithm. The support of the inferred tree length closely resembles that of MrBayes, although it is slightly more diffuse. #### 5.3.2 Performance Evaluation We evaluated the performance of Dodonaphy in comparison to several state-of-art inference techniques in variational Bayesian phylogenetics. We build on a summary of the results recently compiled in [24] on the same eight datasets. For this section we used the same model of evolution (Jukes-Cantor [17]) and prior distribution used in these comparisons. The prior is uniform across tree topologies and exponential \(\mathrm{Exp}(10)\) in the branch lengths. We initialised Dodonaphy to the maximum likelihood tree from IQ-TREE before running the optimisation. Then we estimated the marginal likelihood of the data over the phylogenetic parameters \(\theta\) using variational Bayesian importance sampling [11]: \[p(D)=\int p(D|\theta)p(\theta)d\theta.\] This estimator uses the variational distribution as an importance distribution for importance sampling: \[\hat{p}(D)=\frac{1}{N}\sum_{i=1}^{N}\frac{p(D|\tilde{\mathbf{\theta}}_{i})p(\tilde {\mathbf{\theta}}_{i})}{q(\tilde{\mathbf{\theta}}_{i})},\] where \(q(\tilde{\mathbf{\theta}}_{i})\) is the variational distribution and \(\tilde{\mathbf{\theta}}_{i}\sim q(\tilde{\mathbf{\theta}})\). We used \(N=1000\) samples from the variational distribution to compute this marginal estimator. Table 1 presents a comparison of Dodonaphy with state-of-the-art variational inference methods. The results from stepping stone MCMC in MrBayes is also included as a baseline comparison. Note that while VBPI-GNN has excellent results it is given topologies as inputs rather than performing topological inference. Geophy and \(\phi\)-CSMC are the current state-of-art implementations performing topological and continuous parameter phylogenetic inference. Dodonaphy generally provides poorer estimates of the posterior than competing methods. Unlike the other phylogenetic variational techniques, the marginal log-likelihood was overestimated by Dodonaphy in some of \begin{table} \begin{tabular}{l r r r r r r r r} \hline Dataset & DS1 & DS2 & DS3 & DS4 & DS5 & DS6 & DS7 & DS8 \\ \hline MrBayes & -7 108.42 & -26 367.57 & -33 735.44 & -13 330.06 & -8 214.51 & -6 724.07 & -37 332.76 & -8 649.88 \\ VBPI-GNN & -7 108.41 & -26 367.73 & -33 735.12 & -13 329.94 & -8 214.64 & -6 724.37 & -37 332.04 & -8 650.65 \\ Geophy LOO(3)+ & -7 116.09 & -26 368.54 & -33 735.85 & -13 337.42 & -8 233.89 & -6 735.90 & -37 358.96 & -8 660.48 \\ \(\phi\)-CSMC & -7 290.36 & -30 568.49 & -33 798.06 & -13 582.24 & -8 367.51 & -7 013.83 & - & -9 209.18 \\ Dodonaphy & -7 006.05 & -25 786.58 & -32 982.86 & -12 862.52 & -7 211.90 & -7 054.37 & -37 804.35 & -9 605.74 \\ \hline \end{tabular} \end{table} Table 1: Comparison of marginal log-likelihood estimates. the datasets. 
This is consistent with a variational approximation that is concentrated on regions of heightened likelihood. It's possible that after initialisation at the embedded maximum likelihood tree, the variational distribution optimised into a local optima. The suboptimal results could also be attributed to the continuous hyperbolic variational approximation. Underlying this model is the assumption that trees with similar tip-tip distances share similar posterior likelihoods. This assumption is a heuristic that provides an efficient way to encode tree distributions but may constrain the flexibility of the distribution. These findings are also consistent with a variational distribution that is too simple, calling for a more expressiveness. We explore this by boosting the variational distribution. ### Effect of Boosting Whilst boosting improves the expressiveness of the variational distribution, it also increases the computational demand of variational inference by a factor of \(K\), so we are interested in the minimal number of mixtures required. To understand the number of boosts required to capture the embedded posterior distribution of trees, we fixed the number of importance samples at \(M=3\) and varied the number of mixtures \(K\) from one to ten. We optimised for 200 epochs starting from the IQ-TREE distances. The final SIWAE value suggests that the presence of additional mixtures improves the variational approximation, although the improvement slowly saturates after \(M=3\), figure 4. Having this flexible variational family increases the inference accuracy and opens up more complex tree distributions. ### Outlook Hyperbolic tree embeddings, through the use of soft-NJ, provide a differentiable way to efficiently encode trees and even distributions of trees. This advancement paves the way for continuous optimisation over low-dimensional representations of tree spaces. It opens up differentiable methods for a broad range of inference techniques to tackle phylogenetics. Figure 4: Effect of the number of boosts on the final SIWAE estimate for DS1. We demonstrated two applications in maximum likelihood and variational inference. However, the challenges of non-convexity and poor variational approximations pose open challenges to the research community to fully realise the potential of hyperbolic tree optimisation. Notably, finding the pre-image of the decoding process could alleviate geometric frustration aiding optimisation challenges. Additionally, exploring alternative approximating functions or transitioning to full-rank variational approximation may increase the variational quality of this approach. ## 6 Competing interests No competing interest is declared. ## 7 Author contributions statement M.M. and M.F conceived and analysed the experiments. M.M. conducted the experiments and wrote the manuscript. M.F. reviewed the manuscript. ## 8 Acknowledgments The authors thank the reviewers for their valuable suggestions. This work was supported by the Australian Government through the Australian Research Council (project number LP180100593). Computational facilities were provided by the UTS eResearch High Performance Computer Cluster.
2309.14892
Combinatorial Characterization for Global Identifiability of Separable Networks with Partial Excitation and Measurement
This work focuses on the generic identifiability of dynamical networks with partial excitation and measurement: a set of nodes are interconnected by transfer functions according to a known topology, some nodes are excited, some are measured, and only a part of the transfer functions are known. Our goal is to determine whether the unknown transfer functions can be generically recovered based on the input-output data collected from the excited and measured nodes. We introduce the notion of separable networks, for which global and so-called local identifiability are equivalent. A novel approach yields a necessary and sufficient combinatorial characterization for local identifiability for such graphs, in terms of existence of paths and conditions on their parity. Furthermore, this yields a necessary condition not only for separable networks, but for networks of any topology.
Antoine Legat, Julien M. Hendrickx
2023-09-26T12:50:05Z
http://arxiv.org/abs/2309.14892v1
# Combinatorial Characterization for Global Identifiability ###### Abstract This work focuses on the generic identifiability of dynamical networks with partial excitation and measurement: a set of nodes are interconnected by transfer functions according to a known topology, some nodes are excited, some are measured, and only a part of the transfer functions are known. Our goal is to determine whether the unknown transfer functions can be generically recovered based on the input-output data collected from the excited and measured nodes. We introduce the notion of separable networks, for which global and so-called local identifiability are equivalent. A novel approach yields a necessary and sufficient combinatorial characterization for local identifiability for such graphs, in terms of existence of paths and conditions on their parity. Furthermore, this yields a necessary condition not only for separable networks, but for networks of any topology. ## I Introduction This paper addresses the identifiability of dynamical networks in which node signals are connected by causal linear time-invariant transfer functions, and can be excited and/or measured. Such networks can be modeled as directed graphs where each edge carries a transfer function, and known excitations and measurements are applied at certain nodes. ### _Problem Statement_ We consider the identifiability of a network matrix \(G(q)\), where the network is made up of \(n\) node signals stacked in the vector \(w(t)=[w_{1}(t)\;\;\;w_{2}(t)\;\cdots\;w_{n}(t)]^{\top}\), known external excitation signals \(r(t)\), measured node signals \(y(t)\) and unmeasured noise \(v(t)\) related to each other by: \[\begin{split} w(t)&=G(q)w(t)+Br(t)+v(t)\\ y(t)&=Cw(t),\end{split} \tag{1}\] where matrices \(B\) and \(C\) are binary selections indicating respectively the \(n_{B}\) excited and \(n_{C}\) measured nodes, forming sets \(\mathcal{B}\) and \(\mathcal{C}\) respectively. Matrix \(B\) is full column rank and each column contains one \(1\) and \(n-1\) zeros. Matrix \(C\) is full row rank and each row contains one \(1\) and \(n-1\) zeros. The nonzero entries of the transfer matrix \(G(q)\) define the network topology: \(G_{ij}(q)\) is the transfer function from node \(j\) to node \(i\). It is represented by an edge \((j,i)\in\mathcal{E}\) in the graph, where \(\mathcal{E}\) is the set of all edges, each corresponding to a nonzero entry of \(G(q)\). Some of those transfer functions are known and collected in the matrix \(G^{\bullet}(q)\), and the others unknown, collected in matrix \(G^{\circ}(q)\), such that \(G(q)=G^{\bullet}(q)+G^{\circ}(q)\). The known edges (i.e. the edges corresponding to known transfer functions) are collected in set \(\mathcal{E}^{\bullet}\), the unknown ones in \(\mathcal{E}^{\circ}\), and they form a partition of the set of all edges \(\mathcal{E}=\{\mathcal{E}^{\bullet},\mathcal{E}^{\circ}\}\). We denote the number of unknown transfer functions by \(m_{\circ}\triangleq|\mathcal{E}^{\circ}|\). We assume that _the input-output relations between the excitations \(r\) and measurements \(y\) have been identified_, and that the network topology is known. From this knowledge, we aim at recovering the unknown transfer functions \(G^{\circ}(q)\). ### _State of the Art_ The model (1) has recently been the object of a significant research effort. Network identifiability was first introduced in [1], in case the whole network is to be recovered. Conditions for the identification of a single transfer function are derived in [2, 3]. 
Studying the influence of rank-reduced or correlated noise under certain assumptions yields less conservative identifiability conditions [4, 5, 6, 7]. It turns out that under some assumptions, identifiability of the network, i.e. the ability to recover a transfer function or the whole network from the input-output relation, is a generic notion: Either _almost all_ transfer matrices corresponding to a given network structure are identifiable, in which case the structure is called _generically identifiable_, or none of them are. A number of works study generic identifiability when all nodes are excited or all nodes are measured, i.e. when \(B\) or \(C=I\)[8, 9]. Considering the graph of the network, path-based conditions on the allocation of measurements (resp. excitations) in the case of full excitation (resp. measurement) are derived in [10] (resp. [11]). Reformulating these conditions by means of disjoint trees in the graph, the authors of [12, 13, 14] develop scalable algorithms to allocate excitations/measurements in case of full measurement/excitation. In case of full measurement, [15] derives path-based conditions for the generic identifiability of a subset of transfer functions, with noise. While the conditions in [10, 11] apply to generic identifiability i.e. for almost all transfer functions, [16] extends these results to the stronger requirement of identifiability _for all_ (nonzero) transfer matrices corresponding to a given structure, and [17] provides conditions for the outgoing edges of a node, and the whole network under the same conditions. As mentioned, the common assumption in all these works is that is that either all nodes are excited, or they are all mea sured. In [18], this assumption is relaxed and generic identifiability with partial excitation and measurement is addressed for particular network topologies. For acyclic networks, [19] gives necessary conditions, introduces the transpose network and shows that identifiability of the transpose network and its original network is equivalent. For an arbitrary topology, [20] provides for noise exploitation an elegant reformulation as an equivalent network model, where noise is cast into excitation signals. [21, 22, 23] provide necessary conditions for network identifiability, but do not handle a priori known/fixed transfer functions, which could lead to less conservative conditions. In the general case of arbitrary topology, partial excitation and measurement, we introduced in [24] the notion of local identifiability, i.e. only on a neighborhood of \(G(q)\). Local identifiability is a generic property, necessary for generic identifiability and no counterexample to sufficiency is known to the authors, i.e. no network which is locally identifiable but not globally identifiable. We derived algebraic necessary and sufficient conditions for generic local identifiability for both the whole network, and a single transfer function. The algebraic conditions of [24] allow rapidly testing local identifiability for any given network, but finding a graph-theoretical characterization, akin to what was done in the full excitation case [10], remains an open question, even though it is known that the property depends solely on the graph. Such characterization would in particular pave the way for optimizing the selection of nodes to be excited and measured, alike the work in [11] in the full measurement case. In this line of work, we introduced in [25]_decoupled_ identifiability, necessary for local identifiability, see Section II. 
We extended the algebraic characterization of [24] when some transfer functions are known/fixed a priori, and developed it in terms of closed-loop transfer matrices \(T(G)=(I-G)^{-1}\), which led to some necessary and some sufficient path-based conditions for decoupled identifiability. An approach different to all that precedes is to that network dynamics are known, and aim at identifying the topology from input/output data. This problem is referred to as topology identification, and is addressed in e.g. [26, 27, 28] studies diffusively coupled linear networks, which can be represented by undirected graphs. In this paper, we introduce _separable networks_, which are a generalization of the decoupled version of the network, allowing for different topologies in excited and measured subgraphs, see Section III and Fig. 1. Thanks to their particular structure, global and local identifiability are equivalent on those networks, thus global identifiability can be studied with the algebraic tools of [24]. We obtain a necessary and sufficient _combinatorial characterization_ of global identifiability, in terms of existence of paths and conditions on their parity. Since the decoupled network is a particular case of separable network, this condition can be formulated on the decoupled network of networks of any topology, _not only separable networks_. And generic decoupled identifiability is necessary for generic global identifiability [25], hence the necessary condition applies to a network of any topology. ### _Framework_ **Assumptions:** We consider model (1). Consistently with previous work (e.g. [9, 10, 17]), we assume: 1. The network is well-posed and stable, that is \((I-G(q))^{-1}\) is proper and stable. 2. All the entries of \(G(q)\) are proper transfer functions. 3. The network is stable in the following sense : \(|\lambda_{i}|<1\) for each eigenvalue \(\lambda_{i}\) of \(G\). Throughout the paper, we develop our results without exploiting noise signals. However, under some mild assumptions, noise signals \(v(t)\) can play the same role as excitation signals, as [20]: the network is reformulated as an equivalent network model, where noise is cast into excitations. We suppose that there are exactly \(n_{B}n_{C}\) unknown transfer functions, i.e. as many as the number of (excitation - measurement) pairs, and we address the identifiability of the _whole_ network, i.e. all unknown transfer functions. If there are more unknown transfer functions, then it is not identifiable since there are more unknowns than (input, output) data. **Genericity:** We will focus on _generic_ properties: we say that a property is generic if it either holds (i) for _almost all_ variables, i.e for all variables except possibly those lying on a lower-dimensional set [29, 30] (i.e. a set of dimension lower than \(m_{\circ}\), the number of unknown transfer functions), or (ii) for no variable. For example, take a polynomial \(p\). The _nonzeroness_ of \(p(x)\) is a generic property of \(x\): either (i) \(p(x)\neq 0\) for all \(x\) except its roots, or (ii) \(p\) is the zero polynomial, which returns zero for all \(x\). Consistently with [24, 25], we consider a single frequency: instead of working with transfer functions \(G_{ij}(q)\) and their transfer matrix \(G(q)\), we work with scalar values \(G_{ij}\) and their matrix \(G\in\mathbb{C}^{n\times n}\). 
Conceptually, our generic results directly extend to the transfer function case: if one can recover a \(G_{ij}(z)\) at a given frequency \(z\) for almost all \(G\) consistent with a network, then one can also recover it at all other frequencies. In the remainder, for the sake of simplicity, we work in the scalar setup, hence omit the \((q)\). The framework of this paper is summarized below: * Partial excitation and measurement * Allows for the presence of known transfer functions next to the unknown ones * No use of noise, scalar setup * _Global_ identifiability on _separable_ networks \(\Rightarrow\) Applies on decoupled identifiability of all networks * _Generic_ identifiability of the _whole_ network * We assume \(n_{B}n_{C}=m_{\circ}\), i.e. as much data as unknowns ## II Identifiability We remind different notions of identifiability relevant to our work [1, 10, 24, 25]. In order to lighten notations, we denote \(T(G)=(I-G)^{-1}\). **Definition 1:** A network is _globally identifiable_ at \(G\) from excitations \(\mathcal{B}\) and measurements \(\mathcal{C}\) if, for all network matrix \(\tilde{G}\) with same zero and known entries as \(G\), there holds \[C\,T(\tilde{G})\,B=C\,T(G)\,B\Rightarrow\tilde{G}^{\circ}=G^{\circ}, \tag{2}\] where matrices \(B\) and \(C\) are binary selections indicating respectively the excited and measured nodes, forming sets \(\mathcal{B}\) and \(\mathcal{C}\) respectively. \(C\,T(G)\,B\) is the global transfer matrix between input and output. The network is _generically globally identifiable_ if it is identifiable at _almost all_\(G\). This definition extends [10] to the case where some transfer functions are known (\(G^{\bullet}\)), and some are not (\(G^{\circ}\)), as in [2]. We call _global identifiability_ this standard notion of identifiability, to avoid confusion with _local_ identifiability [24], which corresponds to identifiability provided that \(\tilde{G}\) is sufficiently close to \(G\). Local identifiability is necessary for global identifiability, and no counter-example to sufficiency is known. **Definition 2:** A network is _locally identifiable_ at \(G\) from excitations \(\mathcal{B}\) and measurements \(\mathcal{C}\) if there exists \(\epsilon>0\) such that for any \(\tilde{G}\) with same zero and known entries as \(G\) satisfying \(||\tilde{G}-G||<\epsilon\), there holds \[C\,T(\tilde{G})\,B=C\,T(G)\,B\Rightarrow\tilde{G}^{\circ}=G^{\circ}. \tag{3}\] The network is _generically locally identifiable_ if it is locally identifiable at _almost all_\(G\). Local identifiability can be characterized algebraically based on the matrix \(K\): \[K(G)\triangleq\left(B^{\top}T^{\top}(G)\otimes C\,T(G)\right)I_{G^{\circ}}, \tag{4}\] where we remind that \(T(G)=(I-G)^{-1}\), symbol \(\otimes\) denotes the Kronecker product and the matrix \(I_{G^{\circ}}\in\{0,1\}^{n^{2}\times m_{\circ}}\) selects only the columns of the preceding \(n_{B}n_{C}\times n^{2}\) matrix corresponding to unknown modules [25]. **Theorem II.1**: (Corollary 4.1 in [24])__ Exactly one of the two following holds: 1. \(\operatorname{rank}\,K=m_{\circ}\) for almost all \(G\) and \(G^{\circ}\) is locally identifiable at almost all \(G\); 2. \(\operatorname{rank}\,K<m_{\circ}\) for all \(G\) and \(G^{\circ}\) is locally _non_-identifiable at all \(G\), therefore globally non-identifiable at all \(G\). 
Moreover, \(\operatorname{rank}\,K=m_{\circ}\) is equivalent to the following implication holding for all \(\Delta\) with same zero entries as \(G^{\circ}\): \[C\,T(G)\,\Delta\,T(G)\,B=0\Rightarrow\Delta=0. \tag{5}\] Equation (5) suggests the definition of a new notion, where the two matrices \(T(G)\) of (5) do not need to have the same parameters anymore: this notion is _decoupled_ identifiability. **Definition 3:** A network is _decoupled-identifiable_ at \((G\), \(G^{\prime})\), with \(G\) and \(G^{\prime}\) sharing the same zero entries, if for all \(\Delta\) with same zero entries as \(G^{\circ}\), there holds: \[C\,T(G)\,\Delta\,T(G^{\prime})\,B=0\Rightarrow\Delta=0. \tag{6}\] Decoupled identifiability, initially introduced for purely algebraic reasons, can be interpreted in terms of the identifiability of a larger network: the _decoupled network_. **Definition 4:** Consider a network of \(n\) nodes with excitation matrix \(B\), measurement matrix \(C\) and network matrix \(G=G^{\bullet}+G^{\circ}\), where \(G^{\bullet}\) collects the known modules and \(G^{\circ}\) collects the unknown modules. Its _decoupled network_ is composed of \(2n\) nodes: \(\{1,\dots,n,1^{\prime},\dots,n^{\prime}\}\). Its network matrix is defined by \[\hat{G}(G,G^{\prime})\triangleq\begin{bmatrix}G&G^{\circ}\\ 0&G^{\prime}\end{bmatrix},\] where \(G^{\prime}\) has the same zero entries as \(G\). Transfer matrices \(G\) and \(G^{\prime}\) are considered as known, while \(G^{\circ}\) contains the unknown modules. Excitations are applied on the first subgraph (\(G^{\prime}\)), and measurements on the second one (\(G\)), i.e. its excitation and measurement matrices are \[\hat{B}\triangleq\begin{bmatrix}0&0\\ 0&B\end{bmatrix},\qquad\qquad\hat{C}\triangleq\begin{bmatrix}C&0\\ 0&0\end{bmatrix}.\] The proposition below relates the notion of decoupled identifiability with the decoupled network we just introduced. **Proposition II.1**: (Proposition 3.3 in [25]) The network \(G\) is generically decoupled-identifiable if and only if its decoupled network \(\hat{G}\) is generically globally identifiable. Generic decoupled identifiability is necessary for generic local identifiability (which is itself necessary for generic global identifiability) [25]. No counterexample to sufficiency is known to the authors, despite extensive systematic numerical tests, available in [31]. **Proposition II.2**: (Proposition 3.2 in [25]) If a network is generically locally identifiable, then it is generically decoupled-identifiable. ## III Separable Networks We now introduce separable networks, which are a generalization of decoupled networks, where the excited and measured subgraphs are not required to have the same topology anymore. A _separable network_ is a network for which excitations and measurement can be isolated in two distinct subgraphs, with no known transfer function linking the two subgraphs. Between the two subgraphs lie all the unknown transfer functions, going from the excited subgraph to the measured one. A formal definition in terms of network matrices is given below, and an example is given in Fig. 1. **Definition 5:** A _separable network_ is a network whose matrices have the following block structure: \[G=\begin{bmatrix}G_{C}&G^{\odot}\\ 0&G_{B}\end{bmatrix},\quad G^{\bullet}=\begin{bmatrix}G_{C}&0\\ 0&G_{B}\end{bmatrix},\quad G^{\circ}=\begin{bmatrix}0&G^{\odot}\\ 0&0\end{bmatrix},\] \[B=\begin{bmatrix}0&0\\ 0&B^{\dagger}\end{bmatrix},\qquad C=\begin{bmatrix}C^{\dagger}&0\\ 0&0\end{bmatrix}. 
\tag{7}\] We have the following important property: on separable networks, global and local identifiability are equivalent. **Proposition 3.1**: _A separable network is locally identifiable at \(G\) at if and only if it is globally identifiable at \(G\)._ **Proof:** Consider a separable network: its matrices have the block structure described in (7). From Definition 1, the network is generically identifiable at \(G\) if, for all \(\tilde{G}^{\circ}\) with same zero entries as \(G^{\circ}\), there holds \[C\left(I-\tilde{G}\right)^{-1}B=C\left(I-G\right)^{-1}B\Rightarrow\tilde{G}^ {\circ}=G^{\circ} \tag{8}\] where matrices \(C,B,G\) and \(G^{\circ}\) have the block structure of (7), and \(\tilde{G}=\begin{bmatrix}G_{C}&\tilde{G}^{\odot}\\ 0&G_{B}\end{bmatrix}\). Developing (8) from the block structure of (7) yields \[C^{\dagger}\,T(G_{C})\,\tilde{G}^{\odot}\,T(G_{B})\,B^{\dagger}= C^{\dagger}\,T(G_{C})\,G^{\odot}\,T(G_{B})\,B^{\dagger}\] \[\Rightarrow\tilde{G}^{\odot}=G^{\odot},\] and bringing out common terms gives \[C^{\dagger}\,T(G_{C})\,\underbrace{(\tilde{G}^{\odot}-G^{\odot })}_{\triangleq\Delta}\,T(G_{B})\,B^{\dagger}=0\] \[\Rightarrow\underbrace{\tilde{G}^{\odot}-G^{\odot}}_{=\Delta}=0,\] which is exactly what we obtain by developing (5) with the block structure of (7). To conclude, we know from Theorem 2.1 that (5) is a necessary and sufficient condition for local identifiability. Since \(m_{\circ}=n_{B}n_{C}\), \(K\) introduced in (4) is a square matrix, hence \(\operatorname{rank}\,K=m_{\circ}\) is equivalent to \(\det K\neq 0\). Theorem 2.1 can then be rewritten in terms of the determinant. In addition, since we work on separable networks, global and local identifiability are equivalent, hence the theorem below characterizes global identifiability. **Theorem 3.1**: _Consider a separable network. Exactly one of the two following holds:_ 1. \(\det K\neq 0\) _for almost all_ \(G\) _and_ \(G^{\circ}\) _is globally identifiable at almost all_ \(G\)_;_ 2. \(\det K=0\) _for all_ \(G\) _and_ \(G^{\circ}\) _is globally non-identifiable at all_ \(G\)_._ _Moreover, \(\det K=m_{\circ}\) is equivalent to the following implication holding for all \(\Delta\) with same zero entries as \(G^{\circ}\):_ \[C^{\dagger}\,T(G_{C})\,\Delta\,T(G_{B})\,B^{\dagger}=0\Rightarrow\Delta=0. \tag{9}\] ## IV Combinatorial Characterization We are now going to derive a combinatorial characterization based on a re-expression of the determinant of \(K\). The closed-loop transfer function \(T_{ji}\) is expressed in terms of transfer functions \(G_{ji}\) in the following way. The analytic matrix \(T(G)\) can be expanded in the Taylor series \[T(G)=(I-G)^{-1}=I+\sum_{k=1}^{\infty}G^{k}, \tag{10}\] which converges since the spectral radius of \(G\) is strictly smaller than one, see Assumption 3) in Section I-C. A classical result in graph theory is that \([G^{k}]_{ji}\) is the sum of all paths from \(i\) to \(j\) of length \(k\): \[[G^{k}]_{ji}=\sum_{\begin{subarray}{c}\text{all $k$-paths}\\ i\to j\end{subarray}}\underbrace{G_{j*}\,\,\ldots\,G_{*i}}_{k\text{ terms}}, \tag{11}\] where the notation \(*\) denotes some node of the network, which can be different for each occurence of \(*\). Combining (10) and (11) gives the following lemma, which extends Lemma 2 of [19] for networks with cycles. **Lemma 4.1**: _[_19_]_ _Consider the closed-loop transfer matrix \(T(G)=(I-G)^{-1}\). Its entries are given as follows:_ 1. _if there is no path from_ \(i\) _to j,_ \(T_{ji}=0\)__ 2. 
_otherwise,_ \(T_{ji}\) _is the (possibly infinite) sum of all the paths from_ \(i\) _to j:_ \[T_{ji}=\sum_{\begin{subarray}{c}\text{all $\text{paths}$}\\ i\to j\end{subarray}}G_{j*}\,\,\ldots\,\,G_{*i}.\] (12) In the sequel we refer to the unknown transfer functions \(G_{ji}^{\circ}\) as unknown edges \(\alpha\). We develop \(\det K\) as the sum over all possible row-column permutations by Leibniz formula1: Footnote 1: \(T_{\alpha,\sigma_{B}(\alpha)}\) is the transfer function between \(\sigma_{B}(\alpha)\), and _start node of_ edge \(\alpha\), and \(T_{\sigma_{C}(\alpha),\alpha}\) is the one between _end node of_ edge \(\alpha\) and \(\sigma_{C}(\alpha)\). \[\det K=\sum_{\sigma\in S}\operatorname{sgn}(\sigma)\prod_{\alpha\in\mathcal{E }^{\circ}}T_{\sigma_{C}(\alpha),\alpha}\,\,T_{\alpha,\sigma_{B}(\alpha)}, \tag{13}\] where each row-column permutation corresponds to a bijective assignation \(\sigma:\mathcal{E}^{\circ}\rightarrow\mathcal{B}\times\mathcal{C}\), i.e. between unknown edges and (excitation - measurement) pairs. \(S\) denotes the set of all such bijective assignations, \(\sigma_{B}(\alpha)\) is the excitation assigned to edge \(\alpha\) by assignation \(\sigma\) and \(\sigma_{C}(\alpha)\) is its measurement. The \(\operatorname{sgn}(\sigma)\) equals \(+1\) if the number of transpositions in assignation \(\sigma\) is even, and \(-1\) otherwise. A transposition is the swap of two elements, and each \(\sigma\) is obtained by combining a certain number of transpositions. Fig. 1: An example of separable network: the excitations \(\mathcal{B}\) are isolated in one subgraph, and the measurements \(\mathcal{C}\) in another subgraph. The unknown transfer functions, in dashed orange, link the excited subgraph to the measured one. The Leibniz formula (13) can be further developed by plugging (12) in the equation. The analysis is no longer made on the \(T_{ji}\), \(\det K\) is now expressed in terms of paths of transfer functions \(G_{ji}\)2: Footnote 2: \(G_{\alpha,*}\) stands for the transfer function from a node \(*\) to _start node of_ edge \(\alpha\), and \(G_{*,\alpha}\) denotes the one from _end node of_ edge \(\alpha\) to a node \(*\). \[\det K=\sum_{\sigma\in S}\mathrm{sgn}(\sigma)\prod_{\alpha\in \mathcal{E}^{\circ}} \Bigg{[}\bigg{(}\sum_{\begin{subarray}{c}\text{all paths}\\ \alpha\rightarrow\sigma_{C}(\alpha)\end{subarray}}G_{\sigma_{C}(\alpha),*}\ \ \ldots\ G_{*,\alpha}\bigg{)}\] \[\cdot\bigg{(}\sum_{\begin{subarray}{c}\text{all paths}\\ \sigma_{B}(\alpha)\rightarrow\alpha\end{subarray}}G_{\alpha,*}\ \ \ldots\ G_{*,\sigma_{B}(\alpha)}\bigg{)}\Bigg{]}\] This expression can be conveniently re-expressed using the notion of _collections of paths_, that we introduce below. **Definition 6:** A collection of paths \(\pi\) is a set of \(m^{\circ}\) paths \(\pi_{i}\) (one \(\pi_{i}\) for each unknown edge), where each path \(\pi_{i}\) is a sequence of connected edges that starts at an excited node (i.e. in \(\mathcal{B}\)), and ends at a measured node (i.e. in \(\mathcal{C}\)). Since we work on separable networks, each \(\pi_{i}\) goes through one and only one unknown edge. We say that \(\pi\) is _bijective_ if no two paths of \(\pi\) start at the same excitation and end at the same measurement. 
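As a side illustration of the objects appearing in (13), the sketch below (ours, not part of any released code) enumerates the bijective assignations \(\sigma\) together with their signs \(\mathrm{sgn}(\sigma)\), assuming the unknown edges and the (excitation - measurement) pairs are given as plain Python lists of equal length \(m_{\circ}=n_{B}n_{C}\).

```python
from itertools import permutations

def signed_assignations(unknown_edges, pairs):
    """Yield (sgn(sigma), sigma) for every bijective assignation sigma mapping
    unknown edges to (excitation, measurement) pairs, as in the expansion (13).
    Assumes len(unknown_edges) == len(pairs); feasible only for very small m_o,
    since the sum has m_o! terms."""
    for perm in permutations(range(len(pairs))):
        # sign = parity of the permutation, computed by counting inversions
        inversions = sum(perm[a] > perm[b]
                         for a in range(len(perm))
                         for b in range(a + 1, len(perm)))
        sign = -1 if inversions % 2 else 1
        yield sign, {edge: pairs[perm[k]] for k, edge in enumerate(unknown_edges)}
```

Each yielded pair corresponds to one term of the outer sum in (13); evaluating the corresponding monomials would additionally require enumerating the paths of (12).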
Distributing the product over all unknown edges \(\alpha\) gives \[\det K=\sum_{\sigma\in S}\mathrm{sgn}(\sigma)\sum_{\pi\in\Pi_{e}}\mu(\pi),\] where \(\pi\) is a bijective collection of paths, as defined in Definition 6, \(\Pi_{\sigma}\) is the set of all \(\pi\) corresponding to assignation \(\sigma\) and \(\mu(\pi)\) is the _monomial_ corresponding to \(\pi\): \[\mu(\pi)\triangleq\prod_{\alpha\in\mathcal{E}^{\circ}}G_{\pi_{C}(\alpha),*} \...\ G_{*,\alpha}\cdot G_{\alpha,*}\...\ G_{*,\pi_{B}(\alpha)} \tag{14}\] where \(\pi_{B}(\alpha)\) and \(\pi_{C}(\alpha)\) are respectively the starting and ending node of the path going through \(\alpha\) in \(\pi\), and the transfer functions \(G_{ji}\) may appear several times in \(\mu(\pi)\), which would mean that it is taken several times by \(\pi\). From (14), we group by monomials of same \(\pi\): instead of summing over \(\sigma\) and then over \(\pi\), we sum directly over \(\pi\): \[\det K=\sum_{\pi\in\Pi}\mathrm{sgn}(\pi)\,\mu(\pi), \tag{15}\] where \(\Pi\) is the set of all bijective collection of paths \(\pi\). As \(\pi\) derives from an assignation \(\sigma\), its monomial has a sign, denoted by \(\mathrm{sgn}(\pi)\). Observe that from a same group of edges it may be possible to build different bijective \(\pi\) (this will be the case e.g. when two paths cross multiple times), so that these different \(\pi\) will have the same monomial \(\mu\). Hence we regroup the same monomials together: \[\det K=\sum_{\mu\in M}\bigg{(}\sum_{\pi\in\Pi_{\mu}}\mathrm{sgn}(\pi)\bigg{)} \ \mu, \tag{16}\] where \(M\) is the set of all monomials \(\mu\) corresponding to bijective collections of paths, and \(\Pi_{\mu}\) is the set of all bijective collections of paths \(\pi\) corresponding to monomial \(\mu\). We define the _repetition_\(r(\mu)\triangleq\sum_{\pi\in\Pi_{\mu}}\mathrm{sgn}(\pi)\), which allows to rewrite (16) in a compact way: \[\det K=\sum_{\mu\in M}r(\mu)\,\mu. \tag{17}\] As seen in Theorem III.1, \(\det K\) encodes the global (non)-identifiability of separable networks. Hence equation (17) allows providing a necessary and sufficient combinatorial characterization of global identifiability, in terms of the repetition \(r(\mu)\). **Theorem IV.1:** Consider a separable network. It is generically globally identifiable if and only if there is at least one monomial \(\mu\in M\) such that its repetition \(r(\mu)\neq 0\). **Proof:** From Theorem III.1, we know that a separable network is generically globally identifiable if and only if \(\det K\neq 0\) for almost all \(G\). Equation (17) expresses \(\det K\) as a sum over all monomials \(\mu\), weighted by their repetition \(r(\mu)\). A sum of distinct monomials is generically nonzero if and only if at least one of the monomials has a nonzero coefficient. Therefore, for this sum to be nonzero for almost all \(G\), at least one \(\mu\) must have a nonzero repetition \(r(\mu)\)\(\blacksquare\) Since the decoupled network is a particular case of separable network, this condition can be formulated on the decoupled network of a network \(G\) of any topology, _not only separable networks_. And generic decoupled identifiability is necessary for generic global identifiability [25], hence the necessary condition of Theorem IV.1, formulated on decoupled network \(\tilde{G}\), applies to a network \(G\) of any topology. Theorem IV.1 gives a necessary and sufficient combinatorial characterization, but building an efficient algorithm to check this condition remains an open question. 
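The algebraic test behind Theorems II.1 and III.1 is nonetheless easy to evaluate numerically. The sketch below is ours (the interface and names are illustrative, and real-valued random modules stand in for a single-frequency evaluation): it builds \(K(G)\) of (4) for a few randomly drawn stable network matrices consistent with the topology and reports the largest rank found, which equals \(m_{\circ}\) exactly when the network is generically identifiable.

```python
import numpy as np

def generic_rank_K(n, edges, unknown, excited, measured, trials=5, seed=0):
    """Monte-Carlo estimate of the generic rank of K(G) from eq. (4),
    K = (B^T T^T kron C T) I_{G_unknown} with T = (I - G)^{-1}.

    edges    : all (i, j) with G[i, j] != 0, i.e. a module from node j to node i
    unknown  : the subset of `edges` whose modules are unknown
    excited  : indices of excited nodes (set B); measured: measured nodes (set C)
    """
    rng = np.random.default_rng(seed)
    B = np.zeros((n, len(excited)))
    B[excited, np.arange(len(excited))] = 1.0
    C = np.zeros((len(measured), n))
    C[np.arange(len(measured)), measured] = 1.0
    best = 0
    for _ in range(trials):
        G = np.zeros((n, n))
        for i, j in edges:                        # draw a random network matrix
            G[i, j] = rng.standard_normal()
        rho = np.max(np.abs(np.linalg.eigvals(G)))
        if rho >= 1.0:                            # enforce |lambda_i| < 1 (stability)
            G *= 0.9 / rho
        T = np.linalg.inv(np.eye(n) - G)
        full = np.kron(B.T @ T.T, C @ T)          # vec(C T Delta T B) = full @ vec(Delta)
        cols = [j * n + i for i, j in unknown]    # column-major vec() index of Delta[i, j]
        best = max(best, np.linalg.matrix_rank(full[:, cols]))
    return best  # generically identifiable (Thm III.1) iff this equals len(unknown)
```

On small examples such as the network of Fig. 1, a check of this kind can serve as a numerical sanity check for the combinatorial condition of Theorem IV.1.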
## V Conclusion

This work was motivated by one main open question: determining path-based conditions for the generic identifiability of networked systems. Introducing separable networks made it possible to address the global identifiability of such graphs with the algebraic tools of local identifiability. A new approach led to a necessary and sufficient combinatorial characterization of identifiability, in terms of the existence of paths and conditions on their parity. Furthermore, the necessary part of this condition applies not only to separable networks but to networks of _any topology_: this follows from the fact that the decoupled network is a particular case of a separable network, and generic decoupled identifiability is necessary for generic global identifiability. A remaining open question is whether our necessary and sufficient combinatorial characterization can be checked algorithmically. We would also like to establish whether the notions introduced here are equivalent to local identifiability for a general network.

## VI Acknowledgments

The authors gratefully acknowledge Federica Garin (Inria, Gipsa-lab, France) for the interesting, insightful discussions.
2309.15554
Direct Models for Simultaneous Translation and Automatic Subtitling: FBK@IWSLT2023
This paper describes the FBK's participation in the Simultaneous Translation and Automatic Subtitling tracks of the IWSLT 2023 Evaluation Campaign. Our submission focused on the use of direct architectures to perform both tasks: for the simultaneous one, we leveraged the knowledge already acquired by offline-trained models and directly applied a policy to obtain the real-time inference; for the subtitling one, we adapted the direct ST model to produce well-formed subtitles and exploited the same architecture to produce timestamps needed for the subtitle synchronization with audiovisual content. Our English-German SimulST system shows a reduced computational-aware latency compared to the one achieved by the top-ranked systems in the 2021 and 2022 rounds of the task, with gains of up to 3.5 BLEU. Our automatic subtitling system outperforms the only existing solution based on a direct system by 3.7 and 1.7 SubER in English-German and English-Spanish respectively.
Sara Papi, Marco Gaido, Matteo Negri
2023-09-27T10:24:42Z
http://arxiv.org/abs/2309.15554v1
# Direct Models for Simultaneous Translation and Automatic Subtitling: FBK@IWSLT2023 ###### Abstract This paper describes the FBK's participation in the Simultaneous Translation and Automatic Subtitling tracks of the IWSLT 2023 Evaluation Campaign. Our submission focused on the use of direct architectures to perform both tasks: for the simultaneous one, we leveraged the knowledge already acquired by offline-trained models and directly applied a policy to obtain the real-time inference; for the subtitling one, we adapted the direct ST model to produce well-formed subtitles and exploited the same architecture to produce timestamps needed for the subtitle synchronization with audiovisual content. Our English-German SimulST system shows a reduced computational-aware latency compared to the one achieved by the top-ranked systems in the 2021 and 2022 rounds of the task, with gains of up to 3.5 BLEU. Our automatic subtitling system outperforms the only-existing solution based on a direct system by 3.7 and 1.7 SubER in English-German and English-Spanish respectively. ## 1 Introduction In recent years, the advances in natural language processing and machine learning led to a surge of interest in developing speech translation (ST) systems that can translate speech from one language into text in another language without human intervention. Significant progress has been specially made toward end-to-end ST models (Berard et al., 2016; Weiss et al., 2017) trained to directly translate speech without the intermediate steps of transcription (through automatic speech recognition - ASR) and translation (through machine translation - MT). Along with this growing interest in direct ST, also accompanied by a reduction of the performance gap with respect to cascaded architectures (Bentivogli et al., 2021), other trends have emerged thanks to deep learning advancements, which made it possible to deploy direct solutions to perform the task in real-time (i.e. to produce partial translations while continuing to process the input audio) or to automatically generate subtitles for audiovisual content (i.e. pieces of translated text which have to conform to specific spatiotemporal constraints and be synchronized with the video). The International Workshop on Spoken Language Translation (IWSLT) is playing an important role in advancing the state-of-the-art in these fields by organizing a series of evaluation campaigns (Ansari et al., 2020; Anastasopoulos et al., 2021, 2022) focused on simultaneous speech translation (SimulST) and, this year for the first time, automatic subtitling. These campaigns provide a unique opportunity for researchers to compare their systems against others, share their findings, and identify areas for further improvement. In this paper, we describe FBK's participation in the IWSLT 2023 Evaluation Campaigns (Agarwal et al., 2023) for simultaneous translation and automatic subtitling. Motivated by the promising results reported in previous works (Ren et al., 2020; Papi et al., 2023a), our approach is characterized by the use of direct ST models to address both tasks. For the simultaneous speech-to-text translation (SimulST) task, we participated in the English\(\rightarrow\)German track and leveraged an offline-trained direct model without performing any adaptation to the real-time scenario, as this has recently been shown not to be necessary to achieve competitive results (Papi et al., 2022a). 
For the automatic subtitling task, we participated in both the English\(\rightarrow\)German and English\(\rightarrow\)Spanish tracks by adapting a direct ST model to produce well-formed subtitles and exploiting the same architecture to produce the timestamps needed for their synchronization with audiovisual contents, as in (Papi et al., 2023a). Our results demonstrate the effectiveness of our approach. In SimulST, the computational-aware latency of our models is lower compared to the winning systems of the last two rounds (2021, and 2022) of the IWSLT SimulST Evaluation Campaign, with gains up to 3.5 BLEU. In automatic subtitling, our systems improve the results reported in (Papi et al., 2023) which, to the best of our knowledge, represents the only-existing solution based on a direct model. Specifically, on average among the various dev sets available for the task, we achieve 3.7 SubER on en-de and 1.7 SubER on en-es. ## 2 Applied Direct Models For this year's submission, we applied the direct ST models to the two different scenarios of simultaneous translation and automatic subtitling. ### Simultaneous Translation Recent trends in SimulST consist of using offline-trained models for simultaneous inference (Papi et al., 2022). There are several motivations for this choice: _i)_ it avoids re-training or building specific architectures for SimulST, saving time and computational resources; _ii)_ only one model has to be trained and maintained to perform both offline and simultaneous ST; and _iii)_ there is no need to train several models, each specialized to support different latency regimes. A key aspect of SimulST, also critical when approaching the task with offline models at inference time, is the so-called _decision policy_: the mechanism that is in charge of deciding whether to read more information or to emit a partial hypothesis. One of the first and most popular policies is the wait-k (Ma et al., 2019), initially introduced for simultaneous MT, and then applied to the speech scenario (Ma et al., 2020; Chen et al., 2021; Zeng et al., 2021; Karakanta et al., 2021). The wait-k, which prescribes waiting for an initial number of \(k\) words before starting to translate, is defined as a "fixed" policy (Zheng et al., 2020) because the decision is taken independently from the source input content. However, as the actual information contained in the input (e.g. in terms of ambiguity, completeness, and syntactic/semantic cohesion) is also important for the sake of good-quality incremental translations, several "adaptive" policies have been introduced, which instead adapt their decisions to the input content. Some adaptive policies require system re-training or the development of ad-hoc modules (Liu et al., 2021; Chang and Lee, 2022; Zhang and Feng, 2022), while some others do not (Liu et al., 2020; Nguyen et al., 2021; Papi et al., 2023). Since our objective is to avoid any modifications to the offline-trained model, we pointed our attention to the latter, more conservative category. 
Among these policies, we analyzed the three following alternatives: * **Local Agreement (LA)**(Liu et al., 2020): this policy prescribes generating a partial hypothesis from scratch at every newly received audio segment, and emitting it (or only a part of it) if it coincides with one of those generated in the previous time step; * **Encoder-Decoder Attention (EDAtt)**(Papi et al., 2023): it exploits the cross-attention scores modeling the audio-translation relation to decide whether to emit the words of a partial hypothesis or not. If, for the current word, the sum of the attention scores of the last \(\lambda\) received speech frames exceeds a certain threshold \(\alpha\) (both \(\lambda\) and \(\alpha\) are hyperparameters), the emission is delayed because the system needs more context to translate that word. Otherwise, the word is emitted and we proceed to the next word of the hypothesis; * **AlignAtt**(Papi et al., 2023): as for EDAtt, the cross-attention scores are leveraged to decide what to emit but, in this case, instead of summing the attention scores of the last speech frames, each word is uniquely assigned (or aligned) to the frame having the maximum attention score. If the aligned frame corresponds to one of the last \(f\) frames (\(f\) being a hyperparameter that controls the latency) the emission is stopped. Otherwise, we proceed to the next word. ### Automatic Subtitling So far, the adoption of direct ST architectures to address the automatic subtitling task has only been explored in (Papi et al., 2023). As a matter of fact, all previous works on the topic (Piperidis et al., 2004; Melero et al., 2006; Matusov et al., 2019; Koponen et al., 2020; Bojar et al., 2021) rely on cascade architectures that usually involve an ASR component to transcribe the input speech, a subtitle segmenter that segments the transcripts into subtitles, a timestamp estimator that predicts the start and times of each subtitle, and an MT model that translates the subtitle transcripts. Cascaded architectures, however, cannot access information contained in the speech, such as proosody, which related works proved to be an important source of information for the segmentation into subtitles (Oktem et al., 2019; Federico et al., 2020; Virkar et al., 2021; Tam et al., 2022). The importance of such information has been further verified in (Karakanta et al., 2020), which proved that the direct ST models are better in subtitle segmentation compared to the cascade ones. Another study by Karakanta et al. 2021, also pointed out the importance of consistency between captions (segmented transcripts) and subtitles (segmented translations), showing that the predicted caption content can also be useful for the translation. Specifically, the authors obtained significant improvements by using a Triangle Transformer-based architecture (Anastasopoulos and Chiang, 2018) composed of one encoder and two decoders: the first decoder is in charge of emitting the transcripts and the second one is in charge of emitting the translation by also attending to the output embeddings of the predicted transcript. Therefore, in our submission, based on the findings of the aforementioned work, we inspected the use of both a classic single encoder-single decoder architectures, as in (Papi et al., 2023), and of the Triangle architecture for automatic subtitling. ## 3 Experimental Setup ### Data SimultaneousWe developed a pure offline model trained on the same data used for our last year's (constrained) submission (Gaido et al., 2022). 
SubtitlingWe used the same data settings of (Papi et al., 2023), for which we leverage the multimodal segmenter by Papi et al. (2022) to segment into subtitles ST and machine-translated ASR corpora as per (Gaido et al., 2021, 2022).1 No OpenSubtitles or text-only data were used to train our models. Footnote 1: All the corpora used in (Papi et al., 2023) are allowed ASR and ST training data for the Subtitling task ([https://iwslt.org/2023/subtitling#training-and-data-conditions](https://iwslt.org/2023/subtitling#training-and-data-conditions)). Therefore, our submission has to be considered “Constrained”. ### Training Settings All the models used for our participation were implemented using the newly released implementation of the Conformer architecture by Papi et al. (2023)2 based on Fairseq-ST (Wang et al., 2020). In their paper, the authors analyzed the most popular open-source libraries for speech recognition/translation and found at least one bug affecting all the existing Conformer implementations, therefore claiming the importance of testing code to avoid the propagation of unreliable findings masked by good results. Footnote 2: Code available at [https://github.com/hlt-mt/FBK-fairseq](https://github.com/hlt-mt/FBK-fairseq) SimultaneousWe tested a Conformer-based architecture (Gulati et al., 2020) with two configurations: 12 encoder layers and 16 encoder layers. The number of Transformer decoder layers is 6, we set 512 features for the attention layers and 2,048 hidden units for the feed-forward layers. We used 0.1 dropout for the feed-forward layers, attention layers, and convolutional modules. The kernel size was set to 31 for the point- and depth-wise convolutions. We trained with the Adam optimizer (Kingma and Ba, 2015) by setting \(\beta_{1}=0.9\) and \(\beta_{2}=0.98\), a weight decay of \(0.001\), the learning rate to 0.002 using the inverse square-root scheduler with 25,000 warm-up steps. Label smoothed cross-entropy loss (0.1 smoothing factor) was used together with the CTC loss (Graves et al., 2006) with weight 0.5. We experimented also by applying the CTC compression mechanism (Gaido et al., 2021) to the source input to shrink its dimension and reduce RAM consumption. Utterance Cepstral Mean and Variance Normalization (CMVN) was applied during training. Also, we leveraged SpecAugment (Park et al., 2019) with frequency mask (\(F=27\), and \(N=2\)), time mask (\(N=10\), \(T=300\), and \(p=0.05\)), and no time warp. Both ST training and ASR pre-training were performed with the same settings. The target vocabulary is of size 16,000, and the source vocabulary is of size 10,000, and are both based on SentencePiece (Kudo and Richardson, 2018). We differentiate between original and machine-translated training data by pre-pending a tag (nomt and mt, respectively) to the target text as in all our last years' submissions (Gaido et al., 2020; Papi et al., 2021; Gaido et al., 2022). The total batch size was set to 1,280,000 and was performed on 4 NVIDIA A40 GPUs with 40GB of RAM by setting the mini-batch update frequency to 8 and 40,000 maximum tokens. Maximum updates were set to 100,000. Automatic SubtitlingBoth the classic encoder-decoder architecture and the triangle architecture are composed of 12 layers of Conformer encoder and 6 layers of Transformer decoder (which is replicated twice in the triangle model). The dimension of the feed-forward layers is 2,048 and \(d=512\) in the attention. The kernel size of the point- and depth-wise convolutions in the convolutional modules is 31. 
The dropout was set to 0.1. CTC loss with compression is added with weight 0.5 to the cross entropy loss with label smoothing (0.1 of smoothing factor) and optimized with Adam (\(\beta_{1}=0.9\), \(\beta_{2}=0.98\)). The source vocabulary is of size 8,000 and the target vocabulary of size 16,000 (<eob> and <eol> included); both are obtained by SentencePiece models. The ST pre-training was done by setting the learning rate to 0.002 with inverse square-root scheduler and 25,000 warm-up updates. The SubST fine-tuning was done by setting a constant learning rate of 0.001. A second fine-tuning was done with the same setting of (Papi et al., 2023), but we restored the punctuation of the ASR datasets which do not contain any (i.e., the TEDLIUM corpus (Hernandez et al., 2018)) by using bert-restore-punctuation,3 before machine-translating and segmenting the target texts into subtitles. We trained the standard architecture with 40,000 maximum tokens on 4 NVIDIA A100 GPUs with 40GB of RAM and we set the update frequency to 2. For the triangle architecture, we set maximum tokens to 20,000 to fit the architecture in memory and the update frequency to 4 to hold the same total batch size of 320,000 tokens. Maximum updates were set to 100,000 for both the pre-training and training phases. Footnote 3: [https://huggingface.co/felflare/bert-restore-punctuation](https://huggingface.co/felflare/bert-restore-punctuation) Footnote 4: case:mixeddeff:notlock:13alssmooth:explversion:1.5.1 ### Evaluation Settings SimultaneousWe exploit the SimulEval tool (Ma et al., 2020). To be comparable with the previous years, all the results except this year's submission are shown for the SimulEval v1.0.2, which adopts BLEU (Post, 2018)5 to measure translation quality and Average Lagging or AL (Ma et al., 2019) to measure latency. Instead, for this year's submission, we adopt the latest version of SimulEval (1.1.0) with BLEU measured with sacrebleu 2.3.0 and we also report Length-Adaptive Average Lagging or LAAL (Papi et al., 2022) and Average Token Delay or ATD (Kano et al., 2022) as additional latency metrics. All the evaluations were run on a single NVIDIA K80 with 12GB of RAM, by applying global CMV to audio input, whose features were estimated on the MuST-C v2 training set. Computational aware metrics ("_CA") refer to the single NVIDIA K80 setting and consider also the model computational time in the delay calculation. Footnote 5: case:mixeddeff:notlock:13alssmooth:explversion:2.0.0 Automatic SubtitlingWe adopt the following metrics: SubER-cased (henceforth, SubER) (Wilken et al., 2022) for overall subtitle quality, Sigma (Karakanta et al., 2022) for the subtitle segmentation quality, and BLEU6 for translation quality. We also compute the conformity percentage of 42 characters per line (CPL) and 21 characters per second (CPS) or reading speed, as suggested on the track website.6 We neglected the conformity computation of the subtitles with more than two lines since our model only produces subtitles with two lines or less, thus being always 100% conform. Conformity scores are computed by using the script released for the paper (Papi et al., 2023).7 Dev/test audios are segmented with SHAS (Tsiamas et al., 2022). No audio cleaning is applied. 
Footnote 6: [https://iwslt.org/2023/subtitling#automatic-evaluation](https://iwslt.org/2023/subtitling#automatic-evaluation) Footnote 7: Script available at: [https://github.com/hlt-mt/FBK-fairseq/blob/master/examples/speech_to_text/scripts/subtitle_compliance.py](https://github.com/hlt-mt/FBK-fairseq/blob/master/examples/speech_to_text/scripts/subtitle_compliance.py) ## 4 Results ### Simultaneous Translation Since we directly employ an offline model for the simultaneous inference, we show in Table 1 the results of the offline ASR pre-training and ST training. Although the model with 12 encoder layers (row 0) obtains lower - hence better - WER compared to the 16 encoder-layers model (row 1), the highest - hence better - BLEU in ST is achieved by the bigger architecture. The performance is also slightly enhanced by adding the CTC compression (row 3) during training, which is particularly useful also for the SimulST scenario since it speeds up inference (of about 12/15%). Therefore, we select this model for the final submission. Compared to our last year's submission (row 5), our 16 encoder-layers model scores +0.4 BLEU even if, at this time, we have not fine-tuned it on the in-domain (TED talks) datasets. Our model also performs better than the NAIST last year's system (+11.1 BLEU) while is worse (-1.0 BLEU) compared to the last year's SimulST task winner CUNI-KIT whose model, however, leveraged large pre-trained models such as wav2vec 2.0 and mBART50. Compared to last year's cascade model by UPV, we score -1.7 BLEU. This system, however, also outperformed the CUNI-KIT system by 0.7 BLEU points, indicating that a gap between direct and cascade architectures still exists. In Figure 1, we show the simultaneous results of the different policies mentioned in Section 2.1 applied to our offline model. The differences in terms of quality-latency trade-off between the LA and both EDAtt and AlignAtt are evident: the last ones outperform the former with an improvement peak of 1.5 BLEU at lower latency (approximately \(1s\)). Moreover, when the computationally aware AL is considered, EDAtt and AlignAtt are the only policies able to reach a latency \(\leq 2s\). Regarding the comparison between EDAtt and AlignAtt, AlignAtt can span a latency between 1 and 2.6\(s\) ideally (when unlimited computing resources are available), and between 1.8 and 3.7\(s\) computationally aware, while EDAtt is limited to a latency of 1.4 to 2.5\(s\) ideally, and 2.3 to 3.6\(s\) computationally aware. We hence select AlignAtt as it is able to reach a wider range of latency. Lastly, we compare our policy with the two winning systems of the last two years (2021, and 2022). The 2021 winner (Liu et al., 2021) was based on an architecture named Cross Attention Augmented Transducer (CAAT), which was specifically tailored for the SimulST task (Liu et al., 2021) and still represents the state of the art in terms of low latency (considering ideal AL only). The 2022 winner (CUNI-KIT (Polak et al., 2022)) was based on the wav2vec 2.0 + mBART50 offline architecture reported in Table 1, row 4. They applied the LA policy, the same we analyze in Figure 1, to the aforementioned architecture for simultaneous inference. The comparison is reported in Figure 2. As we can see, there is a 1.0-2.0 BLEU difference between our approach and the IWSLT 2022 winner, which is expected since their offline system is superior compared to ours, as already observed in Table 1. 
Compared to the IWSLT 2021 winner, we observe a performance drop in our system with AL \(\leq 1.5s\), while the situation is opposite with AL \(>1.5s\). However, when we look at the computationally aware metrics, the results completely change. Our system clearly outperforms the 2021 winner, with a maximum improvement of about 2 BLEU points. Moreover, our system is the only one able to reach a computational aware latency of 2\(s\) while, instead, the IWSLT 2022 winner curve starts only at around 3\(s\). Therefore, our system is significantly faster and, at around 3\(s\), we achieve a relative improvement of more than 3.5 BLEU compared to the IWSLT 2022 winner. To sum up, when the computationally aware metric is considered, our approach outperforms the winners of both the 2021 and 2022 rounds of the SimulST task. In addition, in this year's round, the systems are evaluated with the threshold AL \(=2s\) and with the new version of SimulEval.8 With respect to these settings, our submitted system scores \(30.7\) BLEU with AL \(=1.89s\) (LAAL \(=2.07s\), ATD \(=1.80s\)). \begin{table} \begin{tabular}{c|l|c c} **id** & **Model** & **WER\% (\(\downarrow\))** & **BLEU (\(\uparrow\))** \\ \hline \hline 1 & 12 encoder layers & **9.7** & 31.6 \\ 2 & 16 encoder layers & 9.9 & **31.9** \\ 3 & + CTC compress. & - & **32.1** \\ \hline 4 & CUNI-KIT 2022\({}^{\dagger}\) & - & 33.1 \\ 5 & FBR 2022 & - & 31.7 \\ 6 & NAIST 2022\({}^{\ddagger}\) & - & 21.0 \\ 7 & UPV 2022 (Cascade)* & 9.5 & 33.8 \\ \end{tabular} \end{table} Table 1: Offline results of our Conformer-based architectures on MuST-C v2 tst-COMM together with the available results of the last year’s SimulST competitors. \({}^{\dagger}\)(Polak et al., 2022), \({}^{\ddagger}\)(Fukuda et al., 2022), *(Iranzo-Sánchez et al., 2022). Figure 1: Comparison between the LA, EDAtt, and AlignAtt policies described in Section 2.1 on MuST-C v2 en\(\rightarrow\)de tst-COMM. Solid curves represent AL, dashed curves represent AL_CA. ### Automatic Subtitling In Table 2, we show a comparison between the standard encoder-decoder and the Triangle architectures for automatic subtitling. The results are computed on MuST-Cinema (Karakanta et al., 2020), the only existing corpus for SubST. Unfortunately, in contrast with the results achieved by (Karakanta et al., 2021), we found that the standard architectures perform better on all the considered metrics. While the differences in terms of translation quality are not so big (0.8-9 BLEU drop in both languages), there is a huge gap in the quality of the segmentation into subtitles, with the standard model improving by 3.3 and 4.7 Sigma the scores obtained by the Triangle respectively on en-de and en-es. This is also reflected by a worse SubER score (the lower, the better) of the Triangle, exhibiting a performance drop of, respectively, 0.9 and 1.6 SubER for en-de and en-es compared to the standard architecture. Therefore, we can conclude that the generated captions seem not to help with subtitle generation. Rather, they negatively affect subtitle generation to the detriment of segmentation quality. For this reason, we decided to employ the standard encoder-decoder architecture for our participation in the automatic subtitling task. 
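Since both architectures emit the subtitle structure through the <eob> and <eol> tokens of the target vocabulary, the hypotheses are converted into subtitle blocks before computing segmentation-aware scores. A minimal sketch of this conversion is given below; it assumes whitespace-separated tokens and is not the exact evaluation code.

```python
# Minimal conversion of a tagged hypothesis into subtitle blocks (lists of
# lines) before scoring; assumes whitespace-separated tokens and is not the
# exact evaluation code.

def to_blocks(tagged_text):
    blocks, lines, words = [], [], []
    for token in tagged_text.split():
        if token == "<eol>":            # end of a line inside the current block
            lines.append(" ".join(words)); words = []
        elif token == "<eob>":          # end of the current subtitle block
            lines.append(" ".join(words)); words = []
            blocks.append(lines); lines = []
        else:
            words.append(token)
    if words or lines:                  # flush a trailing, unterminated block
        lines.append(" ".join(words)); blocks.append(lines)
    return blocks

hyp = "Willkommen zu diesem Vortrag <eol> über Untertitel <eob> Danke <eob>"
print(to_blocks(hyp))
# -> [['Willkommen zu diesem Vortrag', 'über Untertitel'], ['Danke']]
```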
In the following, we present the results of our model on the four dev sets released for the task,9 namely: **MuST-Cinema or TED** containing TED talks videos, **EuroParIV or EPTV** containing recordings related to the European Parliament activities, **Peloton** containing online fitness classes, and **ITV Studios or ITV** containing videos from a broad range of programming (drama, entertainment, factual). For both language pairs (en-de and en-es), Table 3 shows the results computed with SubER, which is the primary metric used for the task.10 As we can see, the models fine-tuned on data with restored punctuation score the best results in both languages. Across the four dev sets, there is a 3.7 SubER improvement for en-de, and 1.7 for en-es. Moreover, coherently among languages, the TED talks scenario results in the easiest one for our model, as it is in-domain (e.g., MuST-Cinema, based on TED talks, was used to train the model). Conversely, the ITV scenario is the most difficult one since it contains TV series, which is a completely unseen domain for our model. Indeed, its data contain a larger amount of background music/noise, as well as dialogues with multiple speakers which are not present in our training data. In light of the results obtained by the fine-tuned models, we select them for our submission to the automatic subtitling task. Footnote 10: [https://iwslt.org/2023/subtitling#](https://iwslt.org/2023/subtitling#) automatic-evaluation \begin{table} \begin{tabular}{l|c c c c c} \hline \hline \multicolumn{6}{c}{**en-de**} \\ \hline **Model** & **SubER** & **BLEU** & **Sigma** & **CPL** & **CPS** \\ \hline Papi et al. (2023) & **59.9** & **23.4** & **77.9** & **86.9** & **68.9** \\ Triangle & 60.8 & 22.6 & 74.6 & 84.5 & 67.7 \\ \hline \hline \multicolumn{6}{c}{**en-es**} \\ \hline **Model** & **SubER** & **BLEU** & **Sigma** & **CPL** & **CPS** \\ \hline Papi et al. (2023) & **46.8** & **37.4** & **81.6** & **93.2** & **74.6** \\ Triangle & 48.4 & 36.5 & 76.9 & 90.3 & 71.7 \\ \hline \hline \end{tabular} \end{table} Table 2: Results of the direct ST models standard and Triangle architectures described in Section 2.2 on MuST-Cinema test set for en\(\rightarrow\){de, es}. \begin{table} \begin{tabular}{l|c c c c|c} \hline \hline \multicolumn{6}{c}{**en-de**} \\ \hline **Model** & **TED** & **EPTV** & **Peloton** & **ITV** & **Avg** \\ \hline Papi et al. (2023) & 72.7 & 82.3 & 84.7 & 88.0 & 81.9 \\ + fine-tuning & **69.4** & **80.6** & **79.1** & **83.7** & **78.2** \\ \hline \hline \multicolumn{6}{c}{**en-es**} \\ \hline **Model** & **TED** & **EPTV** & **Peloton** & **ITV** & **Avg** \\ \hline Papi et al. (2023) & 54.8 & 75.3 & 82.3 & 84.1 & 74.1 \\ + fine-tuning & **52.5** & **73.7** & **80.3** & **82.2** & **72.4** \\ \hline \hline \end{tabular} \end{table} Table 3: SubER (\(\downarrow\)) scores for en\(\rightarrow\){de, es} of the direct ST models on the four dev sets of the competition. “fine-tuning” represents the second fine-tuning on data with restored punctuation mentioned in Section 3.2. Figure 2: Comparison with the 2021 and 2022 winners of the SimulST Evaluation Campaigns MuST-C v2 en\(\rightarrow\){de, es} tst-COMM. Solid curves represent AL, dashed curves represent AL_CA. ## 5 Conclusions We presented the FBK's systems built to participate in the IWSLT 2023 Evaluation Campaigns for simultaneous speech translation (en-de) and automatic subtitling (en-{de, es}). 
Our submissions are characterized by the use of direct speech translation models to address both tasks, without any further modification or adaptation for the simultaneous task, and with fine-tuning on subtitle-like translations for the automatic subtitling task. Our SimulST system achieves lower computationally aware latency, with up to a 3.5 BLEU gain compared to the winners of the last two years. Our automatic subtitling system achieves improvements of 3.7 and 1.7 SubER on en-de and en-es respectively, compared to the only solution published in the literature based on a direct system. ## Acknowledgements This work has been supported by the project "AI@TN" funded by the Autonomous Province of Trento, Italy.
2309.13125
Specification and design for Full Energy Beam Exploitation of the Compact Linear Accelerator for Research and Applications
The Compact Linear Accelerator for Research and Applications (CLARA) is a 250 MeV ultrabright electron beam test facility at STFC Daresbury Laboratory. A user beam line has been designed to maximise exploitation of CLARA in a variety of fields, including novel acceleration and new modalities of radiotherapy. In this paper we present the specification and design of this beam line for Full Energy Beam Exploitation (FEBE). We outline the key elements which allow users to access ultrashort, low emittance electron bunches in two large experiment chambers. The results of start-to-end simulations are reported, verifying the expected beam parameters delivered to these chambers. Key technical systems are detailed, including those which facilitate combination of electron bunches with high power laser pulses.
E. W. Snedden, D. Angal-Kalinin, A. R. Bainbridge, A. D. Brynes, S. R. Buckley, D. J. Dunning, J. R. Henderson, J. K. Jones, K. J. Middleman, T. J. Overton, T. H. Pacey, A. E. Pollard, Y. M. Saveliev, B. J. A. Shepherd, P. H. Williams, M. I. Colling, B. D. Fell, G. Marshall
2023-09-22T18:18:17Z
http://arxiv.org/abs/2309.13125v1
# Specification and design for Full Energy Beam Exploitation of the ###### Abstract The Compact Linear Accelerator for Research and Applications (CLARA) is a 250 MeV ultra-bright electron beam test facility at STFC Daresbury Laboratory. A user beam line has been designed to maximise exploitation of CLARA in a variety of fields, including novel acceleration and new modalities of radiotherapy. In this paper we present the specification and design of this beam line for Full Energy Beam Exploitation (FEBE). We outline the key elements which provide users to access ultrashort, low emittance electron bunches in two large experiment chambers. The results of start-to-end simulations are reported which verify the expected beam parameters delivered to these chambers. Key technical systems are detailed, including those which facilitate combination of electron bunches with high power laser pulses. ## I Introduction The Compact Linear Accelerator for Research and Applications (CLARA) is an ultra-bright electron beam test facility at STFC Daresbury Laboratory. The facility was conceived to test advanced Free-Electron Laser (FEL) schemes that could be implemented on existing and future short wavelength FEL facilities [1]. The CLARA front-end, producing 50 MeV, 250 pC electron bunches from a 10 Hz S-band photoinjector gun and linac, was successfully commissioned in 2018 [2]. Installation of accelerator modules to raise the beam energy to 250 MeV will be complete by the end of 2023. The front-end photoinjector gun will be replaced with a novel 100 Hz high repetition rate gun (HRRG) [3] which has been commissioned on an adjacent beam line. The remaining beam line consists of three 4 m S-band (2998.5 MHz) linacs, X-band fourth harmonic cavity (4HC) phase-space lineariser, dielectric dechirper, variable magnetic bunch compressor (VBC) and dedicated diagnostics line including a transverse deflecting cavity (TDC) for 6D phase space characterisation. The original CLARA concept included a laser heater (which can be installed in future, if required) and reserved space within the electron hall for a seeded FEL, including seeding laser, modulators, undulators and photon diagnostics; although the FEL has not been funded, the space has been reserved for future applications. Beginning in 2018, access to electron beam from the CLARA front-end has been made available to users from academia and industry. This has enabled the testing of novel concepts and ideas in a wide range of disciplines, including the development of advanced accelerator technology [4], medical applications, [5] and novel particle beam acceleration [6] and deflection [7; 8] concepts. Based on increasing user demand for access, a decision has been made to design and build a dedicated beam line for user applications at the full CLARA beam energy of 250 MeV. As shown in Fig. 1, the beam line for Full Energy Beam Exploitation (FEBE) will be installed parallel to the space originally allocated for an FEL. There are only handful of test facilities worldwide which provide user access to mid-energy range (less than 300 MeV), high brightness electron beams to test proof of principle novel applications. A survey of beam dynamics challenges of such mid-energy high brightness facilities in Europe has recently been carried out and was presented at IPAC'23 [9]. In addition to CLARA, there are three other facilities in Europe; CLEAR@CERN [10; 11], ARES@DESY [12; 13], and SPARC_LAB@INFN [14] in this energy range. 
The CLEAR facility provides bunch trains of beam energy up to 230 MeV at a maximum repetition rate of 10 Hz; the number of micro-bunches in each train can be varied from 1-150 with spacing of 1.5 or 3 GHz, and bunch charge can be varied from 5 pC to 3 nC. The ARES facility provides single bunches up to 160 MeV at a maximum repetition rate of 50 Hz, and bunch charge can be varied from 3 fC to 280 pC. The SPARC_LAB facility provides single bunches up to 180 MeV at a maximum repetition rate of 10 Hz, and the bunch charge can be varied from 10 pC to 2 nC. In addition to CLARA, SPARC_LAB is the only facility with access to a high power laser (FLAME [15], 200 TW) to allow combined electron-laser experiments. A survey of CLARA stakeholders was performed to inform the design of the new beam line; the results of this survey identified three key design principles: 1. FEBE should provide access to a dedicated shielded experiment area ('hutch'), accessible to users without switching off CLARA. 2. The hutch should incorporate large experiment chambers compatible with a wide range of possible experiments. 3. The beam line should allow the synchronized interaction of electron bunches with a high-power (\(\sim\)100 TW) laser. The decision to provide a dedicated shielded hutch was taken following consultation with other medium-energy accelerator facilities. This arrangement allows on demand user access to the experimental area without fully switching off the accelerator, which reduces disruption, improves machine stability, and allows experiments to resume promptly after access. This type of access is not currently possible at other similar facilities in Europe, although CLEAR has developed robotic systems to minimise user access requirements during some types of experiment. In this article we report on the specification and design of the FEBE beam line, which is currently under construction and will begin commissioning in 2024. The article is broken down as follows: the layout of the machine and beam specification is presented in Sec. II; Sec. III reports the results of beam dynamics simulations from the CLARA photoinjector through to the FEBE beam dump; and Sec. IV details the key technical accelerator systems expected to underpin future user exploitation. The article concludes with a summary in Sec. V. ## II Layout and Beam Specification FEBE has been designed to support a variety of experiments across the fields of accelerator applications and accelerator technology. A user survey performed in 2018 established a particular interest in the exploitation for novel acceleration R&D including: external injection of electron bunches into a plasma accelerator stage (using both beam and laser-driven configurations); structure wakefield acceleration, encompassing use of metallic, dielectric and novel (e.g. metamaterial or photonic crystal based) structures; and dielectric laser acceleration, including both direct optical laser coupling to a solid structure or prior conversion to longer wavelength (THz-band). A schematic of the beam line is shown in Fig. 2; the position of the beam line within the CLARA facility is shown in Fig. 1. The requirements of novel acceleration techniques have been identified as the most challenging of the anticipated user requests, and has been used to drive the FEBE beam specification and underpinning accelerator technology (the latter outlined in Sec. IV). 
Characteristics of electron drive beams required for novel acceleration include: high charge and peak currents (order 1 kA) [16], short bunch lengths (order 10 fs), and small transverse beam sizes (\(<\)10 \(\mu\)m) [17]. More demanding experiments and applications may require a combination of multiple characteristics simultaneously [18]. To verify and optimise the various acceleration techniques, diagnostics for the characterisation of electron bunches both before and after interactions are required [19]. FEBE also expects to host a variety of target irradiation experiments, including R&D in Very High Electron Energy (VHEE) therapy, radiation generation, and electron detectors. This is a broad category and experiment chambers must be suitably flexible to accommodate a wide variety of samples (both vacuum and in-air), with suitable motion control for accurate positioning. Diagnostics to validate beam parameters delivered to target are also required. A schematic of the FEBE beam line is shown in Fig. 2 and is broken down into three sections: an arc and matching section connected to the main CLARA beam line; the FEBE experiment hutch, which brings the electron beam to a focus at two possible interaction points (IPs); and post-hutch transport line and beam dump. The FEBE experiment hutch is a \(10\times 5.4\times 3\) m\({}^{3}\) dedicated area for users to perform electron beam experiments. Figure 1: Schematic of the CLARA linear accelerator test facility, including the FEBE beam line, shielded FEBE hutch, 100 TW laser system, and space reserved for potential future applications (shaded blue area). The hutch is transversely offset from the main CLARA beam line using a FODO structure to provide -I transform between two dipoles (dipole angles of 14\({}^{\ast}\)), and optimised to minimise emittance growth due to Coherent Synchrotron Radiation (CSR).[20] This solution leads to a strong focusing, achromatic and non-isochronous arc with large natural second-order longitudinal dispersion, requiring correction by sextupole magnets at positions of high dispersion. Six quadrupole families allow matching to the main beam line for a range of electron beam configurations. The arc has a static \(R_{56}\) value of 7.7 mm with no residual dispersion. Longitudinal bunch compression of the electron beam within the hutch can be achieved using a combination of the FEBE arc and the upstream VBC. The FEBE arc includes a mask array positioned at a point of high dispersion for shaping of the bunch longitudinal distribution, including generation of: drive/main bunch-pairs with variable delay; a single ultra-short (order 1 fs duration) low charge bunch; and, as shown in Fig. 3, a drive bunch with a train of witness bunches. The mask is made from 5 mm tungsten and can be changed to meet user requirements. Alternatively, multiple electron bunches can be generated at the photocathode via manipulation of the photoinjector laser, before acceleration and transport to FEBE. The beam transport is designed to deliver a strong focus to two possible IPs (IP1/2), each located within a large-volume (\(\sim\) 2 m\({}^{3}\)) experiment chamber designated FEBE Experiment Chamber (FEC) 1/2. The double-IP design provides flexibility in experiment design and implementation. For example: the interaction between the electron beam and laser generated in FEC1 can be captured and probed with beam diagnostics installed in FEC2. 
The design also allows multiple independent experiments to be installed in FEC1 and FEC2 where compatible, minimising downtime for setup. To meet novel acceleration requirements, FEC1 includes the capability to combine electron beams with high power lasers at the IP. The laser is introduced into the beam line in a dedicated mirror box, whereupon it can be transported directly to FEC1, or focused and co-propagated (using a \(\sim\)3.5 m effective focal length off-axis parabola) with the electron beam to the IP. A second transport line bypassing the off-axis parabola and connecting to FEC1 allows shorter focusing geometries to be generated directly within the chamber. A downstream mirror box is used to separate and dump the laser following interaction. Further detail on the laser system and laser transport is given in Sec. IV. Apertures throughout the beam line must accept both the high power laser and the electron beam. This includes, for example, the four quadrupoles around FEC1, which must be both large aperture (radius \(\sim\)68 mm) and high gradient to achieve a tight focus at the IP. Some experiments (e.g., novel acceleration including plasma and dielectric) will aim to produce beams of higher energy than 250 MeV. All post-FEC1 magnets are therefore specified for up to 600 MeV, allowing high energy beam capture measurements in FEC2 and transport to the beam dump. The post-hutch beam line is designed to provide trans Figure 3: Left: photograph of the mask to be installed in FEBE for variable longitudinal shaping. Right: simulated 250 pC longitudinal profile at the FEBE FEC IP using the bottom hole of the mask, including triangular drive bunch (\(\sim\)1 ps RMS) and four trailing bunches (\(\sim\)100 fs RMS duration with 1 ps separation). Figure 2: Schematic of the new beam line for Full Energy Beam Exploitation (FEBE) by users, including arc (connecting to the upstream CLARA main beam line, Fig. 1), experiment hutch, and post-hutch beam dump with energy and emittance diagnostics. port to the beam dump housed in the main CLARA accelerator hall. A 20"dipole magnet is used to bend the beam to a large aperture Yttrium Aluminium Garnet (YAG) scintillation screen for beam imaging and energy spectrometry. The dispersion at the spectrometer YAG station is modified by a single quadrupole specified to achieve the zero-dispersion condition at the screen position. To optimise the beam imaging, the horizontal and vertical beta-functions are minimised at the YAG location, to maximise the energy resolution whilst also maximising the image intensity; this is achieved by utilising post-IP hutch quadrupoles as part of the dump line matching. Access to the hutch with the accelerator running is made possible via interlock of the FEBE arc dipoles to the machine personal safety system. The total beam power within the hutch is limited to 6.25 W, which offers sufficient flexibility with available bunch charge (maximum 250 pC), bunch repetition rate (maximum 100 Hz), \(\sim\)100 TW laser repetition rate (5 Hz) and final beam energy (250-2000 MeV). Radiation shutters on either side of the enclosure (in the CLARA tunnel) are used to shield the hutch from radiation generated from the main CLARA accelerator. The beam specification at the FEC1 IP is presented in Tab. 1 for four possible accelerator configurations. The Commissioning target high and low charge modes will form the baseline parameters made available to users for the initial beam exploitation period. 
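The 6.25 W limit quoted above corresponds directly to the product of bunch charge, beam energy and repetition rate; the following snippet reproduces the figure for the maximum commissioning configuration, with values taken from the text.

```python
# Cross-check of the 6.25 W in-hutch beam-power limit: average power equals
# bunch charge x beam energy (in eV) x repetition rate, since Q [C] x E [eV]
# gives the energy per bunch in joules.
charge_c = 250e-12     # 250 pC maximum bunch charge
energy_ev = 250e6      # 250 MeV beam energy
rep_rate_hz = 100      # 100 Hz maximum bunch repetition rate

energy_per_bunch_j = charge_c * energy_ev          # 0.0625 J
average_power_w = energy_per_bunch_j * rep_rate_hz
print(average_power_w)                             # -> 6.25
```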
Progression towards more demanding parameters including higher peak current and improved transverse quality will be made through periods of machine development. Machine development will also include development of appropriate diagnostic systems required to verify those parameters.[21] ## III Start-to-end simulations Start-to-end simulations were performed to evaluate and optimise the electron beam properties at the FEC1 IP. Simulations targeted machine configurations that deliver one or more beam properties relevant to the anticipated user experiments, including high peak current and charge density. Table 1 details the four main accelerator operating modes addressed through simulations. Particle tracking simulations were carried out using ASTRA [22], ELEGANT [23] and GPT [24], accounting for the non-linear effects (both longitudinal and transverse) of space charge and CSR. The simulation codes were accessed via a python-based framework (SimFrame) developed at STFC Daresbury Laboratory, which allows a single human-readable lattice file to be deployed consistently across several codes. ASTRA and GPT were primarily used to simulate the CLARA front end at low energy (below 35 MeV), where transverse and longitudinal space charge forces are the dominant emittance-diluting processes. Tracking through the injector with 256,000 macroparticles was found to produce good agreement between codes within a reasonable computation time. Above 35 MeV, ELEGANT was primarily used due to its processing speed and the inclusion of CSR effects in the bunch compressor and FEBE arc. These high-energy sections (above 35 MeV) of the machine were simulated with 32,000 macroparticles (down sampling the output of the injector simulations); high-fidelity simulations used for machine optimisation required 256,000 macro-particles. Extensive comparisons were made between ELEGANT and ASTRA at higher energies, showing small differences due to transverse space-charge effects (not included in ELEGANT) but similar longitudinal space-charge forces. ASTRA does not include adaptive space-charge meshing, which complicates particle tracking under bunch-compression scenarios. Bunch properties (for both low and high charge configurations in Tab. 1) at the FEC1 IP were inferred from the statistical distribution of particles from tracking simulations. To characterise the longitudinal profile of the bunch, the peak current, current full-width at half (FWHM) and quarter max (FWQM), and charge fraction (integrated charge within the current FWQM) were extracted. Due to the statistical nature of particle tracking, particularly with space charge and CSR effects, a smoothing algorithm (Kernel Density Estimator) was applied to the longitudinal charge distribution. The probability density function and associated cumulative density function were obtained and used to evaluate the \begin{table} \begin{tabular}{l c c c c} & \multicolumn{2}{c}{Commissioning} & \multicolumn{2}{c}{Machine Development} \\ Parameter & High Charge & Low Charge & High Charge & Low Charge \\ \hline Charge (pC) & 250 & 5 & 250 & 5 \\ \(\sigma_{t}\) (fs) & 100 & 50 & \(\leq 50\) & \(\ll 50\) \\ \(\sigma_{x}\) (\(\mu\)m) & 100 & 20 & 50 & \(\sim 1\) \\ \(\sigma_{y}\) (\(\mu\)m) & 100 & 20 & 50 & \(\sim 1\) \\ \(\sigma_{E}\) (\%) & \(<5\) & \(<1\) & 1 & 0.1 \\ \(\epsilon_{N,x}\) [\(\mu\)m-rad] & 5 & 2 & \(<5\) & \(<1\) \\ \(\epsilon_{N,y}\) [\(\mu\)m-rad] & 5 & 2 & \(<1\) & \(<1\) \\ \end{tabular} \end{table} Table 1: FEBE beam parameters at the FEC1 IP. 
All beam parameters are specified for 250 MeV. Symbol \(\sigma_{i}\) indicates RMS value; \(\epsilon_{N,i}\): normalised projected RMS emittance. Parameters for initial commissioning and those targeted thereafter following required periods of machine development are presented. full-width and charge fraction values. Each configuration was tuned to maximise the peak beam current, which was sensitive to the length of each slice in the time domain and average number of macro-particles per slice. For high peak current generation, the most effective accelerator actuators were those modifying the longitudinal phase space of the bunch: the amplitudes and phases of radio-frequency (rf) linacs, wakefield structures (including dielectric energy dechirping), and transverse bending structures which lead to coherent effects like CSR. The FEBE beam line contains only one of these structures, in the form of the FEBE arc and its associated CSR effects, where the arc is designed to minimise the induced CSR kicks, as described in [20]. In principle, the \(R_{56}\) of the arc can be adjusted by varying the strengths of the two quad families allowing \(R_{56}\) values between \(\pm\) 20mm; this however produces non-zero dispersion at the exit of the arc. The Twiss parameters for the nominal FEBE beam line are shown in Fig. 4. The main optimisation actuators for the FEBE beam are found in the preceding CLARA beam line, shown in Fig. 1. The first two-metre S-band injector linac (Linac 1) can act as either a standard low energy accelerating structure or a longitudinal bunching structure for short single-spike operation. The remaining three four-metre long S-band linacs (Linacs 2-4) provide acceleration up to a nominal beam energy of 250 MeV. A chicane-type VBC is located between Linac 3 and Linac 4, with a X-band 4HC immediately before the VBC for longitudinal phase space curvature compensation. The VBC is located at a nominal energy of \(\sim\)180 MeV, to maximise its effectiveness for the moderate compression required for the original CLARA FEL concept. The nominal FEBE arc \(R_{56}\) of +8 mm is of the opposite sign from the main bunch compressor \(R_{56}\) of \(-\)42 mm. For standard operating modes the FEBE arc is decompressing for a standard longitudinal chirp. Reversing the chirp using Linac 4 after the VBC is difficult; achieving maximum compression in the FEBE hutch therefore requires over-compression in the VBC, followed by re-compression during transport in the FEBE arc. This has the additional benefit of reducing the CSR effects in the FEBE arc itself, at the cost of a longitudinal cross-over in the VBC. The CLARA beam line also contains a dielectric-based dechirper structure used to minimise the projected energy spread at the FEL [25]. As shown in Fig. 2, this device is located a few metres upstream of the FEBE extraction dipole. The dechirper was modelled in ELEGANT as a 1D longitudinal wakefield element using a theoretically calculated Green's function, which has been experimentally verified for this structure [8]. Future work will incorporate full 3D simulations of the wakefield dynamics. Three different algorithms were used for optimisation of the combined CLARA FEBE beam line: a SciPy-based genetic algorithm, a SciPy-based Nelder-Mead simplex algorithm, and a custom-written Nelder-Mead simplex algorithm. The optimisation constraints are shown in Tab. 2. 
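Several of the constrained quantities in Tab. 2 (peak current, FWQM and FWQM charge fraction) are exactly those extracted from the KDE-smoothed longitudinal distribution described above. The snippet below sketches such an extraction from macroparticle arrival times; it is an illustrative re-implementation with equal macroparticle weights assumed, not the SimFrame analysis code.

```python
import numpy as np
from scipy.stats import gaussian_kde

def longitudinal_stats(t_s, bunch_charge_c, n_grid=1000):
    """Peak current, FWQM and FWQM charge fraction from macroparticle times.

    t_s: macroparticle arrival times in seconds (equal weights assumed).
    Illustrative re-implementation only, not the SimFrame analysis code.
    """
    kde = gaussian_kde(t_s)                      # smooth the noisy distribution
    t = np.linspace(t_s.min(), t_s.max(), n_grid)
    current = bunch_charge_c * kde(t)            # I(t) = Q * pdf(t), in amperes
    i_peak = current.max()
    above = current >= 0.25 * i_peak             # full width at quarter maximum
    fwqm = t[above][-1] - t[above][0]            # assumes a single-peaked profile
    charge_fraction = current[above].sum() * (t[1] - t[0]) / bunch_charge_c
    return i_peak, fwqm, charge_fraction

rng = np.random.default_rng(0)
times = rng.normal(0.0, 100e-15, 20_000)          # toy Gaussian bunch, 100 fs RMS
print(longitudinal_stats(times, 250e-12))         # ~1 kA peak for these numbers
```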
At high peak currents CSR effects can drastically increase the horizontal projected emittance, making beam transport through narrow experimental apertures difficult and restricting the minimum beam focus that can be achieved at each IP. Emittance growth was therefore introduced as an additional constraint. Optimisation of the transverse lattice parameters was found to vary with rf focusing from Linacs 1-4. Lattice matching was not performed on an iteration-by-iteration basis, but re-matching conducted when the tracked beta functions strayed too far from design values. In general, re-matching was only performed between optimisation sessions. Figure 5 shows projections of the bunch charge distribution at the FEC1 IP, calculated for the nominal simulation of the high charge (250 pC) operating mode. We utilize low bandwidth KDE functions to smooth the noisy longitudinal charge distribution and present both the linear charge density and cumulative density functions. Relevant length measurements are shown for each plane, indicating the spatial and temporal dimensions of the distributions. To evaluate the possibility of micro-bunching of the bunch longitudinal phase space, a semi-analytic model [26] was developed to compute the micro-bunching gain and energy modulation. This model computes the \begin{table} \begin{tabular}{c c} Constraint & Value \\ \hline Linac gradients & \(<\)25 MV/m \\ Beam energy & 240-260 MeV \\ Peak current @ IP & \(>\)2.5 kA \\ Slice \(\epsilon_{N,x}\) @ IP & \(<\)10 \(\mu\)m-rad \\ Slice \(\epsilon_{N,y}\) @ IP & \(<\)1 \(\mu\)m-rad \\ FWQM & 0.03 ps \\ FWQM charge fraction & \(>\)75\% \\ \end{tabular} \end{table} Table 2: Optimisation constraints used for particle tracking. Figure 4: Twiss parameters for the full FEBE beam line starting from the first dipole in the FEBE arc. longitudinal space charge and CSR impedance in the drift spaces, linacs and bunch compressors, using simulated beam parameters at various points. It allows for intra-beam scattering (IBS) effects to be included or excluded to demonstrate the potential impact on the damping of micro-bunching gain. The micro-bunching gain was calculated in stages with CLARA separated into sections. The longitudinal space charge-induced energy modulation was calculated for each linac and long drift section iteratively, as this parameter depends on the beam energy and beam size. Average values for the transverse beam size were used for each machine section and a linear increase in beam energy applied for the linacs. The final bunching factor at the exit of the VBC (as a function of uncompressed modulation wavelength) was then given by the impedance in the integral summed over all preceding sections. This process was repeated for the remaining linacs, drifts and the arc compressor. Micro-bunching gain is highly dependent on the uncorrelated energy spread (\(\sigma_{E,0}\)), which causes the exponential damping of modulations. ASTRA simulations of the HRRG and Linac 1 were used to determine \(\sigma_{E,0}=1.06\) keV. The gradient and phase of the gun and Linac 1 were set to (\(-9^{\circ}\), 120 MV/m) and (\(-16^{\circ}\), 21 MV/m) respectively. The value of \(\sigma_{E,0}\) was calculated as the mean value between \(\pm 2\) ps. A full summary of the main lattice parameters used for the calculation of the micro-bunching gain factor is shown in Tab. 3. The output from the semi-analytic model is shown in Fig. 
5(a), which shows the micro-bunching gain as a function of the initial modulation wavelength and the final bunching factor, after compression in both the variable bunch compressor and the FEBE arc. Even without including the damping effect caused by IBS, the maximum final bunching factor is around 7%. This level of bunching in the longitudinal plane is expected to be tolerable for most FEBE experiments. The final bunching factor drops by around a factor of ten when IBS is included. The energy modulation amplitude is shown in Fig. 5(b). The damping caused by scattering is only around a factor of two at the level of maximum energy modulation, but the calculation without scattering gives this maximum value at approximately 0.1% of the average beam energy (240 MeV). Figure 5: Charge densities and cumulative probability functions for an optimised 250 pC solution given at the FEBE FEC1 IP, for (a) the transverse horizontal, (b) vertical and (c) longitudinal planes. Beam parameters extracted from the density functions are presented. \begin{table} \begin{tabular}{c c} Parameter & Value \\ \hline Bunch charge [pC] & 250 \\ Initial beam energy [MeV] & 35 \\ Final beam energy [MeV] & 240 \\ Initial uncorrelated energy spread [keV] & 1.06 \\ Initial bunch length [ps] & 2.42 \\ Initial peak current [kA] & 0.05 \\ Normalised emittance [\(\mu\)m-rad] & 0.42 \\ VBC compression factor & 10 \\ Arc compression factor & 4 \\ \end{tabular} \end{table} Table 3: Beam and lattice parameters used for the computation of the micro-bunching gain. Beam parameters as measured after Linac 1, taken from simulation. ## IV Accelerator technology ### Beam diagnostics Beam diagnostics for FEBE are designed to verify to the targeted commissioning beam parameters as defined in Tab. 1 and the requirements of anticipated experiments. The beam line includes an array of well developed or commercially available diagnostic systems including: Ce:YAG screens, strip-line beam position monitors, integrated current transformers, and Faraday cup. The most demanding requirements on diagnostic systems are set by the beam parameters generated in novel acceleration techniques, including measurement of micrometer-scale transverse profiles, 10 fs bunch duration, emittance and broadband energy spectra at high resolution. Shot-by-shot characterisation is required, particularly in techniques were inherent instabilities may manifest variation of the output beam parameters; non-invasive diagnostics are required for control and optimisation. Meeting these challenges has required a dedicated diagnostics R&D programme, key elements of which are detailed below. #### iv.1.1 High dynamic range charge measurement The flexible beam delivery of FEBE will require both accurate and precise measurement of charges from \(<\)5 pC to 250 pC. Low charge machine setups will be important for the initial phases of some novel acceleration experiments, as the transverse spot sizes and bunch lengths will be significantly lower than at high charge, as shown in Tab. 1. A new electronics front end has been developed for the Faraday cups on CLARA which will maintain accuracy and a high signal-to-noise ratio across this charge range. This system is comprised of three components: an analog signal chain which converts signals from charge devices into an output pulse proportional to charge; a charge injection circuit for onboard calibration of the analog signal chain; and digital control circuitry to enable operators to adjust the settings of the analog front end. 
Further details are given in [27]. #### iv.1.2 Spectrometer dipole An in-vacuum permanent magnet dipole spectrometer has been designed to measure the energy spectra of beams generated in novel acceleration experiments. The spectrometer can measure beam energies in the range 50-2000 MeV to accommodate both beams with significant shot-to-shot instabilities and bunches with high energy spread. The spectrometer will nominally be installed in FEC2 as per experiment requirements. A long C-core dipole disperses electrons of different energies via the open side onto a long screen. The screen is positioned just below the spectrometer and angled at 45degto allow viewing by cameras positioned in air. The design is modular, consisting of up to five identical sections with lengths of 200 mm which co-locate using preci Figure 6: Semi-analytic model predictions of the (a) micro-bunching gain and (b) energy modulation amplitude as a function of initial modulation wavelength (\(\lambda_{0}\)) after compression in both the VBC and the FEBE arc, with and without the energy spread added by intra-beam scattering (IBS). Figure 7: Plot of the predicted trajectories of electrons in the full energy range through the 1 m long five-section spectrometer. \(z=0\) is located at the spectrometer mid-point, \(y=0\) is the beam entry height. The region of magnetic flux is shown in blue between \(\pm\)500 mm; trajectories for 2 GeV (solid blue), 600 MeV (orange dashed), 250 MeV (green dotted) and 50 MeV (red dash-dot) are plotted. sion metal dowels. All five sections (total magnet length of 1000 mm) will be used for GeV-scale experiments. A shortened version consisting of three segments at 600 mm total length may be employed for experiments at lower energy gain (\(\leq 600\) MeV). The magnetic flux is provided using blocks of Neodymium Iron Boron (NdFeB) with a typical remnant field of 1.41 T and Ni-cu-Ni coating for vacuum compatibility. Blocks are positioned on either side of the C-core gap by an Aluminium lattice. A total of 80 blocks will be used with dimensions \(49\times 38.5\times 18\) mm, with a predicted maximum central flux density of 0.72 T. The magnet horizontal aperture is 20 mm which is the maximum size required to maintain sufficient flux density for GeV-scale use. The predicted trajectories at different electron energies, encompassing a variety of foreseen post-IP beam configurations, are shown in Fig. 7. #### iii.1.3 Longitudinal diagnostics FEBE will utilise multiple longitudinal diagnostic systems to provide measurements of relative bunch compression, RMS bunch length, and detailed longitudinal profile reconstruction. A bunch compression monitor based on coherent transition radiation will be installed at the end of the FEBE arc. The device uses mesh filters in order to act as a rudimentary spectrometer and provide indicative bunch duration information. The system will operate across a bandwidth of \(0.1-5\) THz, measuring minimum RMS bunch lengths \(\lesssim 50\) fs for charges as low as 100 pC. The detector is a bespoke pyroelectric system, employing variable gain, active noise cancellation, and Winston cones for radiation capture. Both solid and holed (for coherent diffraction radiation) targets can be used; the latter offers potential for non-invasive measurements of shot-to-shot compression jitter. Combined with accurate measurements using the upstream CLARA TDC, the bunch compression monitor will ensure maximally compressed beams are delivered to FEBE for user experiments. 
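The match between the quoted 0.1-5 THz monitor bandwidth and the ~50 fs target bunch lengths can be checked against the bunch form factor, which sets where the coherent enhancement rolls off. The snippet below assumes a purely Gaussian longitudinal profile and is only an order-of-magnitude check, not part of the monitor design.

```python
import numpy as np

# Order-of-magnitude check that the 0.1-5 THz band is matched to ~50 fs bunches.
# For a Gaussian bunch of RMS length sigma_t, the power form factor that scales
# the coherent emission is |F(f)|^2 = exp(-(2*pi*f*sigma_t)**2), so the coherent
# signal rolls off around f ~ 1 / (2*pi*sigma_t).
sigma_t = 50e-15                                   # 50 fs RMS bunch length
f = np.array([0.1e12, 1e12, 3e12, 5e12])           # frequencies across the band, Hz
form_factor_sq = np.exp(-(2 * np.pi * f * sigma_t) ** 2)
print(dict(zip((f / 1e12).round(1), form_factor_sq.round(3))))
print(1 / (2 * np.pi * sigma_t) / 1e12)            # roll-off ~3.2 THz for 50 fs
```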
A passive streaker based on a dielectric wakefield (DLW) accelerator structure reported in [7] will be used to make bunch length measurements across the range of \(\sim\)50 fs to \(>1\) ps. As compared to the previous design, the FEBE streaker uses two orthogonal waveguides mounted as a pair to compensate the effect of quadrupole-like wakefields which lead to nonlinear streaking forces; this improves the resolution of the device by a factor of \(\sim\)3. The simulated resolution as a function of bunch length (at 250 pC) is plotted for single and paired structure arrangements and compared in Fig. 9. The DLW streaker resolution does not scale favourably with reducing beam charge and will not be able to resolve the bunch length Figure 8: Engineering diagram of FEBE beam line, highlighting areas with diagnostic systems undergoing R&D. These areas are supported by standard beam diagnostics including Ce:YAG screens, beam position monitors and integrated current transformers. Figure 9: Simulated average resolution of the DLW streaker at 250 pC for varying bunch length \(\sigma_{t}\) for Gaussian (red lines) and flat-top (black) profiles, where flat-top has total length \(4\sigma_{t}\). The improvement in resolution from using a pair of orthogonal oriented structures (solid lines) as compared to a single structure (dashed) is shown. at low charges. #### iv.1.4 Emittance diagnostics Single-shot emittance diagnostics will have high impact in novel acceleration experiments where potential beam instabilities hinder application of conventional multi-shot techniques (e.g. quadrupole scan technique). FEBE will utilise emittance diagnostics based on imaging of optical transition radiation (OTR), which is a well established technique for measuring transverse beam sizes and beam divergences. As both spatial and angular information from the electron beam is encoded in OTR, direct measurement of beam emittance is possible: this requires localising divergence measurements to discrete regions of the spatial OTR image, analogous to making an emittance measurement using a mechanical slit or "pepper-pot." Two techniques are under investigation to produce an OTR based "optical pepper-pot": optical masking using a digital micro-mirror device (DMD) [28], and imaging with a micro-lens array (MLA) [29]. #### iv.1.5 Virtual diagnostics Machine learning (ML) based algorithms can be used to predict beam parameters at a given location from a set of input machine parameters the ML model has been trained on. Such 'virtual diagnostics' are particularly relevant in experiments where diagnostics cannot be installed, e.g. due to spatial and mechanical constraints, or where a non-invasive measurement is required but not available [30]. As part of the FEBE design, virtual diagnostics for prediction of the IP beam size have been developed, which make an inference based on an image of the beam either up- or downstream of the IP [31]. This allows, for example, non-invasive measurements of the electron beam to be performed with the FEBE 100 TW laser running through the FEC1 IP, which prevents the insertion of any screen. Future development will focus on virtual diagnostics for longitudinal phase space prediction, building upon recent work [32] and utilising data from the FEBE bunch compression monitor. 
The use of information in the form of non-invasive shot-to-shot spectral measurements has been shown to improve the accuracy of phase space predictions [33], while recent work based solely on experimental data has demonstrated predictions at significantly higher resolution to previous studies [34]. ### Laser system and laser transport The FEBE beam line will have access to a high power (100 TW: \(\sim\)2.5 J, \(\sim\)25 fs pulse duration, 5 Hz) Ti:Sapphire laser which can be combined with electron beam in FEC1. The laser requirements are compatible with the demands of plasma acceleration and can be used for ionization and/or wakefield excitation. The laser system including vacuum compressor is housed in a dedicated laser room immediately on top of the FEBE hutch. Light from the compressor is transported in vacuum from the laser room to the hutch via a radiation-shielded periscope; this allows personnel to access the laser room with electron beam in the FEBE hutch. The laser transport system inside the hutch is shown in Fig. 10 and facilitates two primary transport arrangements: 1) collinear propagation of laser light with the electron beam to a focus in FEC1, and 2) delivery of the laser without focusing directly to FEC1 for general exploitation, which can also be used for a probe line. Laser focusing on the co-linear path is achieved using a \(\sim\)3.5 m off-axis parabola housed in FMBOX1, with laser and electron beam combined on a hold mirror. This is the longest focus which can be generated while keeping the laser within the footprint of the FEBE hutch. The mirror box includes an adaptive optic system for focal spot optimisation, consisting of a deformable mirror and wavefront sensor mounted at the conjugate plane to the deformable mirror. Leakage paths into air will be utilised for other laser diagnostics, including shot-by-shot measurements of pulse energy, spectrum and beam pointing. The laser is allowed to expand after the focus to a safe intensity before being separated from the electron beam using a holed mirror in FMBOX2, which also includes space for laser exit mode diagnostics and laser termination. ### Timing and synchronization FEBE laser synchronization will be performed using systems developed for other systems on CLARA, aiming to deliver \(<\)10 fs laser-electron beam synchronization. The timing architecture is similar to that employed at larger x-ray FEL facilities [35] and is split into three Figure 10: Laser transport within the FEBE hutch, with electrons from CLARA travelling FMBOX1-FEC-FMBOX2 (left to right). The laser is brought to a common focus with the electron beam at IP1. OAP: off-axis parabola; IP: interaction point; DM: deformable mirror; WFS: wavefront sensor; LD: laser dump. key systems: 1. Ultrastable optical clock based on a commercial low-noise fibre laser system (1560 nm, 250 MHz Er/Yb fiber oscillator; Origami, OneFive), phaselocked to a rf master oscillator for long-term stability. 2. The CLARA stabilized optical timing network to deliver the optical clock to several clients on the accelerator, with active correction of the fiber length via measurement of the round-trip time. 3. End-station synchronization (rep rate locking) to the optical clock, including laser-laser (via optical cross-correlation) and laser-rf synchronization. Two stabilized links will be routed from the CLARA optical timing network to the FEBE beam line; one for synchronization of the 100 TW laser, and a second for a Bunch Arrival-time Monitor (BAM). 
To improve the synchronisation of the laser, a two-colour balanced optical cross-correlator (TC-BOXC) based on periodically-poled lithium niobate waveguide [36] has been developed. The TC-BOXC will be used to rep rate lock the FEBE Laser master oscillator to the CLARA optical timing network. Rep rate locking will be based on a hybrid locking configuration with that uses an RF mixer and photo-detector to perform coarse locking prior to locking the laser with the more sensitive TC-BOXC. The waveguide based TC-BOXC has a measured resolution of 0.97 mV/fs and a theoretical resolution of 4.2 mV/fs. The theoretical resolution of this device provides an order of magnitude improvement over conventional bulk crystal cross correlators. The TC-BOXC is an all fibre device which is more robust against environmental fluctuations and requires less (valuable) optical table space compared to its free space counterpart. The BAM will be based on PCB substrate with rod-shaped pick-ups in a similar design to [37]. The initial performance target is 10 fs resolution for bunch charges as low as \(<\)5 pC. Initial tests are expected to be performed with the BAM mounted within FEC2. ### Vacuum management The FEBE vacuum design accommodates a wide range of possible experiments. Of these experiments, those targeting plasma acceleration are the most demanding on vacuum management due to the associated gas injection at high pressure. The FEBE design transfers vacuum management to outside of the experiment chambers by introducing aperture restrictions throughout the FEBE beam line. This provides maximum flexibility to the user as required, however results in gas particles propagating up and downstream of the source. This increases the pressure in the FEBE arc and main CLARA accelerator above nominal levels. High gas pressures lead to strong beam-gas scattering effects which increase the electron beam normalised emittance. Following the work in [38] we have assessed the effects of high gas loading on beam emittance for representative experimental gas profiles. The approach utilised modelled Twiss parameters throughout the line via linear interpolation; the gas profile was built from a series of defined pressure points and was similarly interpolated. The incoming horizontal and vertical electron beam emittance was 5 and 0.4 \(\mu\)m-rad. The assessment is performed for 0.01 mbar of gas pressure at the IP (worst case estimate, based on a gas jet positioned at \(z=0\) m with 100 bar backing pressure and 100 ms opening time), with various aperture restrictions in the FEC1, as well as beam apertures in FMBOX1/2 due to mirrors. Simulations were performed using MolFlow [39]. These demonstrate \(\sim 10^{-6}\) mbar at the beam shutter on the outside wall of the FEBE hutch. The impact of Hydrogen and Argon gas species were compared. Fig. 11 shows the gas density profile and matched beta-functions for the FEBE beam line between the dipole magnet and the end of the hutch, and the vertical normalised emittance for both gas species. As expected, due to the higher molecular weight, there is a significant increase in emittance for Argon as compared to Hydrogen. The vertical emittance increases by almost an order of magnitude, primarily due to the combination of relatively high beta-function and gas pressure seen between FMBOX1 and the IP. Emit Figure 11: (a) Gas pressure profile for Hydrogen and Argon and (b) normalised vertical emittance and vertical beta-functions \(\pm\)10 m from the FEC1 IP (located at \(s=0\) m). 
Base gas pressure of 0.01 mbar at the IP is assumed, with a pressure reduction to \(10^{-6}\) mbar at either end of the vacuum line. tance blow-up inside the FEC1 chamber is minimal as the transverse beta-functions rapidly decrease in this region. Due to the significantly larger expected horizontal emittance for these experiments, \(\sim\)5 mm-mrad, only small changes are seen in the horizontal plane. The vacuum management system will be updated following the results of initial operations and testing of gas targets. Should gas loading exceed expected values in practice, an alternative design solution based on management close to target (similar to as deployed by [40]) will be used. ### Longitudinal profile shaping Optimisation of the longitudinal profile of the photoinjector laser provides two key benefits to the electron beam optics. The first benefit is flexible variation and correction of the laser profile to improve the electron bunch longitudinal and transverse optics and mitigate space-charge and other collective effects in the beam transport. The second benefit is modification of the longitudinal profile to maximise the potential of novel acceleration experiments through creation of high transformer ratio profiles, or for the generation of drive and witness bunches in a single RF bucket. The capability to shape the longitudinal profile of electron bunches has therefore been developed for use in FEBE and, while predominantly targeted to drive-witness wakefield beam experiments, will be made available to all users. Longitudinal laser shaping is performed via control of the photoinjector laser profile,[41] and can be used either standalone or in combination with other methods described in Secs. II and III above, including use of the mask installed in the FEBE arc. Control of the photoinjector laser is performed using an acousto-optic modulator integrated into a 4-f spectral filter [42]. To shape the laser pulse temporally, the spectral phase of the pulse can be adjusted by varying the temporal phase of the acoustic wave (created using an RF pulse generator) which drives the modulator. Pulses can also be shaped temporally by varying the temporal amplitude of the acoustic wave, however this approach reduces the output pulse energy and is undesirable when maintaining high charge operation from the photoinjector. In order to produce a particular target pulse temporal intensity profile, a suitable spectral phase mask must first be identified. This is nontrivial for arbitrary shapes, requiring full knowledge of both the phase and amplitude in either the spectral or temporal domain to fully define the pulse. With only temporal and the spectral intensity fully known, a suitable phase mask must be identified to fully specify the output pulse. For maximum flexibility and rapid customisation, a machine learning model has been developed to find the required phase mask to achieve a (user defined) target pulse temporal profile. Training data was generated from \(10^{6}\) pairs of spectral phase profiles and matched temporal intensity profiles, with a further \(10^{5}\) pairs generated for the test data set. To encode the physical limitation of the acousto-optic modulator bandwidth into the network, the model includes a regulariser that acts to limit the gradient of the spectral phase profile to within physical limits. High quality (and physically realisable) matches to target have been achieved for a range of pulse profiles, as shown in Fig. 12. 
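The training pairs described above follow from the standard Fourier relationship between a pulse's spectral amplitude/phase and its temporal intensity. The snippet below sketches this forward map for an invented Gaussian spectrum with a quadratic (chirp) phase; the numerical values are illustrative and are not the CLARA photoinjector laser parameters.

```python
import numpy as np

# Forward map used to build (spectral phase -> temporal intensity) training
# pairs: the temporal field is the inverse Fourier transform of the spectral
# amplitude with the phase mask applied. The Gaussian spectrum and quadratic
# (chirp) phase below are illustrative, not the actual laser values.
n = 4096
freq = np.fft.fftfreq(n, d=5e-15)                      # envelope frequency grid, Hz
spec_intensity = np.exp(-0.5 * (freq / 20e12) ** 2)    # Gaussian spectrum, ~20 THz RMS
phase = 2.0e-27 * (2 * np.pi * freq) ** 2              # quadratic spectral phase (chirp)

field_t = np.fft.ifft(np.sqrt(spec_intensity) * np.exp(1j * phase))
intensity_t = np.abs(np.fft.fftshift(field_t)) ** 2

t = np.fft.fftshift(np.fft.fftfreq(n, d=freq[1] - freq[0]))  # conjugate time axis, s
rms_t = np.sqrt(np.sum(t**2 * intensity_t) / np.sum(intensity_t))
print(rms_t)   # RMS duration of the chirped pulse; shrinks as the chirp -> 0
```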
The system will in future be trained using live data to explore and account for expected deviations between simulation and practice. Deployment of the ML system for control of the laser pulse shape has begun, with machine operators able to specify arbitrary pulse shapes for which the ML system produces an appropriate spectral phase profile. This generated profile is then sent to the laser control system and applied to the acousto-optic modulator, with a total time Figure 12: Demonstration of solutions to deliver arbitrary photoinjector laser temporal profiles via spectral phase manipulation, comparing target (red dashed lines) and predicted (black solid) profiles. Results for different profiles are compared, including: a) double-ramp, b) triple flat-top, c) cut sine-wave, and d) triangular. between user request and laser activation of less than 100 ms. ## V Summary A new beam line for Full Energy Beam Exploitation (FEBE) has been designed and is currently undergoing installation on the CLARA test facility at STFC Daresbury Laboratory. The goal of this beam line is to support a wide variety of user-driven experiments utilising 250 MeV ultrabright electron bunches delivered at repetition rates up to 100 Hz. The beam line incorporates two large volume experiment chambers with a shielded user hutch, for ease of user access and flexibility in setup of novel experiment apparatus. A key component of the foreseen future experiment programme is novel acceleration, with expressions of interest for plasma acceleration (laser and beam-driven) and structure wakefield acceleration. This has driven key components of the beam line design, including beam diagnostics for GeV-scale, 10 fs duration and micrometer-scale transverse profile electron bunches. The beam line includes the infrastructure for combining electron bunches with a high power (100 TW) laser, housed immediately above the beam line and brought into the hutch via a dedicated vacuum laser transport. The laser will be synchronized to the CLARA bunches using an optical timing architecture similar to that applies at larger x-ray FEL facilities. CLARA will exit shutdown for installation of accelerator modules third quarter of 2023, and will proceed to enter a period of technical and machine commissioning running through to the second half of 2024; installation and commissioning of the 100 TW laser will then take place, running through to early 2025. An open call to the community is expected to be issued mid 2024 for beam time early 2025, but will be contingent on the results of machine commissioning. Future operations are expected to be divided between user access and ongoing machine development. Machine development will focus on achieving more challenging electron bunch parameters and configurations, as well as improving reliability in beam delivery. Time will also be used to ensure CLARA can continue to be a testing ground for future UK accelerator facilities, such as UK XFEL [43; 44].
2309.04459
Subwords as Skills: Tokenization for Sparse-Reward Reinforcement Learning
Exploration in sparse-reward reinforcement learning is difficult due to the requirement of long, coordinated sequences of actions in order to achieve any reward. Moreover, in continuous action spaces there are an infinite number of possible actions, which only increases the difficulty of exploration. One class of methods designed to address these issues forms temporally extended actions, often called skills, from interaction data collected in the same domain, and optimizes a policy on top of this new action space. Typically such methods require a lengthy pretraining phase, especially in continuous action spaces, in order to form the skills before reinforcement learning can begin. Given prior evidence that the full range of the continuous action space is not required in such tasks, we propose a novel approach to skill-generation with two components. First we discretize the action space through clustering, and second we leverage a tokenization technique borrowed from natural language processing to generate temporally extended actions. Such a method outperforms baselines for skill-generation in several challenging sparse-reward domains, and requires orders-of-magnitude less computation in skill-generation and online rollouts.
David Yunis, Justin Jung, Falcon Dai, Matthew Walter
2023-09-08T17:37:05Z
http://arxiv.org/abs/2309.04459v1
# Subwords as Skills: Tokenization for ###### Abstract Exploration in sparse-reward reinforcement learning is difficult due to the requirement of long, coordinated sequences of actions in order to achieve any reward. Moreover, in continuous action spaces there are an infinite number of possible actions, which only increases the difficulty of exploration. One class of methods designed to address these issues forms temporally extended actions, often called skills, from interaction data collected in the same domain, and optimizes a policy on top of this new action space. Typically such methods require a lengthy pre-training phase, especially in continuous action spaces, in order to form the skills before reinforcement learning can begin. Given prior evidence that the full range of the continuous action space is not required in such tasks, we propose a novel approach to skill-generation with two components. First we discretize the action space through clustering, and second we leverage a tokenization technique borrowed from natural language processing to generate temporally extended actions. Such a method outperforms baselines for skill-generation in several challenging sparse-reward domains, and requires orders-of-magnitude less computation in skill-generation and online rollouts. ## 1 Introduction Reinforcement learning (RL), the learning paradigm that allows an agent to interact with an environment and collect its own data, is a promising approach to learning in many domains where human data is too financially expensive or otherwise intractable to collect. Though it began with dynamic programming in tabular settings, the recent use of neural networks as function approximators has lead to great success on many challenging learning tasks (Mnih et al., 2013; Silver et al., 2017; Gu et al., 2017). These successful tasks tend to have some particular properties. In some cases, it is Figure 1: A sample of some of the “skills” that our method identifies for (a) the AntMaze and (b) Kitchen environments, where the transparency is higher for poses earlier in the trajectory. simple to define a reward function that yields reward at every step of interaction (the "dense" reward setting), like directional velocity of a robot learning to walk (Haarnoja et al., 2018). In other cases, the environment dynamics are known, as in the case of Chess or Go (Silver et al., 2017). However, for many natural tasks like teaching a robot to make an omelet, it is much more straightforward to tell when the task is completed without knowing how to automatically supervise each individual step. Learning in these "sparse" reward settings, where reward is only obtained extremely infrequently (e.g., at the end of successful episodes) is notoriously difficult. In order for a learning agent to improve its policy, the agent needs to explore its environment for long periods of time, often in a coordinated fashion, until it finds any reward. One class of solutions to this problem involves including additional task-agnostic dense rewards as bonuses that encourage agents to explore the state space (Pathak et al., 2017; Burda et al., 2018). These methods have seen success in settings where it is extremely computationally cheap to collect interactions because they encourage exploration optimistically to all novel states. Another class of solutions to the exploration issue is to jumpstart the function approximator to be used in reinforcement learning by training it on some pretext task (Yarats et al., 2021; Liu and Abbeel, 2021). 
Such solutions make sense particularly in visual domains, where learning visual features from scratch at the same time as RL may lead to poor generalization. However, given the poor performance that neural networks typically exhibit on out-of-distribution examples (Cobbe et al., 2019, 2020), such methods either require access to observations in the same environment as they will eventually be deployed, which may be impossible for some new task, or they require significantly many samples, which gets back to the intractability of collecting data. A third class of methods aims to create temporally extended actions, or "skills", from interactions or data. A particular subclass of methods learns skills that are conditioned on the observations (Singh et al., 2020; Pertsch et al., 2021; Ajay et al., 2020; Sharma et al., 2019; Eysenbach et al., 2018; Park et al., 2022, 2023), which means that the deployment scenario needs to match the data. Others relax this assumption (Lynch et al., 2020; Pertsch et al., 2021; Bagatella et al., 2022) so that such skills can easily be transferred to some new domain as long as the action space remains the same. This has the potential to speed up exploration in new tasks for which it is not easy to collect data a priori (i.e., few-shot), which can lead to faster task adaptation. However, these recent efforts in skill learning all require lengthy pretraining phases due to their reliance on neural networks in order to learn the skills. Inspired by the recent cross-pollination of natural language processing (NLP) techniques in offline RL (Chen et al., 2021; Janner et al., 2021; Shafullah et al., 2022), we take a different approach. Like the long-range coordination required for exploration in sparse-reward RL, language models must model long range dependencies between discrete tokens. Early on, these tokens typically took the form of characters or words. Character input led to extremely long sequences, which are computationally expensive to process, and require language models to both spell correctly and model inter-word relations. On the other hand, word-level input ultimately results in the model poorly capturing certain rare and unseen words. The solution was to create "subword" tokens somewhere in between individual characters and words, so that models would not be required to spell, but would be able to express anything in the vocabulary (Gage, 1994; Sennrich et al., 2015; Provikov et al., 2020; Kudo, 2018; Schuster and Nakajima, 2012; He et al., 2020). In light of this development on tokenization, we propose a tokenization method for learning skills. Following prior work (Dadashi et al., 2021; Shafullah et al., 2022), we discretize the action space and use a modified byte-pair encoding (BPE) scheme (Gage, 1994; Sennrich et al., 2015) to obtain temporally extended actions. As we demonstrate, such a method benefits from extremely fast skill-generation (minutes, compared to hours for neural network-based methods), significantly faster rollouts and training due to open-loop subword execution that does not require an additional neural network, and strong results in several sparse-reward domains. ## 2 Related Work **Exploration in RL:** Exploration is a fundamental problem in RL, particularly when reward is sparse. A common approach to encouraging exploratory behavior is to augment the (sparse) environment reward with a dense bonus term that biases toward exploration. 
This includes the use of state visitation counts (Poupart et al., 2006; Lopes et al., 2012; Bellemare et al., 2016) and state entropy objectives (Mohamed and Jimenez Rezende, 2015; Hazan et al., 2019; Lee et al., 2019; Pitis et al., 2020; Liu and Abbeel, 2021; Yarats et al., 2021) that incentivize the agent to reach "novel" states. Related, "curiosity"-based exploration bonuses encourage the agent to take actions in states the effect of which is difficult to predict using a learned forward (Schmidhuber, 1991; Chentanez et al., 2004; Stadie et al., 2015; Pathak et al., 2017; Achiam and Sastry, 2017; Burda et al., 2018) or inverse dynamics model (Haber et al., 2018). Burda et al. (2018) propose a random network distillation exploration bonus based upon the error in observation features predicted by a randomly initialized neural network. **Temporally Extended Actions:** Another long line of work explores temporally extended actions due to the potential for such abstractions to improve learning efficiency. These advantages are particularly pronounced for difficult learning problems such as sparse reward tasks for which action abstractions enable more effective exploration (Nachum et al., 2018) and simplify the credit assignment problem. Hierarchical reinforcement learning (HRL) Dayan and Hinton (1992); Kaelbling (1993); Sutton (1995); Boutilier et al. (1997); Parr and Russell (1997); Parr (1998); Sutton et al. (1999); Dietterich (2000); Barto and Mahadevan (2003); Bacon et al. (2017); Vezhnevets et al. (2017) considers the problem of learning policies with successively higher levels of abstraction (typically two), whereby the lowest level considers actions directly applied in the environment while the higher levels reason over temporally extended transitions. **Hierarchical RL:** The options framework (Sutton et al., 1999) provides a standardization of HRL in which an option is a terminating sub-policy that maps states (or observations) to low-level actions. At the next level of abstraction, a higher-level policy reasons over which of these options to call until termination at which point the policy chooses the next option to execute. Options are often either prescribed as predefined low-level controllers or learned via subgoals or explicit intermediate rewards (Dayan and Hinton, 1992; Dietterich, 2000; Sutton et al., 1999). Konidaris and Barto (2009) learn a two-level hierarchy by incrementally chaining options ("skills") backwards from the goal state to the start state. Nachum et al. (2018) propose a hierarchical learning algorithm (HIRO) that learns in an off-policy fashion and, in turn, is more sample-efficient than typical HRL algorithms, which learn on-policy. Achieving these sample efficiency gains requires addressing the instability typical of off-policy learning, which are complicated by the non-stationarity that comes with jointly learning low- and high-level policies. Levy et al. (2017) use different forms of hindsight (Andrychowicz et al., 2017) to address similar instability issues that arise when learning policies at multiple levels in parallel. Kulkarni et al. (2016) combine hierarchical DQN network with a goal-based intrinsic reward bonus to further improve exploration in difficult RL domains. Some simple instantiations of options include repeated actions (Sharma et al., 2017) and self-avoiding random walks (Amin et al., 2020). 
**Skill Learning from Interaction:** Option-like temporal abstractions are not only useful as primitives for hierarchical RL, but are also benefit learning in sparse reward environments as well as for long horizon tasks. In addition to the methods mentioned above in the context of HRL, there is an existing body of work that seeks to learn abstract actions as more generally useful "skills". A number of techniques define the utility of skills in terms of their diversity and coverage of the space of behaviors or states (i.e., effective skills are those that can be distinguished from one another) (Daniel et al., 2012; Gregor et al., 2016; Eysenbach et al., 2018; Warde-Farley et al., 2018; Park et al., 2022; 2023). Without access to reward, Eysenbach et al. (2018) learn skills that maximize a self-supervised information theoretic objective with maximum entropy policies that encourage diversity in the resulting skill set. Sharma et al. (2019) identify skills that are not only diverse, but also give rise to predictable behavior according to a learned dynamics model. **Skill Learning from Demonstrations:** Most related to our setting is a line of work that explores better exploration from demonstration data (Lynch et al., 2020; Ajay et al., 2020; Singh et al., 2020; Pertsch et al., 2021; Bagatella et al., 2022). As an example, Lynch et al. (2020) learn a VAE on chunks of action sequences in order to generate a temporally extended action by sampling a single vector. Ajay et al. (2020) follow a similar approach, but use flow models on top of entire trajectories, and only rollout a part of the generation at inference time. Some of these methods (Ajay et al., 2020; Singh et al., 2020; Pertsch et al., 2021) condition on the observations when learning skills, which leads to more efficient exploration, but such conditioning means that any skill that is learned will need to be deployed in the same environment as the data was collected. Others Lynch et al. (2020); Bagatella et al. (2022) simply condition on actions, which means that the skills can be reused in any domain that shares the same action space. In an effort to learn more generalizable skills, we follow this latter example. There is also a related prior work that applies grammar-learning to online RL Lange and Faisal (2019), but such a method learns an ever-growing number of longer actions, which poses significant issues in the sparse-reward setting, as we discuss later. ## 3 Method ### Byte-Pair Encoding Byte-pair encoding (BPE) was first proposed as a simple method to compress files (Gage, 1994), but it has recently been used to construct vocabularies for NLP tasks in between the resolution of characters and whole-words (Sennrich et al., 2015). With character vocabularies, the vocabulary is small, but the sequence lengths are large. Such long sequences are extremely computationally burdensome to process for previous language models, but especially for the current generation of Transformers. In addition, making predictions at the character level imposes a more difficult task on the language model: it needs to spell everything correctly, or make a long-coordinated set of predictions, not unlike the requirement on action sequences for sparse-reward exploration. Whole-word vocabularies shorten the sequence lengths and make the prediction task easier, but if a word is rare or, even worse, unseen in the training data, the outputs of the language model may not be correct in many cases. 
Subword vocabularies have emerged as a sweet-spot between these two extremes and widely used in large language models (Schuster and Nakajima, 2012; Sennrich et al., 2015; Kudo, 2018; Provilkov et al., 2020; He et al., 2020). Given a long sequence of tokens and an initial fixed vocabulary, BPE consists of two core operations: (i) compute the most frequent pair of tokens and add it to the vocabulary, and (ii) merge all instances of the pair in the string. These two steps of adding tokens and making merges alternate until a fixed maximum vocabulary size is reached. ### Discretizing the Action Space However, in order to run BPE, it is necessary to have an initial vocabulary \(\mathcal{V}\) as well as a string of discrete tokens. In a continuous action space, one simple way to form tokens is through clustering. Prior work has leveraged these ideas in similar contexts (Janner et al., 2021; Shafiullah et al., 2022; Jiang et al., 2022) and we follow suit. Suppose we are given a dataset of \(N\) trajectories that involve the same action space as our downstream task \[\mathcal{D}=\left\{(o_{ij},a_{ij})_{i}|i\in\mathbb{N}\cap[0,N),\;j\in\mathbb{N }\cap[0,n_{i}),\;o_{ij}\in\mathbb{R}^{d_{\text{act}}},\;a_{ij}\in\mathbb{R}^{ d_{\text{act}}}\right\},\] where \(a_{ij}\) and \(o_{ij}\) denote actions and observations, respectively. For simplicity, we perform the \(k\)-means clustering algorithm with the euclidean metric on the action space to form a vocabulary of \(k\) discrete tokens \(\mathcal{V}=\{v_{0},\dots,v_{k}\}\). Our default choice for \(k\) will be two times the number of degrees-of-freedom of the original action space, or \(2\cdot d_{\text{act}}\). This is very similar to the action space of Shafiullah et al. (2022) without the residual correction. Figure 2: Abstract representation of our method. Given demonstrations in the same action space as our downstream task, we discretize the actions and apply tokenization techniques to recover “subwords” that form a vocabulary of skills. We then train a policy on top of theses skills for some new task. Because we do not condition on observations during skill generation, we only require a common action space. ### Scoring Merges In NLP, we often have access to a large amount of text data from (mostly) correct human authors. However, for most robotics applications we do not have the same quantity of near-optimal (or even suboptimal) demonstrations. As a result, it is undesirable to merge tokens based on frequency alone like BPE does. In particular, if the demonstrations are suboptimal or contain no reusable chunks, such subtrajectories would not be useful for reinforcement learning. Instead, we will consider merging on a proxy for the distance traveled in the observation space in order to encourage the creation of skills that are useful for many tasks. More formally, suppose that two neighboring subwords \(w_{1}\) and \(w_{2}\) correspond to the trajectories \(\tau_{1}=\{(o_{1},a_{1}),\ldots,(o_{n},a_{n})\}\) and \(\tau_{2}=\{(o_{n+1},a_{n+1}),\ldots,(o_{m},a_{m})\}\). For an instance of the subword \(w=\text{concat}(w_{1},w_{2})\) consisting of the entire trajectory \(\tau=\text{concat}(\tau_{1},\tau_{2})\), we associate the vector \(q_{\tau}=\frac{1}{m}\sum_{i=1}^{m}(o_{i}-o_{1})\). This vector is analogous to the average "heading" of the subword, which ignores short-term, high-frequency motion. 
In order to obtain a vector that summarizes \(w\), we compute the mean of such instances \(q_{w}=\mathbb{E}_{(\tau_{1},\tau_{2})\in\mathcal{D}}\left[q_{\tau}\right]\), which takes into account possible observation noise at different instances. Given an existing vocabulary of subwords \(\mathcal{W}=\{w_{0},\ldots,w_{n-1}\}\) and their corresponding vectors \(\mathcal{Q}=\{q_{0},\ldots,q_{n-1}\}\), we can compute the mean \(\bar{q}=\mathbb{E}_{q\in\mathcal{Q}}[q]\) and covariance matrix \(\Sigma_{q}=\text{Cov}_{q\in\mathcal{Q}}(q)+\epsilon I\) for some small \(\epsilon\). Now, we associate a score to each possible new subword according to the Mahalanobis distance between the candidate subword and the set of existing subwords: \(d_{w}=(q_{w}-\bar{q})^{\top}\Sigma_{q}^{-1}(q_{w}-\bar{q})\). We add the subword with maximum distance \(d_{w}\) to our vocabulary. We update \(\Sigma_{q}\) and \(\bar{q}\) at every iteration. This results in a growing vocabulary of subwords that not only achieve high distance in observation space, but are diverse. Such a scoring function also accounts for the fact that different parts of the observation space may have different natural scales. We merge up to a maximum vocabulary size \(|\mathcal{W}|=N_{\text{max}}\). ### Pruning the Subwords If we stopped after merging to a maximum size, the final vocabulary would contain the intermediate subwords that make up the longest units. In the context of NLP, this redundancy may not be particularly detrimental. In reinforcement learning, however, redundancy in the action space of a new policy will result in similar actions competing for probability mass, making exploration and optimization more difficult. In order to deal with this issue, we prune the set of subwords using the same metric as was used to merge. In particular, we find \(w^{\prime}=\text{arg min}_{w}d_{w}\), update \(\mathcal{W}\leftarrow\mathcal{W}\setminus\{(w^{\prime}\}\), and recompute \(\Sigma_{q}\) and \(\bar{q}\). We continue pruning in this fashion until reaching a minimum vocabulary size \(|\mathcal{W}|=N_{\text{min}}\). Finally, \(\mathcal{W}\) becomes the action space for a new policy. Algorithm 1 provides the pseudocode for the full method, and Figure 2 provides a graphical representation. Implicit in our method is an assumption that portions of the demonstrations can be recomposed to solve a new task, i.e., that there exists a policy that solves the new task with this new action space. One can imagine a counter-example where the subwords we obtain lack some critical action sequence without which the task cannot be solved. Still, we will show that this is a reasonable assumption for several sparse-reward tasks. ## 4 Experiments In the following sections, we explore the empirical performance of our proposed method. Before evaluating the entire method, we first investigate the quality of the discrete actions that we can form from data. ### Discrete Actions from Data Methods like Trajectory Transformer (Janner et al., 2021), Behavior Transformer (Shafiullah et al., 2022), and TAP (Jiang et al., 2022) perform supervised learning on top of trajectory data using a discrete action-space derived from the data. Dadashi et al. (2021) go further and learn a state-dependent discretization of the action space. One of the primary motivations for discretizing is to capture the multimodality of demonstrations. When using mean squared error (MSE) as the loss function, the policy is implicitly a unimodal Gaussian. 
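Before turning to the experiments, the discretization and merge-scoring steps described above can be made concrete with a short sketch. The code below is a simplified illustration with toy data, not the authors' released implementation; the cluster count, dimensions, and candidate headings are assumptions made purely for demonstration.

```python
# Simplified sketch of action discretization and Mahalanobis merge scoring,
# following the description above. Toy data only; not the released code.
import numpy as np
from sklearn.cluster import KMeans

def discretize_actions(actions: np.ndarray, dof: int) -> np.ndarray:
    """Map continuous actions to k = 2 * DOF discrete tokens via k-means."""
    km = KMeans(n_clusters=2 * dof, n_init=10).fit(actions)
    return km.labels_

def heading(obs_segment: np.ndarray) -> np.ndarray:
    """Average displacement from the segment's first observation (the q vector)."""
    return (obs_segment - obs_segment[0]).mean(axis=0)

def mahalanobis_scores(candidate_q: np.ndarray, vocab_q: np.ndarray,
                       eps: float = 1e-6) -> np.ndarray:
    """Score candidate subword headings against the current vocabulary headings."""
    mean = vocab_q.mean(axis=0)
    cov = np.cov(vocab_q, rowvar=False) + eps * np.eye(vocab_q.shape[1])
    inv = np.linalg.inv(cov)
    diff = candidate_q - mean
    return np.einsum('ij,jk,ik->i', diff, inv, diff)

# Toy example: discretize random 8-DOF actions, then pick the most novel of
# three hypothetical candidate merges by maximum Mahalanobis distance.
rng = np.random.default_rng(0)
tokens = discretize_actions(rng.normal(size=(500, 8)), dof=8)
vocab_q = rng.normal(size=(16, 4))       # headings of existing subwords (toy)
candidate_q = rng.normal(size=(3, 4))    # headings of candidate merges (toy)
best_candidate = int(np.argmax(mahalanobis_scores(candidate_q, vocab_q)))
```

The same scoring function, applied in reverse (minimum distance), would drive the pruning step; the full method additionally tracks which adjacent token pairs each candidate corresponds to and merges all of their instances in the token string.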
If there are two demonstrations that maneuver around an obstacle in opposite directions, the mean action will run into the obstacle and fail to complete the task. Such an example was explained in detail by Shafiullah et al. (2022). Still, prior work combines discretization with many additional architectural and optimization components. To test the behavior of discrete actions in isolation, we perform behavior cloning with a simple fully-connected neural network on demonstration data from the D4RL (Fu et al., 2020) dataset. To be clear, our objective is not to show that simple \(k\)-means on demonstrations outperforms contemporary methods. Instead, we are investigate whether behavioral cloning with these actions achieves modest performance in which case there is the potential for further tokenization to be effective in sparse-reward domains. We compare to CQL (Kumar et al., 2020), an offline Q-learning algorithm that encourages staying close to the demonstration distribution; Diffuser (Janner et al., 2022), a diffusion model conditioned on an initial and final state; and Diffusion-QL (Wang et al., 2022), an offline Q-learning algorithm that uses a diffusion model on top of actions to stay close to the demonstration distribution. For more details on the experimental setting, see Appendix B. In Table 1, we see that the dense-reward locomotion domains suffer from discretization, which makes sense as locomotion policies may require fine-grained control to move at high speed and achieve high reward. On AntMaze, however, we see that simple \(k\)-means discretization significantly boosts performance. This can be due to the fact that, at a given position, there are many possible motions that can move the body, but they are completely distinct in action space, which a unimodal policy may fail to capture. In the Kitchen domain, a policy that reasons over discrete actions achieves modest performance. The data in this domain was collected from expert human demonstrations, and there is very low variability in the executions, so it may be the case that multimodality is simply not necessary. ### Subwords as Skills Armed with the knowledge that \(k\)-means discovers discrete actions that are capable of completing sparse-reward tasks, we can build vocabularies of temporally extended actions on top of these. After forming skills from demonstrations with our method, we perform RL on several challenging sparse-reward downstream tasks. For tasks, we consider AntMaze and Kitchen from D4RL (Fu et al., 2020), two challenging sparse-reward tasks. AntMaze is a maze navigation task with a quadrupedal robot, and Kitchen is a manipulation task in a kitchen setting. We also consider CoinRun (Cobbe et al., 2019), a platforming game. For baselines, we consider SAC (Haarnoja et al., 2018); SAC-discrete on top of \(k\)-means actions (Christodoulou, 2019); Skill-Space Policy (SSP), a VAE trained on sequences of 10 actions at a time (Pertsch et al., 2021); State-Free Priors (SFP) (Bagatella et al., 2022) a sequence model of actions that is used to inform action-selection during SAC inference, which takes the last action as context; and OPAL (Ajay et al., 2020), a flow model for entire trajectories of actions that is conditioned on the observation. Only OPAL conditions generation on the observations from demonstrations. For SAC, SAC-discrete, SSP, and SFP, we implement or run the official code with the default hyperparameters listed in the respective papers. 
For OPAL, we only report numbers as the code is currently closed-source. Complete results are available in Table 2. All numbers are taken from the end of training. We report mean and standard deviation across five seeds. For more experimental details see Appendix C. We see in Table 2, that even in these challenging sparse-reward tasks, our method is the only one that is able to achieve nonzero reward across all tasks. The large standard deviations are due to the fact that some seeds fail to achieve any reward. In the case of OPAL, the small standard deviation must be \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Task & BC & \(k\)-means BC & CQL & Diffuser & Diffusion-QL & \(k\)-means BC + goals \\ \hline hopper-medium & 52.9 & 8.3\(\pm\)1.9 & 58.0 & 74.3\(\pm\)1.4 & 90.5\(\pm\)4.6 & — \\ hopper-medium-replay & 18.1 & 8.3\(\pm\)1.6 & — & 93.6\(\pm\)0.4 & 10.3\(\pm\)0.6 & — \\ hopper-medium-expert & 52.5 & 10.2\(\pm\)1.7 & 111.0 & 103.3\(\pm\)1.3 & 111.1\(\pm\)1.3 & — \\ walker2d-medium & 75.3 & 9.8\(\pm\)2.7 & 79.2 & 79.6\(\pm\)0.5 & 87.0\(\pm\)0.9 & — \\ walker2d-medium-replay & 26.0 & 7.9\(\pm\)0.7 & 0 & 70.6\(\pm\)1.6 & 95.5\(\pm\)1.5 & — \\ walker2d-medium-expert & 107.5 & 9.7\(\pm\)0.6 & 98.7 & 106.9\(\pm\)0.2 & 110.1\(\pm\)0.3 & — \\ halfccheath-medium & 42.6 & 27.2\(\pm\)3.5 & — & 42.8\(\pm\)0.3 & 51.1\(\pm\)0.5 & — \\ halfcCheath-medium-replay & 36.6 & 8.6\(\pm\)2.8 & — & 37.7\(\pm\)0.5 & 47.8\(\pm\)0.3 & — \\ halfcCheath-medium-expert & 55.2 & 15.1\(\pm\)4.4 & 62.4 & 88.9\(\pm\)0.3 & 96.8\(\pm\)0.3 & — \\ \hline antmaze-maze & 54.6 & 84.0\(\pm\)3.8 & 74.0 & — & 93.4\(\pm\)3.4 & 82.6\(\pm\)6.6 \\ antmaze-maze-diverse & 45.6 & 93.8\(\pm\)4.7 & 84.0 & — & 66.2\(\pm\)8.6 & 89.0\(\pm\)7.2 \\ antmaze-medium-play & 0.0 & 0.0 & 61.2 & — & 76.6\(\pm\)0.8 & 15.2\(\pm\)9.8 \\ antmaze-medium-diverse & 0.0 & 0.0 & 53.7 & — & 78.6\(\pm\)0.3 & 14.4\(\pm\)7.5 \\ antmaze-large-play & 0.0 & 0.0 & 15.8 & — & 46.4\(\pm\)3.3 & 2.6\(\pm\)2.8 \\ antmaze-large-diverse & 0.0 & 0.0 & 14.9 & — & 56.6\(\pm\)7.6 & 10.8\(\pm\)5.6 \\ \hline kitchen-complete & 65.0 & 54.0\(\pm\)3.5 & 43.8 & — & 84.0\(\pm\)7.4 & — \\ kitchen-partial & 38.0 & 14.8\(\pm\)0.2 & 49.8 & — & 60.5\(\pm\)6.9 & — \\ kitchen-mixed & 51.5 & 18.0\(\pm\)4.6 & — & — & 62.6\(\pm\)5.1 & — \\ \hline \hline \end{tabular} \end{table} Table 1: D4RL offline learning results. BC numbers are from Emmons et al. (2021), Diffusion-QL numbers are from Wang et al. (2022), CQL numbers are from Kumar et al. (2020). \(k\)-means BC numbers are from the best checkpoint during training. Figure 3: All skills generated for antmaze-medium-diverse where the transparency is higher for poses earlier in the trajectory. We see a range of different behaviors across the skills. due to the fact that all seeds achieve around an \(80\%\) success rate, which could be attributed to the extra observation conditioning that OPAL receives. Such conditioning allows access to information about which skills to employ at which place within the maze, which leads to significantly simpler exploration. SPiRL (Pertsch et al., 2021) leverages a similar idea. Though the results are strong, such a method that relies on observation conditioning will fail to generalize to new domains, as in the case of CoinRun where the visual style of the observations differs between demonstrations and downstream tasks. All settings with zero reward fail to achieve any reward during training. Figure 3 visualizes 200-step rollouts of all of the discovered subwords for antmaze-medium-diverse. 
We provide mean and standard deviations for subword lengths in extracted vocabularies in Table 4. Due to the simplicity of our method, it also enjoys significant acceleration compared to the baselines. To quantify this, we measure the time required to generate skills and the wall-clock time of inference in Table 3. We see that our method achieves extremely significant speedups compared to prior work, which leads to both faster and more efficient learning, as well as faster inference during execution. Such fast execution is beneficial to quick iteration and the large budgets that certain tasks can require. ### Exploration Behavior on AntMaze Medium The stringent evaluation procedure for our sparse-reward RL equally penalizes poor learning and exploration. The many zeros in Table 2 may be cause for concern. In order to shed light on this, we examine the exploration behavior of our method on AntMaze Medium. We choose this domain because it is particularly straightforward to interpret what good and bad exploration looks like: coverage of the maze. In Figure 4 and Figure 5 we plot state visitation from the replay buffer for the first \(1\) million of \(10\) million steps of RL. We show the approximate start position in grey in the bottom left and the approximate goal location in green in the top right. Higher color intensity (saturation) \begin{table} \begin{tabular}{l r} \hline \hline **Task** & Subword length \\ \hline antmaze-medium-diverse & \(8.5\pm 5.0\) \\ antmaze-large-diverse & \(12.5\pm 5.3\) \\ kitchen-mixed & \(9.2\pm 4.5\) \\ CoinRun & \(9.1\pm 5.6\) \\ \hline \hline \end{tabular} \end{table} Table 4: Subword length across domains. These numbers are intended to match the length 10 skills of baselines, but it is difficult to precisely control length due to the merging and pruning process. \begin{table} \begin{tabular}{l c c c c c c} \hline \hline Task & SAC & SAC-discrete & SSP-p & SSP-r & SFP & OPAL & Ours \\ \hline antmaze-medium-diverse & 0.0 & 0.0 & — & 0.0 & 0.82\(\pm 0.04\) & 0.40\(\pm 0.55\) \\ antmaze-large-diverse & 0.0 & 0.0 & — & 0.0 & 0.0 & 0.34\(\pm 0.46\) \\ kitchen-mixed & 0.0 & 0.0 & \(0.8\pm 0.2\) & 0.0 & \(0.12\pm 0.07\) & — & \(0.72\pm 0.40\) \\ CoinRun & — & 0.0 & — & \(5.3\pm 4.4\) & — & — & \(2.9\pm 2.9\) \\ \hline \hline \end{tabular} \end{table} Table 2: Main comparison (unnormalized scores). OPAL numbers are taken from their paper (Ajay et al., 2020) since there is no publicly available implementation. SSP-p corresponds to published numbers estimated from their paper (Pertsch et al., 2021, Figure 4). SSP-r corresponds to reproduced results from official code. We report last numbers in training run for consistency. SFP takes so long it is unmanageable on many domains. AntMaze is scored \(0\)\(-1\), Kitchen is scored \(0\)\(-4\) in increments of \(1\), CoinRun is scored \(0\)\(-100\) in increments of \(10\). \begin{table} \begin{tabular}{l c} \hline \hline Method & Skill Generation & Online Rollout \\ \hline SSP & \(130000\pm 1800\) & \(0.9\pm 0.05\) \\ SFP & \(8000\pm 500\) & \(4.1\pm 0.1\) \\ Ours & \(210\pm 10\) & \(0.007\pm 0.0006\) \\ \hline \hline \end{tabular} \end{table} Table 3: Timing in seconds for antmaze-medium-diverse. All methods measured on the same Nvidia RTX 3090 GPU with 8 Intel Core i7-9700 3 GHz CPUs @ 3.00 GHz. For human-readable numbers, SSP takes around 36 hours for skill generation and SFP takes around 2 hours. corresponds to a higher probability of that state. 
Color is scaled nonlinearly according to a power law between \(0\) and \(1\) for illustration purposes. Thin white areas between the density and the walls can be attributed to the fact that we plot the center body position, and the legs have a nontrivial size limiting the proximity to the wall. In Figure 4, we show the exploration behavior across methods, averaged over \(5\) seeds. We see that the \(0\) values for the final reward in Table 2 (main paper) for SSP and SFP are likely due not to poor optimization, but rather poor exploration early in training, unlike our method. One reason for this could be due to the fact that our subwords are a discrete set, so policy exploration does not include small differences in a continuous space, as well as the fact that our subwords only model skills that go a long distance, while SFP and SSP have to model all behavior in the dataset, so in the case of bad demonstrations, they may be suboptimal. In Figure 5, we show the individual seed visitation of our method in the first \(1\) million steps. This is to demonstrate that, even though individual seeds may have some bias, they all are able to explore much more widely than the collective exploration of either baseline method. Indeed, this suggests that the large error bars of our method are a result of an optimization failure, as suggested by Zhou et al. (2022), and not poor exploration due to bad skill-encoding. ### Ablations Certainly the level of discretization, and the size of the vocabulary will have an effect on performance. In the following sections we perform ablations over the primary hyperparameters on AntMaze-Medium and Kitchen. #### 4.4.1 Number of Discrete Primitives All of our results in Table 2 use the simple rule-of-thumb that \(k=2\times\) degrees-of-freedom. Such a choice may not be optimal depending on the setting. In Table 5 we see that this choice seems to be a simple sweet spot across the two domains, though the method can achieve reward with significantly different values of \(k\). Figure 4: A visualization of the state visitation for RL on antmaze-medium-diverse in the first \(1\) million timesteps for (a) SFP, (b) SSP, and (c) our method. The start state position is indicated by the grey circle in the bottom left, while the goal is indicated by the green circle in the top right. All methods are averaged over \(5\) seeds. Notice that our method explores the maze much more extensively. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \(k\) & 4 & 1 \(\times\) DOF & **2 \(\times\) DOF** & 4 \(\times\) DOF & 8 \(\times\) DOF \\ \hline antmaze-medium-diverse & 0.0 & 0.0 & \(0.40\pm 0.55\) & \(0.20\pm 0.45\) & 0.0 \\ kitchen-mixed & \(0.16\pm 0.35\) & \(0.08\pm 0.18\) & \(0.72\pm 0.40\) & 0.0 & \(0.20\pm 0.45\) \\ \hline \hline \end{tabular} \end{table} Table 5: Results for different numbers of clusters in terms of the number of degrees-of-freedom (DOF). AntMaze DOF = \(8\), Kitchen DOF = \(9\). The default setting is in bold. #### 4.4.2 Maximum Vocabulary Size A crucial property of the vocabulary is the length of the subwords within. Long subwords lead to more temporal abstraction and easier credit-assignment for the policy, but long subwords can also get stuck for many transitions, leading to poor exploration. In Table 6, we vary the value of \(N_{\text{max}}\), which is a proxy for the length of the subwords in the vocabulary. 
Our default setting for each environment targets an average length of around \(10\) to match the baselines, but we see that different domains may have different optimal choices for length, which makes sense given the episode length for Kitchen is around a quarter of that of AntMaze. #### 4.4.3 Minimum Vocabulary Size Ultimately, the dimensionality of the action space will make exploration easier or harder. A large vocabulary results in too many paths for the policy to explore well, but a vocabulary that is too small may not include all the subwords necessary to represent a good policy for the task. We see in Table 7 that even if AntMaze can be accomplished with fewer subwords (a smart handcrafted action space might consist of one action for turning and one for moving forward), Kitchen performance suffers significantly at low values. ### Notes on Reproducibility One observation from the above results is that even at the default settings, the results are not always stable. Such inconsistency goes beyond our work alone: the disagreement of dense-reward offline RL (Fu et al., 2020; Emmons et al., 2021; Janner et al., 2021; Wang et al., 2022) numbers; the failure to reproduce SSP baseline results Table 2; and our results across Tables 2, 5, 6, and 7. In our case, there is some nondeterminism in multithreading and the RL training code. Such nondeterminism is particularly salient in sparse-reward settings because so much of the policy optimization hinges on successful exploration. In particular, the library we use for RL, Stable Baselines 3 (Raffin et al., 2021), has subroutines that cannot be tightly controlled on the GPU. In addition, we often observe \begin{table} \begin{tabular}{l c c c c c} \hline \hline \(N_{\text{max}}\) & 32 & 64 & **128** & 256 & 512 \\ \hline antmaze-medium-diverse & 0.0 & \(0.25\pm 0.5\) & \(0.28\pm 0.43\) & \(0.61\pm 0.48\) & \(0.07\pm 0.08\) \\ kitchen-mixed & 0.0 & \(0.50\pm 0.57\) & 0.0 & \(0.04\pm 0.10\) & 0.0 \\ \hline \hline \end{tabular} \end{table} Table 6: Results for maximum vocabulary size (proxy for length). In bold is the default setting. \begin{table} \begin{tabular}{l c c c c c} \hline \hline \(N_{\text{min}}\) & 4 & 6 & 8 & 12 & **16** \\ \hline antmaze-medium-diverse & 0.0 & \(0.24\pm 0.49\) & \(0.71\pm 0.41\) & \(0.41\pm 0.49\) & \(0.39\pm 0.53\) \\ kitchen-mixed & 0.0 & 0.0 & 0.0 & \(0.01\pm 0.01\) & \(0.06\pm 0.14\) \\ \hline \hline \end{tabular} \end{table} Table 7: Results for minimum vocabulary size \(N_{\text{min}}\). In bold is the default setting. Figure 5: State visitation achieved with our method for each of the \(5\) individual seeds. Notice the diversity of exploration behavior. This is true even for seeds like \(2\) and \(3\) that fail to reach the goal, and as reflected in the standard deviations in Table 2 (main paper), that eventually finish with a final reward of \(0\). collapse of the policy during training, which is not an unfamiliar issue in RL. This could be due to the design of SAC (Haarnoja et al., 2018), which may not easily adapt to the discrete setting (Zhou et al., 2022), leading to further instability. All the above suggests that five random seeds is not enough to quantify performance (Henderson et al., 2018), however running more samples incurs a significant computational burden, particularly for the baseline methods. We hope that a method like ours may lessen the burden of this expense in the future. ## 5 Conclusion **Limitations:** As discussed, there are a few key limitations to our method. 
Discretization removes resolution from the action space, which will be detrimental in settings like fast locomotion that require the full range, but this may potentially be fixed by a residual correction (Shafiullah et al., 2022). In addition, execution of our subwords is currently open loop, so exploration can be inefficient (Amin et al., 2020) and unsafe (Park et al., 2021). Finally, in order to operate on the CoinRun domain, we downsample inputs from \(64\times 64\) resolution to \(32\times 32\) to make matrix inversion during merging less expensive (2 hours vs. 2 minutes). In high-dimensional visual input domains, our merging may be too computationally expensive to perform. However, such inputs are also an issue for deep learning methods. We speculate that higher-quality demonstrations could allow us to generate skills simply by merging based on frequency. **Broader Impacts**: Our contributed method is somewhat domain-agnostic, so it is difficult to comment on negative impacts, but the added speed and efficiency of our method may lead to faster sparse-reward learning in not only simulation as demonstrated, but in the real world as well. In such settings safety during execution will be a critical consideration (Park et al., 2021). Architectures from NLP have made their way into Offline RL (Chen et al., 2021; Janner et al., 2021; Shafiullah et al., 2022), but as we have demonstrated, there is a trove of further techniques to explore. We showed that simple discretization can be helpful in offline RL, then leveraged that discretization to form skills through a simple tokenization method. Such a method is much faster both in skill generation and in policy inference, and leads to strong performance in a relatively small sample budget on several challenging sparse-reward tasks. Moreover, the discrete nature of our skills lends itself to interpretation: one can simply look at the execution to figure out what has been extracted (Appendix A). We believe that such a method is the first step along a new avenue for efficient reinforcement learning. #### Acknowledgments We thank Takuma Yoneda and Jiading Fang as well as other members of the RIPL lab at TTIC for helpful discussions throughout the process. This material is based upon work supported by the National Science Foundation Graduate Research Fellowship Program under Grant No. 1754881. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation. ### Division of Labor DY came up with and implemented the method, adapted baseline code, ran most of the experiments and generated Kitchen visualizations. JJ adapted the SFP baseline to the settings considered and generated AntMaze subword visualizations. FD ran the initial CoinRun experiments to test feasibility, collected CoinRun trajectory data for use by all methods and made Figure 2. MW advised the project at every stage. DY and MW were responsible for most of the writing.
2309.11054
Design of Chain-of-Thought in Math Problem Solving
Chain-of-Thought (CoT) plays a crucial role in reasoning for math problem solving. We conduct a comprehensive examination of methods for designing CoT, comparing conventional natural language CoT with various program CoTs, including the self-describing program, the comment-describing program, and the non-describing program. Furthermore, we investigate the impact of programming language on program CoTs, comparing Python and Wolfram Language. Through extensive experiments on GSM8K, MATHQA, and SVAMP, we find that program CoTs often have superior effectiveness in math problem solving. Notably, the best performing combination with 30B parameters beats GPT-3.5-turbo by a significant margin. The results show that self-describing program offers greater diversity and thus can generally achieve higher performance. We also find that Python is a better choice of language than Wolfram for program CoTs. The experimental results provide a valuable guideline for future CoT designs that take into account both programming language and coding style for further advancements. Our datasets and code are publicly available.
Zhanming Jie, Trung Quoc Luong, Xinbo Zhang, Xiaoran Jin, Hang Li
2023-09-20T04:17:28Z
http://arxiv.org/abs/2309.11054v2
# Design of Chain-of-Thought in Math Problem Solving ###### Abstract Chain-of-Thought (CoT) plays a crucial role in reasoning for math problem solving. We conduct a comprehensive examination of methods for designing CoT, comparing conventional natural language CoT with various program CoTs, including the _self-describing program_, the _comment-describing program_, and the _non-describing program_. Furthermore, we investigate the impact of programming language on program CoTs, comparing _Python_ and _Wolfram Language_. Through extensive experiments on GSM8K, MathQA, and SVAMP, we find that program CoTs often have superior effectiveness in math problem solving. Notably, the best performing combination with 30B parameters beats GPT-3.5-turbo by a significant margin. The results show that self-describing program offers greater diversity and thus can generally achieve higher performance. We also find that Python is a better choice of language than Wolfram for program CoTs. The experimental results provide a valuable guideline for future CoT designs that take into account both programming language and coding style for further advancements. Our datasets and code are publicly available1. Footnote 1: [https://github.com/lqtrung1998/mwp_cot_design](https://github.com/lqtrung1998/mwp_cot_design) ## 1 Introduction Math problem solving is an ideal task to assess the multi-step reasoning abilities of large language models (LLMs). LLMs exhibit remarkable reasoning abilities with the use of chains-of-thought, surpassing previous methods in various reasoning tasks (Lightman et al., 2023; Wei et al., 2022; Tourron et al., 2023a). The challenge of producing reliable chains-of-thought (CoT) (Wei et al., 2022b) remains, however, particularly in the nuanced and complex cases of mathematical problem solving (Golovneva et al., 2022). Recent research has focused on refining prompt engineering strategies or developing new CoT representations, such as program CoTs (Gao et al., 2023; He-Yueya et al., 2023). Although existing approaches can boost overall performance (Lu et al., 2023), a thorough comparison of various CoTs remains absent in the literature. In this paper, we conduct a comprehensive examination of multiple CoT designs, including natural language (NL) and program CoTs, such as the _self-describing program_, the _comment-describing program_, and the _non-describing program_. Figure 1 illustrates different CoT representations for solving a multi-choice math problem. For program CoTs, besides the popular programming language _Python_, we also use _Wolfram Language_(Wolfram, 2015), a scientific programming language known for its ability to naturally express complex mathematical expressions. One advantage of the program CoTs is that their validity can be easily verified by executing the programs. For instance, we can easily represent an equation in one line (e.g., Figure 1 Wolfram line \(1\)) and solve it with built-in functions (i.e., Solve[]). The NL CoTs do not have this capability, but they can better translate the questions in language into descriptions of reasoning by leveraging the power of LLMs. We consider three types of program CoTs. The self-describing program (SDP) is similar to PAL (Gao et al., 2023) in which the variable names are extracted from the questions. In contrast, the non-describing program (NDP) only uses abstract variable names (e.g., \(v1\) and \(v2\)). In SDP, programs can be created more easily from the questions, while in NDP, programs can be used more effectively in reasoning. 
To combine the strengths of both types, we introduce the comment-describing program (CDP), a new CoT design that blends abstract variable names with natural language comments. Following the common practice (Uesato et al., 2022; Lightman et al., 2023), we conduct fine-tuning, reranking, and majority-voting experiments to compare the CoTs on GSM8K (Cobbe et al., 2021), MathQA (Amini et al., 2019), and SVAMP (Patel et al., 2021) datasets. Under the best setting, the method using the 30B model with reward model reranking is able to outperform the GPT-3.5-turbo's few-shot performance by approximately \(2.9\)% on GSM8K, \(18\)% on MathQA and \(8\)% on SVAMP. We make the following main conclusions from the experiments. 1. Program CoTs generally perform better than natural language CoTs, indicating that the use of more rigid CoTs is better. 2. The presence of natural language in SDP and CDP is crucial for achieving high performance compared with NDP. SDP is generally superior to CDP, because it can generate more diverse CoTs and thus achieve higher performance in majority voting and reranking. 3. Program CoTs in Python perform better than those in Wolfram, when the CoTs are in the same type. 4. By combining the use of different types of CoTs, we can enhance overall performance, showing the potential for further CoT design that takes advantage of the strengths of all CoT types. Our findings offer valuable insights for designing CoTs in math problem solving and more broadly reasoning with LLMs. Figure 1: Examples of CoT representations: Natural Language (NL) CoT, Comment-Describing Program (CDP) and Self-Describing Program (SDP) in both Wolfram and Python. ## 2 Chain-of-Thought Design ### Natural Language CoTs (NL) Wei et al. (2022b) propose the chain-of-thought prompting technique to enhance the complex reasoning abilities of LLMs. This method endeavors to simulate the thought process of addressing multi-step reasoning problems. As depicted in the second block of Figure 1, this chain-of-thought approach for math problem solving produces step-by-step reasoning descriptions in natural language and provides the final answer at the end of the reasoning process. ### Program CoTs We focus on two distinct programming languages: Wolfram Language (Wolfram, 2015) and Python (Van Rossum & Drake Jr, 1995). Recent work (Wang et al., 2023a) also uses these two languages in tool-based Transformers (Schick et al., 2023). The Wolfram Language, with the _Wolfram Mathematica_ as its execution engine2, is an expressive and versatile language that can effectively represent complex mathematical concepts. It has rich built-in mathematical functions for algebra, equation solving, etc., and an intuitively designed and relaxed syntax. On the other hand, Python is a general-purpose language that has gained widespread adoption in recent literature for mathematical problem solving (Gao et al., 2023; He-Yueya et al., 2023). Given the contrasting nature of Wolfram and Python, we conduct a comprehensive comparison across all program CoT types in the two languages. Next, we describe the design of the CoT types, with Figure 1 showcasing their instances in the two languages. Footnote 2: [https://www.wolfram.com/engine/](https://www.wolfram.com/engine/) Self-Describing Program (SDP)The first design we consider is self-describing program (SDP) as shown in the bottom right of Figure 1. It presents a solution in a step-by-step manner and defines variable names using natural language, similar to that of Gao et al. (2023). 
One advantage of SDP is that one can solve the problem by directly executing the program. Another advantage is that the variable names are from the question, making it easier to generate the reasoning steps for the LLM. When labeling programs, we follow several general guidelines: (1) using high-level operations to make the program concise and intuitively understandable, (2) listing variable names according to their order in the question, and (3) ensuring that variable names are meaningful, descriptive, and written in snake case naming convention (e.g., lower-cased and separated by underscores). Comment-Describing Program (CDP)Although the design is concise, SDP has several problems. The self-describing names may not be sufficiently general across problems and sufficiently informative to provide rich context in CoTs. Therefore, we consider comment-describing program (CDP) using standardized variable names, e.g., \(v_{1}\), \(v_{2}\), and brief comments that describe the step of reasoning and problem solving. Figure 1 (bottom left) shows an example in Python and Wolfram. The comment in a declaration line is a brief problem statement that provides details. The comment in a reasoning line explains the purpose of the step, displayed as a command or instruction. Since the Python language often requires stricter syntax, extra declaration lines, such as the Sympy symbol declaration line, must be included in the program to make it executable. In such lines, the comment is omitted. Non-Describing Program (NDP)We also consider a variant where the comments of CDP are discarded. NDP can also be considered as an approach contrary to SDP whereas in the former variable names are defined in natural language and in the latter variable names are defined as abstract symbols. ## 3 Data Collection We consider three datasets in this work, GSM8K (Cobbe et al., 2021), MathQA (Amini et al., 2019), and SVAMP (Patel et al., 2021). Given the questions, we develop a method to semi-automatically annotate the CoTs in the training set. Generally, we use the few-shot prompting technique to obtain CDPs and SDPs in both Python and Wolfram, as well as NL CoTs. Our LLM-empowered annotation approach works in the following way. We first manually create a small number of CoT annotations, and then let the LLM to retrieve similar CoT annotations as examples and generate CoTs based on the examples in a few-shot manner. We then automatically execute the programs and take the correctly verified annotations as the annotation results. The process is repeated three to five times and finally, we manually annotate those that still cannot pass the verification. We use the Wolfram CoT as an example to illustrate the annotation details. 1. **Initial manual seed annotation** We randomly select \(20\) samples from the dataset for self-describing program annotations and comment-describing program annotations, respectively. The annotated programs must follow the CoT definition and Wolfram grammar. We conduct cross-verification among the authors, execute the programs by Wolfram Mathematica, and obtain the annotation results of the samples that are successfully executed. The 20 samples and their correct annotations are considered as the initial _completion set_, and the other samples in the dataset are considered as the initial _working set_. 2. **Question embeddings acquisition** We acquire all the embeddings of questions in the dataset by directly calling the API of "text-embedding-ada-002" from OpenAI3. 
Footnote 3: [https://platform.openai.com/docs/guides/embeddings](https://platform.openai.com/docs/guides/embeddings) 3. **Retrieval-based LLM annotation** For each sample to be annotated in the _working set_, we retrieve the top-\(k\) similar examples (Liu et al., 2022; Gao et al., 2021) from the _completion set_ based on the cosine similarity of the question embeddings. For CDP annotation, we use the questions of the top-\(k\) examples and their CDP programs as the prompts, and let the LLM return the CDP program for the given sample. The format of an example is presented in Figure 2. Here we choose "gpt-3.5-turbo" as the LLM and \(k\) is set to \(5\). For SDP annotation, we use the questions of the top-\(k\) examples and their SDP programs as the prompts, and let the LLM return the SDP program. 4. **Automatic verification, updating completion set and working set** After obtaining all annotations of the _working set_ returned by the LLM, the annotated CDPs and SDPs are executed using Wolfram Mathematica, and then the results are compared with the ground truth to determine correctness. Figure 2: Overview of data collection, with CDP as an example. For GSM8K and SVAMP, since the answers should be numeric, we consider two answers to be equal only if both can be converted to float and their values differ by at most \(10^{-3}\). For MathQA, due to the multiple-choice format of questions, we adopt exact match to compare answers. Samples with correct results after execution are put into the _completion set_, and are removed from the _working set_. 5. Repeat step \(3\) and step \(4\) three to five times until the _working set_ is empty or no new samples can be added into the _completion set_. 6. **Manually modifying remaining working set** If there are still any remaining samples in the _working set_, we manually annotate the samples until the programs produce correct results using Wolfram Mathematica. The ways of creating Python CoTs and NL CoTs are the same as above. Note that for NL CoTs, because they cannot be directly verified by an engine, we just apply a simple rule, in which each NL CoT is followed by _"Therefore the answer is:"_, to obtain the answers. NDPs in Wolfram and Python can be obtained by removing the comments in their corresponding CDPs. ## 4 Methodology In accordance with previous studies (Uesato et al., 2022; Lightman et al., 2023), we employ supervised fine-tuning (SFT), _self-consistency_ decoding (alternatively referred to as _majority voting_) (Wang et al., 2023), and reward model _reranking_ methodologies on our annotated dataset. ### Supervised Fine-tuning We conduct SFT on a pre-trained language model using questions and chain-of-thought annotations in each dataset. The training aims to maximize the likelihood of the answer given the question. In evaluation, we extract the final answer generated by the SFT model. As shown in Figure 1, the NL CoT places the final answer in the last sentence, _"Therefore, the answer is E."_. In the cases of SDP, CDP, and NDP, we execute the program to obtain the answer. ### Majority Voting In self-consistency decoding (Wang et al., 2023)4, we first sample a certain number of CoTs from the language model. We then perform majority voting over the answers extracted from all the sampled CoTs and choose the final answer that is the most favored among all answers.
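As a concrete illustration of the answer comparison and majority voting just described, one possible sketch is shown below. It is a simplified example, not the released code, and it assumes the answer strings have already been extracted from the executed programs or from the NL CoTs.

```python
# Simplified sketch of answer comparison (exact match for multiple choice,
# 1e-3 numeric tolerance otherwise) and majority voting over sampled CoT answers.
from collections import Counter
from typing import List, Optional

def answers_equal(a: str, b: str, multiple_choice: bool = False) -> bool:
    """Exact match for multiple-choice answers; numeric tolerance otherwise."""
    if multiple_choice:
        return a.strip() == b.strip()
    try:
        return abs(float(a) - float(b)) <= 1e-3
    except ValueError:
        return False

def majority_vote(answers: List[str], multiple_choice: bool = False) -> Optional[str]:
    """Group equivalent answers and return a representative of the largest group."""
    groups: List[List[str]] = []
    for ans in answers:
        for group in groups:
            if answers_equal(ans, group[0], multiple_choice):
                group.append(ans)
                break
        else:
            groups.append([ans])
    if not groups:
        return None
    return max(groups, key=len)[0]

# Example: five sampled CoTs whose executed answers mostly agree.
print(majority_vote(["18", "18.0", "17", "18", "20"]))   # -> "18"
```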
We simply adopt the temperature sampling strategy (Ackley et al., 1985; Ficler and Goldberg, 2017) with \(T=1.0\), because it is reported (Wang et al., 2023) that self-consistency decoding is generally robust to sampling strategies and hyperparameters.

Footnote 4: We use the term "majority voting" in the rest of this paper unless specified.

### Reranking with Reward Model

Following Cobbe et al. (2021), a reward model (RM) is trained to determine whether an answer to the question is correct or not. Given the SFT model, we perform sampling to obtain a certain number of CoT solutions to the question. As a common practice, the reward model is a language model that is initialized from the SFT model. Similar to the outcome-based reward model (ORM) (Uesato et al., 2022), the reward model is trained to predict a binary label that indicates whether a solution is _"correct"_ or _"incorrect"_5. Once the input passes through the reward model, classification is conducted with a linear classifier on the hidden state of the last token. Finally, the solution with the highest _"correct"_ score among the candidates is selected as the final answer.

Footnote 5: Our preliminary experiments with process-based reward (Lightman et al., 2023) show similar performance to outcome-based reward. We attribute the reason to the quality of automatic process labels (Uesato et al., 2022).

As we do not have explicit "_correct_" and "_incorrect_" pairs annotated, we adopt the model at the \(2^{nd}\) epoch during supervised fine-tuning to sample solution pairs. According to Cobbe et al. (2021), using the checkpoints at initial epochs can provide more diverse solutions for training a reward model. For each question in the training data \(\mathcal{D}\), we sample \(K\) solutions. We then use all the samples that contain both correct and incorrect solutions to train the reward model for three epochs.

## 5 Experiments

### Experiment Settings

We conduct experiments on the three datasets: GSM8K (Cobbe et al., 2021), MathQA (Amini et al., 2019)6, and SVAMP (Patel et al., 2021). Appendix §B illustrates the preprocessing procedure for the MathQA and SVAMP datasets. The training data for all CoT types is obtained using the method described in §3. We report the results of few-shot prompting using GPT-3.5-turbo, majority voting, and RM reranking7.

Footnote 6: Due to the limitation of computing budget, we only experiment with a random \(15k\)-sample subset of the MathQA training set.

Footnote 7: We also experimented with RM-weighted voting and the performance is similar to reranking (Appendix §C).

We adopt the pre-trained language model Galactica (Taylor et al., 2022), which is trained on a large-scale scientific corpus and programming code. The Galactica model shows superior performance in math problem solving compared to other foundation models such as LLaMA (Touvron et al., 2023) in our preliminary experiments. Throughout the experiments, we use the model sizes of \(6.7\)B8 and \(30\)B9 available on HuggingFace. We use the Megatron-Deepspeed10 framework for efficient supervised fine-tuning, following BLOOM (Scao et al., 2022). The model is fine-tuned for \(40\) epochs with a maximum sequence length of \(1024\). Please refer to Appendix (Table 8) for hyper-parameter settings.
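To make the reranking procedure of §4.3 concrete, the following minimal PyTorch sketch attaches a linear head to the hidden state of the last real token to produce a "correct" probability, and then selects the highest-scoring candidate. The `backbone` interface, tensor shapes, and function names are assumptions for illustration, not the exact Galactica-based implementation.

```
import torch
import torch.nn as nn

class OutcomeRewardModel(nn.Module):
    # Wraps a causal LM (initialized from the SFT model) with a binary classifier
    # on the hidden state of the last non-padding token.
    def __init__(self, backbone, hidden_size):
        super().__init__()
        self.backbone = backbone                      # assumed to return [batch, seq, hidden]
        self.classifier = nn.Linear(hidden_size, 2)   # "incorrect" vs. "correct"

    def forward(self, input_ids, attention_mask):
        hidden = self.backbone(input_ids, attention_mask)            # [B, T, H]
        last = attention_mask.sum(dim=1) - 1                         # index of last real token
        last_hidden = hidden[torch.arange(hidden.size(0)), last]     # [B, H]
        return self.classifier(last_hidden).softmax(dim=-1)[:, 1]    # P("correct")

def rerank(scores, candidate_cots):
    # Pick the sampled solution with the highest "correct" score.
    return candidate_cots[int(torch.argmax(scores))]
```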
Footnote 8: [https://huggingface.co/facebook/galactica-6.7b](https://huggingface.co/facebook/galactica-6.7b) Footnote 9: [https://huggingface.co/facebook/galactica-30b](https://huggingface.co/facebook/galactica-30b) Footnote 10: [https://github.com/bigscience-workshop/Megatron-DeepSpeed](https://github.com/bigscience-workshop/Megatron-DeepSpeed) We select the SFT model with the best accuracy for sampling to obtain the majority voting (Wang et al., 2023) results. To train the reward model, we generate \(100\) samples for each question in the training set using the SFT checkpoint at the second epoch and compare them with the ground-truths to determine the correct labels. By using an earlier checkpoint, we can have more sampling diversity, which is helpful for the reward model training. As described in SS4.3, we initialize the reward model with the best SFT model checkpoint and fine-tune it for three epochs. \begin{table} \begin{tabular}{c l c c c c c} \hline \hline **Program** & **CoT Type** & **Size** & **GSM8K** & **MathQA** & **SVAMP** & **Avg.** \\ \hline - & Natural Language & \(6.7\)B & \(41.0\) & \(58.7\) & \(53.8\) & \(51.2\) \\ \hline \multirow{3}{*}{Python} & Non-Describing Program & \(6.7\)B & \(56.3\) & \(64.4\) & \(59.1\) & \(59.9\) \\ & Self-Describing Program & \(6.7\)B & \(57.1\) & \(64.8\) & \(69.3\) & \(63.7\) \\ & Comment-Describing Program & \(6.7\)B & \(56.5\) & \(64.7\) & \(62.3\) & \(61.2\) \\ \hline \multirow{3}{*}{Wolfram} & Non-Describing Program & \(6.7\)B & \(53.4\) & \(63.0\) & \(58.6\) & \(58.3\) \\ & Self-Describing Program & \(6.7\)B & \(50.2\) & \(62.5\) & \(65.5\) & \(59.4\) \\ & Comment-Describing Program & \(6.7\)B & \(57.0\) & \(63.1\) & \(64.0\) & \(61.4\) \\ \hline \hline - & Natural Language & \(30\)B & \(57.4\) & \(66.6\) & \(70.1\) & \(64.7\) \\ \hline \multirow{3}{*}{Python} & Non-Describing Program & \(30\)B & \(65.8\) & \(66.0\) & \(73.9\) & \(68.6\) \\ & Self-Describing Program & \(30\)B & \(68.3\) & \(67.2\) & \(80.4\) & \(72.0\) \\ & Comment-Describing Program & \(30\)B & \(68.7\) & \(67.2\) & \(78.2\) & \(71.4\) \\ \hline \multirow{3}{*}{Wolfram} & Non-Describing Program & \(30\)B & \(62.2\) & \(64.9\) & \(73.1\) & \(66.7\) \\ & Self-Describing Program & \(30\)B & \(62.6\) & \(64.3\) & \(73.9\) & \(66.9\) \\ \cline{1-1} & Comment-Describing Program & \(30\)B & \(66.7\) & \(65.0\) & \(75.9\) & \(69.2\) \\ \hline \hline \end{tabular} \end{table} Table 1: Supervised fine-tuning performance of all CoT types. Numbers displayed in bold are highest in the same settings. ### Supervised Fine-tuning Results Table 1 presents the supervised fine-tuning results across all datasets, languages, and CoT types. In general, program-based CoTs perform better than natural language CoT. An enlarged model size correlates with a noticeable increase in the performance of natural language CoTs, presumably due to an improved capacity for natural language understanding. Nevertheless, program-based CoTs consistently and significantly outperform natural language CoT. The SFT models described here are then used for majority voting and reranking. ### Main Results Table 2 shows the comparison among different methods: GPT-3.5-turbo11 prompting, majority voting, and reranking. We can observe that larger models (i.e., \(30\)B) can significantly improve the performance over smaller models (i.e., \(6.7\)B) on all datasets. Our best variants are able to outperform few-shot prompting GPT-3.5-turbo by a large margin. 
Footnote 11: [https://platform.openai.com/docs/models/gpt-3-5](https://platform.openai.com/docs/models/gpt-3-5)

\begin{table} \begin{tabular}{c l c c c c} \hline \hline **Program** & **Method** & **Size** & **GSM8K** & **MathQA** & **SVAMP** \\ \hline \hline - & GPT-3.5-turbo prompting + Natural Language & N.A. & \(75.3\) & \(60.6\) & \(73.0\) \\ \hline \multirow{2}{*}{Wolfram} & GPT-3.5-turbo prompting + Self-Describing Program & N.A. & \(73.5\) & \(39.3\) & \(72.8\) \\ & GPT-3.5-turbo prompting + Comment-Describing Program & N.A. & \(69.1\) & \(31.2\) & \(70.1\) \\ \hline \multirow{2}{*}{Python} & GPT-3.5-turbo prompting + Self-Describing Program & N.A. & \(78.0\) & \(45.5\) & \(78.4\) \\ & GPT-3.5-turbo prompting + Comment-Describing Program & N.A. & \(72.4\) & \(46.2\) & \(77.6\) \\ \hline \hline - & \begin{tabular}{c} SFT + Majority Voting + Natural Language \\ SFT + Reranking + Natural Language \\ \end{tabular} & \(6.7\)B & \(50.8\) & \(59.5\) & \(63.1\) \\ \hline \multirow{4}{*}{Wolfram} & SFT + Majority Voting + Non-Describing Program & \(6.7\)B & \(59.5\) & \(61.8\) & \(67.0\) \\ & SFT + Majority Voting + Self-Describing Program & \(6.7\)B & \(53.7\) & \(70.3\) & \(60.9\) \\ & SFT + Majority Voting + Self-Describing Program & \(6.7\)B & \(59.5\) & \(72.7\) & \(72.9\) \\ & SFT + Majority Voting + Comment-Describing Program & \(6.7\)B & \(61.3\) & \(71.3\) & \(68.3\) \\ & SFT + Reranking + Non-Describing Program & \(6.7\)B & \(61.1\) & \(71.0\) & \(66.4\) \\ & SFT + Reranking + Self-Describing Program & \(6.7\)B & \(71.4\) & \(73.8\) & \(77.3\) \\ & SFT + Reranking + Comment-Describing Program & \(6.7\)B & \(69.7\) & \(72.0\) & \(75.5\) \\ \hline \multirow{4}{*}{Python} & SFT + Majority Voting + Non-Describing Program & \(6.7\)B & \(56.4\) & \(70.5\) & \(61.6\) \\ & SFT + Majority Voting + Self-Describing Program & \(6.7\)B & \(61.1\) & \(73.7\) & \(73.7\) \\ & SFT + Majority Voting + Comment-Describing Program & \(6.7\)B & \(58.6\) & \(71.5\) & \(63.4\) \\ & SFT + Reranking + Non-Describing Program & \(6.7\)B & \(63.6\) & \(70.7\) & \(60.7\) \\ & SFT + Reranking + Self-Describing Program & \(6.7\)B & \(72.4\) & \(75.2\) & \(78.6\) \\ & SFT + Reranking + Comment-Describing Program & \(6.7\)B & \(69.9\) & \(71.6\) & \(69.8\) \\ \hline \hline \end{tabular} \end{table} Table 2: Comparison among different methods: GPT-3.5-turbo prompting, SFT with majority voting, and SFT with reranking.

**Prompting Performance** The few-shot examples are selected randomly from the training annotations for all types of CoT. Among the methods using GPT-3.5-turbo, natural language is generally better than comment-describing program, self-describing program, and non-describing program, while self-describing program in Python is better on GSM8K and SVAMP. We attribute the low performance to the limited availability of programs in the pre-training data of GPT-3 (Brown et al., 2020), which appears to be true for most existing LLMs (Touvron et al., 2023; Taylor et al., 2022; Touvron et al., 2023). Furthermore, it is challenging to generalize to new problems with just a few examples of in-context learning.
While ongoing research addresses these challenges (Min et al., 2022; Hao et al., 2022; Coda-Forno et al., 2023), our work does not focus on this aspect. The CoTs of SDP in Python are more similar to the programming code in the pre-training corpus, and thus using them leads to better performance on GSM8K and SVAMP. For the noisier dataset, MathQA, natural language CoTs tend to make guesses on multi-choice questions, even if they are incorrect, whereas program CoTs tend to return no decision if no valid answer is available.

**Program and Natural Language Comparison** In general, the performance with all types of program CoTs is consistently better than that with natural language CoTs. This superiority is particularly evident in the case of MathQA, where the program CoTs, in combination with reranking, lead to a performance improvement exceeding \(10\) points for the \(6.7\)B model compared to natural language CoTs. This is because the answers are in multiple-choice format in MathQA, and thus inaccurate predictions for which the program execution results are "_null-result_" can be easily filtered out before performing majority voting or reranking12. Table 3 presents the percentage of null-result answers in MathQA predictions. Although natural language CoTs produce fewer null-result answers, their performance is worse on questions with a higher percentage of null-result program answers, as shown in Table 4.

Footnote 12: We would see many predictions give "_null_" as the voting/reranking answer if we did not remove them. Therefore, it is essential to remove those samples as voting/re-ranking could be misled by them.

Conversely, natural language CoTs tend to choose an answer (e.g., A, B), regardless of the accuracy of the CoT, because the CoT cannot be executed as a program. However, there are exceptions where we use the non-describing program. For example, the performance of "majority voting + NDP" with Python using the \(6.7\)B model is worse than the natural language counterpart on SVAMP. The same observation also applies to the GSM8K dataset with the \(30\)B model for both Wolfram and Python languages. Without natural language, NDP has a weaker language understanding capability compared to SDP and CDP.

**Program CoT Comparison** Under both the majority voting and reranking strategies, the self-describing program consistently achieves the best performance, followed by the comment-describing program, and then the non-describing program. Unlike in SFT, the self-describing program provides more diversity and therefore tends to perform better in voting and re-ranking. Notably, the \(30\)B model with Python, "reranking + SDP", achieves the best performance on GSM8K and SVAMP. The performance is also \(2.9\) points and \(8.6\) points higher than the best prompting approach with GPT-3.5-turbo on GSM8K and SVAMP, respectively. "Reranking + SDP" with Wolfram also obtains the best performance on the noisy MathQA dataset, with a +\(28\)-point improvement over GPT-3.5-turbo prompting. Though the performance with CDP is worse than SDP, we can see that the best CDP methods can still outperform the best GPT-3.5-turbo prompting approach on all datasets.

**Programming Language Comparison** The best-performing \(6.7\)B and \(30\)B models are often the methods in Python, as shown in Table 2. The only exception is that the best \(30\)B model with Python falls \(0.5\) points behind the best \(30\)B model with Wolfram.
For non-describing and self-describing programs, the use of Python often outperforms the use of Wolfram. For comment-describing program, the methods using Python and Wolfram have comparable performance, with the \(6.7\)B model using Wolfram having better performance on SVAMP.

\begin{table} \begin{tabular}{l c c} \hline \hline **Null-result Answer (\%)** & \(6.7\)B & \(30\)B \\ \hline NL & \(0.53\) & \(2.27\) \\ SDP & \(34.87\) & \(33.12\) \\ CDP & \(34.73\) & \(32.78\) \\ \hline \hline \end{tabular} \end{table} Table 3: Percentage of _null-result_ answers in MathQA Wolfram predictions.

\begin{table} \begin{tabular}{l c c} \hline \hline **Range** & \(6.7\)B & \(30\)B \\ \hline Overall & \(59.5\) & \(69.4\) \\ \hline \(0\sim 20\) & \(73.9\) & \(81.2\) \\ \(20\sim 40\) & \(58.8\) & \(54.5\) \\ \(40\sim 60\) & \(58.3\) & \(62.1\) \\ \(60\sim 80\) & \(40.3\) & \(55.8\) \\ \(80\sim 100\) & \(35.7\) & \(46.3\) \\ \hline \hline \end{tabular} \end{table} Table 4: NL majority voting accuracy against percentage of _null-result_ answers in CDP.

## 6 Analysis

### Number of Instances for Sampling

We measure the effect of the number of sampled instances \(K\) during majority voting. We vary \(K\) from \(1\) to \(100\) and evaluate the accuracies for the \(6.7\)B and \(30\)B models. Figure 3 provides the results on GSM8K. The performance of all methods improves rapidly with an increase of \(K\) and becomes stable when \(K\) is more than \(30\). Specifically, both CDP and NDP require a smaller \(K\) compared to SDP and NL. The results indicate that CDP and NDP are more deterministic while SDP and NL are more diverse. With more diverse CoTs, SDP and NL are able to gain more improvements with more samples in majority voting.

### Representation sampling statistics

Table 5 reports the results of model predictions on GSM8K, including the percentage of syntactically correct predictions (i.e., execution rate), the percentage of correct answers (i.e., precision), and the chance of obtaining at least one correct answer among \(100\) samples (i.e., correct@100). Here, syntactically correct means that we can extract or execute the CoT to get a _valid_ answer (e.g., a letter A, B, C, or D for MathQA, and a numeric value for GSM8K and SVAMP). It can be seen that NL CoT has a high correct@100 and execution rate but the lowest precision compared to all other CoT types. This is probably because the natural language syntax is straightforward and it is challenging for models at the current scale to perform precise calculations without the help of a computational engine. It is noteworthy that CDP usually has the highest precision and execution rate and a relatively high correct@100 score, while SDP has the lowest execution rate but the highest correct@100 score and relatively high precision. The results further support our hypothesis that CDP _is more deterministic and precise, while_ SDP _has a higher level of diversity_, and thus a higher chance of obtaining correct answers with the risk of making more errors. Therefore, we conclude that having a balance of diversity and precision is crucial for higher performance in voting and reranking. The execution rates of CDP and NDP are similar, but CDP scores higher in correct@100 and achieves significantly better precision. Such an observation indicates the benefits of including natural language comments.

### Upper bounds

We analyze the results in reranking to explore the potential of the CoT designs (NL, CDP/SDP in Wolfram/Python).
We calculate the accuracy when _any_ of the CoTs is correct, which is considered the upper bound of all types of CoTs. We consider the best performance of individual types of CoTs and the upper bounds of all types of CoTs in Table 6. We find that the upper bounds of the CoTs on 30B models are \(98.8\)%, \(93.0\)%, and \(95.0\)% on GSM8K, MathQA, and SVAMP, respectively. This indicates the potential for combining the CoTs to create a more accurate representation. We leave this as future work.

Figure 3: Majority voting regarding the different number of sampled instances (Left: 6.7B; Right: 30B). We just depict the performance in Python for illustration purposes.

We also analyze the results of \(30\)B reward model reranking by comparing different CoT types on the same example (Figure 4). Though SDP has overall better performance, a non-negligible number of its failure cases are correctly solved by CDP or NL. The same observation applies to the failure cases of CDP or NL, too. The above results show that CDP, SDP, and NL have distinct advantages for math problem solving. We conduct another experiment using a method that treats all three types of CoTs equally during majority voting and reranking. For reranking, we train a reward model that is capable of distinguishing and ranking the three types of CoTs. The number of sampled CoT solutions is set to \(100\) for fair comparison. Specifically, we perform majority voting and reranking on \(100\) solutions that contain all three types of CoT, in which CDP and SDP are in Wolfram. Table 7 shows the comparison between the synthesized results and the previous best performance in Table 2. The significant improvements suggest that there is still a large potential for a better CoT design that integrates the strengths of all three CoT types.

\begin{table} \begin{tabular}{l c c c c} \hline \hline & **Size** & **GSM8K** & **MathQA** & **SVAMP** \\ \hline \hline \multirow{2}{*}{Best performance of _individual_ CoT} & \multirow{2}{*}{\(6.7\)B} & \(72.4\) & \(75.2\) & \(78.6\) \\ & & (Python SDP) & (Python SDP) & (Python SDP) \\ \hline \multirow{2}{*}{Upper bound of _all_ types of CoTs} & \multirow{2}{*}{\(6.7\)B} & \(\mathbf{89.5}\) & \(\mathbf{90.1}\) & \(\mathbf{91.0}\) \\ \hline \hline \multirow{2}{*}{Best performance of _individual_ CoT} & \multirow{2}{*}{\(30\)B} & \(80.9\) & \(78.6\) & \(87.0\) \\ & & (Python SDP) & (Wolfram SDP) & (Python SDP) \\ \hline \multirow{2}{*}{Upper bound of _all_ types of CoTs} & \multirow{2}{*}{\(30\)B} & \(\mathbf{98.8}\) & \(\mathbf{93.0}\) & \(\mathbf{95.0}\) \\ \hline \end{tabular} \end{table} Table 6: The best performance of individual types of CoTs, and the upper bounds of all types of CoTs (if any of the CoTs is correct).

Figure 4: The percentage of failure cases that are correctly predicted in different CoT types.

\begin{table} \begin{tabular}{l l c c c c} \hline \hline **Program** & **CoT Type** & **Size** & **Correct@100** & **Precision (\%)** & **Executable (\%)** \\ \hline \hline - & Natural Language & \(6.7\)B & \(86.9\) & \(41.5\) & \(99.3\) \\ \hline \multirow{3}{*}{Wolfram} & Comment-Describing Program & \(6.7\)B & \(78.5\) & \(58.7\) & \(99.6\) \\ & Self-Describing Program & \(6.7\)B & \(82.8\) & \(54.7\) & \(94.1\) \\ & Non-Describing Program & \(6.7\)B & \(69.6\) & \(52.1\) & \(99.3\) \\ \hline \multirow{3}{*}{Python} & Comment-Describing Program & \(6.7\)B & \(78.8\) & \(55.9\) & \(98.6\) \\ & Self-Describing Program & \(6.7\)B & \(83.6\) & \(56.7\) & \(96.2\) \\ & Non-Describing Program & \(6.7\)B & \(70.9\) & \(55.3\) & \(99.6\) \\ \hline \hline - & Natural Language & \(30\)B & \(94.3\) & \(58.0\) & \(98.8\) \\ \hline \multirow{3}{*}{Wolfram} & Comment-Describing Program & \(30\)B & \(85.0\) & \(67.7\) & \(99.6\) \\ & Self-Describing Program & \(30\)B & \(91.1\) & \(65.1\) & \(97.8\) \\ & Non-Describing Program & \(30\)B & \(76.1\) & \(62.0\) & \(99.6\) \\ \hline \multirow{3}{*}{Python} & Comment-Describing Program & \(30\)B & \(86.1\) & \(68.1\) & \(99.0\) \\ & Self-Describing Program & \(30\)B & \(91.0\) & \(67.6\) & \(98.1\) \\ & Non-Describing Program & \(30\)B & \(82.6\) & \(65.0\) & \(99.8\) \\ \hline \hline \end{tabular} \end{table} Table 5: Sampling statistics on the GSM8K dataset.

## 7 Related Work

Mathematical reasoning through CoT prompting (Wei et al., 2022b), on large language models (Wei et al., 2022a), has experienced significant development in recent years, as evidenced by a large number of CoT methods proposed. Among them, Uesato et al. (2022) applied the _process-based_ and _outcome-based_ reward to score the natural language CoTs on GSM8K (Cobbe et al., 2021), greatly improving problem-solving effectiveness. Lightman et al. (2023) enhanced the capability of the process-based reward model and achieved significant improvements on the challenging MATH dataset (Hendrycks et al., 2021). Furthermore, recent research efforts extended simple natural language CoTs, encompassing various approaches designed to enhance and optimize prompting performance. Specifically, Fu et al. (2023) introduced the concept of _complexity-based prompts_, showing that LLMs favor long reasoning chains, which often leads to superior performance. Moreover, the methods proposed by Zhou et al. (2023b) and Khot et al. (2023) decompose problems into a series of simpler and more manageable questions. Similarly, Nye et al. (2021) presented the "_Scratchpad_" concept, designed to explicitly present the intermediate calculations to the large language model. Although these advancements are significant, ensuring the correctness of CoTs remains a challenge. The deterministic nature of programs is increasingly attracting the attention of researchers who use program-aided methods for math problem solving. Imani et al. (2023) developed a strategy that ensures answer consistency between programmatic and natural language reasoning, thereby enhancing reliability. In similar pursuits, both Chen et al. (2022) and Gao et al.
(2023) proposed the use of Python programs as prompts. By offloading execution tasks to the Python interpreter, they were able to mitigate issues related to incorrect reasoning or calculations. The programs employed in these approaches are similar to our self-describing programs, where variables are represented using natural language. Zhou et al. (2023a) further combined natural language and programs by making use of the code interpreter in GPT-4 (OpenAI, 2023). Concurrently, research by Drori et al. (2022) and Li et al. (2022) demonstrated the effectiveness of generating purely symbolic Python programs to address MATH questions (Hendrycks et al., 2021) and programming competitions. He-Yueya et al. (2023) enabled declarative reasoning in a program by embedding symbolic expressions into natural language prompts. In response to the diversity of program CoT types, our work aims to provide a comprehensive analysis and comparison of the representations. Our objective is to uncover their distinctive characteristics and potential advantages.

\begin{table} \begin{tabular}{l l c c c} \hline \hline **Size** & **Method** & **GSM8K** & **MathQA** & **SVAMP** \\ \hline \multirow{4}{*}{\(6.7\)B} & Majority Voting (Best CoT) & \(61.3\) & \(72.7\) & \(72.9\) \\ & Majority Voting (NL + SDP + CDP) & \(67.4\) (\(+6.3\)) & \(76.0\) (\(+3.3\)) & \(76.0\) (\(+3.1\)) \\ & Reranking (Best CoT) & \(71.4\) & \(73.8\) & \(77.3\) \\ & Reranking (NL + SDP + CDP) & \(75.4\) (\(+4.0\)) & \(79.0\) (\(+5.2\)) & \(77.0\) (\(-0.3\)) \\ \hline \multirow{4}{*}{\(30\)B} & Majority Voting (Best CoT) & \(69.8\) & \(77.9\) & \(79.2\) \\ & Majority Voting (NL + SDP + CDP) & \(77.5\) (\(+7.7\)) & \(82.6\) (\(+4.7\)) & \(81.1\) (\(+1.9\)) \\ & Reranking (Best CoT) & \(79.9\) & \(78.6\) & \(83.4\) \\ & Reranking (NL + SDP + CDP) & \(83.5\) (\(+3.6\)) & \(83.5\) (\(+4.9\)) & \(83.9\) (\(+0.5\)) \\ \hline \hline \end{tabular} \end{table} Table 7: Performance of synthesizing CDP, SDP, and NL CoT types in Wolfram.

## 8 Conclusion

We have conducted a comprehensive study of chain-of-thought design for math problem solving, including natural language and program CoTs. We categorize the program CoTs into non-describing program, self-describing program, and comment-describing program. Through extensive experiments on GSM8K, MathQA, and SVAMP, we find that the self-describing program often achieves the best performance and outperforms few-shot prompting with GPT-3.5-turbo. It is better to use program CoTs than natural language CoTs for math problem solving. Self-describing program and comment-describing program perform better than non-describing program. Among the first two, self-describing program works better than comment-describing program. The program CoTs in Python work better than the program CoTs in Wolfram. We hope our experimental findings will provide valuable insights for the future design of chain-of-thought in math problem solving.
2309.17425
Data Filtering Networks
Large training sets have become a cornerstone of machine learning and are the foundation for recent advances in language modeling and multimodal learning. While data curation for pre-training is often still ad-hoc, one common paradigm is to first collect a massive pool of data from the Web and then filter this candidate pool down to an actual training set via various heuristics. In this work, we study the problem of learning a data filtering network (DFN) for this second step of filtering a large uncurated dataset. Our key finding is that the quality of a network for filtering is distinct from its performance on downstream tasks: for instance, a model that performs well on ImageNet can yield worse training sets than a model with low ImageNet accuracy that is trained on a small amount of high-quality data. Based on our insights, we construct new data filtering networks that induce state-of-the-art image-text datasets. Specifically, our best performing dataset DFN-5B enables us to train state-of-the-art CLIP models for their compute budgets: among other improvements on a variety of tasks, a ViT-H trained on our dataset achieves 84.4% zero-shot transfer accuracy on ImageNet, out-performing models trained on other datasets such as LAION-2B, DataComp-1B, or OpenAI's WIT. In order to facilitate further research in dataset design, we also release a new 2 billion example dataset DFN-2B and show that high performance data filtering networks can be trained from scratch using only publicly available data.
Alex Fang, Albin Madappally Jose, Amit Jain, Ludwig Schmidt, Alexander Toshev, Vaishaal Shankar
2023-09-29T17:37:29Z
http://arxiv.org/abs/2309.17425v3
# Data Filtering Networks ###### Abstract Large training sets have become a cornerstone of machine learning and are the foundation for recent advances in language modeling and multimodal learning. While data curation for pre-training is often still ad-hoc, one common paradigm is to first collect a massive pool of data from the Web and then filter this candidate pool down to an actual training set via various heuristics. In this work, we study the problem of learning a _data filtering network_ (DFN) for this second step of filtering a large uncurated dataset. Our key finding is that the quality of a network for filtering is distinct from its performance on downstream tasks: for instance, a model that performs well on ImageNet can yield worse training sets than a model with low ImageNet accuracy that is trained on a small amount of high-quality data. Based on our insights, we construct new data filtering networks that induce state-of-the-art image-text datasets. Specifically, our best performing dataset DFN-5B enables us to train state-of-the-art CLIP models for their compute budgets: among other improvements on a variety of tasks, a ViT-H trained on our dataset achieves 84.4% zero-shot transfer accuracy on ImageNet, outperforming models trained on other datasets such as LAION-2B, DataComp-1B, or OpenAI's WIT. In order to facilitate further research in dataset design, we also release a new 2 billion example dataset DFN-2B and show that high performance data filtering networks can be trained from scratch using only publicly available data. 1Apple, 2University of Washington ## 1 Introduction Carefully curated datasets have driven progress in machine learning for decades, from early pattern recognition experiments in Bell Labs to recent developments like GPT-4, Stable Diffusion, and CLIP (Highleyman and Kamentsky, 1959; LeCun et al., 1989, 1998; Deng et al., 2009; Krizhevsky et al., 2009, 2012; Radford et al., 2019, 2021, 2022; OpenAI, 2023). Despite their crucial role, datasets themselves are rarely the subject of active research (Sambasivan et al., 2021). Current approaches to improving performance on machine learning tasks have focused on scaling model capacity or training data volume. While scaling laws (Hestness et al., 2017; Kaplan et al., 2020; Aghajanyan et al., 2023; Cherti et al., 2023) have elucidated the relationship between model size, data size, and performance, little formal guidance exists on how to scale these quantities. On the model side, experimentation is straightforward - with enough compute, permutations of width, depth, normalization and training hyperparameters can be rigorously evaluated, leading to consistent modeling improvements over the years (Touvron et al., 2023a,b; Elsen et al., 2023). The dataset side is unfortunately murkier. Most large-scale training sets are not released, leaving the community to attempt open reproductions (Schuhmann et al., 2021, 2022; Byeon et al., 2022; Gao et al., 2020); however, these are often one-off efforts without the iterative refinement that models enjoy. Recent efforts like DataPerf, DataComp and MetaCLIP (Mazumder et al., 2022; Gadre et al., 2023; Xu et al., 2023) help bridge the gap by providing consistent dataset evaluation and reproduction frameworks. We argue dataset design can leverage the same tools as model design. Almost all large-scale dataset construction can be broken down into two phases: uncurated data collection and dataset filtering. 
We focus our work on the latter, with the assumption that a large uncurated dataset exists. We show data filtering networks (DFNs) - neural networks designed to filter data - can induce massive, high-quality pre-training datasets. Unlike previous techniques relying on domain-specific heuristics, DFNs paired with a large unfiltered image-text pool produce billion-scale state-of-the-art datasets algorithmically. We demonstrate DFNs can be efficiently trained from scratch and improved with the same techniques as standard ML models.

The contributions of this work are as follows. First, we characterize the properties of data filtering networks that lead to high-quality datasets. We ablate properties of data filtering networks from supervision signal to training data quality. We find that a small contrastive image-text model trained on _only_ high-quality data is sufficient to construct state-of-the-art datasets. Second, we use these properties to train DFNs and construct datasets that induce Contrastive Image-Text Pre-trained (CLIP) models that achieve high accuracy and present a better compute-accuracy tradeoff than any existing dataset in the literature, as shown in Figure 1. In particular, we train a ViT-L/14 for 12.8B examples seen on our DFN-induced dataset DFN-2B to 81.4% ImageNet zero-shot transfer accuracy, outperforming the previous best ViT-L trained on DataComp-1B by over 2 percentage points. We further train a ViT-H/14 for 39B samples seen on a larger DFN-induced dataset, DFN-5B, to 84.4% ImageNet zero-shot transfer accuracy. We show that models trained on these datasets achieve consistent improvements on many tasks, including zero-shot classification, retrieval, and visual question answering, and maintain the favorable robustness properties of CLIP models. Lastly, the above insights can be used as a recipe to construct high-quality datasets from scratch by using only public data1, thus making strides towards the democratization of large high-quality datasets. In addition, we release DFN-2B for the community to enable research on large image-text models.

Figure 1: Compute scaling behavior of training CLIP models on various datasets. DFN-2B, the subset of CommonPool (DataComp-12.8B) chosen by our best performing data filtering networks, out-performs all other datasets including OpenAI's WIT and the previous state-of-the-art CLIP training dataset DataComp-1B. Our ViT-L outperforms a ViT-G trained on LAION with \(18\times\) more compute. Similarly, our ViT-B/16 trained on our dataset outperforms OpenAI's ViT-L/14 trained with \(4\times\) more compute. Our ViT-H/14 achieves \(84.4\%\) on ImageNet, out-performing any model in its compute class. All DFN-trained models were trained on DFN-2B, except for the ViT-H, which was trained on DFN-5B. Both datasets were induced by the same DFN. We note that the cost of training the DFN, which corresponds to less than \(\frac{1}{50}\)th of the total CLIP training cost, is omitted from this plot.

## 2 Background and Related Work

### Contrastive Image Language Pre-training (CLIP)

CLIP has altered the use of cheaply available image-alt-text datasets by demonstrating the practicality of large-scale training on web-scraped image-text pairs to build state-of-the-art image representations.
CLIP consists of separate vision and text encoders, and uses a contrastive loss during training to push the representations of related image-text pairs together, and unrelated pairs apart. Crucial to this process is a large dataset of _aligned_ image-text pairs - images paired with semantically relevant text. The release of CLIP was followed by several other image-text models such as ALIGN, BASIC, LiT and Open-CLIP, all of which we will refer to in this work as CLIP models (Jia et al., 2021; Pham et al., 2023; Zhai et al., 2022b; Ilharco et al., 2021). CLIP models generally come in 3 canonical sizes of vision transformer: ViT-B/32, ViT-B/16 and ViT-L/14; since then, the open source community has extended these to 3 larger variants ViT-H/14, ViT-g/14 and ViT-G/14 (Dosovitskiy et al., 2020; Zhai et al., 2022a). Generally, the larger models exhibit better zero-shot generalization and transfer properties. CLIP models have been trained on a variety of datasets, including OpenAI's WiT, Google's WebLI and JFT-3B, LAION, COYO and DataComp-1B. Prior work has also studied how to fine-tune CLIP models to improve performance in targeted directions. CLIP models can be fine-tuned on image classification tasks by using templates to transform labels to text (Fang et al., 2022; Goyal et al., 2022). Additionally, practitioners often use weight ensembling to preserve robustness properties of the pre-trained model while reaping the benefits of fine-tuning (Wortsman et al., 2022). We take advantage of these techniques in order to improve the filtering models we train in this work.

### Dataset Construction

Prior to CLIP, datasets most commonly used in computer vision were _supervised_ with human labels (Deng et al., 2009; Krizhevsky et al., 2009). Though these older dataset construction pipelines were quite intricate and did not scale beyond a few million examples, they share some similarity with current constructions. Classical datasets such as ImageNet and CIFAR started with a large _roughly curated_ pool of images paired with metadata, and used humans to either label or filter the data. Modern dataset pipelines have a similar procedure but at a much higher scale. The initial pool of images can contain up to 100 billion images, and the dataset filtering is purely automated, often with a set of rules and heuristic filtering stages (Jia et al., 2021). Past work in natural language processing has used binary filters as an initial step to remove low quality documents (Wenzek et al., 2019; Brown et al., 2020), but such pipelines contain multiple filtering components. One of the first publicly available web-scale image-text datasets is LAION. LAION-400M and LAION-2B were constructed by collecting image-text pairs from Common Crawl, filtering by English, and then keeping pairs whose image and text are well _aligned_. This alignment is performed using a procedure known as _CLIP filtering_. The procedure leverages an existing image-text model (in LAION's case OpenAI CLIP ViT-B/32), and removes samples whose cosine similarity between image and text is below some threshold. We show pseudocode of the basic CLIP filtering operation below.
```
def clip_filter(image, text, threshold=0.3):
    # compute image and text representations with a pretrained CLIP model
    image_features = clip.encode_image(image)
    text_features = clip.encode_text(text)
    # compute alignment as cosine similarity
    dot_product = image_features.T @ text_features
    norm_a = image_features.norm()
    norm_b = text_features.norm()
    similarity = dot_product / (norm_a * norm_b)
    # filter by alignment
    return similarity > threshold
```

While CLIP filtering is convenient, it depends on an existing trained CLIP model, and may limit the top-line performance of any model trained using it as a filter. For example, despite LAION-2B being five times larger than OpenAI's dataset, models trained on it could only match OpenAI's ImageNet zero-shot performance with a significantly larger compute budget. To better facilitate the study of image-text datasets, researchers created the DataComp benchmark (Gadre et al., 2023). The benchmark provides 12.8 billion image-text pairs from Common Crawl so that researchers can study the effect of various data filtering techniques. DataComp fixes the computational budget used to train the resulting models, with the budget of the largest scale matching the cost of training OpenAI's ViT-L/14 CLIP model. These models are then evaluated on a suite of 38 downstream tasks, which includes ImageNet and distribution shifts, VTAB, and retrieval tasks. We use this benchmark as our primary method of evaluating the datasets created by our data filtering networks. The authors of DataComp also released a baseline dataset, DataComp-1B (DC-1B), that improved upon LAION-5B by combining CLIP filtering with an ImageNet-based clustering approach to improve dataset quality on a variety of benchmarks. However, this dataset still relies on the OpenAI CLIP model for CLIP filtering and imposes a costly ImageNet-specific clustering step in the pipeline. Recent work (Xu et al., 2023) has demystified the CLIP dataset construction process and demonstrated that high quality dataset construction is possible with simple keyword-based sampling and global balancing. While their work does create competitive datasets, its reliance on sampling heuristics from the original CLIP paper (Radford et al., 2021) targets accurate dataset reproduction; our work instead focuses on _improving_ model performance through dataset construction.

## 3 Data Filtering Networks

The core object we study in this work is a data filtering network (DFN). In this section we define DFNs and introduce their evaluation setup.

### Definitions

Since our ultimate goal is to build functions that filter potentially trillions of examples efficiently, we restrict the scope of our study to DFNs that are only applied pointwise to elements of a larger data pool. Thus, processing a data pool with a DFN, defined in pseudocode as follows

```
def apply_dfn(dfn, data_pool):
    return [x for x in data_pool if dfn(x)]
```

lends itself to parallelization and thus efficient application. For a given DFN and pool, we refer to the data pool we train the DFN on as a _filter dataset_. Furthermore, we refer to the dataset constructed by filtering the pool with the DFN as the _induced dataset_. We then refer to a model trained (only) on that dataset as the _induced model_. As introduced in Section 2.2, a common choice for a DFN is a CLIP-trained image-text model. Thus, a DFN can not only be used to induce a dataset but also be applied to common evaluation problems such as zero-shot ImageNet classification. Conversely, a CLIP model can be used both for general recognition and as a DFN.
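As a concrete illustration of how these pieces fit together, the sketch below scores an unfiltered pool with the same cosine-similarity rule as `clip_filter` and keeps only the best-aligned fraction of pairs, i.e., it builds the induced dataset. The `clip` object, the in-memory pool of (image, text) pairs, and the quantile-based choice of threshold are assumptions for illustration rather than the exact production pipeline.

```
import numpy as np

def alignment_scores(pool, clip):
    # Cosine similarity between image and text embeddings for every candidate pair.
    scores = []
    for image, text in pool:
        img = clip.encode_image(image)
        txt = clip.encode_text(text)
        scores.append(float(img @ txt) / (np.linalg.norm(img) * np.linalg.norm(txt)))
    return np.array(scores)

def induce_dataset(pool, clip, keep_fraction=0.15):
    # Keep the top `keep_fraction` of pairs by image-text alignment; equivalent to
    # apply_dfn with the threshold set at the corresponding score quantile.
    scores = alignment_scores(pool, clip)
    threshold = np.quantile(scores, 1.0 - keep_fraction)
    return [pair for pair, s in zip(pool, scores) if s >= threshold]
```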
When we use a CLIP model as a DFN, we define its _filtering performance_ as the performance of the induced model, as evaluated on standard benchmarks, e.g. ImageNet top-1.

### Data Filtering Networks Evaluation Setup

With these definitions in place, we now address how we evaluate DFNs. In our context, the quality of a DFN is determined by the strength of models it can induce. We build on the evaluation framework proposed by DataComp (Gadre et al., 2023). DataComp provides a multi-scale evaluation framework for datasets by measuring CLIP model zero-shot performance. It provides 4 nested unfiltered image-text pair pools of increasing size. In this work, we use the medium (128M datapoints), large (1.28B datapoints) and xlarge (12.8B datapoints) pools. We also follow the DataComp guidelines for model hyperparameters for each of these pools, which are ViT-B/32 for medium, ViT-B/16 for large and ViT-L/14 for XL. Exact hyperparameters can be found in Table 7. We additionally expand our DFN to a larger pool of 42B images by combining 30B non-DataComp web-scraped images with the DataComp XL pool. We denote the dataset induced using this pool and our DFN as DFN-5B, which we use to train a ViT-H/14 model. For evaluation we use 38 zero-shot classification and retrieval tasks in the DataComp benchmark. We denote the average performance on these benchmarks simply as "Average" performance, but we also track various subsets: ImageNet performance (IN), ImageNet distribution shift performance (IN shifts), the Visual Task Adaptation Benchmark (VTAB), and retrieval performance (COCO, Flickr, WinoGAViL). Our actual training runs on both Nvidia A100s and TPU v4s. We use OpenClip and AXlearn to train our CLIP models on GPUs and TPUs respectively (Ilharco et al., 2021; Apple, 2023).

Figure 2: A high level overview of our pipeline for constructing datasets using DFNs.

### Understanding Data Filtering Networks

As open-source CLIP models improve on standard vision metrics such as ImageNet, the question arises whether we can replace the OpenAI CLIP model used in the dataset construction process with one of these better models. We can even hope to recursively apply this process and continuously train better models that can be used as better filtering models, which once again yield even better models. Unfortunately, this does not seem to be true. Figure 3 shows that ImageNet performance of CLIP models is not correlated with filtering performance. To measure filtering performance, we create a dataset by using the CLIP model to apply CLIP filtering on DataComp's medium raw pool, and measure ImageNet performance of models trained on the induced dataset. It is especially striking that a model with 30% less ImageNet performance than OpenAI's CLIP models can be as good when used as a filtering model.

Figure 3: Filtering strength is uncorrelated with image task performance. The models are trained using CLIP, and the number of samples seen and the training data are displayed on the right hand side. Filtering performance is measured by filtering on DataComp medium.

We find that data quality is key to training good filtering models. To demonstrate this, we start with a high-quality pool of 10 million samples from Conceptual 12M (CC12M), and gradually replace it with unfiltered data from Common Crawl until this pool only contains Common Crawl. We train DFNs on these data mixes, and use these DFNs to CLIP filter a separate pool of 128 million Common Crawl samples from DataComp's medium scale.
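A minimal sketch of how such fixed-size filter-training mixes could be assembled; the fraction grid, seeding, and in-memory list representation of the two pools are illustrative assumptions, not the exact setup.

```
import random

def build_filter_training_mix(high_quality, unfiltered, frac_unfiltered,
                              pool_size=10_000_000, seed=0):
    # Fixed-size DFN training pool: a (1 - frac_unfiltered) share of high-quality
    # samples "poisoned" with a frac_unfiltered share of unfiltered samples.
    rng = random.Random(seed)
    n_unfiltered = int(round(frac_unfiltered * pool_size))
    mix = (rng.sample(high_quality, pool_size - n_unfiltered)
           + rng.sample(unfiltered, n_unfiltered))
    rng.shuffle(mix)
    return mix

# e.g., sweep from a pure CC12M pool to a pure Common Crawl pool:
# mixes = [build_filter_training_mix(cc12m, common_crawl, f)
#          for f in (0.0, 0.25, 0.5, 0.75, 1.0)]
```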
In Figure 4, we measure the ImageNet performance of both the DFNs and the induced models trained on datasets generated by each of the DFNs. While the ImageNet performance of the DFNs degrades steadily as they are trained on larger fractions of unfiltered data, their performance as filtering networks decreases immediately when the high-quality pool is "poisoned" with even a small portion of unfiltered data. Once the filtering training pool is poisoned, the dataset induced by the DFN is only slightly better than unfiltered data.

\begin{table} \begin{tabular}{l l c c} \hline \hline DFN Type & Filter Dataset & ImageNet & Average \\ \hline No Filter Baseline & None & 0.176 & 0.258 \\ ResNet-34 Image Binary Filter & ImageNet & 0.242 & 0.292 \\ OpenAI ViT-B/32 Image Binary Filter & ImageNet & 0.266 & 0.295 \\ ResNet-34 Image Binary Filter & CC12M & 0.203 & 0.257 \\ OpenAI ViT-B/32 Image Binary Filter & CC12M & 0.218 & 0.276 \\ M3AE ViT-B/16 & CC12M & 0.237 & 0.297 \\ CLIP ViT-B/32 & CC12M & 0.289 & 0.335 \\ \hline \hline \end{tabular} \end{table} Table 1: Filtering Performance of various filtering models, after filtering DataComp medium scale (ViT-B/32, 128M samples seen). We present results on ImageNet top-1 as well as the “Average” set of tasks (see Sec. 3.2 for details).

Figure 4: Data quality determines the filtering performance of models. We create these filter training datasets of various quality by having a set pool size of 10 million samples, and interpolating between CC-12M (high quality) and CommonPool (low quality). We then train models induced by the DFN filtering DataComp medium.

Next, we explore using filtering models beyond CLIP models. While DFNs can use any model that can be reduced to a binary function, intuitively it makes sense to use CLIP models. By filtering with a similarity score between the image and text, we encourage keeping samples where the image and text are aligned. In order to verify this intuition, we consider a few other options to produce a DFN. One is to train a binary classifier that can distinguish between ImageNet or CC12M data as positives and Common Crawl as negatives. We consider both ResNet (He et al., 2016) as well as frozen OpenAI CLIP embeddings for this filter. Another option is to use M3AE (Geng et al., 2022) trained on CC12M as a DFN that takes into account both images and text. We can use reconstruction loss as the filtering criterion, as it is a reasonable proxy for how similar samples are to the high-quality data used to train the filtering model. The filtering performance of all these options, including CLIP models, is summarized in Table 1, where the CLIP model outperforms the other backbones. A key difference between the binary classifier and CLIP filters is that the binary filter makes an explicit assumption on what qualifies as a good distribution, while CLIP filters are more flexible. Although the M3AE and CLIP filtering models are both trained on CC12M and examine both modalities, M3AE performs much worse, potentially due to a combination of CLIP encouraging image-text alignment and the difficulty of text reconstruction from just CC12M. We conclude that CLIP models are the most practical and performant models for image-text DFNs.

## 4 Creating Better Data Filtering Networks

Equipped with a better understanding of CLIP models as data filtering networks, we aim to create better data filtering networks. DFNs can be trained and modified in the same ways as standard machine learning models.
We start by training a CLIP model on a high-quality dataset, and then we can fine-tune the filtering network on subsequent datasets that we want to do especially well on. We use weight ensembling to reduce overfitting on the fine-tuned datasets. Standard machine learning techniques such as augmentation, using a different initialization, and training for more steps with a larger batch size seem to improve the filtering model. We demonstrate the effect of these interventions in Table 2. On the other hand, using a different model size seems to have limited benefits, while model ensembling increases filtering costs without bringing gains. Compared to previous datasets such as DataComp-1B (DC-1B), which involved combining CLIP filtering with clustering-based heuristics, DFNs simplify the data filtering process into a single pipeline while also reducing computational costs. To create our best DFN, we train a ViT-B/32 CLIP model on High-Quality Image-Text Pairs (HQITP-350M), which is a high-quality dataset of 357 million image-text samples with human-verified captions. This dataset is similar to the HQITP-135M used in Ranasinghe et al. (2023), but expanded to 357M examples. We initialize the weights with OpenAI's checkpoint. We then fine-tune on the combined MS COCO training set, Flickr30k training set, and ImageNet 1k with OpenAI templates as the captions. We use additional augmentation at both training and fine-tuning time. Additional training details can be found in Appendix B. We create our dataset DFN-2B by applying this DFN on DataComp's full 12.8 billion sample CommonPool, with a threshold equivalent to taking the top 15% of samples. Our DFN induces datasets that achieve state-of-the-art results on medium, large, and xlarge scales in DataComp. In particular at xlarge, we train a ViT-L/14 on DFN-2B for 12.8B samples seen to achieve 81.4% zero-shot accuracy on ImageNet, and a 0.669 average over 38 DataComp evaluation datasets. As shown in Table 3, in terms of ImageNet zero-shot accuracy, this is a 2.2% improvement over DC-1B, a 5.9% improvement over OpenAI WIT-400M, and an 8.3% improvement over LAION-2B. These improvements extend beyond ImageNet, as we see similar trends across the DataComp evaluation suite in distribution shifts, retrieval, VTAB, and average performance. Lastly, we train a ViT-H/14 on DFN-5B for 39B samples seen at \(224\times 224\) resolution and 5B samples at \(378\times 378\) resolution, achieving 84.4% zero-shot transfer accuracy on ImageNet and a 0.710 average on the DataComp evaluation suite. We find that models trained on our DFN-produced datasets outperform all other models on the evaluation suite, regardless of pre-training dataset: MetaCLIP, WebLI or DataComp-1B (Xu et al., 2023; Zhai et al., 2022; Gadre et al., 2023), architectural improvements such as shape-optimized ViTs (Alabdulmohsin et al., 2023), a more performant sigmoid loss (Zhai et al., 2023), or pre-training performance optimizations such as those in Li et al. (2023b). Creating better datasets not only improves model performance, but also improves model efficiency. Performance that was once only achievable by larger models can be matched with a smaller model trained on a better dataset. Our ViT-L/14 trained on DFN-2B surpasses a ViT-G/14 trained on LAION-2B for 34B samples seen by 1.5% zero-shot accuracy on ImageNet, and by 0.002 average performance, despite using 16x less computational cost2.
Similarly, we can train a ViT-B/16 on DFN-2B for 12.8B samples seen to achieve competitive performance with OpenAI's ViT-L/14, representing a 4x computational cost reduction.

Footnote 2: calculation does not take into account patch dropout used to train ViT-G/14 on LAION-2B

The key to training good DFNs is using high-quality data for training the filtering network. Collecting verified high-quality data is expensive, as it often requires human annotations, and is thus difficult to scale to large quantities. But given a sizable high-quality dataset, we can explore if there are benefits to directly training on it instead of using it to train a DFN. In Table 4, we compare models trained on datasets induced by our DFNs with a model trained on HQITP-350M combined with the dataset induced by CLIP filtering CommonPool with OpenAI's ViT-B/32. Models trained on DFN-induced datasets outperform the baseline on all major categories within the DataComp evaluation suite. Furthermore, training on the combination of HQITP-350M and DFN-2B seems to bring little improvement when compared to just training on DFN-2B. By training a DFN instead of directly training on high-quality data, we demonstrate a successful recipe for leveraging high-quality data to create large-scale high-quality datasets.

\begin{table} \begin{tabular}{l l c c c c c} \hline \hline Intervention & & IN & IN Shifts & VTAB & Retrieval & Average \\ \hline \multirow{2}{*}{Augmentation} & ✗ & 0.620 & 0.493 & 0.534 & 0.515 & 0.536 \\ & ✓ & 0.626 & 0.501 & 0.534 & 0.516 & 0.542 \\ \hline \multirow{2}{*}{Samples Seen / Batch Size} & 2.56B / 4096 & 0.626 & 0.506 & 0.536 & 0.511 & 0.545 \\ & 5.12B / 8192 & 0.624 & 0.508 & 0.551 & 0.517 & 0.550 \\ \hline \multirow{2}{*}{Fine-tune} & ✗ & 0.624 & 0.508 & 0.551 & 0.517 & 0.550 \\ & ✓ & 0.678 & 0.540 & 0.555 & 0.534 & 0.560 \\ \hline \multirow{2}{*}{OAI-Init} & ✗ & 0.674 & 0.535 & 0.533 & 0.529 & 0.548 \\ & ✓ & 0.678 & 0.540 & 0.555 & 0.534 & 0.560 \\ \hline \hline \end{tabular} \end{table} Table 2: Standard interventions used to improve models can be used on DFNs to induce stronger datasets, leading to better models. DFNs are used to filter and train DataComp large (ViT-B/16, 1.28B samples seen).

We can also explore the differences between fine-tuning a DFN and directly training on the fine-tuning dataset. In Figure 5 and Table 8, we compare models trained on a dataset induced by a baseline DFN, a dataset induced by the baseline DFN fine-tuned on ImageNet, and a dataset induced by the baseline DFN without fine-tuning on ImageNet combined with ImageNet. While the model that directly trains on ImageNet has much higher performance on ImageNet and ImageNet-V2, it does not improve upon the baseline for ObjectNet, ImageNet-Sketch, and ImageNet-R. On the other hand, the DFN fine-tuned on ImageNet induces a dataset that improves over the baseline on ImageNet and all of its distribution shifts. Fine-tuning the DFN acts as a regularizer to induce datasets similar to the fine-tuning dataset, while maintaining the strong robustness properties that come with drawing from a more distributionally diverse candidate pool.

### Better Datasets Beyond Vision Tasks: VQA

Just like how machine learning models would ideally generalize across many tasks, we would also like our datasets to generalize across diverse tasks. We show that our datasets not only lead to better models when evaluated on vision tasks, but also lead to better visual question answering
We show that our datasets not only lead to better models when evaluated on vision tasks, but also lead to better visual question answering \begin{table} \begin{tabular}{l l c c c c c} \hline \hline Dataset & \begin{tabular}{c} DataComp \\ Scale \\ \end{tabular} & IN & IN Shifts & VTAB & Retrieval & Average \\ \hline DC-1B & medium & 0.297 & 0.239 & 0.346 & 0.231 & 0.328 \\ DFN-2B & medium & 0.371 & 0.298 & 0.388 & 0.288 & 0.373 \\ \hline DC-1B & large & 0.631 & 0.508 & 0.546 & 0.498 & 0.537 \\ DFN-2B & large & 0.678 & 0.540 & 0.555 & 0.534 & 0.560 \\ \hline LAION-2B & xlarge & 0.731 & 0.603 & 0.586 & 0.589 & 0.601 \\ OpenAI WIT-400M & xlarge & 0.755 & 0.649 & 0.586 & 0.543 & 0.617 \\ DC-1B & xlarge & 0.792 & 0.679 & 0.652 & 0.608 & 0.663 \\ DFN-2B & xlarge & 0.814 & 0.688 & 0.656 & 0.649 & 0.669 \\ \hline LAION-2B & N/A, ViT-G/14-224px & 0.801 & 0.691 & 0.646 & 0.635 & 0.667 \\ DC-1B (CLIPA-v2) & N/A, ViT-G/14-224px & 0.831 & **0.740** & 0.645 & 0.631 & 0.684 \\ MetaCLIP & N/A, ViT-H/14-336px & 0.805 & 0.700 & 0.640 & 0.652 & 0.667 \\ WebLI & N/A, ViT-SO/400M-384px & 0.831 & 0.734 & 0.648 & **0.698** & 0.692 \\ DFN-5B & N/A, ViT-H/14-224px & 0.834 & 0.713 & 0.675 & 0.684 & 0.698 \\ DFN-5B & N/A, ViT-H/14-378px & **0.844** & 0.738 & **0.685** & 0.695 & **0.710** \\ \hline \hline \end{tabular} \end{table} Table 3: Training on DFN-2B produces state-of-the-art CLIP models. Here we evaluate on the DataComp benchmark, comparing against LAION-2B, DC-1B, MetaCLIP and OpenAI WIT-400M. Additional comparisons can be found on the DataComp leaderboard. (VQA) models. We train a BLIP2 model (Li et al., 2023a) which takes as input a CLIP visual encoder and is trained for zero-shot VQA on COCO and Visual Genome, to measure zero-shot VQA performance on VQVA2, GQA, and OKVQA (Goyal et al., 2017; Hudson and Manning, 2019; Marino et al., 2019). We compare the performance on BLIP2 between the standard OpenAI ViT-L visual encoder and the ViT-L trained on DFN-2B. The DFN-2B model consistently outperforms the OpenAI ViT-L encoder and is competitive with a much larger EVA ViT-g model trained on LAION-2B3 Footnote 3: EVA’s ViT-g has an additional pre-training procedure trained on ImageNet-21k, COCO, Objects365 and Conceptual Captions 12M ### Publicly Reproducible DFNs Scientific research benefits from results that can be reproduced by anyone from scratch. Though OpenAI's internal dataset and HQITP-350M are not publicly accessible, we demonstrate that a competitive DFN can be trained on public data sources. We train a ViT-B/32 on Conceptual Caption12M, Conceptual Captions 3M, and Shutterstock 15M (Changpinyo et al., 2021; Sharma et al., 2018; Nguyen et al., 2023). As shown in Table 6, this DFN matches OpenAI's ViT-B/32 in terms of filtering performance at DataComp's medium and large scales. Additionally, this DFN can be modified as described in the previous section to further improve filtering performance. 
\begin{table} \begin{tabular}{l l c c c c} \hline \hline Dataset & Model & IN & IN Shifts & VTAB & Retrieval & Average \\ \hline OAI ViT-B/32 Induced Dataset & & & & & \\ + HQITP-350M & ViT-B/16 & 0.706 & 0.572 & 0.582 & 0.575 & 0.596 \\ DFN without FT Induced Dataset & ViT-B/16 & 0.729 & 0.599 & 0.604 & 0.597 & 0.612 \\ DFN-2B & ViT-B/16 & 0.762 & 0.623 & 0.598 & 0.611 & 0.609 \\ \hline OAI ViT-B/32 Induced Dataset & & & & & \\ + HQITP-350M & & & & & \\ DFN-2B & ViT-L/14 & 0.814 & 0.688 & 0.656 & 0.649 & 0.669 \\ DFN-2B + HQITP-350M & ViT-L/14 & 0.813 & 0.691 & 0.662 & 0.656 & 0.670 \\ \hline \hline \end{tabular} \end{table} Table 4: High-quality data is best used to train the filtering model rather than the end model. Training DFNs with HQITP-350M induces a dataset that outperforms the dataset induced by a worse DFN combined with HQITP-350M. \begin{table} \begin{tabular}{l l c c c} \hline \hline Visual Encoder & & & & \\ Training Dataset & Architecture & VQAv2 Acc. (\%) & GQA Acc. (\%) & OKVQ Acc. (\%) \\ \hline OAI-WIT-400M & ViT-L & 45.5 & 30.0 & 19.1 \\ DFN-2B & ViT-L & 48.3 & 31.3 & 21.9 \\ LAION-2B & ViT-g & 48.7 & 31.1 & 24.5 \\ \hline \hline \end{tabular} \end{table} Table 5: Performance of BLIP-2 variants with different visual encoder training datasets. The DFN-2B trained ViT-L provides consistent improvements across multiple zero-shot VQA tasks. ## 5 Discussion The simplicity of the data filtering network pipeline makes it a flexible tool to integrate into existing workflows. As DFNs operates on individual samples, this approach scales linearly with candidate pool size, enabling the creation of datasets orders of magnitude larger than those that we introduce in this work. Additionally, the DFNs we train in this work are relatively small neural networks which allows for filtering to be directly integrated into training procedures of much larger networks for minimal marginal cost. DFNs can then filter batches of raw data that are then trained on, reducing the need for complex data pre-processing procedures. As useful as DFNs are in building performant models in this work, there are still many unanswered questions to address in future work. We still do not know exactly how to optimize directly for dataset quality, and thus opt for weak proxies such as alignment. It is not even clear what that \begin{table} \begin{tabular}{l l c c c c c} \hline \hline \multirow{2}{*}{DFN Training Data} & \multirow{2}{*}{ \begin{tabular}{c} DataComp \\ Scale \\ \end{tabular} } & \multirow{2}{*}{IN} & \multirow{2}{*}{IN Shifts} & \multirow{2}{*}{VTAB} & \multirow{2}{*}{Retrieval} & \multirow{2}{*}{Average} \\ \hline CC12M + CC3M + SS15M & medium & 0.307 & 0.253 & 0.359 & 0.274 & 0.349 \\ OpenAI WIT-400M & medium & 0.285 & 0.240 & 0.355 & 0.253 & 0.338 \\ \hline CC12M + CC3M + SS15M & large & 0.591 & 0.481 & 0.522 & 0.503 & 0.532 \\ OpenAI WIT-400M & large & 0.578 & 0.466 & 0.525 & 0.475 & 0.527 \\ \hline \hline \end{tabular} \end{table} Table 6: DFNs are trained with a ViT-B/32, then used to filter DataComp pools. Conceptual 12M, Conceptual Captions 3M, and Shutterstock 15M are publicly available datasets, demonstrating that large-scale high-quality datasets can be constructed with only publicly available resources. Figure 5: Datasets induced by DFNs can be robust to distribution shift. DFNs can be fine-tuned to maintain robustness of induced datasets, unlike directly training on ImageNet (IN). 
DFNs are not performing distillation because induced datasets lead to higher performing models than the original DFN. Distribution shifts used are IN-V2, ObjectNet, IN-Sketch, IN-R, and IN-A. proxy would be for other domains where DFNs could be applied such as speech, text or video data. We hope that these open questions and the bridge DFNs build between modeling work and dataset work can lead to fruitful new avenues of research. ## Acknowledgements We would like to thank Bowen Zhang, Ruoming Pang, Brandon McKinzie, Mitchell Wortsman, Gabriel Ilharco, Ross Wightman, Achal Dave, Josh Susskind, Alaaeldin Ali, Fartash Faghri, Preetum Nakkiran, and Chen Huang for helpful feedback at various stages of the project. Bowen and Ruoming were invaluable for helping us set up AxLearn and answering countless questions when we ran into various errors. Brandon pointed us in the direction of the HQITP datasets that were crucial for our best results, and also helped us with AxLearn issues. Mitchell's OpenClip experience helped us set hyper-parameters for our largest scale runs. Gabriel helped us debug a webdataset related dataloader bug. Ross caught a bug in our final high resolution model that led to a modest performance improvement. Achal provided hyper-parameters and instructions for training BLIP2 for the VQA experiments. Alaaeldin, Chen, Fartash, Josh, and Preetum provided helpful comments on the manuscript.
2309.00159
Betweenness algebras
We introduce and study a class of betweenness algebras--Boolean algebras with binary operators, closely related to ternary frames with a betweenness relation. From various axioms for betweenness, we chose those that are most common, which makes our work applicable to a wide range of betweenness structures studied in the literature. On the algebraic side, we work with two operators of possibility and of sufficiency.
Ivo Duentsch, Rafal Gruszczynski, Paula Menchon
2023-08-31T22:20:21Z
http://arxiv.org/abs/2309.00159v1
# Betweenness algebras ###### Abstract. We introduce and study a class of _betweenness algebras_--Boolean algebras with binary operators, closely related to ternary frames with a betweenness relation. From various axioms for betweenness, we chose those that are most common, which makes our work applicable to a wide range of betweenness structures studied in the literature. On the algebraic side, we work with two operators of _possibility_ and of _sufficiency_. MSC: 06E25, 03B45 Keywords: Boolean algebras with operators, modal algebras, sufficiency algebras, binary operators, ternary relations, betweenness relation ## 1. Introduction Betweenness relations--well-known from geometry--are probably the most deeply investigated ternary relations in logic and mathematics. The origin of the studies can be traced back at least to the works of Huntington and Kline (1924, 1917), through the seminal contributions of Tarski (Tarski and Givant, 1999), up to the results of Altwegg (1950), Sholander (1952), Duvelmeyer and Wenzel (2004) and Duntsch and Urquhart (2006). At least as early as in the seminal papers of Jonsson and Tarski (1951), the connection between \(n+1\)-ary relations and \(n\)-ary operators on Boolean algebras was established in the form of Jonsson-Tarski duality for Boolean algebras with operators. The developed techniques turned out to be particularly successful in the study of modal logics and their algebraic semantics. The abstract approach of Jonsson and Tarski can be made concrete by focusing on a relation (or relations) of particular choice, betweenness in the case of the approach from our work. As observed by van Benthem and Bezhanishvili (2007), a ternary betweenness relation \(B\) gives rise to the binary modal operator \(\langle B\rangle\) whose relational semantics is given by the following condition: \[x\Vdash\langle B\rangle(\varphi,\psi)\ :\longleftrightarrow\ (\exists\,y,z\in U)\,(B(y,x,z)\text{ and }y\Vdash\varphi\text{ and }z\Vdash\psi)\,,\] where \(B(y,x,z)\) is interpreted as _point \(x\) is between points \(y\) and \(z\)_. Our intention is to investigate the algebraic properties of \(\langle B\rangle\) within the framework of Boolean algebras with operators. In Section 2 we recall basic facts about _possibility_ and _sufficiency_ operators which will find their application in the sequel. In Section 3 we commit ourselves to a particular notion of _betweenness_ by choosing what we see as the core axioms for the reflexive version of the geometric relation. Section 4 justifies our approach via possibility and sufficiency operators as we show that the class of the so-called _betweenness frames_ is neither modally nor sufficiency axiomatic. In Section 5 Introduction A _path_ of a graph \(G\) is a graph \(G\) with vertex set \(V\), and a graph \(G\) is a graph with vertex set \(V\). The graph \(G\) is called _path-connected_ if it is a path of \(G\). A _path_ of a graph \(G\) is a path of \(G\) if it is a path of \(G\). A _path_ of a graph \(G\) is a path of \(G\) if it is a path of \(G\) if it is a path of \(G\). A _path_ of a graph \(G\) is a path of \(G\) if Suppose that \(\mathfrak{F}\coloneqq\langle U,Q\rangle\) is a frame where \(Q\) is an \((n+1)\)-ary relation on \(U\). We define an \(n\)-ary operator \(\langle Q\rangle\colon\left(2^{U}\right)^{n}\to 2^{U}\) by \[\langle Q\rangle(X_{1},\ldots,X_{n})\coloneqq\{u\in U\mid\left(\exists x_{1} \in X_{1}\ldots\exists x_{n}\in X_{n}\right)Q(x_{1},\ldots,x_{n},u)\}. 
\tag{2}\] It is well known that \(\langle Q\rangle\) is a complete possibility operator (Jonsson and Tarski, 1951). The structure \(\langle 2^{U},\langle Q\rangle\rangle\) is called the _full possibility complex algebra over \(\mathfrak{F}\)_, denoted by \(\mathsf{Cm}^{p}(\mathfrak{F})\). Each subalgebra is called a _possibility complex algebra over \(\mathfrak{F}\)_. If \(\mathfrak{F}\) is understood we shall just speak of possibility complex algebras. The following result is decisive for the theory of BAOs and generalizes Stone's theorem of representing Boolean algebras: **Theorem 2.2** (Jonsson and Tarski, 1951, Theorem 3.10).: _If \(\mathfrak{A}\coloneqq\langle A,f\rangle\) is a Boolean algebra with an \(n\)-ary possibility operator \(f\), then the Stone map \(h\colon h\hookrightarrow 2^{\operatorname{Ult}(A)}\), defined by \(h(x)\coloneqq\{\mathscr{U}\in\operatorname{Ult}(A)\mid x\in\mathscr{U}\}\) is an embedding of \(\mathfrak{A}\) into \(\mathsf{Cm}^{p}(\mathsf{Cf}^{p}(\mathfrak{A}))\)._ The algebra \(\mathsf{Cm}^{p}(\mathsf{Cf}^{p}(\mathfrak{A}))\) is called the _possibility canonical extension_ of \(\mathfrak{A}\), denoted by \(\mathsf{Em}^{p}(\mathfrak{A})\). For details of the origin and theory of BAOs see the survey by Jonsson (1993). Starting with an \(n+1\)-ary frame \(\mathfrak{F}\coloneqq\langle U,R\rangle\) there is also a representation theorem: **Theorem 2.3**.: \(\mathfrak{F}\) _can be embedded into the canonical frame of its full possibility complex algebra._ Proof.: This is a generalization of (Orlowska et al., 2015, Theorem 3.2.7). Define \(k\colon\mathfrak{F}\to\operatorname{Ult}(\mathsf{Cm}^{p}(\mathfrak{F}))\) by \(k(x)\coloneqq\uparrow\{x\}\), that is, \(k(x)\) is the principal filter of \(2^{U}\) generated by \(\{x\}\); clearly, \(k\) is injective. We define \(\langle R\rangle\) as in (2), and \(Q_{\langle R\rangle}\) as in (1). Let \(x_{1},\ldots,x_{n+1}\in U\). Then: \[Q_{\langle R\rangle}(k(x_{1}),\ldots,k(x_{n+1})) \longleftrightarrow\langle R\rangle\left[\uparrow\{x_{1}\}, \ldots,\uparrow\{x_{n}\}\right]\subseteq\uparrow\{x_{n+1}\}\] \[\longleftrightarrow\left(\forall X_{i}\right)\left(x_{i}\in X_{i },1\leq i\leq n\to\langle R\rangle(X_{1},\ldots X_{n})\in\uparrow\{x_{n+1}\}\right)\] \[\longleftrightarrow\left(\forall X_{i}\right)\left(x_{i}\in X_{i },1\leq i\leq n\to x_{n+1}\in\langle R\rangle(X_{1},\ldots X_{n})\right)\] \[\longleftrightarrow\left(\forall X_{i}\right)\left(x_{i}\in X_{i },1\leq i\leq n\to\] \[\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\left(\exists u_{i }\in X_{i}\right)\!R(u_{1},\ldots,u_{n},x_{n+1})\right).\] If \(R(x_{1},\ldots,x_{k+1})\), we choose \(u_{i}\coloneqq x_{i}\); this shows that \(Q_{\langle R\rangle}(k(x_{1}),\ldots,k(x_{n+1}))\). Conversely, if \(Q_{\langle R\rangle}(k(x_{1}),\ldots,k(x_{n+1}))\), choosing \(X_{i}\coloneqq\{x_{i}\}\) shows that the tuple \(\langle x_{1},\ldots,x_{k+1}\rangle\) is in \(R\). The relational semantics of classical modal logics is limited in expression since it can talk about (some) properties of a binary relation \(R\) but not about properties of \(-R\). A "sufficiency" counterpart of the modal necessity operator \(\square\) was independently suggested by Humberstone (1983) with \(\blacksquare\) and Gargov et al. (1987) with \(\square\).1 The algebraic properties of such an operator and representation properties were investigated in a sequence of papers by Duntsch and Orlowska (2004, 2001) and Duntsch, Orlowska, and Tinchev (2017). 
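As a small concrete illustration of the operator defined in (2) (a sketch, not part of the original text; the frame and relation below are arbitrary toy choices), the complex possibility operator of a finite ternary frame can be computed directly from the definition.

```python
# Compute the binary possibility operator <Q> of (2) for a small ternary frame <U, Q>:
# <Q>(X1, X2) = { u in U : there are x1 in X1, x2 in X2 with Q(x1, x2, u) }.
from itertools import product

U = {0, 1, 2}
# Q is a ternary relation on U, here the graph of addition modulo 3.
Q = {(x1, x2, (x1 + x2) % 3) for x1, x2 in product(U, U)}

def possibility(Q: set, X1: set, X2: set) -> set:
    return {u for (x1, x2, u) in Q if x1 in X1 and x2 in X2}

print(possibility(Q, {1}, {1, 2}))   # {0, 2}: witnesses (1, 1, 2) and (1, 2, 0)
print(possibility(Q, set(), U))      # set(): <Q> is normal in each argument
```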
**Definition 2.4**.: A mapping \(g\colon A^{n}\to A\) is an _\(n\)-ary sufficiency operator_ if and only if \(g\) meets the following two constraints: 1. If there is \(i\) such that \(1\leqslant i\leqslant n\) and \(x_{i}=\mathbf{0}\), then \(g(x_{1},\ldots,x_{n})=\mathbf{1}\) (co-normality). 2. If \(\langle x_{1},\ldots,x_{n}\rangle\) and \(\langle y_{1},\ldots,y_{n}\rangle\) are \(n\)-termed sequences in \(A\) such that \(x_{i}=y_{i}\) for all \(i\neq k\), then \(g(x_{1},\ldots,x_{k},\ldots,x_{n})\cdot g(y_{1},\ldots,y_{k},\ldots,y_{n})=g(x_ {1},\ldots,x_{k}+y_{k},\ldots,x_{n})\) (co-additivity). Note that a sufficiency operator is antitone in each argument. The pair \(\mathfrak{A}\coloneqq\langle A,g\rangle\) is called a _sufficiency algebra_. While unary possibility algebras are algebraic models of the logic \(\mathsf{K}\), the unary sufficiency algebras are algebraic models of its counterpart \(\mathsf{K}^{*}\)(Tehlikeli, 1985). The _sufficiency canonical frame_ is the system \(\langle\operatorname{Ult}(A),S\rangle\) where \(S\) is the \((n+1)\)-ary relation on \(\operatorname{Ult}(A)\) defined by \[S(\mathscr{U}_{1},\ldots,\mathscr{U}_{n+1})\ :\longleftrightarrow\ g[ \mathscr{U}_{1}\times\ldots\times\mathscr{U}_{n}]\cap\mathscr{U}_{n+1}\neq\emptyset,\] denoted by \(\mathsf{Cf}^{s}(\mathfrak{A})\). Conversely, If \(\mathfrak{F}\coloneqq\langle U,S\rangle\) is an \((n+1)\)-ary frame we define an \(n\)-ary operator \([[S]]\) on \(2^{U}\) by \[[[S]](X_{1},\ldots,X_{n})\coloneqq\{u\in U\mid X_{1}\times\ldots\times X_{n} \times\{u\}\subseteq S\}.\] The algebra \(\langle 2^{U},[[S]]\rangle\) is called the _full sufficiency complex algebra_ of \(\mathfrak{F}\), denoted by \(\mathsf{Cm}^{s}(\mathfrak{F})\). Each subalgebra is called a _sufficiency complex algebra over \(\mathfrak{F}\)_. If \(\mathfrak{F}\) is understood we shall omit the reference to \(\mathfrak{F}\). It is well known that \([[S]]\) is a complete co-additive operator on \(2^{U}\)(Duntsch and Orlowska, 2001, Proposition 5). In analogy to possibility algebras we have **Theorem 2.5** (Duntsch and Orlowska, 2001).: 1. _If a mapping_ \(g\) _is an_ \(n\)_-ary sufficiency operator on_ \(\mathfrak{A}\)_, the Stone map_ \(h\colon\mathfrak{A}\to 2^{\operatorname{Ult}(A)}\) _is an embedding of_ \(\mathfrak{A}\) _into_ \(\mathsf{Cm}^{s}(\mathsf{Cf}^{s}(\mathfrak{A}))\)_._ 2. _If_ \(\mathfrak{F}\coloneqq\langle U,S\rangle\) _is an_ \((n+1)\)_-ary frame the map_ \(k\colon\mathfrak{F}\to\operatorname{Ult}(\mathsf{Cm}^{s}(\mathfrak{F}))\) _such that_ \(k(x)\coloneqq\uparrow\{x\}\) _is an embedding._ Any algebra \(\mathfrak{A}\coloneqq\langle A,f,g\rangle\) such that \(A\) is a BA and \(f\) and \(g\) are--respectively--a possibility and a sufficiency operator of the same arity will be called a _Possibility-Sufficiency-algebra_ (PS-algebra). Since the mappings \(h\) and \(k\) are the same as in Theorem 2.2 and Theorem 2.3 we can define the PS-canonical frame of \(\mathfrak{A}\) as \(\langle\operatorname{Ult}(A),Q_{f},S_{g}\rangle\) and denote it by \(\mathsf{Cf}^{ps}(\mathfrak{A})\). The algebra \(\mathsf{Cm}^{ps}(\mathsf{Cf}^{ps}(\mathfrak{A}))\) is called the _canonical extension of \(\mathfrak{A}\)_, denoted by \(\mathsf{Em}^{ps}(\mathfrak{A})\). The structure \(\mathsf{Cf}^{ps}(\mathsf{Cm}^{ps}(\mathfrak{F}))\) is the _canonical extension of \(\mathfrak{F}\)_ denoted by \(\mathsf{Cc}^{ps}(\mathfrak{F})\). From the outset, there is no connection between the possibility operator \(f\) and the sufficiency operator \(g\). 
To enhance the expressiveness of the combined corresponding logics, Duntsch and Orlowska (2001) introduced the class of _mixed algebras_ (MIAs) which are PS-algebras \(\mathfrak{A}\coloneqq\langle A,f,g\rangle\) which satisfy the condition \(Q_{f}=S_{g}\); in the unary case this is equivalent to \[(\forall\mathscr{U}_{1},\mathscr{U}_{2}\in\operatorname{Ult}(A))\left(f[ \mathscr{U}_{1}]\subseteq\mathscr{U}_{2}\longleftrightarrow g[\mathscr{U}_{1} ]\cap\mathscr{U}_{2}\neq\emptyset\right)\,. \tag{3}\] It was shown that the class of MIAs is not first order axiomatizable and that the canonical extension of a MIA is isomorphic to the full complex algebra \(\langle 2^{U},\langle R\rangle,[[R]]\rangle\) of a frame. For an overview and examples of mixed algebras with unary operators see (Orlowska et al., 2015, Section 3.6). Subsequently, Duntsch, Orlowska, and Tinchev (2017) introduced a first-order axiomatizable proper subclass of mixed algebras, called _weak MIAs_, which satisfy in the unary case the axiom: \[a\neq\mathbf{0}\to g(a)\leq f(a). \tag{4}\] It turned out that the equational class generated by the weak MIAs are the algebraic models of the logic \(K\widetilde{\ }\), presented by Gargov et al. (1987). We shall see later that the axioms of betweenness relations can be algebraically expressed in weak MIAs, but not by possibility or sufficiency operators alone. ## 3. A definition of betweenness In the sequel, we will focus on relational systems \(\langle U,B\rangle\) such that \(B\) is a ternary relation on a non-empty set \(U\). Such systems will be called _3-frames_. **Definition 3.1**.: Let \(\langle U,B\rangle\) be a 3-frame. \(B\) is called a _betweenness relation_ if it satisfies the following (universal) axioms: (BT0) \[B(a,a,a)\,,\] (BT1) \[B(a,b,c)\to B(c,b,a)\,,\] (BT2) \[B(a,b,c)\to B(a,a,b)\,,\] (BT3) \[B(a,b,c)\wedge B(a,c,b)\to b=c\.\] Note that (BT0)-(BT2) are expanding in the sense that they require certain triples to be in a betweenness relation, while (BT3) is contracting, since it prohibits triples to be in \(B\). \(\dashv\) **Definition 3.2**.: A ternary relation \(B\) is a _weak betweenness_ if it satisfies (BT0)-(BT2) and (BTW) \[B(a,b,a)\to a=b\,.\] \(\dashv\) The following example shows that (BTW) is strictly weaker than (BT3): **Example 3.3**.: Set \(U\coloneqq\{0,1,2\}\) and define \[B\coloneqq\{\langle a,a,a\rangle\mid a\in U\}\cup\{\langle a,a,b \rangle\mid a,b\in U\}\cup\\ \{\langle a,b,b\rangle\mid a,b\in U\}\cup\{\langle 0,1,2\rangle, \langle 2,1,0\rangle,\langle 0,2,1\rangle,\langle 1,2,0\rangle\}\,.\] Then, (BTW) is vacuously true, and \(\langle 0,1,2\rangle,\langle 0,2,1\rangle\in B\). \(\dashv\) To show the difference consider the following condition: (C) \[\#\langle a,b,c\rangle\to(\langle a,b,c\rangle\notin B\vee\langle a,c,b\rangle \notin B)\,.\] Here, \(\#\langle a,b,c\rangle\) if and only if \(\left|\{a,b,c\}\right|=3\). **Proposition 3.4**.: _Assume \(B\) satisfies (BT2). Then, \(B\) satisfies (BT3) if and only if it satisfies (BTW) and (C)._ Proof.: (\(\rightarrow\)) For (BTW) consider \[B(x,y,x)\stackrel{{\eqref{BT2}}}{{\longrightarrow}}B(x,x,y) \stackrel{{\eqref{BT3}}}{{\longrightarrow}}x=y.\] If \(B(x,y,z)\) and \(B(x,z,y)\), then \(y=z\) by (BT3), and thus, not \(\#\langle x,y,z\rangle\). (\(\leftarrow\)) Suppose that \(B(x,y,z)\) and \(B(x,z,y)\); then, not \(\#\langle x,y,z\rangle\) by (C). If \(y=z\), there is nothing more to show. If \(x=z\), then \(B(x,y,x)\), and \(x=y\) by (BTW), hence, \(y=z\). 
Similarly, if \(x=y\), then \(B(x,z,x)\) and therefore \(x=y=z\). **Definition 3.5**.: \(B\) is a _strong betweenness_ if \(B\) meets the following stronger version of (BT2): (BT2s) \[B(a,a,b).\] Clearly, (BT2s) implies (BT2). In the presence of symmetry in the form of (BT1), the axiom (BT2s) is equivalent to Tarski's axiom 12 from (Tarski and Givant, 1999). The choice of the axioms is by no means arbitrary, but embodies what can be seen as the _core_ axioms for reflexive betweenness. Reflexivity in the form of (BT0) is equivalent to Axiom 13 of Tarski and Givant (1999). Symmetry, which is (BT1), is taken as Postulate A by Huntington and Kline (1917)2 and Axiom 14 by Tarski and Givant. (BT2) in the presence of symmetry can be seen as a weakening of Tarski's reflexivity axiom (Axiom 12) and is one of the axioms for betweenness obtained from binary relations (see Altwegg, 1950). (BT3) arises naturally in the context of--again--betweenness induced by binary relations and has a clear and natural geometric meaning. (BTW) is present in Tarski's system as Axiom 6. Footnote 2: It must be said, though, that they work with the strict betweenness. For more on the motivation for the choice of (BT0)-(BT3) the reader is invited to consult (Duntsch and Urquhart, 2006). **Definition 3.6**.: A pair \(\mathfrak{F}\coloneqq\langle U,B\rangle\) such that \(B\subseteq U^{3}\) and \(B\) satisfies (BT0)-(BT3) will be called a _betweenness frame_ or just a _b-frame_. If we replace (BT3) by (BTW) then \(\mathfrak{F}\) is called a _weak betweenness frame_, and in case (BT2s) is substituted for (BT2), a _strong betweenness frame_. ## 4. Non-definability of betweenness relations In the algebraic approach to betweenness, we are going to engage both possibility and sufficiency operators. This is justified by the fact that betweenness is neither possibility nor sufficiency axiomatic. We devote this section to the proofs of the aforementioned phenomena. ### Bounded and co-bounded morphisms The standard notion of a _bounded morphism_ for binary relations has a natural generalization to \(n\)-ary ones. We restrict ourselves to \(3\)-frames since this is all we require. **Definition 4.1**.: If \(\langle U,R\rangle\) and \(\langle V,S\rangle\) are \(3\)-frames, then a mapping \(f\colon U\to V\) is a _bounded morphism_ if 1. if \(R(x,y,z)\), then \(S(f(x),f(y),f(z))\) (i.e., \(f\) preserves \(R\), i.e., satisfies the forth condition); 2. if \(S(f(w),x,y)\), then \((\exists u,v\in U)(f(u)=x\wedge f(v)=y\wedge R(w,u,v))\) (i.e., \(f\) satisfies the back condition).3 Footnote 3: (Blackburn et al., 2001, see p. 140) \(f\colon U\to V\) is called a _co-bounded morphism_ if for all \(x,y,z\in U\) and \(t\in V\); 1. if \(-R(x,y,z)\), then \(-S(f(x),f(y),f(z))\) (i.e., \(f\) preserves \(-R\)); 2. if \(-S(f(w),x,y)\), then \((\exists u,v\in U)(f(u)=x\wedge f(v)=y\wedge-R(w,u,v))\) (i.e., \(f\) satisfies the back condition). \(\dashv\) Since in the case of betweenness relations the middle argument plays a distinguished role, we allow ourselves to modify the definition accordingly when we need it without spelling it out explicitly. The following two theorems are crucial for the sequel. **Theorem 4.2** (Goldblatt and Thomason, 1975, Theorem 3).: _Let \(\tau\) be a modal similarity type. 
A first-order definable class of \(\tau\) frames is possibility definable if and only if it is closed under taking bounded homomorphic images, generated subframes, disjoint unions, and reflects ultrafilter extensions._ **Theorem 4.3** (Duntsch and Orlowska, 2001, Section 5).: _Let \(\tau\) be a sufficiency similarity type. A first-order definable class of \(\tau\) frames \(\langle U,R\rangle\) is sufficiency definable if and only if the class of its complementary frames \(\langle U,-R\rangle\) is possibility definable._ ### Non-definability For the first of the two non-definability results (and for several examples in the sequel), we invoke relevant facts about betweenness relations obtained from binary ones. If \(\langle U,R\rangle\) is a binary frame, then it induces a <<natural>> ternary relation \(B\) on \(U\): \[B_{R}(x,y,z)\;:\longleftrightarrow\;x\;R\;y\;R\;z\lor z\;R\;y\;R\;x\,.\] For these, we mention only one more definition and two basic facts: **Definition 4.4**.: A binary relation \(R\) on \(U\) is called _strongly antisymmetric_ if and only if \[x\;R\;y\;R\;z\;R\;x\to y=z.\] \(\dashv\) **Proposition 4.5** (Gruszczynski and Menchon, 2023).: _A reflexive \(R\subseteq U^{2}\) is strongly antisymmetric if and only if \(B_{R}\) is a betweenness relation._ **Corollary 4.6**.: _If \(R\subseteq U^{2}\) is a partial order relation, then \(B_{R}\) is a betweenness relation._ Among the betweenness axioms, (BT0) and (BT1) have modal correspondents which is proved below in Theorem 6.6. However, in general, we have **Theorem 4.7** (Duntsch and Orlowska, 2019).: _The class of weak betweenness relations is not modal axiomatic._ Proof.: We are going to show that the class of weak betweenness relations is not closed under bounded morphisms. To this end, consider the set \(\mathsf{Z}\) of all integers with the betweenness relation \(B_{\leqslant}\) induced by the standard linear order \(\leqslant\) on \(\mathsf{Z}\). By Corollary 4.6 we have that \(B_{\mathsf{Z}}\) is a betweenness relation, and so, \(\mathfrak{Z}\coloneqq\langle\mathsf{Z},B_{\leqslant}\rangle\) is a b-frame. On the other hand, \(\mathfrak{F}\coloneqq\langle\{w_{0},w_{1}\},R\rangle\) where \(w_{0}\neq w_{1}\) and \(R\coloneqq\{w_{0},w_{1}\}^{3}\) is not even a weak b-frame, as \(R\) does not satisfy (BTW). Let \(f\colon\mathsf{Z}\to\{w_{0},w_{1}\}\) be such that: \[f(x)\coloneqq\begin{cases}w_{0},&\text{ if $x$ is even},\\ w_{1},&\text{ if $x$ is odd}.\end{cases}\] Since \(R\) is the universal ternary relation on \(\{w_{0},w_{1}\}\), \(f\) preserves \(B_{\leqslant}\), so to show that \(f\) is indeed a bounded morphism all that is left is to prove that \(f\) satisfies the back condition: \[R(u,f(x),v)\to(\exists y,z\in\mathsf{Z})\left(B_{\leqslant}(y,x,z)\wedge f(y) =u\wedge f(z)=v\right).\] The proof will be done by cases: Suppose \(R(u,f(x),v)\). In case \(f(x)=w_{0}\), we have that \(x\) is even, and there are the following possibilities: 1. \(u=v=w_{0}\): Set \(y\coloneqq x\eqdot z\). We have \(B_{\leqslant}(x,x,x)\) and \(f(y)=w_{0}\) and \(f(z)=w_{0}\). 2. \(u=v=w_{1}\): Set \(y\coloneqq x-1\) and \(z\coloneqq x+1\). Thus \(B_{\leqslant}(x-1,x,x+1)\) and \(f(x-1)=w_{1}\) and \(f(x+1)=w_{1}\). 3. \(u=w_{0}\) and \(v=w_{1}\): Set \(y\coloneqq x\) and \(z\coloneqq x+1\). Thus \(B_{\leqslant}(x,x,x+1)\) and \(f(x)=w_{0}\) and \(f(x+1)=w_{1}\). 4. \(u=w_{1}\) and \(v=w_{0}\): Set \(y\coloneqq x-1\) and \(z\coloneqq x\). We have that \(B_{\leqslant}(x-1,x,x)\) and \(f(y)=w_{1}\) and \(f(z)=w_{0}\). 
The proof for the case \(f(x)=w_{1}\) is analogous. **Theorem 4.8**.: _The class of betweenness relations is not sufficiency axiomatic._ Proof.: In light of Theorem 4.3 it is enough to show that the class of complementary \(3\)-frames for betweenness frames is not possibility definable. This is the same as showing that the class of betweenness frames is not closed under co-bounded morphisms. Thus, we exhibit two \(3\)-frames \(\langle U,R\rangle\) and \(\langle V,S\rangle\) as well as a co-bounded surjective morphism \(p\colon U\to V\) such that \(\langle U,R\rangle\) is a betweenness frame and \(\langle V,S\rangle\) is not. Let \(U\coloneqq\{a,b\}\) and \(R\coloneqq\{\langle a,a,a\rangle,\langle b,b,b\rangle\}\); then \(R\) is a betweenness relation. Furthermore, set \(V\coloneqq\{x\}\) and \(S\coloneqq\emptyset\); then \(-S=\{\langle x,x,x\rangle\}\) and \(S\) is not a betweenness relation since it does not satisfy (BT0). Obviously, there is a unique surjection \(p\colon U\to V\) given by \(p(a)\coloneqq x=:p(b)\), and it is a bounded morphism between \(\langle U,-R\rangle\) and \(\langle V,-S\rangle\). It preserves \(-R\) as it is constant, and it satisfies the back condition since if e.g., \(-S(x,p(a),x)\), then \(-R(b,a,b)\) and \(f(b)=x\). The proof reflects the fact that reflexivity of a binary relation is not definable by a unary sufficiency operator. ## 5. Complex algebras of b-frames To motivate the choice of axioms for abstract algebras of betweenness we will focus on b-frames and their complex algebras. We are going to show that axioms of b-frames correspond to algebraic properties of their complex algebras, and the latter will serve in the next section as <<role models>> for the axioms expressed within the framework of PS-algebras. For a 3-frame \(\mathfrak{F}\coloneqq\langle U,B\rangle\) we define its complex operators by \[(\mathtt{df}\,\langle B\rangle) \langle B\rangle(X,Y) \coloneqq\{u\in U\mid(\exists x\in X)(\exists y\in Y)\,B(x,u,y)\}\] \[=\{u\in U\mid(X\times\{u\}\times Y)\cap B\neq\emptyset\}\] \[(\mathtt{df}\,[[B]]) [[B]](X,Y) \coloneqq\{u\in U\mid(\forall x\in X)(\forall y\in Y)\,B(x,u,y)\}\] \[=\{u\in U\mid X\times\{u\}\times Y\subseteq B\}.\] Thus, \(\mathsf{Cm}^{ps}(\mathfrak{F})=\langle 2^{U},\langle B\rangle,[[B]]\rangle\) is the full complex algebra of \(\mathfrak{F}\). We will prove that the following conditions for complex algebras correspond to relational axioms for betweenness: (BT0 \[{}^{\mathsf{c}}\] ) \[X\subseteq\langle B\rangle(X,X)\,,\] (BT1 \[{}^{\mathsf{f}}_{f}\] ) \[\langle B\rangle(X,Y)\subseteq\langle B\rangle(Y,X)\,,\] (BT1 \[{}^{\mathsf{c}}_{g}\] ) \[[[B]](X,Y)\subseteq[[B]](Y,X)\,,\] (BT2 \[{}^{\mathsf{c}}\] ) \[Y\cap\langle B\rangle(X,Z)\subseteq\langle B\rangle(X\cap\langle B \rangle(X,Y),Z)\,,\] (BT3 \[{}^{\mathsf{c}}\] ) \[\langle B\rangle(X,[[B]](X,-Y)\cap Y)\subseteq Y\,,\] (BTW \[{}^{\mathsf{c}}\] ) \[[[B]](X,X)\subseteq X\,,\text{ for all }X\neq\emptyset\,,\] (BT2 \[{}^{\mathsf{c}}\] ) \[X\subseteq\langle B\rangle(X,Y)\,,\text{ for all }Y\neq\emptyset\,.\] Note that all of these are universal, so they hold in all subalgebras of \(\mathsf{Cm}^{ps}(\mathfrak{F})\). **Theorem 5.1**.: _Let \(\mathfrak{F}\coloneqq\langle U,B\rangle\) be a 3-frame. Then, \(\mathfrak{F}\) satisfies (BT\(i\)) if and only if \(\mathsf{Cm}^{ps}(\mathfrak{F})\) satisfies (BT\(i^{\mathsf{c}}\)), for any \(i\in\{0,1_{f},1_{g},2,3,\mathrm{W},2\mathrm{s}\}\)._ Proof.: (\(i=0\)) Let \(\mathfrak{F}\) satisfy (BT0) and take \(x\in X\). 
Since \(B(x,x,x)\), it is the case that \((X\times\{x\}\times X)\cap B\neq\emptyset\mathrm{in}\) consequence as required. Conversely, given \(x\in U\), we have \(\{x\}\subseteq\langle B\rangle(\{x\},\{x\})\), which means that \(B(x,x,x)\). (\(i=1_{f}\)) Suppose \(\mathfrak{F}\) meets (BT1). If \(x\in\langle B\rangle(X,Y)\), then \((X\times\{x\}\times Y)\cap B\neq\emptyset\), and from (BT1) we obtain \((Y\times\{x\}\times X)\cap B\neq\emptyset\), i.e., \(\langle B\rangle(Y,X)\). For the reverse implication, suppose that \(\mathfrak{A}\) satisfies (BT1\({}^{\mathsf{c}}_{f}\)). If \(B(x,y,z)\), then \((\{x\}\times\{y\}\times\{z\})\cap B\neq\emptyset\), i.e., \(y\in\langle B\rangle(\{x\},\{z\})\). By (BT1\({}^{\mathsf{c}}_{f}\)) \(\langle B\rangle(\{x\},\{z\})\subseteq\langle B\rangle(\{z\},\{x\})\), so \((\{z\}\times\{y\}\times\{x\})\cap B\neq\emptyset\), and so (BT1) holds for \(\mathfrak{F}\). \((i=1_{g})\) If \(z\in[[B]](X,Y)\), then \(X\times\{z\}\times Y\subseteq B\). (BT1) implies that \(Y\times\{z\}\times X\subseteq B\), and thus \(z\in[[B]](Y,X)\). Assume now that \([[B]](X,Y)\subseteq[[B]](Y,X)\) for all \(X,Y\subseteq U\). If \(B(x,z,y)\), then \(z\in[[B]](\{x\},\{y\})\), and in consequence \(z\in[[B]](\{y\},\{x\})\). Therefore \(B(y,z,x)\). \((i=2)\) Assume (BT2), and let \(y\in Y\cap\langle B\rangle(X,Z)\). This means that there are \(x\in X\) and \(z\in Z\) such that \(B(x,y,z)\). By the axiom, \(B(x,x,y)\), so \(x\in\langle B\rangle(X,Y)\). Therefore, \[[(X\cap\langle B\rangle(X,Y))\times\{y\}\times Z]\cap B\neq\emptyset\] and in consequence we have \[y\in\langle B\rangle(X\cap\langle B\rangle(X,Y),Z)\] as required. We now assume the inclusion \(Y\cap\langle B\rangle(X,Z)\subseteq\langle B\rangle(X\cap\langle B\rangle(X,Y ),Z)\) and that \(B(x,y,z)\). Thus \(y\in\{y\}\cap\langle B\rangle(\{x\},\{z\})\). Thus, it must be the case that: \[y\in\langle B\rangle(\{x\}\cap\langle B\rangle(\{x\},\{y\}),\{z\})\] which in particular means that \[\left(\left[\{x\}\cap\langle B\rangle(\{x\},\{y\})\right]\times\{y\}\times\{z \}\right)\cap B\neq\emptyset\,.\] On the other hand, this entails that \(\{x\}\cap\langle B\rangle(\{x\},\{y\})\neq\emptyset\), and thus \(B(x,x,y)\).4 Footnote 4: We would like to thank Sören Brinck Knudstorp, whose help was crucial to discover this inclusion. \((i=3)\) Assume (BT3) for \(\mathfrak{F}\). Let \(z\in\langle B\rangle(X,[[B]](X,-Y)\cap Y)\). Then, there are \(x\in X\) and \(a\in[[B]](X,-Y)\cap Y\) such that \(B(x,z,a)\). Since \(a\) is in \([[B]](X,-Y)\), for all \(x^{\prime}\in X\) and \(b\notin Y\) it is the case that \(B(x^{\prime},a,b)\). If \(z\notin Y\), then \(B(x,a,z)\) and so (BT3) entails that \(z=a\) and \(z\in Y\). Therefore \(z\in Y\) as required. Let \(\mathfrak{A}\) satisfy (BT3\({}^{\circ}\)). Assume \(B(x,y,a)\), \(B(x,a,y)\) and \(a\neq y\). Take \(X\coloneqq\{x\}\) and \(Y\coloneqq-\{a\}\). Since \(\{x\}\times\{y\}\times\{a\}\subseteq B\) we have that \(y\in[[B]](\{x\},\{a\})\cap-\{a\}\). Thus, \[a\in\langle B\rangle\left(\{x\},[[B]](\{x\},\{a\})\cap-\{a\}\right)\,.\] This yields a contradiction, as (BT3\({}^{\circ}\)) entails that \(a\in-\{a\}\). \((i=\) W) Let \(\mathfrak{F}\) satisfy (BTW). If \(y\in[[B]](X,X)\), then \(X\times\{y\}\times X\subseteq B\). But there is \(x\in X\), so \(B(x,y,x)\) and by the axiom assumed, we have that \(x=y\), i.e., \(y\in X\). 
If (BTW\({}^{\circ}\)) holds for \(\mathfrak{A}\) and \(B(x,y,x)\), then \(\{x\}\times\{y\}\times\{x\}\subseteq B\), so \(y\in[[B]](\{x\},\{x\})\) and in consequence \(y\in\{x\}\), as required. \((i=2\)s) Assume \(B(a,a,b)\). Let \(x\in X\). Since there is \(y\in Y\) and \(B(x,x,y)\) by the assumption, we have that \(x\in\langle B\rangle(X,Y)\), as required. For the reverse implication, \(\{x\}\subseteq\langle B\rangle(\{x\},\{y\})\) for any \(x\) and \(y\), therefore, we have that \(B(x,x,y)\). Complex algebras satisfy a property which connects \(\langle B\rangle\) and \([[B]]\) and which will become important in the next section: **Lemma 5.2**.: _If \(X,Y\neq\emptyset\), then_ (wMIA\({}^{c}\)) \[[[B]](X,Y)\subseteq\langle B\rangle(X,Y)\,.\] Proof.: Suppose that \(X,Y\neq\emptyset\), and that \(u\in[[B]](X,Y)\); then, \(X\times\{u\}\times Y\subseteq B\) by definition of \([[B]]\). Since \(X,Y\neq\emptyset\), there are \(x\in X\), \(y\in Y\) with \(B(x,u,y)\). This shows that \(u\in\langle B\rangle(X,Y)\). Let us observe that the betweenness relation of a b-frame \(\mathfrak{F}\) can be characterized by means of the sufficiency operator of the full complex algebra of \(\mathfrak{F}\): **Proposition 5.3**.: _If \(\mathfrak{F}\coloneqq\langle U,B\rangle\) is a betweenness frame and \(\mathsf{Cm}^{ps}(\mathfrak{F})\) is its full complex betweenness algebra, then_:__ \[B=\bigcup_{\langle X,Y\rangle\in 2^{U}\times 2^{U}}X\times[[B]](X,Y)\times Y\,.\] Proof.: \((\subseteq)\) If \(B(x,y,z)\), then \(y\in[[B]](\{x\},\{z\})\), hence, \[\langle x,y,z\rangle\in\{x\}\times[[B]](\{x\},\{z\})\times\{z\}\,.\] \((\supseteq)\) Now assume that \(X\) and \(Y\) are subsets of \(U\) such that \[\langle x,z,y\rangle\in X\times[[B]](X,Y)\times Y\,.\] Therefore, \(x\in X\), \(y\in Y\) and \(z\in[[B]](X,Y)\). From the last condition we obtain \[X\times\{z\}\times Y\subseteq B\,,\] which implies \(B(x,z,y)\). ## 6. Betweenness algebras Theorem 5.1 motivates our choice of axioms towards an abstract algebraization of betweenness, i.e., we translate the counterparts of the betweenness axioms for \(3\)-frames into the language of PS-algebras. **Definition 6.1**.: A PS-algebra \(\mathfrak{A}\coloneqq\langle A,f,g\rangle\) with binary \(f\) and \(g\) is a _betweenness algebra_ (a _b-algebra_ for short) if it satisfies, for all \(x,y,z\in A\): (ABT0) \[x\leq f(x,x)\,,\] (ABT1\({}_{f}\)) \[f(x,y)\leq f(y,x)\,,\] (ABT1\({}_{g}\)) \[g(x,y)\leq g(y,x)\,,\] (ABT2) \[y\cdot f(x,z)\leq f(x\cdot f(x,y),z)\,,\] (ABT3) \[f(x,g(x,-y)\cdot y)\leq y\,,\] (wMIA) \[x,y\neq\mathbf{0}\to g(x,y)\leq f(x,y)\,.\] \(\dashv\) It can be shown that (wMIA) is independent of the other axioms. Moreover, let us observe that (ABT0) is equivalent to \[x\cdot y\leq f(x,y).\] Proof.: From (ABT0) we obtain \(x\cdot y\leq f(x\cdot y,x\cdot y)\), and so monotonicity of \(f\) entails that \(x\cdot y\leq f(x,y)\).
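The correspondence of Theorem 5.1 can be checked mechanically on small frames. The following brute-force sketch (illustrative only, not from the paper) builds the betweenness relation induced by the linear order on \(\{0,1,2\}\) (cf. Corollary 4.6) and verifies the complex-algebra conditions (BT0\({}^{\mathsf{c}}\))-(BT3\({}^{\mathsf{c}}\)) and (wMIA\({}^{\mathsf{c}}\)) for its full complex algebra.

```python
# Brute-force check (illustration only) that the full complex algebra of the
# b-frame induced by the linear order on {0, 1, 2} satisfies the complex
# counterparts (BT0^c)-(BT3^c) and (wMIA^c) of the betweenness axioms.
from itertools import chain, combinations, product

U = (0, 1, 2)
B = {(x, y, z) for x, y, z in product(U, U, U)
     if (x <= y <= z) or (z <= y <= x)}            # betweenness induced by <=

def subsets(u):
    return [frozenset(s) for s in chain.from_iterable(combinations(u, k) for k in range(len(u) + 1))]

def f(X, Y):   # <B>(X, Y): points lying between some x in X and some y in Y
    return frozenset(u for u in U if any((x, u, y) in B for x in X for y in Y))

def g(X, Y):   # [[B]](X, Y): points lying between every x in X and every y in Y
    return frozenset(u for u in U if all((x, u, y) in B for x in X for y in Y))

P = subsets(U)
for X, Y, Z in product(P, P, P):
    comp_Y = frozenset(U) - Y
    assert X <= f(X, X)                                        # (BT0^c)
    assert f(X, Y) <= f(Y, X) and g(X, Y) <= g(Y, X)           # (BT1^c)
    assert (Y & f(X, Z)) <= f(X & f(X, Y), Z)                  # (BT2^c)
    assert f(X, g(X, comp_Y) & Y) <= Y                         # (BT3^c)
    if X and Y:
        assert g(X, Y) <= f(X, Y)                              # (wMIA^c)
print("all complex betweenness conditions hold on the 3-chain")
```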
The following example shows that (ABT1\({}_{f}\)) and (ABT1\({}_{g}\)) are independent of each other: **Example 6.2**.: Consider \(\mathfrak{A}_{1}\coloneqq\langle A,f_{1},g_{1}\rangle\) and \(\mathfrak{A}_{2}\coloneqq\langle A,f_{2},g_{2}\rangle\) where \(A\) is the four element Boolean algebra with atoms \(a,b\), and \(f_{1},f_{2}\) and \(g_{1},g_{2}\) are given by the tables below: \[\begin{array}{c|cccccccc}f_{1}&\mathbf{0}&a&b&\mathbf{1}&g_{1}&\mathbf{0}&a &b&\mathbf{1}\\ \hline\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}& \mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}\\ a&\mathbf{0}&\mathbf{1}&\mathbf{1}&\mathbf{1}&a&\mathbf{1}&\mathbf{0}&b& \mathbf{0}\\ b&\mathbf{0}&\mathbf{1}&\mathbf{1}&\mathbf{1}&b&\mathbf{1}&\mathbf{0}&b& \mathbf{0}\\ \mathbf{1}&\mathbf{0}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}& \mathbf{1}&\mathbf{0}&b&\mathbf{0}\\ \end{array}\qquad\begin{array}{c|cccccccc}f_{2}&\mathbf{0}&a&b&\mathbf{1}& g_{2}&\mathbf{0}&a&b&\mathbf{1}\\ \hline\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}& \mathbf{0}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}\\ a&\mathbf{0}&a&a&a&a&\mathbf{1}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\ b&\mathbf{0}&\mathbf{1}&\mathbf{1}&\mathbf{1}&b&\mathbf{1}&\mathbf{0}& \mathbf{0}&\mathbf{0}\\ \mathbf{1}&\mathbf{0}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}& \mathbf{1}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\ \end{array}\] We can see that \(f_{1}(x,y)=f_{1}(y,x)\) for all \(x\) and \(y\), but \(g_{1}(a,b)\neq g_{1}(b,a)\) in \(\mathfrak{A}_{1}\), and \(g_{2}(x,y)=g_{2}(y,x)\) but \(f_{2}(a,b)\neq f_{2}(b,a)\) in \(\mathfrak{A}_{2}\). This example justifies the inclusion of both (ABT1\({}_{f}\)) and (ABT1\({}_{g}\)) in the set of postulates for betweenness algebras. Both axioms are counterparts of the symmetry axiom (BT1). **Definition 6.3**.: If \(\mathfrak{A}\) satisfies (ABT0)-(ABT2) and (ABTW) \[a\neq\mathbf{0}\to g(a,a)\leq a\,,\] then \(\mathfrak{A}\) is a _weak betweenness algebra_. If \(\mathfrak{A}\) satisfies (ABT1\({}_{f}\)), (ABT3), (wMIA) and (ABT2\({}^{s}\)) \[b\neq\mathbf{0}\to a\leq f(a,b)\,,\] then it will be called a _strong betweenness algebra_. **Proposition 6.4**.: _Every betweenness algebra is a weak betweenness algebra._ Proof.: Let \(x\neq\mathbf{0}\); we will show that \(g(x,x)\leq x\). It follows from (ABT3) that \(f(x,g(x,x)\cdot-x)\leq-x\), and by (ABT2) we have \[-x\cdot g(x,x)\cdot f(x,x)\leq f\big{(}x\cdot f(x,-x\cdot g(x,x)),x\big{)}\leq f (x\cdot-x,x)=\mathbf{0}\,.\] Thus, \(-x\cdot g(x,x)\cdot f(x,x)=\mathbf{0}\). Since \(x\neq\mathbf{0}\) by the assumption, we apply (wMIA) to obtain that \(g(x,x)\leq f(x,x)\). Thus \(-x\cdot g(x,x)=\mathbf{0}\), i.e., \(g(x,x)\leq x\). **Proposition 6.5**.: (ABT2\({}^{s}\)) _entails both (ABT0) and (ABT2), therefore, every strong betweenness algebra is a betweenness algebra._ Proof.: (ABT0): If \(x=\mathbf{0}\), then immediately \(x\leq f(x,x)\). If \(x\neq\mathbf{0}\), then from the assumption we obtain \(x\leq f(x,x)\). (ABT2): Fix arbitrary \(x,y\) and \(z\). We have two possibilities, either \(y=\mathbf{0}\) or \(y\neq\mathbf{0}\). In the former, we have \(\mathbf{0}=y\cdot f(x,z)\leq f(x\cdot f(x,y),z)\). 
In the latter, directly from (ABT2\({}^{s}\)) we obtain \(x=x\cdot f(x,y)\) so \(y\cdot f(x,z)\leq f(x,z)=f(x\cdot f(x,y),z)\) From Lemma 5.2 and Theorem 5.1 we see that the axioms hold in complex algebras of b-frames: **Theorem 6.6**.: _If \(\mathfrak{F}\) is a (weak, strong) betweenness frame, then \(\mathsf{Cm}^{ps}(\mathfrak{F})\) is a (weak, strong) betweenness algebra._ The definition of sufficiency operator implies that \(g(\mathbf{0},a)=g(a,\mathbf{0})=\mathbf{1}\) for all \(a\in A\). Our next result shows under which other conditions \(g(a,b)=\mathbf{1}\) is possible: **Theorem 6.7**.: _Suppose that \(\mathfrak{A}\coloneqq\langle A,f,g\rangle\) is a b-algebra, \(\mathbf{0}\not\in\{a,b,c\}\subseteq A\) and \(g(a,b)=\mathbf{1}\). Then,_ 1. \(|A|=2\) _or_ \(a\cdot b=\mathbf{0}\)_._ 2. \(a,b\in\mathrm{At}(A)\)_._ 3. _If_ \(g(a,c)=\mathbf{1}\)_, then_ \(b=c\)_._ Proof.: 1. Suppose that \(a\cdot b\neq\mathbf{0}\). Then, by (ABTW) and the fact that \(g\) is antitone, \[\mathbf{1}=g(a,b)\leq g(a\cdot b,a\cdot b)\leq a\cdot b,\] which implies \(a=b=\mathbf{1}\). If \(0\lesssim c\leq\mathbf{1}\), then \(\mathbf{1}=g(\mathbf{1},\mathbf{1})\leq g(c,c)\leq c\) which shows that \(|A|=2\). 2. If \(|A|=2\), then \(a=b=\mathbf{1}\in\mathrm{At}(A)\). Thus, \(a\cdot b=\mathbf{0}\) by \(1\). above. Suppose that \(0\leq c\lesssim b\); then \(0\neq b\cdot-c\neq\mathbf{0}\), hence, \(\mathbf{1}=g(a,b)\leq g(a,b\cdot-c)\leq f(a,b\cdot-c)\) by (wMIA). Furthermore, \(\mathbf{1}=g(a,b)\leq g(a,c)\), therefore, \(b\cdot-c\leq-c=g(a,c)\cdot-c\). Using (ABT3) and the fact that \(f\) is monotone we obtain \[\mathbf{1}=f(a,b\cdot-c)\leq f(a,g(a,c)\cdot-c))\leq-c,\] which implies \(c=\mathbf{0}\). It is shown analogously that \(a\) is an atom. 3. If \(g(a,b)=g(a,c)=\mathbf{1}\), then both \(b\) and \(c\) are atoms by \(2\). above and \(g(a,b)\cdot g(a,c)=g(a,b+c)=\mathbf{1}\). Again by \(2\). above we obtain that \(b+c\) is an atom, which implies \(b=c\). **Corollary 6.8**.: _Suppose that \(\mathfrak{A}\coloneqq\langle A,f,g\rangle\) is a b-algebra. Then,_ 1. _If_ \(A\) _has at most one atom, then_ \(g^{-1}[\{\mathbf{1}\}]=(\{\mathbf{0}\}\times A)\cup(A\times\{\mathbf{0}\})\)_._ 2. _If_ \(M\subseteq(A\setminus\{\mathbf{0}\})\times(A\setminus\{\mathbf{0}\})\) _and_ \(g[M]=\{\mathbf{1}\}\)_, then_ \(M\subseteq\mathrm{At}(A)^{2}\) _and_ \(M\) _is an antichain in the product order of_ \(A^{2}\)_._ **Example 6.9**.: With respect to Theorem 6.7(3) we will exhibit a b-algebra \(\mathfrak{A}\) such that \(g(a,b)=\mathbf{1}=g(c,d)\), none of the four arguments is zero and they are pairwise disjoint, i.e., the theorem is as strong as it can be. To this end, let \(U\coloneqq\{a,b,c,d\}\), take all triples \[M\coloneqq\{\langle a,x,b\rangle\mid x\in U\}\cup\{\langle c,x,d\rangle\mid x \in U\},\] and close \(M\) under axioms (BT0)-(BT3) to obtain \(B\). Such closure does not lead to a contradiction, that is, no pair of triples \(\langle x,y,z\rangle\) and \(\langle x,z,y\rangle\) such that \(y\neq z\) is contained in \(B\). In consequence, \(\mathfrak{F}\coloneqq\langle U,B\rangle\) is a b-frame. In \(\mathsf{Cm}^{ps}(\mathfrak{A})\) we have \([[B]](\{a\},\{b\})=U=[[B]](\{c\},\{d\})\). The situation is depicted in Figure 2. **Theorem 6.10**.: _Suppose that \(\langle A,g\rangle\) is a binary sufficiency algebra which satisfies_ (ABTW)_. If \(g(a,b)\neq\mathbf{0}\), then \(a\cdot b\leq g(a,b)\). 
If, additionally, \(a\cdot b\neq\mathbf{0}\), then \(g(a,b)=a\cdot b\) and \(a\cdot b\) is an atom._ Proof.: Assume that \(a\cdot b\not\leq g(a,b)\). Then, \(a\cdot b\cdot-g(a,b)\neq\mathbf{0}\), and (ABTW) implies that \[g(a\cdot b\cdot-g(a,b),a\cdot-g(a,b))\leq a\cdot b\cdot-g(a,b).\] Since \(g\) is antitone we obtain \[g(a,b)\leq g(a\cdot b\cdot-g(a,b),a\cdot b\cdot-g(a,b))\leq a\cdot b\cdot-g(a, b)\leq-g(a,b),\] and thus, \(g(a,b)=\mathbf{0}\). If, in addition, \(a\cdot b\neq\mathbf{0}\), then \(g(a\cdot b,a\cdot b)\leq a\cdot b\) by (ABTW), thus, \(g(a,b)=a\cdot b\). Finally, suppose that \(\mathbf{0}\lesssim c\leq a\cdot b\). Using (ABTW) and the fact that \(g\) is antitone we obtain \[\mathbf{0}\neq c\leq a\cdot b\leq g(a,b)\leq g(a\cdot b,a\cdot b)\leq g(c,c) \leq c.\] It follows that \(c=a\cdot b\), thus, \(a\cdot b\) is an atom of \(A\) Figure 1. The situation excluded by Theorem 6.7(3). Figure 2. As Example 6.9 shows, there may be disjoint pairs of atoms for which \(g\) takes the value \(\mathbf{1}\). **Example 6.11**.: Consider the b-frame \(\mathfrak{F}\coloneqq\langle[0,1],B_{\leqslant}\rangle\) where \([0,1]\) is the closed unit interval of the real numbers and \(B_{\leqslant}\) is induced by the standard order \(\leqslant\). In \(\mathsf{Cm}^{ps}(\mathfrak{F})\) we have \([[B_{\leqslant}]](\{0\},\{1\})=[0,1]\), and clearly \(\{0\}\) and \(\{1\}\) are disjoint atoms of the complex algebra of \(\mathfrak{F}\). It is easy to see that if \([[B_{\leqslant}]](X,Y)=[0,1]\), then \(\{X,Y\}=\{\{0\},\{1\}\}\). On the other hand, for the subframe \(\mathfrak{F}^{*}\coloneqq\langle[0,1),B_{\leqslant}^{*}\rangle\) of \(\mathfrak{F}\), we have that: \[[[B_{\leqslant}^{*}]]^{-1}\,[\{[0,1)\}]=\left(\{\emptyset\}\times 2^{[0,1 )}\right)\cup\left(2^{[0,1)}\times\{\emptyset\}\right),\] although \(\mathfrak{F}^{*}\) has uncountably many atoms. \(\dashv\) **Lemma 6.12**.: _Let \(\mathfrak{A}\coloneqq\langle A,f,g\rangle\) be a PS-algebra. In the presence of (ABT0), (ABT2) and (ABT3) the axiom (ABTW) is equivalent to_ \[(\forall a,b\in A)[a\cdot b\neq\mathbf{0}\to g(a,b)\leq f(a,b)]. \tag{5}\] Proof.: (\(\rightarrow\)) Suppose \(a\cdot b\neq\mathbf{0}\). Then, using (ABTW) and (ABT0) we have: \[g(a,b)\leq g(a\cdot b,a\cdot b)\leq a\cdot b\leq f(a,b)\,.\] (\(\leftarrow\)) Suppose \(a\neq\mathbf{0}\). By (ABT3) it is the case that \(f(a,g(a,a)\cdot-a)\leq-a\), and (ABT2) entails that \[-a\cdot g(a,a)\cdot f(a,a)\leq f(a\cdot f(a,g(a,a)\cdot-a),a)\leq f(\mathbf{0 },a)=\mathbf{0}\,.\] Since (5) entails that \(g(a,a)\leq f(a,a)\), we have \(-a\cdot g(a,a)=\mathbf{0}\), as required. Clearly, (wMIA) implies (5), but the converse does not hold as the algebra with atoms \(a,b,c\) of Table 1 shows. Algebras which satisfy (wMIA) are binary instances of the weak MIAs introduced in Section 2. A remarkable property of such algebras is the fact that they are discriminator algebras in the sense of Werner (1978). 
Here, the _unary discriminator_ on a Boolean algebra \(A\) is the mapping \(d\colon A\to A\) such that for all \(a\in A\) \[d(a)=\begin{cases}\mathbf{0},&\text{if }a=\mathbf{0},\\ \mathbf{1},&\text{otherwise.}\end{cases}\] \begin{table} \begin{tabular}{c|c c c c c c c c} \(f\) & \(\mathbf{0}\) & \(\mathbf{1}\) & \(b\) & \(ac\) & \(c\) & \(ab\) & \(bc\) & \(a\) \\ \hline \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) \\ \(\mathbf{1}\) & \(\mathbf{0}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) \\ \(b\) & \(\mathbf{0}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathit{bc}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(ab\) \\ \(ac\) & \(\mathbf{0}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) \\ \(c\) & \(\mathbf{0}\) & \(\mathbf{1}\) & \(bc\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(ac\) \\ \(ab\) & \(\mathbf{0}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) \\ \(bc\) & \(\mathbf{0}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) \\ \(a\) & \(\mathbf{0}\) & \(\mathbf{1}\) & \(ab\) & \(\mathbf{1}\) & \(ac\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) \\ \end{tabular} \begin{tabular}{c|c c c c c c c} \(g\) & \(\mathbf{0}\) & \(\mathbf{1}\) & \(b\) & \(ac\) & \(c\) & \(ab\) & \(bc\) & \(a\) \\ \hline \(\mathbf{0}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{1}\) \\ \(\mathbf{1}\) & \(\mathbf{1}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) \\ \(b\) & \(\mathbf{1}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(a\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) \\ \(ac\) & \(\mathbf{1}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) \\ \(ac\) & \(\mathbf{1}\) & \(\mathbf{0}\) & \(a\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) \\ \(ab\) & \(\mathbf{1}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) \\ \(bc\) & \(\mathbf{1}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) \\ \(a\) & \(\mathbf{1}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) & \(\mathbf{0}\) \\ \end{tabular} \end{table} Table 1. A b-algebra satisfying strong b-axioms, (5), and not (wMIA). For simplicty, \(xy\) is \(x+y\). However, as the lemma below demonstrates, (wMIA) can be weakened to (5). **Lemma 6.13**.: _Suppose that \(\mathfrak{A}\coloneqq\langle A,f,g\rangle\) is a binary PS-algebra. Then \(d:A\to A\) as defined above is the unary discriminator if and only if \(\mathfrak{A}\) satisfies (5)._ Proof.: \((\rightarrow)\) Suppose that \(d\) is the discriminator, and let \(a,b\in A,a\cdot b\neq 0\). Then, \(d(a\cdot b)=f(a\cdot b,a\cdot b)+-g(a\cdot b,a\cdot b)=\mathbf{1}\), i.e. \(g(a\cdot b,a\cdot b)\leq f(a\cdot b,a\cdot b)\). Now, \[g(a,b)\leq g(a\cdot b,a\cdot b)\leq f(a\cdot b,a\cdot b)\leq f(a,b),\] since \(g\) is antitone and \(f\) is monotone. \((\leftarrow)\) Let \(a\in A,a\neq\mathbf{0}\). 
Then, \(g(a,a)\leq f(a,a)\) by the hypothesis, which implies that \(f(a,a)+-g(a,a)=\mathbf{1}\). Furthermore, observe that --due to the facts that \(g\) is antitone and \(f\) is monotone --(5) is equivalent to \[a\neq\mathbf{0}\to g(a,a)\leq f(a,a)\,.\] Collecting all the facts above we have the following **Theorem 6.14**.: _Suppose that \(\mathfrak{A}\coloneqq\langle A,f,g\rangle\) is a PS-algebra that satisfies (ABT0), (ABT2) and (ABT3). Then, \(d\colon A\to A\) defined by \(d(a)\coloneqq f(a,a)+-g(a,a)\) is the unary discriminator if and only if \(\mathfrak{A}\) satisfies (ABTW)._ Proof.: \((\rightarrow)\) Suppose that \(d\) is the discriminator, and let \(\mathbf{0}\neq a\in\mathfrak{A}\). Then, \(d(a)=f(a,a)+-g(a,a)=\mathbf{1}\) which implies \(g(a,a)\leq f(a,a)\). So (5) holds which entails (ABTW), by Lemma 6.12. \((\leftarrow)\) Let \(a\in A,a\neq\mathbf{0}\). Then, \(g(a,a)\leq f(a,a)\) by the hypothesis, which implies that \(f(a,a)+-g(a,a)=\mathbf{1}\). **Corollary 6.15**.: _Each weak b-algebra is a discriminator algebra._ Since a discriminator algebra is simple by (Werner, 1978, Theorem 2.2) this implies **Proposition 6.16**.: _The class of weak b-algebras is not closed under products._ Since every b-algebra is a weak b-algebra we obtain **Corollary 6.17**.: _Each b-algebra is a discriminator algebra._ ## 7. Representation of b-algebras In this section we shall show that the class of b-algebras is closed under canonical extensions. Suppose that \(\mathfrak{A}\coloneqq\langle A,f,g\rangle\) is a PS-algebra. Modifying the notation in Section 2 for our purpose we designate the middle column as special and define canonical relations on \(\operatorname{Ult}(A)\) by \[\begin{array}{lcl}(\mathtt{df}\,Q_{f})&&Q_{f}(\mathscr{U}_{1},\mathscr{U}_{ 2},\mathscr{U}_{3})\;:\longleftrightarrow\;f[\mathscr{U}_{1}\times\mathscr{U }_{3}]\subseteq\mathscr{U}_{2},\\ (\mathtt{df}\,S_{g})&&S_{g}(\mathscr{U}_{1},\mathscr{U}_{2},\mathscr{U}_{3}) \;:\longleftrightarrow\;g[\mathscr{U}_{1}\times\mathscr{U}_{3}]\cap\mathscr{U }_{2}\neq\emptyset.\end{array}\] So, \(\mathtt{Cf}^{ps}(\mathfrak{A})=\langle\operatorname{Ult}(A),Q_{f},S_{g}\rangle\) is the canonical frame of \(\mathfrak{A}\). The weak MIA axiom (wMIA) shows that the canonical relations are not independent: **Lemma 7.1**.: \(\mathfrak{A}\) _satisfies_ (wMIA) _if and only if \(S_{g}\subseteq Q_{f}\). In particular, \(S_{[[B]]}\subseteq Q_{\langle B\rangle}\) for every \(\mathsf{Cm}^{ps}(\mathfrak{F})\) of a b-frame \(\mathfrak{F}\)._ Proof.: (\(\rightarrow\)) Suppose that \(S_{g}(\mathscr{U}_{1},\mathscr{U}_{2},\mathscr{U}_{3})\). Then, \(\mathscr{U}_{2}\cap g[\mathscr{U}_{1}\times\mathscr{U}_{3}]\neq\emptyset\), say, \(x\in\mathscr{U}_{1},y\in\mathscr{U}_{3}\) and \(g(x,y)\in\mathscr{U}_{2}\). We need to show that \(f[\mathscr{U}_{1}\times\mathscr{U}_{3}]\subseteq\mathscr{U}_{2}\). Let \(s\in\mathscr{U}_{1}\), and \(t\in\mathscr{U}_{3}\); then, \(\mathbf{0}\neq x\cdot s\in\mathscr{U}_{1}\) and \(\mathbf{0}\neq y\cdot t\in\mathscr{U}_{3}\). By (wMIA) we have \(g(x\cdot s,y\cdot t)\leq f(x\cdot s,y\cdot t)\). Using \(g(x,y)\in\mathscr{U}_{2}\) and the facts that \(g\) is antitone and \(f\) is monotone we obtain \[g(x,y)\leq g(x\cdot s,y\cdot t)\leq f(x\cdot s,y\cdot t)\leq f(s,t)\in\mathscr{ U}_{2}.\] (\(\leftarrow\)) Let \(x,y\neq\mathbf{0}\) and assume that \(g(x,y)\not\leq f(x,y)\), i.e., \(g(x,y)\cdot-f(x,y)\neq\mathbf{0}\). 
Choose ultrafilters \(\mathscr{U}_{1},\mathscr{U}_{2},\mathscr{U}_{3}\) such that \(x\in\mathscr{U}_{1},y\in\mathscr{U}_{3}\), and \(g(x,y)\cdot-f(x,y)\in\mathscr{U}_{2}\). Then, \(g[\mathscr{U}_{1}\times\mathscr{U}_{3}]\cap\mathscr{U}_{2}\neq\emptyset\), and therefore, \(f[\mathscr{U}_{1}\times\mathscr{U}_{3}]\subseteq\mathscr{U}_{2}\), in particular, \(f(x,y)\in\mathscr{U}_{2}\). This contradicts \(-f(x,y)\in\mathscr{U}_{2}\). The crucial observation about canonical frames of b-algebras is the following result: **Lemma 7.2**.: _If \(\mathfrak{A}\coloneqq\langle A,f,g\rangle\) is a betweenness algebra, then its canonical frame \(\mathsf{Cf}^{ps}(\mathfrak{A})=\langle\operatorname{Ult}(A),Q_{f},S_{g}\rangle\) has the following properties, for all \(\mathscr{U},\mathscr{U}_{1},\mathscr{U}_{2},\mathscr{U}_{3}\in\operatorname{Ult}(A)\):_ \[\begin{split}&(\mathrm{BT0^{\mathsf{Cf}}})\quad Q_{f}(\mathscr{U},\mathscr{U},\mathscr{U})\,,\\ &(\mathrm{BT1_{f}^{\mathsf{Cf}}})\quad Q_{f}(\mathscr{U}_{1},\mathscr{U}_{2},\mathscr{U}_{3})\to Q_{f}(\mathscr{U}_{3},\mathscr{U}_{2},\mathscr{U}_{1})\,,\\ &(\mathrm{BT1_{g}^{\mathsf{Cf}}})\quad S_{g}(\mathscr{U}_{1},\mathscr{U}_{2},\mathscr{U}_{3})\to S_{g}(\mathscr{U}_{3},\mathscr{U}_{2},\mathscr{U}_{1})\,,\\ &(\mathrm{BT2^{\mathsf{Cf}}})\quad Q_{f}(\mathscr{U}_{1},\mathscr{U}_{2},\mathscr{U}_{3})\to Q_{f}(\mathscr{U}_{1},\mathscr{U}_{1},\mathscr{U}_{2})\,,\\ &(\mathrm{BT3^{\mathsf{Cf}}})\quad Q_{f}(\mathscr{U}_{1},\mathscr{U}_{2},\mathscr{U}_{3})\text{ and }S_{g}(\mathscr{U}_{1},\mathscr{U}_{3},\mathscr{U}_{2})\to\mathscr{U}_{2}=\mathscr{U}_{3}\,,\\ &(\mathrm{BTW^{\mathsf{Cf}}})\quad S_{g}(\mathscr{U}_{1},\mathscr{U}_{2},\mathscr{U}_{1})\to\mathscr{U}_{1}=\mathscr{U}_{2}\,.\end{split}\] _If \(\mathfrak{A}\) is a strong betweenness algebra, then additionally_ (BT2s\({}^{\mathsf{Cf}}\)) \[Q_{f}(\mathscr{U}_{1},\mathscr{U}_{1},\mathscr{U}_{2})\,.\] Proof.: (BT0\({}^{\mathsf{Cf}}\)), (BT1\({}_{f}^{\mathsf{Cf}}\)) and (BT1\({}_{g}^{\mathsf{Cf}}\)) follow directly from (ABT0), (ABT1\({}_{f}\)) and (ABT1\({}_{g}\)) together with the monotonicity of \(f\) and the antitonicity of \(g\). (BT2\({}^{\mathsf{Cf}}\)): Suppose that \(f[\mathscr{U}_{1}\times\mathscr{U}_{3}]\subseteq\mathscr{U}_{2}\); we show that \(f[\mathscr{U}_{1}\times\mathscr{U}_{2}]\subseteq\mathscr{U}_{1}\). Let \(x\in\mathscr{U}_{1}\) and \(y\in\mathscr{U}_{2}\), and suppose that \(f(x,y)\notin\mathscr{U}_{1}\); then \(x\cdot-f(x,y)\in\mathscr{U}_{1}\), so \(f(x\cdot-f(x,y),z)\in\mathscr{U}_{2}\) for any \(z\in\mathscr{U}_{3}\), and hence \(f(x\cdot-f(x,y),\mathbf{1})\in\mathscr{U}_{2}\) by monotonicity. By the assumption, \(y\cdot f(x\cdot-f(x,y),\mathbf{1})\in\mathscr{U}_{2}\), but (ABT2) implies \[y\cdot f(x\cdot-f(x,y),\mathbf{1})\leq f(x\cdot-f(x,y)\cdot f(x\cdot-f(x,y),y), \mathbf{1})=f(\mathbf{0},\mathbf{1})=\mathbf{0},\] which is a contradiction. It follows that \(f(x,y)\in\mathscr{U}_{1}\). (BT3\({}^{\sf cf}\)): Assume \(f[\mathscr{U}_{1}\times\mathscr{U}_{3}]\subseteq\mathscr{U}_{2}\) and \(g[\mathscr{U}_{1}\times\mathscr{U}_{2}]\cap\mathscr{U}_{3}\neq\emptyset\). Let \(x\in\mathscr{U}_{2}\) and \(-x\in\mathscr{U}_{3}\). Furthermore, let \(a\in\mathscr{U}_{1}\) and \(b\in\mathscr{U}_{2}\) be such that \(g(a,b)\in\mathscr{U}_{3}\). Since \(g\) is antitone we have that \(g(a,b\cdot x)\in\mathscr{U}_{3}\). Since also \(-b+-x\in\mathscr{U}_{3}\), it follows that \(g(a,b\cdot x)\cdot(-b+-x)\in\mathscr{U}_{3}\), and from (ABT3) we obtain \[\mathscr{U}_{2}\ni f(a,g(a,b\cdot x)\cdot(-b+-x))\leq-b+-x\in\mathscr{U}_{2}\,.\] But \(b\cdot x\in\mathscr{U}_{2}\), a contradiction. So it must be the case that \(\mathscr{U}_{2}=\mathscr{U}_{3}\), as required. (BT2s\({}^{\sf cf}\)): We need to show that \(f[\mathscr{U}_{1}\times\mathscr{U}_{2}]\subseteq\mathscr{U}_{1}\). If \(x\in\mathscr{U}_{1}\) and \(y\in\mathscr{U}_{2}\), then \(y\neq\mathbf{0}\) and by (ABT2\({}^{\sf s}\)) we obtain that \(x\leq f(x,y)\). Therefore \(f(x,y)\in\mathscr{U}_{1}\), as required. (BTW\({}^{\sf cf}\)): Assume that \(S_{g}(\mathscr{U}_{1},\mathscr{U}_{2},\mathscr{U}_{1})\), that is \(g[\mathscr{U}_{1}\times\mathscr{U}_{1}]\cap\mathscr{U}_{2}\neq\emptyset\). Let \(y_{1},y_{2}\in\mathscr{U}_{1}\) such that \(g(y_{1},y_{2})\in\mathscr{U}_{2}\). So, from the fact that \(g\) is antitone we obtain that \(g(y_{1}\cdot y_{2},y_{1}\cdot y_{2})\in\mathscr{U}_{2}\). If \(x\in\mathscr{U}_{1}\), then using the property one more time we obtain that \(g(x\cdot y_{1}\cdot y_{2},x\cdot y_{1}\cdot y_{2})\in\mathscr{U}_{2}\). However, \(x\cdot y_{1}\cdot y_{2}\neq\mathbf{0}\), therefore, from (ABTW) we obtain \[g(x\cdot y_{1}\cdot y_{2},x\cdot y_{1}\cdot y_{2})\leq x\cdot y_{1}\cdot y_{2}\,,\] and so \(x\) is in \(\mathscr{U}_{2}\).
Thus, \(\mathscr{U}_{1}=\mathscr{U}_{2}\) by maximality of ultrafilters. We now have the following representation theorem: **Theorem 7.3**.: _Suppose that \(\mathfrak{A}\coloneqq\langle A,f,g\rangle\) is a (weak, strong) betweenness algebra. Then, the Stone map \(h\colon\mathfrak{A}\hookrightarrow\mathsf{Em}^{ps}(\mathfrak{A})\) is a (weak, strong) betweenness algebra embedding._ Proof.: \(h\colon\mathfrak{A}\hookrightarrow\mathsf{Em}^{ps}(\mathfrak{A})\) is a PS-algebra embedding by Theorems 2.2 and 2.5, and we need to show that \(\mathsf{Em}^{ps}(\mathfrak{A})\) is a (weak, strong) betweenness algebra; we will use Lemma 7.2. Suppose that \(\emptyset\neq X,Y,Z\subseteq\operatorname{Ult}(A)\). Let us begin with the proof for a betweenness algebra \(\mathfrak{A}\). (ABT0): Let \(\mathscr{U}\in X\). By (BT0\({}^{\sf cf}\)) we have \(Q_{f}(\mathscr{U},\mathscr{U},\mathscr{U})\), and \(\mathscr{U}\in X\) implies that \((X\times\{\mathscr{U}\}\times X)\cap Q_{f}\neq\emptyset\). It follows that \(\mathscr{U}\in\langle Q_{f}\rangle(X,X)\). (ABT1\({}_{f}\)): Let \(\mathscr{U}\in\langle Q_{f}\rangle(X,Y)\). Then, \((X\times\{\mathscr{U}\}\times Y)\cap Q_{f}\neq\emptyset\), say, \(\mathscr{U}_{1}\in X,\mathscr{U}_{2}\in Y\) such that \(Q_{f}(\mathscr{U}_{1},\mathscr{U},\mathscr{U}_{2})\). By (BT1\({}^{\sf cf}_{f}\)), \(Q_{f}(\mathscr{U}_{2},\mathscr{U},\mathscr{U}_{1})\) which shows that \((Y\times\{\mathscr{U}\}\times X)\cap Q_{f}\neq\emptyset\), i.e., \(\mathscr{U}\in\langle Q_{f}\rangle(Y,X)\). (ABT1\({}_{g}\)) Suppose \(\mathscr{U}\in[[S_{g}]](X,Y)\), that is \(X\times\{\mathscr{U}\}\times Y\subseteq S_{g}\). If \(\mathscr{U}_{1}\in Y\) and \(\mathscr{U}_{2}\in X\), then \(S_{g}(\mathscr{U}_{2},\mathscr{U},\mathscr{U}_{1})\) and this with (BT1\({}^{\sf cf}_{g}\)) entails that \(S_{g}(\mathscr{U}_{1},\mathscr{U},\mathscr{U}_{2})\). Since our choices were arbitrary, we have that \(Y\times\{\mathscr{U}\}\times X\subseteq S_{g}\), i.e., \(\mathscr{U}\in[[S_{g}]](Y,X)\). (ABT2): Let \(\mathscr{U}\in Y\cap\langle Q_{f}\rangle(X,Z)\), i.e., \(\mathscr{U}\in Y\) and there are \(\mathscr{U}_{1}\in X,\mathscr{U}_{2}\in Z\) such that \(Q_{f}(\mathscr{U}_{1},\mathscr{U},\mathscr{U}_{2})\). By (BT2\({}^{\sf cf}\)) we have \(Q_{f}(\mathscr{U}_{1},\mathscr{U}_{1},\mathscr{U})\), so \(\mathscr{U}_{1}\in\langle Q_{f}\rangle(X,Y)\). Thus, \(\mathscr{U}_{1}\in X\cap\langle Q_{f}\rangle(X,Y)\). As \(\mathscr{U}_{2}\) is in \(Z\), we obtain: \[\mathscr{U}\in\langle Q_{f}\rangle(X\cap\langle Q_{f}\rangle(X,Y),Z)\] as required. (ABT3): Suppose that \(X,Y,Z\subseteq\operatorname{Ult}(A)\) and \(\mathscr{U}\in\langle Q_{f}\rangle(X,[[S_{g}]](X,-Y)\cap Y)\). Then, there are \(\mathscr{U}_{1}\in X,\mathscr{U}_{2}\in[[S_{g}]](X,-Y)\cap Y\) such that \(Q_{f}(\mathscr{U}_{1},\mathscr{U},\mathscr{U}_{2})\). Now, \[\mathscr{U}_{2}\in[[S_{g}]](X,-Y)\longleftrightarrow(X\times\{\mathscr{U}_ {2}\}\times-Y)\subseteq S_{g}.\] Assume that \(\mathscr{U}\not\in Y\); then, \(S_{g}(\mathscr{U}_{1},\mathscr{U}_{2},\mathscr{U})\) and (BT3\({}^{\mathsf{cf}}\)) implies that \(\mathscr{U}=\mathscr{U}_{2}\in Y\), a contradiction. For a weak betweenness algebra \(\mathfrak{A}\) we need to show that (ABTW) holds in \(\mathsf{Em}^{ps}(\mathfrak{A})\). To this end, suppose that \(\emptyset\neq X\subseteq\operatorname{Ult}(A)\), and \(\mathscr{U}\in[[S_{g}]](X,X)\); then, \((X\times\{\mathscr{U}\}\times X)\subseteq S_{g}\). Since \(X\neq\emptyset\), there is some \(\mathscr{V}\in X\), and \(S_{g}(\mathscr{V},\mathscr{U},\mathscr{V})\). 
(BTW\({}^{\mathsf{cf}}\)) now implies \(\mathscr{U}=\mathscr{V}\), thus, \(\mathscr{U}\in X\). Finally, for a strong betweenness algebra \(\mathfrak{A}\) we need to prove (ABT2) for \(\mathsf{Em}^{ps}(\mathfrak{A})\). Yet this is straightforward, since if \(\emptyset\neq Y\subseteq\operatorname{Ult}(A)\) and \(\mathscr{U}_{1}\in X\), then by (BT2s\({}^{\mathsf{cf}}\)) we have that \(Q_{f}(\mathscr{U}_{1},\mathscr{U}_{1},\mathscr{U})\) for some \(\mathscr{U}\in Y\). This entails that \(\mathscr{U}_{1}\in\langle Q_{f}\rangle(X,Y)\). What we have proven in this section so far is the following: For any b-algebra5\(\mathfrak{A}\coloneqq\langle A,f,g\rangle\) its canonical frame \(\mathsf{cf}^{ps}(\mathfrak{A})=\langle\operatorname{Ult}(A),Q_{f},S_{g}\rangle\) behaves in such a way that the relations \(Q_{f}\) and \(S_{g}\) in a certain way <<simulate>> betweenness axioms. Furthermore, the standard Stone mapping embeds \(\mathfrak{A}\) into \(\mathsf{Em}^{ps}(\mathfrak{A})=\langle 2^{\operatorname{Ult}(A)},\langle Q_{f} \rangle,[[S_{g}]]\rangle\). Nonetheless, neither \(\mathsf{cf}^{ps}(\mathfrak{A})\) is necessarily a b-frame, nor \(\mathsf{Em}^{ps}(\mathfrak{A})\) is necessarily a complex b-algebra. Of course, if \(\mathfrak{A}\) is a MIA, i.e., \(Q_{f}=S_{g}\), then we can take the reduct of \(\mathsf{cf}^{ps}(\mathfrak{A})\) to \(\langle\operatorname{Ult}(A),Q_{f}\rangle\) which is a b-frame, and we can embed \(\mathfrak{A}\) via Stone mapping into the complex b-algebra \(\langle 2^{\operatorname{Ult}(A)},\langle Q_{f}\rangle,[[Q_{f}]]\rangle\). The problem is--as we will see in the next section in Theorem 8.7--that there are no infinite b-algebras that are MIAs. Footnote 5: Everything that we write about here applies to weak and strong b-algebras as well. Unfortunately, the question whether for any given b-algebra \(\mathfrak{A}\) there exists a b-frame \(\mathfrak{F}\coloneqq\langle U,B\rangle\) such that \(\mathfrak{A}\) can be embedded into \(\mathsf{Cm}^{ps}(\mathfrak{F})\) has a negative answer. This will be proven in the next section in the form of Example 8.4. ## 8. B-algebras and b-complex algebras In this section we will investigate the connections between b-complex algebras and (abstract) b-algebras and exemplify some instances in which they differ. Let us start with the following definitions: **Definition 8.1**.: The class of b-algebras is denoted by \(\mathbf{Abtw}\). An algebra \(\mathfrak{A}\in\mathbf{Abtw}\) is a _b-complex algebra_ or _representable_ if there exists a b-frame \(\mathfrak{F}\) such that \(\mathfrak{A}\) is isomorphic to a subalgebra of \(\mathsf{Cm}^{ps}(\mathfrak{F})\). The class of b-complex algebras is denoted by \(\mathbf{Cbtw}\). As earlier, \(\mathfrak{A}\) is a _full_ b-complex algebra when it is isomorphic to \(\mathsf{Cm}^{ps}(\mathfrak{F})\) of a b-frame \(\mathfrak{F}\). \(\dashv\) Let us briefly recall the situation. For a b-algebra \(\mathfrak{A}\) we start with operators \(f,g\) which lead to ternary relations \(Q_{f}\) and \(S_{g}\) on the ultrafilter frame which, in turn, lead to operators \(\langle Q_{f}\rangle\) and \([[S_{g}]]\), and the embedding of \(\mathfrak{A}\) into \(\mathsf{Em}^{ps}(\mathfrak{A})\) is straightforward (Section 7). This direction does not involve b-frames. 
From frames to algebras we start with a single relation \(B\) on \(U\) which leads to a b-algebra on \(2^{U}\) with operators \(\langle B\rangle,[[B]]\) which, in turn, give us two relations \(Q_{\langle B\rangle}\) and \(S_{[[B]]}\) which, in general, are not equal. One might ask into which one, if any, can we embed our \(B\). The answer is: It does not matter. The reason for this is the fact that the mapping \(k\colon U\to\operatorname{Ult}(\mathsf{Cm}^{ps}(\mathfrak{F}))\) of Theorem 2.3 sends points of \(U\) to principal ultrafilters of \(2^{U}\), and the relation \(\{\langle k(x),k(y),k(z)\rangle\mid x,y,z\in U\}\) is isomorphic to \(B\); it is contained in both \(Q_{f}\) and \(S_{g}\). This holds for any 3-frame, so the observation is not particular to b-relations. We first describe the b-algebras on the set of constants: **Example 8.2**.: Suppose that \(A\coloneqq\{\mathbf{0},\mathbf{1}\}\), and let \(a,b\in A\). If \(a=\mathbf{0}\) or \(b=\mathbf{0}\), then the values of \(f(a,b)\) and \(g(a,b)\) are determined by the normality, respectively, co-normality of \(f\), respectively, \(g\); furthermore, \(f(\mathbf{1},\mathbf{1})=\mathbf{1}\) by (ABT0). If \(g(\mathbf{1},\mathbf{1})=\mathbf{1}\), then the corresponding algebra \(\mathfrak{A}_{0}\) is isomorphic to the smallest full b-complex algebra: Let \(\mathfrak{F}\coloneqq\langle U,B\rangle\) where \(U\coloneqq\{x\}\) and \(B\coloneqq\{\langle x,x,x\rangle\}\). Then, \[\begin{array}{c|cccc}\langle B\rangle&\emptyset&U&[[B]]&\emptyset&U\\ \hline\emptyset&\emptyset&\emptyset&\emptyset&U&U\\ U&\emptyset&U&U&U&U\end{array}\] and \(Q_{\langle B\rangle}=\{\langle\{U\},\{U\},\{U\}\rangle\}=S_{[[B]]}\). This algebra is special in the sense that it is not a proper subalgebra of a b-algebra by Theorem 6.10(2). Finally, let \(g(\mathbf{1},\mathbf{1})=\mathbf{0}\); then, \(f\) and \(g\) are as below: \[\begin{array}{c|cccc}f&\mathbf{0}&\mathbf{1}&&g&\mathbf{0}&\mathbf{1}\\ \hline\mathbf{0}&\mathbf{0}&\mathbf{0}&&\mathbf{0}&\mathbf{1}&\mathbf{1}\\ \mathbf{1}&\mathbf{0}&\mathbf{1}&&\mathbf{1}&\mathbf{1}&\mathbf{0}\end{array}\] It is easy to see that \(\mathfrak{A}_{1}\coloneqq\langle A,f,g\rangle\) is a strong b-algebra, and \(Q_{f}(\{\mathbf{1}\},\{\mathbf{1}\},\{\mathbf{1}\})\), but not \(S_{g}(\{\mathbf{1}\},\{\mathbf{1}\},\{\mathbf{1}\})\) as \(g[\{\mathbf{1}\}\times\{\mathbf{1}\}]\cap\{\mathbf{1}\}=\emptyset\). Obviously, \(\mathfrak{A}_{1}\) is not isomorphic to \(\mathsf{Cm}^{ps}(\mathfrak{F})\), and so cannot be isomorphic to the full complex algebra of any b-frame. On the other hand, \(\mathfrak{A}_{1}\) is a b-complex algebra by Theorem 8.3 below. \(\dashv\) **Theorem 8.3**.: _Every b-algebra \(\mathfrak{A}\) with at least four elements contains an isomorphic copy of the two-element algebra \(\mathfrak{A}_{1}\) of Example 8.2 as a subalgebra._ Proof.: Suppose that \(\mathfrak{A}^{\prime}\coloneqq\langle A,f,g\rangle\in\mathbf{Abtw}\) has at least four elements, and let \(\mathbf{2}\) be the subalgebra of \(\mathfrak{A}^{\prime}\) generated by \(\{\mathbf{0},\mathbf{1}\}\). Then, \(\mathbf{2}\) is isomorphic to \(\mathfrak{A}_{0}\) or \(\mathfrak{A}_{1}\) of Example 8.2, and it is not isomorphic to \(\mathfrak{A}_{0}\) since \(\mathfrak{A}\) has at least four elements. We know from Theorem 6.6 that the full PS-complex algebra of a b-frame is a b-algebra, i.e., that \(\mathbf{Cbtw}\subseteq\mathbf{Abtw}\). 
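The one-point frame of Example 8.2 is small enough that its operator tables can be re-derived mechanically. The following Python sketch assumes the usual point-wise readings of the possibility and sufficiency operators, \(\langle B\rangle(X,Y)=\{u:B(x,u,y)\text{ for some }x\in X,y\in Y\}\) and \([[B]](X,Y)=\{u:B(x,u,y)\text{ for all }x\in X,y\in Y\}\); it is an illustration only.

```python
from itertools import product

def possibility(B, X, Y, U):
    # <B>(X, Y) = { u in U : B(x, u, y) for some x in X and y in Y }
    return {u for u in U if any((x, u, y) in B for x, y in product(X, Y))}

def sufficiency(B, X, Y, U):
    # [[B]](X, Y) = { u in U : B(x, u, y) for all x in X and y in Y }
    return {u for u in U if all((x, u, y) in B for x, y in product(X, Y))}

# One-point frame of Example 8.2: U = {x}, B = {<x, x, x>}
U = {"x"}
B = {("x", "x", "x")}

for X, Y in product((frozenset(), frozenset(U)), repeat=2):
    print(set(X) or "{}", set(Y) or "{}",
          "  <B> =", possibility(B, X, Y, U) or "{}",
          "  [[B]] =", sufficiency(B, X, Y, U) or "{}")
# <B>(X, Y) is empty unless X = Y = U, where it equals U, while [[B]](X, Y)
# equals U in all four cases (vacuously when X or Y is empty); this reproduces
# the operator tables displayed in Example 8.2.
```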
The natural question arising at this point is whether the converse inclusion holds, that is, _whether every b-algebra is a complex algebra_.6 As the following example shows, this is not necessarily the case. **Example 8.4**.: Let \(A\) be the eight-element Boolean algebra with atoms \(\{a,b,c\}\) and \(f\) and \(g\) be as below (for the brevity of presentation we write \(xy\) instead of \(x+y\)): \[\begin{array}{c|cccccccc}f&\mathbf{0}&a&b&c&ab&ac&bc&\mathbf{1}\\ \hline\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}&\mathbf{0}\\ a&\mathbf{0}&a&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}\\ b&\mathbf{0}&\mathbf{1}&b&bc&\mathbf{1}&\mathbf{1}&bc&\mathbf{1}\\ c&\mathbf{0}&\mathbf{1}&bc&c&\mathbf{1}&\mathbf{1}&bc&\mathbf{1}\\ ab&\mathbf{0}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}\\ ac&\mathbf{0}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}\\ bc&\mathbf{0}&\mathbf{1}&bc&bc&\mathbf{1}&\mathbf{1}&bc&\mathbf{1}\\ \mathbf{1}&\mathbf{0}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}&\mathbf{1}\end{array}\] (the corresponding table for \(g\) is not legible in this copy beyond the co-normality row \(g(\mathbf{0},x)=\mathbf{1}\)). **Example 8.6**.: By Corollary 4.6, the relation \(B_{\leqslant}\) obtained from the natural ordering \(\leqslant\) of \(\omega\) is a betweenness relation, i.e., \(\mathfrak{N}\coloneqq\langle\omega,B_{\leqslant}\rangle\) is a b-frame. Let \(X,Y\subseteq\omega\) be infinite; then, both \(X\) and \(Y\) are cofinal, i.e., for every \(n\in\omega\) there are \(k\in X,m\in Y\) such that \(n\leqslant k,m\). Hence, \(n\in\langle B_{\leqslant}\rangle(X,Y)\) if and only if \(\min X\leqslant n\) or \(\min Y\leqslant n\), and it follows that \(\langle B_{\leqslant}\rangle(X,Y)=\uparrow\min(X\cup Y)\). In particular, \(\langle B_{\leqslant}\rangle(X,Y)\) is cofinite, and therefore, \(\langle B_{\leqslant}\rangle[\mathscr{U}_{1}\times\mathscr{U}_{2}]\subseteq \mathscr{U}\) for all free ultrafilters \(\mathscr{U}_{1},\mathscr{U}_{2},\mathscr{U}\) of \(\omega\). On the other hand, \([[B_{\leqslant}]](X,Y)=\emptyset\). Assume that \(k\in[[B_{\leqslant}]](X,Y)\). Since \(X,Y\) are cofinal, there are \(n\in X,m\in Y\) such that \(k<n,m\) which implies that \(\langle n,k,m\rangle\not\in B\), a contradiction. Therefore \(Q_{\langle B_{\leqslant}\rangle}\neq S_{[[B_{\leqslant}]]}\), and the full complex algebra of \(\mathfrak{N}\) is not a MIA. Let us point out an analogy holding between \(\mathfrak{N}\) and the standard ultrafilter extension of the binary frame \(\mathfrak{N}_{2}\coloneqq\langle\omega,\leq\rangle\). In case of the latter, principal ultrafilters correspond to the natural numbers, and free ultrafilters are clustered at infinity, in the sense that every free \(\mathscr{U}\) is accessible from every principal \(\mathscr{U}^{\prime}\), and the accessibility relation is universal on the set of free ultrafilters. 
In case of \(\mathfrak{N}\) the situation is somewhat analogous, since as we have seen \(Q_{\langle B_{\leqslant}\rangle}\) is universal on free ultrafilters, and for any \(n,m\in\omega\) such that \(m\leqslant n\), and any free \(\mathscr{U}\) we have \(Q_{\langle B_{\leqslant}\rangle}(\uparrow\{m\},\uparrow\{n\},\mathscr{U})\). Indeed, as we have seen above, \(\langle B_{\leqslant}\rangle(X,Y)=\uparrow\min(X\cup Y)\), and since \(m\in X\cup Y\), \(\min(X\cup Y)\leqslant n\). Therefore, \(n\in\uparrow\min(X\cup Y)\), as required. Thus \(Q_{\langle B_{\leqslant}\rangle}\) may be viewed as a relation that puts a large cluster of free ultrafilters at infinity in case of the \(3\)-frame \(\mathfrak{N}\). \(\dashv\) **Theorem 8.7**.: _No infinite b-algebra is a MIA._ Proof.: We will proceed in three steps: 1. _No infinite full b-complex algebra is a MIA_: Let \(\mathscr{U}\) be a free ultrafilter on \(2^{U}\). Then, \(Q_{\langle B\rangle}(\mathscr{U},\mathscr{U},\mathscr{U})\) by Lemma 7.2. Assume that \(S_{[[B]]}(\mathscr{U},\mathscr{U},\mathscr{U})\). Then, there are \(X,Y\in\mathscr{U}\) such that \([[B]](X,Y)\in\mathscr{U}\). Since \(X\cap Y\in\mathscr{U}\) and \(\mathscr{U}\) is a free ultrafilter, \(X\cap Y\) is infinite. This contradicts Theorem 6.10. 2. _No infinite subalgebra of a full b-complex algebra is a MIA_: Let \(\mathfrak{S}\coloneqq\langle D,\langle B\rangle_{D},[[B]]_{D}\rangle\) be an infinite subalgebra of \(\mathsf{Cm}^{ps}(\mathfrak{F})\), and \(\mathscr{U}\) be a free ultrafilter of \(D\). Observing that \(\mathbf{Abtw}\) is a universal class, we see that \(\mathfrak{S}\) is a b-algebra, and therefore, \(Q_{\langle B\rangle_{D}}(\mathscr{U},\mathscr{U},\mathscr{U})\), again by Lemma 7.2. Assume that \(S_{[[B]]_{D}}(\mathscr{U},\mathscr{U},\mathscr{U})\), i.e., \([[B]]_{D}[\mathscr{U}\times\mathscr{U}]\cap\mathscr{U}\neq\emptyset\). Let \(X,Y\in\mathscr{U}\) be such that \([[B]]_{D}(X,Y)\in\mathscr{U}\); then, \([[B]]_{D}(X,Y)\neq\emptyset\) and \(X\cap Y\) is infinite. \([[B]]_{D}(X,Y)=[[B]](X,Y)\), so again, we have a contradiction with Theorem 6.10. 3. _No infinite b-algebra is a MIA_: Suppose \(\mathfrak{A}\coloneqq\langle A,f,g\rangle\) is an infinite MIA. Then \(Q_{f}=S_{g}\), and the reduct of \(\mathsf{Cf}^{ps}(\mathfrak{A})\) to \(\langle\operatorname{Ult}(A),Q_{f}\rangle\) is a b-frame by Lemma 7.2. Hence, \(\mathsf{Em}^{ps}(\mathfrak{A})\) is a full b-complex algebra, and by Theorem 7.3, \(\mathfrak{A}\) is isomorphic to a subalgebra of \(\mathsf{Em}^{ps}(\mathfrak{A})\). It follows from the previous results that \(\mathfrak{A}\) is not a MIA. This concludes the proof. The next example is an algebraic explanation of the non-definability result of Theorem 4.7. **Example 8.8**.: Let \(\mathfrak{Z}\coloneqq\langle\mathsf{Z},B_{\leqslant}\rangle\) be as in Theorem 4.7 and consider its full complex algebra \(\mathsf{Cm}^{ps}(\mathfrak{Z})\). Let \(E\) and \(O\) be the sets of even and odd integers, respectively. Then, \(A\coloneqq\{\emptyset,E,O,\mathsf{Z}\}\) is a Boolean subalgebra of \(2^{\mathsf{Z}}\), and we shall show that it is a PS-subalgebra of \(\mathsf{Cm}^{ps}(\mathfrak{Z})\). If \(x\in\mathsf{Z}\), then there are even and odd numbers below and above \(x\) which shows that \(\langle B\rangle(E,E)=\langle B\rangle(E,O)=\langle B\rangle(O,E)=\langle B \rangle(O,O)=\mathsf{Z}\). Furthermore, there are odd and even numbers strictly greater than \(x\), and it follows that \([[B]](E,E)=[[B]](E,O)=[[B]](O,E)=[[B]](O,O)=\emptyset\). 
Thus, \(\mathfrak{A}\coloneqq\langle A,\langle B\rangle,[[B]]\rangle\) is a PS-subalgebra of \(\mathsf{Cm}^{ps}(\mathfrak{Z})\). On the other hand consider the full complex algebra of the frame \(\langle U,R\rangle\), where \(R\) is the ternary universal relation on \(U\coloneqq\{w_{0},w_{1}\}\). There, \([[R]](X,Y)=U\) for all \(X,Y\subseteq U\), contrasting the considerations above. The algebra \(\mathsf{Cm}^{ps}(\mathfrak{Z})\) has yet one more interesting property. Since for any \(X,Y\in 2^{U}\), \([[B]](X,Y)\) is always finite, if \([[B]][\mathscr{U}_{1}\times\mathscr{U}_{2}]\cap\mathscr{U}\neq\emptyset\), \(\mathscr{U}\) must be a principal filter, i.e., for some \(x\in U\), \(\mathscr{U}=\mathord{\uparrow}\{x\}\). Therefore, \(Q_{\langle B\rangle}(\mathscr{U},\mathscr{U},\mathscr{U})\) and not \(S_{[[B]]}(\mathscr{U},\mathscr{U},\mathscr{U})\) for all free ultrafilters of \(2^{\mathsf{Z}}\). As can be seen from the proof above, the only property of \(\langle\mathsf{Z},\leqslant\rangle\) that was needed to obtain both conclusions was its order type, which is \(\omega^{*}+\omega\). \(\dashv\) **Example 8.9**.: In Example 8.8 we saw a full complex algebra for which the relation \(S_{[[B]]}\) is empty on the set of triples of free ultrafilters. We will show now that this does not have to be the case. To this end, consider the strong b-frame \(\mathfrak{Q}\coloneqq\langle\mathsf{Q},B_{\leqslant}\rangle\) induced by the standard dense linear order \(\leqslant\) on the set of rational numbers \(\mathsf{Q}\). Consider three intervals \(X\coloneqq(-\infty,0)\), \(Y\coloneqq[0,1]\) and \(Z\coloneqq(1,+\infty)\). Let \(\mathscr{U}_{X},\mathscr{U}_{Y}\) and \(\mathscr{U}_{Z}\) be free ultrafilters containing the respective intervals. In this case we have that \([[B]](X,Z)=Y\) and so \([[B]][\mathscr{U}_{X}\times\mathscr{U}_{Z}]\cap\mathscr{U}_{Y}\neq\emptyset\). Generally, for any pair \(\mathscr{U}_{1},\mathscr{U}_{2}\) of free ultrafilters such that \((r,p)\in\mathscr{U}_{1}\), \((q,s)\in\mathscr{U}_{2}\) and \(r<p<q<s\) there is a free ultrafilter \(\mathscr{U}\) such that \(S_{[[B]]}(\mathscr{U}_{1},\mathscr{U}_{2},\mathscr{U})\), i.e., any such ultrafilter which contains the open interval \((p,q)\). \(\dashv\) ## 9. Summary and further work We have put forward--well justified--axioms for an algebraic treatment of reflexive betweenness relations. The resulting class **Abtw** of b-algebras turned out to have some <<good>> properties such as closure under canonical extensions, and some <<bad>> ones, too, such as non-b-representability. In the context of the latter, the most pressing questions now are the following: 1. Is there for any b-algebra \(\mathfrak{A}\) a \(3\)-frame \(\mathfrak{F}\coloneqq\langle U,R\rangle\) such that \(\mathfrak{A}\) is embeddable into the complex PS-algebra of \(\mathfrak{F}\)? We know from Example 8.4 that \(\mathfrak{F}\) cannot be a b-frame, but it may be possible that there is a larger class of \(3\)-frames that can give us representability. Basically we ask if we can prove an analog of Theorem 8.5 from Duntsch et al. (2017) or the _Important Lemma_ from Gargov et al. (1987). 2. Which, if any, axioms can we add to (ABT0)-(ABT3) and (wMIA) to obtain a subclass of **Abtw** that is b-representable? 
We also investigated properties of the possibility and sufficiency operators in the context of b-algebras, and although initially it seemed that our axioms say little about \(g\), we were able to prove--especially in Section 6--quite a number of strong properties, some of them limiting in nature. Also, we knew that \(f\) and \(g\) should be bounded by certain axioms to cooperate fruitfully, but to our surprise, it turned out in Theorem 8.7 that the cooperation cannot be too strong. Still, we believe that there are new connections to be discovered, which will result in further insights into algebraic aspects of betweenness. What we entirely left out of this paper--but by no means neglected--are problems of axiomatizations of various subclasses of **Abtw** such as **Cbtw**. These we are going to pursue in future installments of the work presented here. ## Acknowledgements This research was funded by the National Science Center (Poland), grant number 2020/39/B/HS1/00216, "Logico-philosophical foundations of geometry and topology". For the purpose of Open Access, the authors have applied a CC-BY public copyright license to any Author Accepted Manuscript (AAM) version arising from this submission.
2309.07636
On the Relationship Between Iterated Statistical Linearization and Quasi-Newton Methods
This letter investigates relationships between iterated filtering algorithms based on statistical linearization, such as the iterated unscented Kalman filter (IUKF), and filtering algorithms based on quasi-Newton (QN) methods, such as the QN iterated extended Kalman filter (QN-IEKF). Firstly, it is shown that the IUKF and the iterated posterior linearization filter (IPLF) can be viewed as QN algorithms, by finding a Hessian correction in the QN-IEKF such that the IPLF iterate updates are identical to that of the QN-IEKF. Secondly, it is shown that the IPLF/IUKF update can be rewritten such that it is approximately identical to the QN-IEKF, albeit for an additional correction term. This enables a richer understanding of the properties of iterated filtering algorithms based on statistical linearization.
Anton Kullberg, Martin A. Skoglund, Isaac Skog, Gustaf Hendeby
2023-09-14T12:01:54Z
http://arxiv.org/abs/2309.07636v2
# On the Relationship Between Iterated Statistical Linearization and Quasi-Newton Methods ###### Abstract This letter investigates relationships between iterated filtering algorithms based on statistical linearization, such as the iterated unscented Kalman filter (lukf), and filtering algorithms based on quasi-Newton (qn) methods, such as the qn iterated extended Kalman filter (qn-lekf). Firstly, it is shown that the luf and the iterated posterior linearization filter (qlrf) can be viewed as qn algorithms, by finding a Hessian correction in the qn-lekf such that the lplf iterate updates are identical to that of the qn-lekf. Secondly, it is shown that the lplf/lukf update can be rewritten such that it is approximately identical to the qn-lekf, albeit for an additional correction term. This enables a richer understanding of the properties of iterated filtering algorithms based on statistical linearization. Nonlinear filtering, Statistical linearization, Quasi-Newton ## I Introduction State estimation in discrete-time state-space models with additive Gaussian noise, i.e., models such as \[\mathbf{x}_{k+1} =\mathbf{f}(\mathbf{x}_{k})+\mathbf{w}_{k} \mathbf{w}_{k}\stackrel{{\text{\tiny{iid}}}}{{\sim}} \mathcal{N}(\mathbf{0},\mathbf{Q}) \tag{1a}\] \[\mathbf{y}_{k} =\mathbf{h}(\mathbf{x}_{k})+\mathbf{e}_{k} \mathbf{e}_{k}\stackrel{{\text{\tiny{iid}}}}{{\sim}} \mathcal{N}(\mathbf{0},\mathbf{R}) \tag{1b}\] has been studied extensively for decades. Here, \(\mathbf{x}_{k}\), \(\mathbf{y}_{k}\), \(\mathbf{w}_{k}\), \(\mathbf{e}_{k}\) are the state, measurement, process noise, and measurement noise, respectively. The filtering problem is then to compute the marginal distributions \(p(\mathbf{x}_{k}|\mathbf{y}_{1:k})\), given a sequence of measurements \(\mathbf{y}_{1:k}\). In the case \(\mathbf{f}\) and \(\mathbf{h}\) are linear functions, this is analytically tractable and the solution is given by the Kalman filter, which is the optimal estimator in the mean-squared error sense [1]. For nonlinear state-space models, analytical solutions generally do not exist and approximate inference techniques have been developed for these cases. The extended Kalman filter (ekf), introduced alongside the original Kalman filter, was a simple way of extending the Kalman filter to the nonlinear case [1]. The original ekf linearizes the dynamical model \(\mathbf{f}\) and the measurement model \(\mathbf{h}\) at the current state estimate using a first-order Taylor expansion and then applies the standard Kalman filter updates. Since the ekf was developed, it has been identified that this choice of linearization point may be suboptimal. This has lead to the development of the iterated extended Kalman filter (iekf) [2, 3, 4]. This is a family of approximate inference techniques that attempt to find a better linearization point for the measurement model, which boils down to iterating the measurement update a number of times. This family of inference techniques include the line-search iekf, quasi-Newton (qn) lekf (qn-lekf) etc., which are commonly referred to as damped iekfs [5]. Alongside the development of the iekf, other strategies for nonlinear filtering were developed to circumvent the need for analytical linearization. Particularly, the unscented Kalman filter (ukf) was developed as a competitive alternative to the ekf [6, 7]. 
The ukf is essentially based on deterministically sampling the prior and propagating a set of "sigma points" through the nonlinear function, whereafter an approximate Gaussian distribution can be constructed based on the transformed points. This is done in both the time update, as well as the measurement update. The ukf has since been shown to be equivalent to statistically linearizing the nonlinear models and then applying the standard Kalman recursions [8]. Therefore, both the ukf and other deterministically sampled sigma-point methods can be interpreted as linearization-based nonlinear filtering algorithms. Similarly to the ekf, the ukf has also been extended to the iterated ukf (iukf) [9]. Recently, another strategy for linearization-based filtering was introduced as yet another alternative to the ekf and the sigma-point based filters, namely the iterated posterior linearization filter (iplf) [10]. The lplf is based on the idea of linearizing around the _posterior_\(p(\mathbf{x}_{k}|\mathbf{y}_{1:k})\). As the posterior is not available, the iplf constructs an approximate posterior \(q_{i}(\mathbf{x}_{k}|\mathbf{y}_{1:k})\) and then iterates the measurement update, each time performing statistical linearization around the current approximate posterior \(q_{i}(\mathbf{x}_{k}|\mathbf{y}_{1:k})\). Similarly to the iekf, the iplf has been shown to diverge for some particular problems [9], which has led to the development of damped versions of the iplf [11]. To gain a deeper understanding of the properties of the iterated statistical linearization filters, such as the iukf and iplf, we seek to connect this family of methods to classical qn methods, such as the qn-lekf. To that end, we firstly find a Hessian correction in the qn-lekf such that the iterate updates are identical to that of the iplf, thereby showing that the iplf, iukf and other iterated statistical linearization based filters may be viewed as qn methods. Secondly, as the necessary Hessian correction has a complicated form, we show that the iplf/lukf can be rewritten in such a way that an approximate qn structure appears without the need for a complicated Hessian correction. However, this secondary correspondence is only approximate as it requires an additional correction term, which nonetheless is fully interpretable. ## II Preliminaries Mathematical preliminaries are restated here for completeness. ### _Statistical Linearization_ Given a nonlinear model \[\mathbf{z}=\mathbf{g}(\mathbf{x}),\] we wish to find an affine representation \[\mathbf{g}(\mathbf{x})\approx\mathbf{A}\mathbf{x}+\mathbf{b}+\eta, \tag{2}\] with \(\eta\sim\mathcal{N}(\eta;\mathbf{0},\mathbf{\Omega})\). In this affine representation, there are three free parameters, \(\mathbf{A},\mathbf{b}\), and \(\mathbf{\Omega}\). Statistical linearization finds these parameters by linearizing w.r.t. a distribution \(p(\mathbf{x})\). Practically, one may think of this as constructing an affine function that best fits a number of samples of \(p(\mathbf{x})\) transformed through \(\mathbf{g}(\mathbf{x})\). 
Assuming that \(p(\mathbf{x})=\mathcal{N}(\mathbf{x};\hat{\mathbf{x}},\mathbf{P})\), statistical linearization selects the affine parameters as \[\mathbf{A} =\Psi^{\top}\mathbf{P}^{-1},\hskip 28.452756pt\mathbf{b}=\tilde{ \mathbf{z}}-\mathbf{A}\hat{\mathbf{x}} \tag{3a}\] \[\mathbf{\Omega} =\Phi-\mathbf{A}\mathbf{P}\mathbf{A}^{\top},\quad\tilde{\mathbf{ z}}=\mathbb{E}[\mathbf{g}(\mathbf{x})]\] (3b) \[\Psi =\mathbb{E}[(\mathbf{x}-\hat{\mathbf{x}})(\mathbf{g}(\mathbf{x}) -\tilde{\mathbf{z}})^{\top}]\] (3c) \[\Phi =\mathbb{E}[(\mathbf{g}(\mathbf{x})-\tilde{\mathbf{z}})(\mathbf{ g}(\mathbf{x})-\tilde{\mathbf{z}})^{\top}], \tag{3d}\] where the expectations are taken w.r.t. \(p(\mathbf{x})\). The major difference from analytical linearization is that \(\mathbf{\Omega}\neq 0\), which implies that the error in the linearization is captured. Typically, the expectations in (3) are not analytically tractable and thus, practically, one often resorts to some numerical integration technique. ### _Quasi-Newton Optimization_ A general nonlinear least-squares minimization problem is given by \[\hat{\mathbf{x}}=\operatorname*{arg\,min}_{\mathbf{x}}V(\mathbf{x}),\quad V( \mathbf{x})=\frac{1}{2}r(\mathbf{x})^{\top}r(\mathbf{x}). \tag{4}\] One particular family of methods for solving these problems, is the Newton family. This family of methods essentially finds the minimizing argument of (4) by starting at an initial guess \(\mathbf{x}_{0}\) and iterating \[\mathbf{x}_{i+1}=\mathbf{x}_{i}-(\nabla^{2}V(\mathbf{x}_{i}))^{-1}\nabla V( \mathbf{x}_{i}). \tag{5}\] Here, \(\nabla^{2}V(\mathbf{x}_{i})\) and \(\nabla V(\mathbf{x}_{i})\) are the Hessian and the gradient of \(V\) evaluated at \((\mathbf{x}_{i})\), respectively. Note that convergence of the iterates typically benefit from step-size correction, see _e.g._, [12]. For nonlinear least-squares problems, the gradient and Hessian are given by \[\nabla V(\mathbf{x}) =\mathbf{J}^{\top}(\mathbf{x})r(\mathbf{x}),\quad\mathbf{J}( \mathbf{x})=\frac{dr(\mathbf{s})}{d\mathbf{s}}\bigg{|}_{\mathbf{s}=\mathbf{x}} \tag{6a}\] \[\nabla^{2}V(\mathbf{x}) =\mathbf{J}^{\top}(\mathbf{x})\mathbf{J}(\mathbf{x})+\sum_{i=1}^ {n_{r}}[r(\mathbf{x})]_{i}\nabla^{2}[r(\mathbf{x})]_{i} \tag{6b}\] where \([r(\mathbf{x})]_{i}\) is the \(i\)th component of \(r(\mathbf{x})\) and \(n_{r}\) is the dimension of \(r(\mathbf{x})\). As the Hessian of the cost function can be computationally expensive to evaluate, approximate versions of Newton's method have been developed. In particular, the Gauss-Newton method approximates the Hessian as \[\nabla^{2}V(\mathbf{x})\approx\mathbf{J}(\mathbf{x})^{\top}\mathbf{J}( \mathbf{x}), \tag{7}\] thus only requiring first-order information. As such, it is a Quasi-Newton (qn) method since it operates as a Newton method with an approximate Hessian. This approximation may be bad far from the optimum, which may affect convergence. A remedy is to either approximate the Hessian directly in some other way, or by introducing a correction term to the Gauss-Newton approximate Hessian as \[\nabla^{2}V(\mathbf{x})\approx\mathbf{J}(\mathbf{x})^{\top}\mathbf{J}( \mathbf{x})+\mathbf{T}, \tag{8}\] where \(\mathbf{T}\) is supposed to capture second-order information and can be chosen in a variety of ways, see [12] for an overview. 
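As a concrete illustration of the statistical linearization equations (3), the following Python sketch computes \(\mathbf{A}\), \(\mathbf{b}\), and \(\mathbf{\Omega}\) for a scalar \(g(x)=\sin(x)\) with respect to a Gaussian, using plain Monte Carlo estimates of the expectations; the toy nonlinearity, the sample-based integration, and all numerical values are illustrative assumptions rather than part of the letter (deterministic sigma-point rules would estimate the same quantities).

```python
import numpy as np

def statistical_linearization(g, x_hat, P, n_samples=100_000, seed=0):
    """Monte Carlo approximation of (3) for a scalar state and measurement."""
    rng = np.random.default_rng(seed)
    x = rng.normal(x_hat, np.sqrt(P), size=n_samples)   # samples from N(x_hat, P)
    z = g(x)
    z_bar = z.mean()                                     # E[g(x)], cf. (3b)
    Psi = np.mean((x - x_hat) * (z - z_bar))             # cross-covariance, cf. (3c)
    Phi = np.mean((z - z_bar) ** 2)                      # covariance of g(x), cf. (3d)
    A = Psi / P                                          # A = Psi^T P^{-1}, cf. (3a)
    b = z_bar - A * x_hat                                # b = z_bar - A x_hat, cf. (3a)
    Omega = Phi - A * P * A                              # linearization error, cf. (3b)
    return A, b, Omega

A, b, Omega = statistical_linearization(np.sin, x_hat=0.5, P=0.2)
print(A, b, Omega)   # Omega > 0: the affine model does not fit sin(x) exactly
```

Because \(g\) is nonlinear, the returned \(\mathbf{\Omega}\) is strictly positive; this is the inflated noise term that later enters the iterated updates based on statistical linearization.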
### _Quasi-Newton_ iekf The cost function for the general iekf is of the form (4) with [5] \[r(\mathbf{x})=\begin{bmatrix}\mathbf{R}^{-1/2}(\mathbf{y}_{k}-\mathbf{h}( \mathbf{x}))\\ \mathbf{P}_{k|k-1}^{-1/2}(\hat{\mathbf{x}}_{k|k-1}-\mathbf{x})\end{bmatrix}. \tag{9}\] Hence, the Jacobian \(\mathbf{J}(\mathbf{x})\) is given by \[\mathbf{J}(\mathbf{x})=-\begin{bmatrix}\mathbf{R}^{-1/2}\mathbf{H}(\mathbf{x} )\\ \mathbf{P}_{k|k-1}^{-1/2}\end{bmatrix},\text{ where }\mathbf{H}(\mathbf{x})=\frac{d \mathbf{h}(\mathbf{s})}{d\mathbf{s}}\bigg{|}_{\mathbf{s}=\mathbf{x}}. \tag{10}\] Now, using (5) and (8), the qn-ieff iterate update is given by [5] \[\mathbf{x}_{i+1} =\tilde{\mathbf{x}}+\mathbf{K}_{i}^{q}(\mathbf{y}_{k}-\mathbf{h} _{i}-\mathbf{H}_{i}\tilde{\mathbf{x}}_{i})-\mathbf{S}_{i}^{q}\mathbf{T}_{i} \tilde{\mathbf{x}}_{i} \tag{11a}\] \[\mathbf{P}_{i+1} =\mathbf{P}-\mathbf{P}\mathbf{H}_{i}^{\top}(\mathbf{H}_{i} \mathbf{P}\mathbf{H}_{i}^{\top}+\mathbf{R})^{-1}\mathbf{H}_{i}\mathbf{P}\] (11b) \[\mathbf{S}_{i}^{q} \triangleq\big{(}\mathbf{H}_{i}^{\top}\mathbf{R}_{i}^{-1}\mathbf{ H}_{i}+\mathbf{P}^{-1}+\mathbf{T}_{i}\big{)}^{-1}\] (11c) \[\mathbf{K}_{i}^{q} \triangleq\mathbf{S}_{i}^{q}\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}, \tag{11d}\] with simplified notation \(\tilde{\mathbf{x}}_{i}=\hat{\mathbf{x}}-\mathbf{x}_{i}\), \(\mathbf{h}_{i}=\mathbf{h}(\mathbf{x}_{i})\), \(\mathbf{H}_{i}=\mathbf{H}(\mathbf{x}_{i})\), and \(\mathbf{P}=\mathbf{P}_{k|k-1}\). Further, \(\mathbf{T}_{i}\) is the current Hessian correction which can be chosen freely, see [5]. ### _Iterated Posterior Linearization Filter_ The iplf was initially developed through minimizing the Kullback-Leibler (kl) divergence between the true marginal posterior \(p(\mathbf{x}_{k}|\mathbf{y}_{1:k})\) and an approximation \(q(\mathbf{x}_{k}|\mathbf{y}_{1:k})\), _i.e.,_ \[q(\mathbf{x}_{k}|\mathbf{y}_{1:k})=\min_{q}D_{\text{kl}}\left(p\|q\right), \tag{12}\] where \[D_{\text{kl}}\left(p\|q\right)=\int p(\mathbf{x}_{k}|\mathbf{y}_{1:k})\log\frac{ p(\mathbf{x}_{k}|\mathbf{y}_{1:k})}{q(\mathbf{x}_{k}|\mathbf{y}_{1:k})}d\mathbf{x}_{k}.\] In particular, the divergence is used to find an "optimal" affine approximation of the observation model, subsequently used as the "surrogate" measurement model. However, as the objective (12) is not analytically tractable, and also requires access to the true posterior \(p(\mathbf{x}_{k}|\mathbf{y}_{1:k})\), the iplf approximately minimizes this by starting at some initial approximate posterior \(q_{0}(\mathbf{x}_{k}|\mathbf{y}_{1:k})\). The approximate posterior is then iteratively refined until a stopping condition is met. Essentially, the recursions are similar to an iekf using a statistically linearized model. By adapting the notation of [13], the iterate update of the iplf can be written as \[\mathbf{x}_{i+1}^{\text{\sc nr}} =\hat{\mathbf{x}}+\mathbf{K}_{i}(\mathbf{y}_{k}-\mathbf{h}_{i}- \mathbf{H}_{i}\tilde{\mathbf{x}}_{i}) \tag{13a}\] \[\mathbf{P}_{i+1} =\mathbf{P}-\mathbf{K}_{i}\mathbf{S}_{i}\mathbf{K}_{i}^{\top}\] (13b) \[\mathbf{K}_{i} \triangleq\mathbf{P}_{i}\mathbf{H}_{i}^{\top}\mathbf{S}_{i}\] (13c) \[\mathbf{S}_{i} \triangleq\left(\mathbf{H}_{i}\mathbf{P}_{i}\mathbf{H}_{i}^{\top }+\mathbf{R}+\mathbf{\Omega}_{i}\right)^{-1}, \tag{13d}\] where \(\mathbf{\Omega}_{i},~{}\mathbf{h}_{i}\) and \(\mathbf{H}_{i}\) are found through statistical linearization of \(\mathbf{h}\). Note that \(\mathbf{h}_{i}=\mathbf{b}+\mathbf{A}\hat{\mathbf{x}}\) and \(\mathbf{H}_{i}=\mathbf{A}\), see Section II-A. 
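To make the structure of (13) concrete, the following Python sketch runs a few iplf iterations for a scalar state and measurement, re-linearizing \(\mathbf{h}\) by Monte Carlo statistical linearization at each step. The toy model \(h(x)=\sin(x)\), the helper-function names, and the fixed iteration count are illustrative assumptions only, and the iterates are initialised at the prior, as described next.

```python
import numpy as np

def slr(h, m, P, n=100_000, seed=0):
    """Scalar Monte Carlo statistical linearization of h w.r.t. N(m, P), cf. (3)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(m, np.sqrt(P), size=n)
    z = h(x)
    A = np.mean((x - m) * (z - z.mean())) / P
    b = z.mean() - A * m
    Omega = np.var(z) - A * P * A
    return A, b, Omega

def iplf_update(y, x_prior, P_prior, h, R, n_iter=5):
    """One IPLF-style measurement update in the spirit of (13), scalar case."""
    x_i, P_i = x_prior, P_prior                      # initialise at the prior
    for _ in range(n_iter):
        H, b, Omega = slr(h, x_i, P_i)               # H_i, b_i, Omega_i around q_i
        h_i = b + H * x_i                            # SLR prediction of h at x_i
        S = H * P_i * H + R + Omega                  # innovation covariance; its inverse is S_i in (13d)
        K = P_i * H / S                              # gain, cf. (13c)
        x_i = x_prior + K * (y - h_i - H * (x_prior - x_i))   # cf. (13a)
        P_i = P_prior - K * S * K                    # posterior covariance, cf. (13b)
    return x_i, P_i

x_post, P_post = iplf_update(y=0.9, x_prior=0.3, P_prior=0.5, h=np.sin, R=0.01)
print(x_post, P_post)
```

If \(\mathbf{\Omega}_{i}=\mathbf{0}\) and a single iteration is used, the sketch collapses to a standard Kalman-type measurement update with the statistically linearized model.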
The iterates are initialized as \(\mathbf{x}_{i}=\hat{\mathbf{x}},~{}\mathbf{P}_{i}=\mathbf{P}\) and the updates are then iterated until the approximate posterior \(q_{i}(\mathbf{x}_{k}|\mathbf{y}_{1:k})\) does not significantly change, as measured by \[D_{\text{\sc kt}}(q_{i+1}\|q_{i}).\] Note that by choosing the covariance update (13b) as \(\mathbf{P}_{i+1}=\mathbf{P}\) until the last iteration, the iukf presented in [9] is obtained. We will now consider two relationships between the iplf update (13a) and the qn-iekf update (11a). ## III Exact Quasi-Newton Next, we show that the iplf can be viewed as an _exact_ qn method, _i.e._, that it corresponds to qn with a particular choice of Hessian correction \(\mathbf{T}_{i}\). More precisely, we find a Hessian correction \(\mathbf{T}_{i}\) such that the qn-iekf update (11a) is equal to the iplf update (13a). First, let \(\epsilon_{i}\triangleq\mathbf{y}_{k}-\mathbf{h}_{i}-\mathbf{H}_{i}\tilde{ \mathbf{x}}_{i}\). Now, setting (11a) equal to (13a) yields \[\mathbf{K}_{i}^{q}\epsilon_{i}-\mathbf{S}_{i}^{q}\mathbf{T}_{i} \tilde{\mathbf{x}}_{i}=\mathbf{K}_{i}\epsilon_{i}\iff\] \[\mathbf{S}_{i}^{q}\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}\epsilon_{i} -\mathbf{S}_{i}^{q}\mathbf{T}_{i}\tilde{\mathbf{x}}_{i}=\mathbf{K}_{i} \epsilon_{i}\iff\] \[\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}\epsilon_{i}-\mathbf{T}_{i} \tilde{\mathbf{x}}_{i}=(\mathbf{S}_{i}^{q})^{-1}\mathbf{K}_{i}\epsilon_{i}\iff\] \[\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}\epsilon_{i}-\mathbf{T}_{i} \tilde{\mathbf{x}}_{i}=(\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}\mathbf{H}_{i}+ \mathbf{P}^{-1}+\mathbf{T}_{i})\mathbf{K}_{i}\epsilon_{i}\iff\] \[\underbrace{(\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}\mathbf{-}\big{(} \mathbf{H}_{i}^{\top}\mathbf{R}^{-1}\mathbf{H}_{i}\mathbf{+}\mathbf{P}^{-1} \big{)}\mathbf{K}_{i})}_{(*)}\epsilon_{i}=\mathbf{T}_{i}(\tilde{\mathbf{x}}_{i} \mathbf{+}\mathbf{K}_{i}\epsilon_{i}).\] Now, note that \((*)\) can be written as \[(*) =(\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}-\left(\mathbf{H}_{i}^{\top }\mathbf{R}^{-1}\mathbf{H}_{i}+\mathbf{P}^{-1}\right)\mathbf{K}_{i})\] \[=(\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}-\mathbf{H}_{i}^{\top} \mathbf{R}^{-1}\mathbf{H}_{i}\mathbf{P}_{i}\mathbf{H}_{i}^{\top}\mathbf{S}_{i}- \mathbf{P}^{-1}\mathbf{P}_{i}\mathbf{H}_{i}^{\top}\mathbf{S}_{i})\] \[=(\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}\left(\mathbf{S}_{i}^{-1}- \mathbf{H}_{i}\mathbf{P}_{i}\mathbf{H}_{i}^{\top}\right)\mathbf{S}_{i}- \mathbf{P}^{-1}\mathbf{P}_{i}\mathbf{H}_{i}^{\top}\mathbf{S}_{i})\] \[=(\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}(\mathbf{R}+\mathbf{\Omega }_{i})\mathbf{S}_{i}-\mathbf{P}^{-1}\mathbf{P}_{i}\mathbf{H}_{i}^{\top}\mathbf{S }_{i})\] \[=(\mathbf{H}_{i}^{\top}\mathbf{S}_{i}+\mathbf{H}_{i}^{\top} \mathbf{R}^{-1}\mathbf{\Omega}_{i}\mathbf{S}_{i}-\mathbf{P}^{-1}\mathbf{P}_{i} \mathbf{H}_{i}^{\top}\mathbf{S}_{i})\] \[=(\mathbf{H}_{i}^{\top}\mathbf{S}_{i}+\mathbf{H}_{i}^{\top} \mathbf{R}^{-1}\mathbf{\Omega}_{i}\mathbf{S}_{i}-\mathbf{P}^{-1}\mathbf{P}_{i} \mathbf{H}_{i}^{\top}\mathbf{S}_{i})\] \[=(\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}\mathbf{\Omega}_{i}+\left( \mathbf{I}-\mathbf{P}^{-1}\mathbf{P}_{i}\right)\mathbf{H}_{i}^{\top})\mathbf{S }_{i}.\] Thus, we have \[\mathbf{T}_{i}(\tilde{\mathbf{x}}_{i}+\mathbf{K}_{i}\epsilon_{i})=(\mathbf{H}_ ^{\top}\mathbf{R}^{-1}\mathbf{\Omega}_{i}+\left(\mathbf{I}-\mathbf{P}^{-1} \mathbf{P}_{i}\right)\mathbf{H}_{i}^{\top})\mathbf{S}_{i}\epsilon_{i}.\] Letting \[\mathbf{s}_{i} \triangleq\tilde{\mathbf{x}}_{i}+\mathbf{K}_{i}\epsilon_{i}= \mathbf{x}_{i+1}^{\text{\sc 
nr}}-\mathbf{x}_{i} \tag{14a}\] \[\mathbf{p}_{i} \triangleq(\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}\mathbf{\Omega}_{i}+ \left(\mathbf{I}-\mathbf{P}^{-1}\mathbf{P}_{i}\right)\mathbf{H}_{i}^{\top}) \mathbf{S}_{i}\epsilon_{i}, \tag{14b}\] we have \[\mathbf{T}_{i}\mathbf{s}_{i}=\mathbf{p}_{i}. \tag{15}\] which is similar to the _secant equation_, see _e.g._[12, p. 24]. Hence, we can follow a similar reasoning and procedure to find a solution. That is, we impose that \(\mathbf{T}_{i}\) be symmetric and that it is "close" to \(\mathbf{T}_{i-1}\), in some sense. Thus, we find \(\mathbf{T}_{i}\) by minimizing \[\min_{\mathbf{T}} \|\mathbf{T}-\mathbf{T}_{i-1}\|_{\mathbf{W}}\] (16) subject to \[\mathbf{T}=\mathbf{T}^{\top},\quad\mathbf{T}\mathbf{s}_{i}= \mathbf{p}_{i}.\] Here, \(\|\mathbf{A}\|_{\mathbf{W}}=\|\mathbf{W}^{1/2}\mathbf{A}\mathbf{W}^{1/2}\|\) and \(\mathbf{W}\) is a nonsingular symmetric matrix. Now, let \(\mathbf{s}_{i},\mathbf{p}_{i}\) be in \(\mathbb{R}^{n}\). Then, [14, Theorem 7.3] states that for any \(\mathbf{c}\in\mathbb{R}^{n}\) such that \(\mathbf{c}^{\top}\mathbf{s}_{i}>0\) and \(\mathbf{W}\mathbf{c}=\mathbf{W}^{-1}\mathbf{s}_{i}\), the solution to (16) is given by \[\mathbf{T}_{i}=\mathbf{T}_{i-1}+\frac{(\mathbf{p}_{i}-\mathbf{T} _{i-1}\mathbf{s}_{i})\mathbf{c}^{\top}+\mathbf{c}(\mathbf{p}_{i}-\mathbf{T}_{i- 1}\mathbf{s}_{i})^{\top}}{\mathbf{c}^{\top}\mathbf{s}_{i}}\\ -\frac{(\mathbf{p}_{i}-\mathbf{T}_{i-1}\mathbf{s}_{i})^{\top} \mathbf{s}_{i}}{(\mathbf{c}^{\top}\mathbf{s}_{i})^{2}}\mathbf{c}\mathbf{c}^{ \top}. \tag{17}\] In particular, we choose \(\mathbf{c}=\mathbf{s}_{i}\) which guarantees \(\mathbf{c}^{\top}\mathbf{s}_{i}=\mathbf{s}_{i}^{\top}\mathbf{s}_{i}>0\) as long as \(\mathbf{s}_{i}\neq 0\). Note that (18). Inspecting the components of (18), it is "weighted" with \(\mathbf{R}^{-1}\boldsymbol{\Omega}_{i}\mathbf{S}_{i}\). Hence, with decreasing measurement uncertainty \(\mathbf{R}\), the Hessian correction grows "larger", as the measurement model is more precise. Similarly, as the linearization error \(\boldsymbol{\Omega}_{i}\) grows, the Hessian correction also increases, which makes sense as it indicates that the measurement function \(\mathbf{h}\) is highly nonlinear and the Hessian approximation (7) is most likely poor and needs more correction. Also, note that if \(\boldsymbol{\Omega}_{i}=\boldsymbol{0}\), _i.e._, if the model is completely linear at the current iterate, the Hessian is only corrected according to the iterate difference \(\mathbf{s}_{i}\). Lastly, as the innovation covariance \(\mathbf{S}_{i}^{-1}\) decreases, the correction grows, essentially also indicating that the measurement carries a lot of information that can be exploited. This interpretation should approximately hold for the iplf as well, as usually \(\mathbf{P}^{-1}\mathbf{P}_{i}\approx\mathbf{I}\). However, a detailed analysis of the exact behavior of the iplf is non-trivial, as \(\mathbf{H}_{i},\boldsymbol{\Omega}_{i}\) and \(\mathbf{S}_{i}\) all depend on the previous iterate \(\mathbf{P}_{i}\). ## IV Approximate Quasi-Newton As an alternative to the exact view in Section III, we can also view the iplf as an _approximate_ qn method. Essentially, it boils down to modifying (13a) such that it approximately takes the form (11a). 
Start with (13d) and write \[\mathbf{S}_{i}=\big{(}\mathbf{H}_{i}\mathbf{P}_{i}\mathbf{H}_{i }^{\top}+\mathbf{R}+\boldsymbol{\Omega}_{i}\big{)}^{-1}=(\mathbf{R}+ \boldsymbol{\Omega}_{i})^{-1}\big{(}\mathbf{I}-\\ \mathbf{H}_{i}(\mathbf{H}_{i}^{\top}(\mathbf{R}+\boldsymbol{ \Omega}_{i})^{-1}\mathbf{H}_{i}+\mathbf{P}_{i}^{-1})^{-1}\mathbf{H}_{i}^{\top }(\mathbf{R}+\boldsymbol{\Omega}_{i})^{-1}\big{)}.\] Now, note that \[(\mathbf{H}_{i}^{\top}(\mathbf{R}+\boldsymbol{\Omega}_{i})^{-1} \mathbf{H}_{i}+\mathbf{P}_{i}^{-1})^{-1}\\ =(\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}\mathbf{H}_{i}+\mathbf{P}_{ i}^{-1}-\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}(\mathbf{R}^{-1}+\boldsymbol{ \Omega}_{i}^{-1})^{-1}\mathbf{R}^{-1}\mathbf{H}_{i})^{-1}\\ =(\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}\mathbf{H}_{i}+\mathbf{P}_ {i}^{-1}+\tilde{\mathbf{T}}_{i})^{-1}\triangleq\tilde{\mathbf{S}}_{i}^{q},\] with \[\tilde{\mathbf{T}}_{i} \triangleq-\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}(\mathbf{R}^{-1}+ \boldsymbol{\Omega}_{i}^{-1})^{-1}\mathbf{R}^{-1}\mathbf{H}_{i}\] \[=-\mathbf{H}_{i}^{\top}\left(\mathbf{R}^{-1}-(\mathbf{R}+ \boldsymbol{\Omega}_{i})^{-1}\right)\mathbf{H}_{i}.\] Plugging this into (13c) yields \[\mathbf{P}_{i}\mathbf{H}_{i}^{\top}\left(\mathbf{I}-(\mathbf{R}+ \boldsymbol{\Omega}_{i})^{-1}\mathbf{H}_{i}\tilde{\mathbf{S}}_{i}^{q}\mathbf{ H}_{i}^{\top}\right)(\mathbf{R}+\boldsymbol{\Omega}_{i})^{-1}\\ =\mathbf{P}_{i}\left((\tilde{\mathbf{S}}_{i}^{q})^{-1}-\mathbf{H}_ {i}^{\top}(\mathbf{R}+\boldsymbol{\Omega}_{i})^{-1}\mathbf{H}_{i}\right) \tilde{\mathbf{S}}_{i}^{q}\mathbf{H}_{i}^{\top}(\mathbf{R}+\boldsymbol{ \Omega}_{i})^{-1}\\ =\tilde{\mathbf{S}}_{i}^{q}\mathbf{H}_{i}^{\top}(\mathbf{R}+ \boldsymbol{\Omega}_{i})^{-1}\\ =\tilde{\mathbf{S}}_{i}^{q}\mathbf{H}_{i}^{\top}(\mathbf{R}+ \boldsymbol{\Omega}_{i})^{-1}\\ =\tilde{\mathbf{S}}_{i}^{q}\mathbf{H}_{i}^{\top}\mathbf{R}^{-1} \tilde{\mathbf{S}}_{i}^{q}\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}(\mathbf{R}^{-1 }+\boldsymbol{\Omega}_{i}^{-1})^{-1}\mathbf{R}^{-1}.\] Hence, with \(\epsilon_{i}=\mathbf{y}_{k}-\mathbf{h}_{i}-\mathbf{H}_{i}\tilde{\mathbf{x}}_{i}\), the iterate update becomes \[\mathbf{x}_{i+1}=\hat{\mathbf{x}}+\tilde{\mathbf{K}}_{i}^{q} \epsilon_{i}-\tilde{\mathbf{S}}_{i}^{q}\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}( \mathbf{R}^{-1}+\boldsymbol{\Omega}_{i}^{-1})^{-1}\mathbf{R}^{-1}\epsilon_{i}\] \[=\hat{\mathbf{x}}+\tilde{\mathbf{K}}_{i}^{q}\epsilon_{i}-\tilde{ \mathbf{S}}_{i}^{q}\overbrace{(-\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}(\mathbf{ R}^{-1}+\boldsymbol{\Omega}_{i}^{-1})^{-1}\mathbf{R}^{-1}\mathbf{H}_{i})}^{ \hat{\mathbf{T}}_{i}}\tilde{\mathbf{x}}_{i}\] \[-\tilde{\mathbf{S}}_{i}^{q}\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}( \mathbf{R}^{-1}+\boldsymbol{\Omega}_{i}^{-1})^{-1}\mathbf{R}^{-1}(\mathbf{y}_{ k}-\mathbf{h}_{i})\] \[=\hat{\mathbf{x}}+\tilde{\mathbf{K}}_{i}^{q}\epsilon_{i}-\tilde{ \mathbf{S}}_{i}^{q}\tilde{\mathbf{T}}_{i}\tilde{\mathbf{x}}_{i}-\underbrace{ \tilde{\mathbf{S}}_{i}^{q}\mathbf{H}_{i}^{\top}(\mathbf{R}^{-1}-(\mathbf{R}+ \boldsymbol{\Omega}_{i})^{-1})(\mathbf{y}_{k}-\mathbf{h}_{i})}_{\triangleq \Delta_{i}}.\] Hence, the iplf can be interpreted as performing modified Quasi-Newton with a specific choice of Hessian correction \(\mathbf{T}_{i}\) and an additional correction term in the iterate update. The additional term \(\Delta_{i}\) can be viewed as a correction of the iterate based on a \(0\)th order Taylor expansion of the measurement model at the current iterate \(\mathbf{x}_{i}\). 
The step is further weighted by \(\mathbf{R}^{-1}-(\mathbf{R}+\boldsymbol{\Omega}_{i})^{-1}\), which can be interpreted as a measure of how close to linear the model is. Particularly, with \(\boldsymbol{\Omega}_{i}=\boldsymbol{0}\), the model is completely linear _at the current iterate_\(\mathbf{x}_{i}\), and \(\Delta_{i}\) and \(\tilde{\mathbf{T}}_{i}\) thus collapse to \(\boldsymbol{0}\). This is, of course, completely natural, as the Hessian of a linear model is identically \(\boldsymbol{0}\). This means that there is no additional information to extract from the curvature of the measurement model at, and around, the current iterate \(\mathbf{x}_{i}\). Further, the iterate update collapses to that of the standard Kalman filter, a desirable property of nonlinear filters applied to linear models. On the other hand, if the model is highly nonlinear, such that \(\boldsymbol{\Omega}_{i}\) is much larger than \(\mathbf{R}\), the weighting becomes \(\mathbf{R}^{-1}-(\mathbf{R}+\boldsymbol{\Omega}_{i})^{-1}\approx\mathbf{R}^{-1}\) which yields \[\tilde{\mathbf{T}}_{i}=-\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}\mathbf{H}_{i}, \quad\Delta_{i}=\tilde{\mathbf{S}}_{i}^{q}\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}( \mathbf{y}_{k}-\mathbf{h}_{i}).\] The terms in the iterate update related to \(\tilde{\mathbf{T}}_{i}\) and \(\Delta_{i}\) become \[-\tilde{\mathbf{S}}_{i}^{q}(-\mathbf{H}_{i}^{\top}\mathbf{R}^{-1} \mathbf{H}_{i})\tilde{\mathbf{x}}_{i}-\tilde{\mathbf{S}}_{i}^{q}\mathbf{H}_{i}^{ \top}\mathbf{R}^{-1}(\mathbf{y}_{k}-\mathbf{h}_{i})\\ =-\tilde{\mathbf{S}}_{i}^{q}\mathbf{H}_{i}^{\top}\mathbf{R}^{-1}( \mathbf{y}_{k}-\mathbf{h}_{i}-\mathbf{H}_{i}\tilde{\mathbf{x}}_{i})=-\tilde{ \mathbf{K}}_{i}^{q}\epsilon_{i}.\] Hence, the iterate update collapses to \(\mathbf{x}_{i+1}=\hat{\mathbf{x}}\) which means that when the model is highly nonlinear, the iplf (and the iukf/lckf) will essentially avoid updating the iterate. This is reasonable, as it means that an approximate linear (affine) model is not an appropriate choice. This also means that iterated filters based on statistical linearization are automatically "cautious" in highly nonlinear regions of the measurement model, a feature not present in the standard iekf for instance. In particular, away from the limiting cases, these algorithms still adapt the step length depending on how well the approximate linear (affine) model approximates the true model. Further, this means that there is already a built-in Hessian correction in the statistically linearized filters and any qn versions thereof should take this into account when designing their respective Hessian approximations. ## V Conclusion In this letter, we have shown that iterated filtering algorithms based on statistical linearization, such as the iukf and the iplf, can be interpreted as qn methods with a particular choice of Hessian correction. Through this connection, we hope to enable a richer understanding of the properties of the iukf, iplf,
2305.19911
Neuron to Graph: Interpreting Language Model Neurons at Scale
Advances in Large Language Models (LLMs) have led to remarkable capabilities, yet their inner mechanisms remain largely unknown. To understand these models, we need to unravel the functions of individual neurons and their contribution to the network. This paper introduces a novel automated approach designed to scale interpretability techniques across a vast array of neurons within LLMs, to make them more interpretable and ultimately safe. Conventional methods require examination of examples with strong neuron activation and manual identification of patterns to decipher the concepts a neuron responds to. We propose Neuron to Graph (N2G), an innovative tool that automatically extracts a neuron's behaviour from the dataset it was trained on and translates it into an interpretable graph. N2G uses truncation and saliency methods to emphasise only the most pertinent tokens to a neuron while enriching dataset examples with diverse samples to better encompass the full spectrum of neuron behaviour. These graphs can be visualised to aid researchers' manual interpretation, and can generate token activations on text for automatic validation by comparison with the neuron's ground truth activations, which we use to show that the model is better at predicting neuron activation than two baseline methods. We also demonstrate how the generated graph representations can be flexibly used to facilitate further automation of interpretability research, by searching for neurons with particular properties, or programmatically comparing neurons to each other to identify similar neurons. Our method easily scales to build graph representations for all neurons in a 6-layer Transformer model using a single Tesla T4 GPU, allowing for wide usability. We release the code and instructions for use at https://github.com/alexjfoote/Neuron2Graph.
Alex Foote, Neel Nanda, Esben Kran, Ioannis Konstas, Shay Cohen, Fazl Barez
2023-05-31T14:44:33Z
http://arxiv.org/abs/2305.19911v1
# Neuron to Graph: Interpreting Language Model Neurons at Scale ###### Abstract Advances in Large Language Models (LLMs) have led to remarkable capabilities, yet their inner mechanisms remain largely unknown. To understand these models, we need to unravel the functions of individual neurons and their contribution to the network. This paper introduces a novel automated approach designed to scale interpretability techniques across a vast array of neurons within LLMs, to make them more interpretable and ultimately safe. Conventional methods require examination of examples with strong neuron activation and manual identification of patterns to decipher the concepts a neuron responds to. We propose Neuron to Graph (N2G), an innovative tool that automatically extracts a neuron's behaviour from the dataset it was trained on and translates it into an interpretable graph. N2G uses truncation and saliency methods to emphasise only the most pertinent tokens to a neuron while enriching dataset examples with diverse samples to better encompass the full spectrum of neuron behaviour. These graphs can be visualised to aid researchers' manual interpretation, and can generate token activations on text for automatic validation by comparison with the neuron's ground truth activations, which we use to show that the model is better at predicting neuron activation than two baseline methods. We also demonstrate how the generated graph representations can be flexibly used to facilitate further automation of interpretability research, by searching for neurons with particular properties, or programmatically comparing neurons to each other to identify similar neurons. Our method easily scales to build graph representations for all neurons in a 6-layer Transformer model using a single Tesla T4 GPU, allowing for wide usability. We release the code and instructions for use at [https://github.com/alexjfoote/Neuron2Graph](https://github.com/alexjfoote/Neuron2Graph). ## 1 Introduction Interpretability of machine learning models is an active research topic [17; 1] and can have a wide range of applications from bias detection [32] to autonomous vehicles [2] and Large Language Models (LLMs; [11]). The growing sub-field of mechanistic interpretability aims to understand the behaviour of individual neurons within models as well as how they combine into larger circuits of neurons that perform a particular function [22; 20; 16], with the ultimate aim of decomposing a model into interpretable components and using this to ensure model safety. Feature visualisation [23] is a tool for interpreting neurons in image models, whereby a synthetic input image is optimised to understand a target neuron. This has significantly aided work interpreting vision models. For example, to identify multimodal neurons which respond to abstract concepts [16], and to catalogue the behaviour of all early neurons in Inceptionv1 [21]. Similar interpretability tools for understanding neurons in LLMs are lacking. Currently, researchers often look at dataset examples containing tokens upon which a neuron strongly activates and investigate common elements and themes across examples to give some insight into neuron behaviour [11; 15]. However, this can give the illusion of interpretability when real behaviour is more complex [5], and measuring the degree to which these insights are correct is challenging. Additionally, inspecting individual neurons by hand is time-consuming and unlikely to scale to entire models. 
To overcome these challenges, we present **Neuron to Graph (N2G)**, which automatically converts a target neuron within an LLM to an interpretable graph that captures a neuron's behaviour. Our method takes maximally activating dataset examples for a target neuron, prunes them to remove irrelevant context, identifies the tokens which are important for neuron activation, and creates additional examples by replacing the important tokens with likely substitutes using DistilBERT [28]. These processed examples are the given as input to the graph builder, which removes unimportant tokens and creates a condensed graph representation. The graph can be visualised to facilitate understanding the neuron's behaviour, as well as used to process text and produce predicted token activations. This allows us to measure the correspondence between the target neuron's activations and the graph's structure, which provides a direct measurement of the degree to which a graph captures the neuron's behaviour. Once built, the graphs can be searched to quickly identify neurons with particular properties, which could help facilitate interpretability research. N2G provides a promising research direction for understanding the function of individual neurons in language models by building interpretable representations for every neuron in a model. Our main **contributions** are four-fold: * An input pruning method which removes any context from an input example that is unnecessary for neuron activation. * Token saliency computation to identify the importance of each context token for neuron activation. * An augmentation technique for pruned inputs to better explore and understand neuron behaviour by generating varied inputs through predicted token substitutions. * A graph-building process that results in a graph representation for each neuron. The quality of the representation can be automatically measured by comparing the output of the representation to the real neuron, the graph can be visualised for human interpretation, and the representations can be searched and compared to help automate parts of interpretability research. ## 2 Related Work Neuron analysis is a branch of natural language processing (NLP) research that investigates the structure and function of neurons within an LLM. It plays an important role in understanding the inner workings of a model and has the potential to enable a mechanistic understanding of large models. Prior work in neuron analysis has identified the presence of neurons correlated with specific concepts [26]. For instance, [8] explored neurons which specialised in linguistic and non-linguistic concepts in large language models, and [30] evaluated neurons which handle concepts such as causation in language. The existence of similar concepts embedded within models can also be found across different architectures. [33] and [29] examined neuron distributions across models and found that different architectures have similar localised representations of information, even when [10] used a combination of neuron analysis and visualisation techniques to compare transformer and recurrent models, finding that the transformer produces fewer neurons but exhibits stronger dynamics. There are various methods of identifying concept neurons [15]. In [3], a method was proposed for identifying important neurons across models by analyzing correlations between neurons from different models. 
In contrast, [7] developed a method to identify concept neurons in transformer feed-forward networks by computing the contribution of each neuron to the knowledge prediction. In contrast, we focus on identifying neurons using highly activating dataset examples. [18] demonstrated how the co-variance of neuron activations on a dataset can be used to distinguish neurons that are related to a particular concept. [31] also used neuron activations to train a probe which automatically evaluates language models for neurons correlated to linguistic concepts. One limitation of using highly activating dataset examples is that the accurate identification of concepts correlated with a neuron is limited by the dataset itself. A neuron may represent several concepts, and [6] emphasise the importance of conducting interpretability research on varied datasets, in order to avoid the "interpretability illusion", in which neurons that show consistent patterns of activation in one dataset activate on different concepts in another. [25] also showed the limitations of datasets in concept neuron identification. They demonstrated that generating synthetic language inputs that maximise the activations of a neuron surpasses naive search on a corpus. Researchers also used GPT-4 to simulate neurons and predict neuron activation, and then again use GPT-4 to explain the neuron behaviour [4]. This method represents an important step towards scalable neuron interpretability, providing the ability to automatically generate explanations for all neurons in a Language Model. However, whilst a natural language explanation can be very useful, it does not enable further automated analysis in the same way a more structured representation can. For example, searching explanations requires complex semantic search compared to simple syntactic matching, and similarly comparing the explanations for multiple neurons is more challenging. ## 3 Methodology In this section, we first discuss the model architecture we seek to create neuron interpretations for (SS3.1) and then discuss our algorithm for creating these interpretations (SS3.2). ### Model Architecture To test our algorithm, we analyse neurons of a SoLU model [11], an auto-regressive model from the Transformer family. This model uses the SoLU activation function, which pushes neurons to be monosemantic by penalising many neurons in a layer from firing at once. SoLU models have been shown to have a higher prevalence of neurons that represent a single feature, and are therefore more easily interpretable. They therefore provide an ideal test-bed for our model. Similarly to other auto-regressive Transformer models, the SoLU model implementation we use [19] includes multi-layered perceptron (MLP) layers within the Transformer blocks that make up the model. The model has six blocks, each with one MLP layer containing \(3072\) neurons, and was trained on the Pile [14]. We denote the set of neurons from all MLP layers by \(\mathcal{N}\), where each element in \(\mathcal{N}\) is \(n_{\ell j}\), indexing the \(j\)th neuron in layer \(\ell\) of the SoLU model. During its feed-forward step on a set of tokens, \(x=x_{1}\cdots x_{n}\), a given neuron \(n_{\ell j}\) generates an activation over all tokens \(x_{1}\cdots x_{n}\). We denote by \(a(i,x,\ell,j)\) the function that returns the activation of neuron \(j\) in layer \(\ell\) on input \(x\) for the \(i\)th token \(x_{i}\). We denote by \(a(\cdot,x,\ell,j)\) to be \(a(|x|,x,\ell,j)\). 
For integers \(c,d\), we denote by \(x_{c:d}\) the substring \(x_{c}x_{c+1}\cdots x_{d}\). Given a fixed neuron \(n_{\ell j}\in\mathcal{N}\) that we are seeking to create an interpretation for, we obtain the maximally activating dataset examples from the Neuroscience resource [19]. This resource provides the top 20 most activating examples from the training dataset for each neuron in the model. Each dataset example is a 1024-token string, and we refer to the final set of activating dataset examples for any particular neuron as \(\mathcal{X}=\{x^{(1)},\ldots,x^{(m)}\}\).

### N2G: Neuron to Graph Algorithm

For each neuron in the model we automatically build an interpretable representation that aims to capture the target neuron's behaviour. Our algorithm is inspired by the notion that the role of a neuron, and hence its possible interpretation, can be understood by examining the cases in which its activation levels are high. We therefore aim to expand the list of sequences in \(\mathcal{X}\) to related sequences that also generate high activations for the target neuron \(n_{\ell j}\). This expanded list of sequences is then compiled into a lattice (a form of directed acyclic graph) where each path in this lattice is a _minimally_ representative token sequence which results in high activation for the neuron. We present pseudocode for the algorithm in the Appendix.

Figure 1: **Overall architecture of N2G. Activations of the target neuron on the dataset examples are retrieved (neuron and activating tokens in red). Prompts are pruned and the importance of each token for neuron activation is measured (important tokens in blue). Pruned prompts are augmented by replacing important tokens with high-probability substitutes using DistilBERT. The augmented set of prompts is converted to a graph. The output graph is a real example which activates on the token “except” when preceded by any of the other tokens.**

#### Pruning

The algorithm begins with a pruning step, which aims to extract the key substring of each maximally activating dataset example. More formally, for each \(x^{(i)}\in\mathcal{X}\) it identifies the pivot token index:

\[e_{i}=\arg\max_{k}a(k,x^{(i)},\ell,j). \tag{1}\]

This pivot token has the highest activation among all tokens in \(x^{(i)}\) for the target neuron \(n_{\ell j}\), and is therefore assumed to be the representative token for interpreting \(n_{\ell j}\). Given this pivot token, we want to find the shortest prior context required to activate the neuron. Following the identification of \(e_{i}\), we find a minimal subsequence of \(x^{(i)}\) that ends in position \(e_{i}\) and starts in position \(s_{i}\) such that:

\[\frac{a(e_{i},x^{(i)}_{s_{i}}\cdots x^{(i)}_{e_{i}},\ell,j)}{a(e_{i},x^{(i)},\ell,j)}\geq 0.5. \tag{2}\]

Concretely, we iteratively add tokens to the left of the pivot token, each time re-calculating the activation on the pivot token in the new subsequence, until it is at least half\({}^{2}\) of the activation on the pivot token in the full sequence.

Footnote 2: We used empirical guidance to choose this threshold.

The purpose of this pruning is to remove extraneous information that is irrelevant for neuron activation, which facilitates the later stages of the algorithm. After pruning, we end with a set of minimal subsequences \(\mathcal{Y}=\{y^{(1)},\ldots,y^{(m)}\}\), where each one is generated from a single element in \(\mathcal{X}\), \(y^{(i)}=x^{(i)}_{s_{i}:e_{i}}\).
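To make the pruning step concrete, the following minimal sketch (not the authors' implementation) grows the prior context of the pivot token until the ratio in Eq. (2) is met. The `neuron_activations` function is a stand-in for a forward pass of the target model that returns the target neuron's activation at every token position; the toy activation function at the bottom exists only to make the example runnable.

```python
from typing import Callable, List, Tuple

def prune_example(
    tokens: List[str],
    neuron_activations: Callable[[List[str]], List[float]],
    ratio: float = 0.5,
) -> Tuple[List[str], int]:
    """Return the shortest subsequence ending at the pivot whose pivot activation
    is at least `ratio` times the full-sequence pivot activation (Eqs. 1-2)."""
    full_acts = neuron_activations(tokens)
    pivot = max(range(len(tokens)), key=lambda k: full_acts[k])   # e_i in Eq. (1)
    target = ratio * full_acts[pivot]

    # Grow the prior context one token at a time, starting from the pivot alone.
    for start in range(pivot, -1, -1):
        sub = tokens[start : pivot + 1]
        if neuron_activations(sub)[-1] >= target:   # activation on the pivot token
            return sub, pivot
    return tokens[: pivot + 1], pivot               # defensive fallback

# Toy stand-in for the real model: the neuron "fires" on "except" after "everyone".
def toy_activations(tokens: List[str]) -> List[float]:
    return [1.0 if t == "except" and i > 0 and tokens[i - 1] == "everyone" else 0.1
            for i, t in enumerate(tokens)]

pruned, pivot_idx = prune_example(["we", "invited", "everyone", "except", "him"],
                                  toy_activations)
print(pruned)   # ['everyone', 'except'] under this toy activation function
```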
#### Saliency Identification

Given the set \(\mathcal{Y}\) of strings, we follow with a saliency identification step. For each example \(y^{(i)}\in\mathcal{Y}\), we measure the relative importance of the \(k\)th token \(y^{(i)}_{k}\) to the activation of the target neuron \(n_{\ell j}\) on the \(k^{\prime}\)th token \(y^{(i)}_{k^{\prime}}\). We calculate the value:

\[\alpha_{k,k^{\prime}}=1-\frac{a(k^{\prime},\hat{y}^{(i)},\ell,j)}{a(k^{\prime},y^{(i)},\ell,j)}, \tag{3}\]

where in this context, \(\hat{y}^{(i)}\) is the same as the sequence \(y^{(i)}\), except that the \(k\)th token is replaced with a special padding token. The relative importance \(\alpha_{k,k^{\prime}}\) indicates what would happen to the target neuron activation on the \(k^{\prime}\)th token of \(y^{(i)}\) if we perturbed the \(k\)th token of that sequence. Intuitively, if the activation of the neuron on token \(k^{\prime}\) is much lower in the perturbed sequence than in the original sequence, the token \(k\) is providing necessary context that is important for neuron activation. This information is useful in later stages of the algorithm, allowing us to identify the key context tokens for a given neuron and discard the unimportant tokens that are irrelevant to the neuron's behaviour.

#### Augmentation

We then automatically augment our set of examples \(\mathcal{Y}\) with additional examples. For each example \(y^{(i)}\in\mathcal{Y}\), we identify the tokens that are important for neuron activation on the pivot token by thresholding the value \(\alpha_{k,e_{i}}\) for all tokens at index \(k<e_{i}\). Each of these important tokens is then in turn replaced with related tokens that are predicted by a helper model, such as BERT [9]. These tokens, \(\mathcal{R}_{k,i}\), are a set of elements in the vocabulary to which the helper model gives high probability at position \(k\) when fed with \(y^{(i)}\) masked at position \(k\). Once we have generated a candidate set of augmentations \(\mathcal{A}\), we pass each new example \(a^{(i)}\) through the target model, measure the activation of the target neuron on the pivot token, \(a(e_{i},a^{(i)},\ell,j)\), and enrich \(\mathcal{Y}\) with all \(a^{(i)}\) for which that pivot activation is more than half of the pivot activation on the originating sequence \(y^{(i)}\):

\[\frac{a(e_{i},a^{(i)},\ell,j)}{a(e_{i},y^{(i)},\ell,j)}\geq 0.5. \tag{4}\]

Intuitively, we are exploring the space around each input example to find other, similar examples, and retaining the ones that still strongly activate the target neuron. This allows us to better understand the full extent of a neuron's behaviour. We now update \(\mathcal{Y}\) to include all augmented examples.

For our helper model, we chose DistilBERT [28], as its bidirectionality provides the ability to use both the prior and post context for better token predictions. In addition, DistilBERT is a compact and efficient variant of the BERT model [9], trained using knowledge distillation, allowing it to run 60% faster than BERT with similar statistical performance. This makes it an ideal choice for efficiently predicting substitute tokens.

#### Graph Building

After completing the previous stages, we have a set of dataset examples \(\mathcal{Y}\) which strongly activate our target neuron. We then aim to build a compact representation which concisely captures the tokens on which our neuron activates, as well as the context necessary for activation on these tokens.
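Before turning to the graph construction, the saliency and augmentation steps can be sketched as follows. This is an illustration rather than the authors' code: `neuron_activations` again stands in for a forward pass of the target model, `substitutes` stands in for a masked-token predictor (in practice, DistilBERT fill-mask proposals), and the importance cut-off of 0.5 is an assumed value, since the text thresholds \(\alpha_{k,e_{i}}\) without fixing a specific number.

```python
from typing import Callable, List

PAD = "[PAD]"

def saliency(tokens: List[str], pivot: int,
             neuron_activations: Callable[[List[str]], List[float]]) -> List[float]:
    """alpha_{k, pivot}: relative drop in pivot activation when token k is padded out (Eq. 3)."""
    base = neuron_activations(tokens)[pivot]
    alphas = []
    for k in range(len(tokens)):
        padded = tokens[:k] + [PAD] + tokens[k + 1:]
        perturbed = neuron_activations(padded)[pivot]
        alphas.append(1.0 - perturbed / base if base > 0 else 0.0)
    return alphas

def augment(tokens: List[str], pivot: int,
            neuron_activations: Callable[[List[str]], List[float]],
            substitutes: Callable[[List[str], int], List[str]],
            importance_threshold: float = 0.5) -> List[List[str]]:
    """Replace each important context token with helper-model substitutes and keep
    variants whose pivot activation is at least half the original, as in Eq. (4).
    The importance threshold here is illustrative only."""
    base = neuron_activations(tokens)[pivot]
    alphas = saliency(tokens, pivot, neuron_activations)
    kept = []
    for k in range(pivot):                             # only tokens before the pivot
        if alphas[k] < importance_threshold:
            continue                                   # not an important context token
        for sub in substitutes(tokens, k):             # e.g. DistilBERT fill-mask proposals
            variant = tokens[:k] + [sub] + tokens[k + 1:]
            if neuron_activations(variant)[pivot] >= 0.5 * base:
                kept.append(variant)
    return kept
```

With the augmented example set in hand, we return to building the compact representation.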
To do this, we create a lattice structure which has tokens as nodes, where each path is constructed from a sequence in \(\mathcal{Y}\) and contains the important context tokens for activation on the final pivot token, which ends the path. For each example \(y\in\mathcal{Y}\), we iterate over the tokens and retrieve the importance of each token relative to the pivot token. For each token \(y_{k}\), where \(k<e_{i}\), if its importance is above the threshold we add it to the path as an important token. Otherwise, we add it to the path as a special ignore token. We then end the path with the pivot token. Ignore tokens are placeholders indicating that some token is necessary here, but that the specific token does not matter - in contrast to an important token, where that specific token has been identified as key for neuron activation on the pivot token.

After creating the paths, we store them in a trie. We work backwards through each path, first adding the pivot token as a top-layer **activating** token, then adding each **context** token as an important node or an ignore node as required. At the end of each path we add a special end node, which denotes a complete path through the trie. On this end node, we store the normalised activation of the target neuron on the activating token for that path. We compute the normalised activation by retrieving the maximum observed activation of the neuron on any token in the training dataset \(\mathcal{D}\), \(a_{max}=\max_{i,k}a(k,x^{(i)},\ell,j)\). We then compute the normalised activation as \(a(e_{i},y,\ell,j)/a_{max}\) for the example \(y\) which formed the path.

By storing our representation in this way, we can use it to simulate neuron behaviour by predicting token activations. Specifically, we can pass an input \(x\) consisting of tokens \(x_{1}\cdots x_{n}\) to the trie, and it will output a predicted normalised activation for each token \(x_{i}\) in \(x\). Starting at the root of the trie, we check if the token \(x_{i}\) is in the root's child nodes. If it is, we have identified that \(x_{i}\) is an activating token, so we then continue to check if there is the necessary context for activation. Iterating backwards starting at \(x_{i-1}\), we check if the token matches any of the current node's children - specifically, whether the token is either in the current node's children, or whether the current node has an ignore token in its children. If at any point we reach a node that has an end node as a child, we have traced a valid path through the trie, so we record the normalised activation stored on the end node. We continue these steps until we run out of prior tokens, or the current token satisfies neither of the matching conditions. We then return the maximum observed normalised activation, or 0 if we did not find any matching paths. This representation therefore allows us to directly measure the correspondence between a target neuron's activations and the predicted activations from the trie for any input text, which allows us to quantify the accuracy of the representation.

In addition, we are able to visualise the representation to facilitate human interpretation. To do this, we remove ignore nodes as they do not contain relevant information, then at each layer in the trie we collapse identical tokens to form the directed graph structure. We colour context nodes in blue according to their importance, and activating nodes in red according to their normalised activation. Brighter colours indicate higher importance or activation, respectively.
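Returning to the prediction procedure above, a minimal sketch follows, using plain nested dictionaries as trie nodes (a hypothetical implementation for illustration; the authors' data structure may differ). `add_path` inserts one pruned example and `predict` reproduces the backward walk described above.

```python
from typing import Dict, List

IGNORE, END = "<IGNORE>", "<END>"

def add_path(trie: Dict, path: List[str], activation: float) -> None:
    """Insert one example: path = [context..., pivot], stored back-to-front."""
    node = trie
    for token in reversed(path):          # pivot first, then prior context
        node = node.setdefault(token, {})
    node[END] = activation                # normalised activation stored on the end node

def predict(trie: Dict, tokens: List[str]) -> List[float]:
    """Predicted normalised activation for every token in `tokens`."""
    preds = []
    for i, tok in enumerate(tokens):
        best = 0.0
        if tok in trie:                   # tok is a candidate activating token
            node, j = trie[tok], i - 1
            while True:
                if END in node:           # a complete path has been matched
                    best = max(best, node[END])
                if j < 0:
                    break
                if tokens[j] in node:     # exact context-token match
                    node = node[tokens[j]]
                elif IGNORE in node:      # wildcard context position
                    node = node[IGNORE]
                else:
                    break
                j -= 1
        preds.append(best)
    return preds

# Build a tiny graph: "except" activates when preceded by "everyone" (cf. Figure 1).
trie: Dict = {}
add_path(trie, ["everyone", "except"], activation=0.9)
print(predict(trie, ["we", "invited", "everyone", "except", "him"]))
# [0.0, 0.0, 0.0, 0.9, 0.0]
```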
Additionally, we denote valid end nodes with a bold outline. We refer to this visualisation, as well as the underlying trie, as a **neuron graph**, which we can visually inspect or automatically evaluate. Figure 1 shows an example of a simple graph for a neuron in layer 1 of the model. The neuron graphs also facilitate new forms of analysis. For example, they provide a simple searchable structure. We provide the ability to efficiently search graphs by both activating tokens and context tokens. For example, we could search for a graph that activates on the token "everyone" when it occurs with the context token "except", to retrieve the graph in Figure 1. This search functionality allows us to quickly look for neurons with a particular behaviour in which we are interested. In comparison, attempting to search maximally activating dataset examples in the same way is much less useful, as they often contain significant amounts of unimportant information which renders any search on context tokens useless. ## 4 Results and Discussion We describe a quantitative analysis, comparing our methods to simple look-up methods (SS4.1) and two case studies where we use N2G to discover interesting neuron behaviour (SS4.2). We also provide additional case studies in the Appendix. ### Quantitative Results Given that the neuron graphs built by the algorithm can be directly used to process text and predict token activations, we can evaluate the degree to which they accurately capture the target neuron's behaviour by measuring the correspondence between the activations of the neuron and the predicted activations of the graph on some evaluation text. We conduct all experiments on the SoLU model described in SS3.1. For each neuron, we take the top \(20\) dataset examples from Neuroscience and randomly split them in half to form a train and test set. We give the training examples to N2G to create a neuron graph, then take the test examples compute the normalised token activations as described in SS3.2. We apply a threshold to the normalised token activations, defining an activation above the threshold as a _firing_ of the neuron, and an activation below the threshold as the neuron _not firing_. In these experiments we set the threshold to \(0.5\). We then process the test prompts with the neuron graph to produce predicted token firings. We can then measure the precision, recall, and \(F1\) score of the graph's predictions compared to the ground truth firings. Building a neuron graph for every neuron in the model took approximately 48 hours of processing on a single Tesla T4 GPU. To ground the results of N2G we compare to two simple baselines. The first is a per-neuron token lookup table. For each neuron, we take the same training set of 10 dataset examples which we give to the N2G algorithm, pass them through the model and record the activation of the neuron on each token. We then create a lookup table that maps every token to the maximum observed activation for any occurrence of that token. We can then predict token activations on text by outputting the stored activation for each token in the input (or 0 if we did not see the token when creating the lookup). Intuitively, this presents a very recall-focused baseline, as it ignores any context and instead just identifies whether a token has ever been seen to activate a neuron. The second baseline generalises the token lookup to an n-gram lookup, following the same process as above but storing the prior \(n\) context tokens for each token in the input. 
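As a rough illustration (not the authors' code), the token lookup baseline amounts to a small dictionary keyed on single tokens; the n-gram variant keys the table on a token together with its prior \(n\) context tokens instead. `neuron_activations` is again a stand-in for a forward pass of the model.

```python
from typing import Callable, Dict, List

def build_token_lookup(train_examples: List[List[str]],
                       neuron_activations: Callable[[List[str]], List[float]]
                       ) -> Dict[str, float]:
    """Map each token to the maximum activation it was ever observed to produce."""
    table: Dict[str, float] = {}
    for tokens in train_examples:
        acts = neuron_activations(tokens)
        for tok, act in zip(tokens, acts):
            table[tok] = max(act, table.get(tok, 0.0))
    return table

def lookup_predict(table: Dict[str, float], tokens: List[str]) -> List[float]:
    """Predict 0 for unseen tokens, otherwise the stored maximum activation."""
    return [table.get(tok, 0.0) for tok in tokens]
```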
We choose \(n=5\) for our baseline, which provides a precision focused baseline that requires specific context for neuron activation on any given token. Table 1 shows the average precision, recall, and \(F1\) score of the three methods for each layer of the model, averaged across the neurons in that layer3. For each neuron, we only compute these statistics on the tokens that caused the neuron to fire. This is because any given neuron typically fires on very few tokens, so the prediction problem is very imbalanced - predicting the neuron will never fire would give very good performance but provide no useful information. Footnote 3: \(F1\) scores computed on a per-neuron basis then averaged, rather than computing on the average precision and recall. In layers 0 and 1 of the model, the neuron graphs built with N2G on average capture the behaviour of the neurons well, with high recall and decent precision on the firing tokens. In comparison to the baselines, N2G achieves close to the recall of the high recall baseline, with substantially better precision. However, the precision baseline achieves significantly higher precision than N2G, suggesting that the method is not fully capturing the context needed to precisely determine when the neuron will fire. However, as we progress to deeper layers of the model, the recall and precision of N2G generally decreases. This corresponds to neurons in the later layers on average exhibiting more complex behaviour that is less completely captured in the training examples. Specifically, neurons in early layers tend to respond to a small number of specific tokens in specific, narrow contexts, whereas later layers often respond to more abstract concepts represented by a wider array of tokens in many different contexts, which was similarly observed by [11]. We see similar trends for the recall baseline, and the decrease in recall in particular suggests that the training examples on which we fit our models do not fully capture the behaviour of neurons in later layers. This suggests there could be improvements from collecting a larger set of dataset examples for each neuron and demonstrates the value of augmentation to better explore the space of activating inputs. Interestingly, the recall of the n-gram lookup increases as we go deeper into the model. This is likely because later layers tend to require longer context and more specific context, whereas early layers may only require very short context, less than the 5 tokens used in the baseline. This demonstrates the importance of the pruning and saliency mechanisms, to adaptively identify the necessary context for activation on a per neuron basis. The trend of later layers requiring more context for activation is demonstrated by Figure 2 a). This shows how the normalised activation of a neuron on the pivot token in an input changes as we progressively remove tokens from the prior context. We refer to this as an **activation trajectory**, and compute the average trajectory across 100 neurons in each layer of the model. We observe that as we move from early to later layers the average activation trajectory becomes shallower, showing that neurons in later layers on average require a much longer context to activate than neurons in early layers. We show a typical example of a neuron from layer 4 of the model in Figure 2 (b). 
\begin{table}
\begin{tabular}{c|c c c|c c c|c c c}
\hline \hline
 & \multicolumn{3}{c|}{**N2G**} & \multicolumn{3}{c|}{**Token Lookup**} & \multicolumn{3}{c}{**n-gram Lookup**} \\
\hline
**Layer** & **Precision** & **Recall** & **F1** & **Precision** & **Recall** & **F1** & **Precision** & **Recall** & **F1** \\
\hline
0 & 0.67 & 0.84 & **0.68** & 0.45 & 0.89 & 0.53 & 0.97 & 0.13 & 0.17 \\
1 & 0.64 & 0.82 & **0.65** & 0.36 & 0.90 & 0.44 & 0.96 & 0.14 & 0.18 \\
2 & 0.56 & 0.75 & **0.55** & 0.29 & 0.85 & 0.37 & 0.93 & 0.14 & 0.17 \\
3 & 0.45 & 0.71 & **0.45** & 0.23 & 0.82 & 0.30 & 0.91 & 0.16 & 0.18 \\
4 & 0.39 & 0.66 & **0.38** & 0.22 & 0.78 & 0.28 & 0.89 & 0.16 & 0.18 \\
5 & 0.38 & 0.68 & **0.37** & 0.20 & 0.81 & 0.27 & 0.86 & 0.26 & 0.28 \\
\hline \hline
\end{tabular}
\end{table}
Table 1: N2G compared to the two baselines. Note the statistics are averaged across all neurons in a layer, and only computed for the tokens on which the neuron fired. Best \(F1\) scores in bold.

Figure 2: (a) Activation trajectory averaged across 100 neurons from each layer of the model. (b) and (c): Activation and Importance trajectories for a neuron from layer 4. Points show occurrences of the important “idae” token. Below, the graph for the corresponding neuron from layer 4.

The average trajectories show a smooth decrease in activation as we remove context tokens, but the more typical shape for any individual neuron is for there to be distinct jumps in activation as we reach important tokens. The trajectories start off identically for the first five tokens corresponding to “category: reptiles”, which are important for activation and therefore included in the neuron graph at the bottom of the figure. There is then a longer-range dependency on the token “idae” (which, for example, could occur at the end of the word “Gekkonidae”, the scientific name for geckos), which can be of variable length. There are distinct spikes where this token occurs, both in the activation and importance trajectories, marked with points in the graphs. This demonstrates how the neuron graph captures the key information for activation, whilst abstracting away information about the length dependency between tokens. Additionally, we can see that the importance measure from the saliency mechanism closely relates to the activation trajectories, with a spike in importance corresponding to the increase in activation.

### Large Scale Neuron Case Studies

Constructing the neuron graphs for every neuron in a model enables new workflows for mechanistic interpretability research. They provide a simple searchable structure which we can use to identify neurons with specific properties, and can be compared to each other to find similarities between neurons. Here, we present some case studies demonstrating how these representations can enable these forms of investigation.

**In-context Learning** In-context learning refers to the ability of Language Models to use prior information in their prompt to better predict later tokens, and is a defining property of these models. A key mechanism behind in-context learning is the induction head [24], an attention head which learns to look for repeated token sequences and increases the probability of predicting later tokens in the sequence if the beginning of the sequence is seen again. For example, if we see “abc...ab”, an induction head would contribute to increasing the probability of predicting that the token “c” will reoccur.
We automatically searched the graph representations to identify graphs that contained any particular token in both the context tokens and the activating tokens. This enabled us to discover many neurons that activate on repeated tokens, and in particular we found what appears to be an in-context learning neuron. Shown in Figure 3, this neuron responds to a wide variety of apparently unrelated tokens, as long as they are part of a repeating sequence such as "ab...ab". This demonstrates how the neuron graph representation can enable researchers to much more efficiently explore and understand particular neuron behaviours via search, facilitating mechanistic interpretability research. **Discovering Neuron Groups** In addition, this representation offers the possibility of automatically identifying higher level structures within Language Models, such as simple circuits [13]. We provide an initial step towards this by automatically identifying neurons with similar behaviours by comparing their representations. Figure 4: A neuron graph that occurs for a neuron in Layer 1 and a neuron in Layer 4 of the model. The neurons have identical behaviour, and were recognised as a similar pair through an automated graph comparison process. Figure 3: Neuron graph for an in-context learning neuron that activates on repeated token sequences. Specifically, for every pair of neurons we measure the proportion overlap between the pairs' context tokens, as well as the proportion overlap between their activating tokens. We can then automatically identify similar neurons by retrieving pairs with more than 90% overlap for both context tokens and activating tokens. We identify 60 pairs of similar neurons within the model by automatically comparing pairs of neuron graphs. Figure 4 shows an example of a neuron graph that represents a neuron in layer 1 of the model. Our analysis identified that an identical graph also represents a neuron in layer 4 of the model. One possible explanation for the existence of such pairs of neurons is the spreading out of features due to superposition. Previous research has provided evidence for superposition, where models use individual neurons to represent multiple unrelated features. This phenomenon presents a barrier to interpretability as it makes understanding neuron function more challenging [11]. Superposition implies that features are spread across multiple neurons [22], to enable a model to represent more features than it has neurons. As such, it could provide an explanation for the occurrence of similar neurons. Alternatively, the model may be incentivized by dropout during the training process to represent highly important features in multiple neurons, so that dropping out any particular neuron does not hurt the loss. Further research into superposition and the existence of neurons with very similar behaviours would be beneficial to better understand the causes of these phenomena. Superposition also has implications for attempting to control undesirable behaviours. One method for preventing a model from exhibiting some negative behaviour, such as toxic outputs, would be to identify specific neurons that are responsible for this behaviour and ablate them. Indeed, [27] found such a sentiment neuron in an LSTM model, and used it to control the sentiment of generated text. Superposition implies that features such as sentiment or toxicity may be spread across multiple neurons [11], so ablating any a single neuron may not significantly affect the feature representation. 
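A minimal sketch of this pairwise comparison follows. It assumes each neuron graph exposes its set of activating tokens and its set of context tokens; the normalisation of the overlap (here, intersection over the smaller set) is an assumption, as the text does not specify one.

```python
from typing import Dict, List, Set, Tuple

def overlap(a: Set[str], b: Set[str]) -> float:
    """Proportion overlap between two token sets (intersection over the smaller set)."""
    if not a or not b:
        return 0.0
    return len(a & b) / min(len(a), len(b))

def similar_neurons(graphs: Dict[str, Tuple[Set[str], Set[str]]],
                    threshold: float = 0.9) -> List[Tuple[str, str]]:
    """Return neuron pairs whose activating-token and context-token sets both
    overlap by more than `threshold`. Each graph is represented by a tuple
    (activating_tokens, context_tokens) keyed by a neuron identifier."""
    pairs = []
    ids = sorted(graphs)
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            act_a, ctx_a = graphs[a]
            act_b, ctx_b = graphs[b]
            if overlap(act_a, act_b) > threshold and overlap(ctx_a, ctx_b) > threshold:
                pairs.append((a, b))
    return pairs
```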
The ability to identify similar neurons therefore could facilitate identifying clusters of neurons that contribute to a negative behaviour, and allow us to efficiently ablate all necessary neurons. This shows the potential for the N2G representations to facilitate diverse tasks in ML safety. ## 5 Conclusions and Limitations We presented N2G, an approach for converting neurons in LLMs into interpretable graphs that can be visualised, evaluated and searched. The degree to which a neuron graph captures the behaviour of a target neuron can be directly measured by comparing the output of the graph to the activations of the neuron, making this method a step towards scalable interpretability methods for LLMs. We find that neuron graphs capture neuron behaviour well for early layers of the model but only partially capture the behaviour for later layers due to increasingly complex neuron behaviour, and this problem will likely become more prominent in larger models. Additionally, we evaluate our method on a SoLU [11] model to minimise polysemanticity, though it can also be applied to models with more common activation functions. The greater prevalence of polysemantic neurons in typical Transformer models could reduce the ability of N2G to fully capture neuron behaviour. Future work could address these limitations by using more training examples to better cover the full extent of a neuron's behaviour, better exploring the input space via more sophisticated and extensive augmentation, and generalising from exact token matches to matching abstract concepts, for example, by using token embeddings. The neuron graphs built by N2G allow for new methods of analysis. They can be searched to identify neurons with interesting properties, which we demonstrate by finding an in-context learning neuron that responds to repeated sequences of the form "ab...ab". They can also be programmatically compared with each other, which we use to identify pairs of neurons with very similar behaviours. These case studies show how the graph representation can enable new forms of interpretability analysis and make some tasks much faster by automating parts of the discovery process. There could be significant scope to develop new processes that act on the neuron representations to assist with interpretability research. For example, future work could look at using the graph representations to automatically identify circuits of neurons within LLMs that together perform a particular task. For example, we could automatically look for instances where multiple neurons that each respond to a single concept are combined to form a neuron in a later layer that responds to all of those concepts, by extending the similarity comparison to identify neurons which have a subset of another neuron's behaviour. In general, once we have generated the neuron graphs for all neurons in a model, these become a valuable resource upon which researchers can develop new tools for analysis. Acknowledgments We'd like to thank Michelle Lo for providing assistance to the literature review and Charlotte Siegmann for comments on our previous draft, Mor Geva for encouraging the idea and Logan Smith for helpful discussions. We are also grateful to all members of the Cohort group at the School of Informatics for helpful comments and suggestions.
2309.13657
A Probabilistic Model for Data Redundancy in the Feature Domain
In this paper, we use a probabilistic model to estimate the number of uncorrelated features in a large dataset. Our model allows for both pairwise feature correlation (collinearity) and interdependency of multiple features (multicollinearity) and we use the probabilistic method to obtain upper and lower bounds of the same order, for the size of a feature set that exhibits low collinearity and low multicollinearity. We also prove an auxiliary result regarding mutually good constrained sets that is of independent interest.
Ghurumuruhan Ganesan
2023-09-24T14:51:53Z
http://arxiv.org/abs/2309.13657v1
# A Probabilistic Model for Data Redundancy in the Feature Domain ###### Abstract In this paper, we use a probabilistic model to estimate the number of uncorrelated features in a large dataset. Our model allows for both pairwise feature correlation (collinearity) and interdependency of multiple features (multicollinearity) and we use the probabilistic method to obtain upper and lower bounds of the same order, for the size of a feature set that exhibits low collinearity and low multicollinearity. We also prove an auxiliary result regarding mutually good constrained sets that is of independent interest. **Key words:** Data Redundancy, Feature Domain, Probabilistic Model, Mutually Good Constrained Sets. **AMS 2000 Subject Classification:** Primary: 60K35, 60J10; ## 1 Introduction The feature selection problem is a very important part of data preprocessing that crucially affects the overall performance in predictive analysis [4]. Given a large dataset, statistical tests are typically performed to estimate the correlation between pairs and subsets of features and a subset of the total feature set is then chosen using standard feature selection methods like filters and wrappers [3][5][6]. This is done to reduce data redundancy and also improve the performance of the statistical or machine learning methodology to which the resulting data is fed [7]. In this paper, we use a probabilistic approach to the data feature redundancy problem by defining a random graph model that allows for both collinearity and multicolllinearity among features. We use an auxiliary result regarding the size of mutually good constrained sets to obtain a lower bound on the minimum size of a feature set that has low collinearity and low multicollinearity. In the following section, we state and prove our main result regarding the size of feature sets with low collinearity and multicollinearity, using mutually good constrained sets. We also prove a Lemma regarding the size of mutually good constrained sets, that is of independent interest. ## 2 Feature Domain Redundancy In this Section, we study the data redundancy problem from the feature domain perspective where we seek a subset of data features that are nearly uncorrelated with each other. To motivate the problem, suppose \(W_{i}=(W_{i}(1),\ldots,W_{i}(m)),i\geq 1\) are independent and identically distributed (i.i.d.) elements belonging to some space \(\mathcal{W}.\) We refer to \(W_{i}\) as the \(i^{th}\) data point and \(W_{i}(j)\) as the \(j^{th}\)_feature_ of the \(i^{th}\) data point. In general, the \(m\) features in the dataset \(\{W_{i}\}\) may be correlated with each other; i.e. \(W_{i}(j)\) is not necessarily independent of \(W_{i}(k)\) for \(j\neq k\) and so statistical tests [4] are performed to obtain estimates for the correlation between distinct pairs of features. Using these estimates, we are interested in determining a "nice" subset \(\mathcal{S}\subset\{1,2,\ldots,m\}\) of nearly uncorrelated features. One heuristic method (see Chapter 3, pp. 47, [4]) is to remove the minimum number of features iteratively, in such a way that all _pairwise correlations_ (also known as collinearity) of the remaining features are below a predetermined threshold. It is also possible that the dataset exhibits _multicollinearity_ where multiple features are interdependent on each other and in our main result of this section, we use a probabilistic model to obtain high probability bounds for the minimum size of a nice feature set with low collinearity and low multicollinearity. 
We begin with a couple of definitions. Let \(K_{m}\) be the complete graph on \(m\) vertices and let \(\{X(h)\}_{h\in K_{m}}\) be i.i.d. Bernoulli random variables satisfying \[\mathbb{P}(X(h)=1)=p=1-\mathbb{P}(X(h)=0).\] Let \(\{\mathcal{T}(v)\}_{1\leq v\leq n}\) be random subsets of \(\{1,2,\ldots,m\}\) that possibly depend on \(\{X(.)\}.\) We assume that the sets \(\mathcal{T}(.)\) are consistent in the sense that \(u\in\mathcal{T}(v)\) if and only if \(v\in\mathcal{T}(u).\) We say that a set of vertices \(\mathcal{D}\) is _nice_ if: \((i)\) For any \(u,v\in\mathcal{D}\) we have \(X(u,v)=0\) and \((ii)\) There does not exist \(u,v\in\mathcal{D}\) such that \(u\in\mathcal{T}(v)\) (or \(v\in\mathcal{T}(u)\)). Letting \(N_{m}\) be the largest size of a nice subset of \(\{1,2,\ldots,m\},\) we have the following result. **Theorem 1**.: _For every \(\gamma>0\) we have that_ \[\mathbb{P}\left(N_{m}\leq(2+\gamma)\frac{\log m}{|\log(1-p)|}\right)\geq 1- \frac{1}{m^{\gamma}}. \tag{2.1}\] _Conversely if \(\max_{v}\mathcal{T}(v)\geq 1,\)_ \[p\geq\frac{1}{m^{\delta_{1}}}\text{ and }\tau:=\mathbb{E}\max_{v}\# \mathcal{T}(v)\leq m^{\delta_{2}} \tag{2.2}\] _for some constants \(0<\delta_{1},\delta_{2}<1,\) satisfying \(0<2\delta_{1}+\delta_{2}<1,\) then there is a constant \(\delta>0\) such that_ \[\mathbb{P}\left(N_{m}\geq(1-2\delta)\frac{\log m}{|\log(1-p)|}-\frac{\log \left(4\tau/p\right)}{|\log(1-p)|}\right)\geq 1-\frac{1}{m^{\delta}}. \tag{2.3}\] In the context of the feature subset problem discussed prior to the statement of Theorem 1, we could interpret \(p\) as probability that features \(u\) and \(v\) are correlated and the set \(\mathcal{T}(v)\) as a subset of features that exhibit multi-collinearity together with the feature \(v.\) For example, \(\mathcal{T}(v)\) could be a subset of the features that result in a variance inflation factor (VIF) [4] greater than \(\lambda_{mc}\) for the feature \(v,\) where \(\lambda_{mc}>0\) is a predetermined threshold. We recall that VIF measures the extent to which a particular feature depends on a subset of features and for more details, we refer to Chapter 3, [4]. Below, we use the following deviation estimate regarding of sums of independent Bernoulli random variables. Let \(\{W_{j}\}_{1\leq j\leq r}\) be independent Bernoulli random variables satisfying \(\mathbb{P}(W_{j}=1)=1-\mathbb{P}(W_{j}=0)>0\). If \(S_{r}:=\sum_{j=1}^{r}W_{j},\theta_{r}:=\mathbb{E}S_{r}\) and \(0<\gamma\leq\frac{1}{2}\), then \[\mathbb{P}\left(|S_{r}-\theta_{r}|\geq\theta_{r}\gamma\right)\leq 2\exp\left(- \frac{\gamma^{2}}{4}\theta_{r}\right) \tag{2.4}\] for all \(r\geq 1.\) For a proof of (2.4), we refer to Corollary \(A.1.14,\) pp. 312 of [1]. _Proof of Theorem 1_: We begin with the upper bound for \(N_{m}.\) Let \(G\) be the random subgraph of \(K_{m}\) obtained by retaining all edges \((u,v)\) satisfying \(X(u,v)=1.\) The probability that the vertices \(\{1,2,\ldots,T\}\) form a stable set in \(G\) (i.e. a set of vertices no two of which are adjacent in \(G\)) is \((1-p)^{\binom{T}{2}}\) and so the probability that there exists a stable set of size at least \(T\) in \(G\) is bounded above by \[m^{T}\cdot(1-p)^{T(T-1)/2}=\exp\left(-T\left(\frac{T-1}{2}|\log\left(1-p \right)|-\log m\right)\right)\leq\frac{1}{m^{\gamma}}\] provided \(T\geq 1+(2+\gamma)\frac{\log m}{|\log(1-p)|}.\) This obtains the upper bound for \(N_{m}\) in (2.1). 
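As an illustrative aside before the lower bound, the model can be simulated directly. The sketch below (parameters arbitrary, conflict sets \(\mathcal{T}(v)\) taken to be empty so that a nice set is simply an independent set of \(G\)) greedily grows a nice set; the greedy size is only a lower bound on \(N_{m}\), but it already exhibits the \(\log m/|\log(1-p)|\) scaling of Theorem 1.

```python
import math
import random

def greedy_nice_set(m: int, p: float, seed: int = 0) -> int:
    """Greedy lower bound on N_m with all conflict sets T(v) empty: feature pairs
    are 'correlated' (X(u,v) = 1) independently with probability p, and a feature
    is kept only if it is uncorrelated with every feature already chosen."""
    rng = random.Random(seed)
    chosen = 0
    for _ in range(m):
        # Lazily sample the edges between the candidate and the chosen features.
        if all(rng.random() >= p for _ in range(chosen)):
            chosen += 1
    return chosen

m, p = 2000, 0.05
avg = sum(greedy_nice_set(m, p, seed=s) for s in range(10)) / 10
print(avg, 2 * math.log(m) / abs(math.log(1.0 - p)))
# The greedy size is of the same order as the upper bound ~ 2 log m / |log(1-p)|.
```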
In what follows, we obtain a lower bound for \(N_{m}\) using an estimate for the size of _mutually good constrained sets_ derived in Lemma 2 at the end of this section. We define the event \(E_{tau}:=\{\max_{v}\#\mathcal{T}(v)\leq\tau\cdot m^{\gamma}\}\) where \(\gamma>0\) is a constant to be determined later. From the Markov inequality we see that \[\mathbb{P}(E_{tau})\geq 1-\frac{1}{m^{\gamma}} \tag{2.5}\] and we henceforth assume that \(E_{tau}\) occurs. Next, we define the goodness function \(f(\mathcal{S})\) to be the set of all vertices not adjacent to any vertex of \(\mathcal{S}\) in \(G\) and set the constraint function \(g\) as \[g(x,\mathcal{I})=\left\{\begin{array}{ll}1&\mbox{ $x\in\bigcup_{v\in \mathcal{I}}\left(\mathcal{T}(v)\cup\{v\}\right)$}\\ 0&\mbox{ otherwise,}\end{array}\right. \tag{2.6}\] with \(\mathcal{E}=\{0,1\}\) and \(\mathcal{B}=\{1\}.\) The constraint \(g\) ensures that we "add" a new vertex in each iteration that is not adjacent to any of the previously added vertices and also does not belong to the "conflict" set \(\mathcal{T}(v)\) of a previously added vertex \(v.\) Let \(1\leq L\leq\frac{m}{2}\) be an integer to be determined later. Since \(E_{tau}\) occurs, each \(\mathcal{T}(v)\) has size at most \(\tau\cdot m^{\gamma}\) and so the parameter \(q_{i}\) defined in (2.14) is bounded above as \[q_{i}\leq\frac{i\tau\cdot m^{\gamma}}{m}\leq\frac{L\tau}{m^{1-\gamma}}. \tag{2.7}\] To estimate the term \(p_{i}\) in (2.13), we let \({\cal S}=\{x_{1},\ldots,x_{i}\}\) be any deterministic set of \(i\) vertices. A vertex \(u\notin\{x_{1},\ldots,x_{i}\}\) is good (i.e. not adjacent to any vertex of \({\cal S}\) in \(G\)) with probability \((1-p)^{i}\) and so the expected number of vertices that are good with respect to \({\cal S}\) is at least \((m-i)(1-p)^{i}\geq\frac{m(1-p)^{i}}{2}.\) By the standard deviation estimate (2.4), we therefore get that the set \(f({\cal S})\) of good vertices with respect to \({\cal S}\) has size at least \(\frac{1}{4}\cdot m(1-p)^{i}\) with probability at least \[1-e^{-C_{2}m(1-p)^{i}}\geq 1-e^{-C_{2}m(1-p)^{L}}\] for some constant \(C_{2}>0.\) Therefore considering all possible choices of \({\cal S}\) with \(i\leq L-1\) vertices, we get that the fraction \[p_{L-1}\geq\min_{\cal S}\frac{f({\cal S})}{m}\geq\frac{(1-p)^{L}}{4} \tag{2.8}\] with probability at least \(1-\zeta\) where \[\zeta := \sum_{i=1}^{L-1}{m\choose i}e^{-C_{2}m(1-p)^{L}} \tag{2.9}\] \[\leq L{m\choose L-1}\cdot e^{-C_{2}m(1-p)^{L}}\] \[\leq Lm^{L-1}\cdot e^{-C_{2}m(1-p)^{L}},\] by the unimodality of the Binomial coefficient for \(L\leq\frac{m}{2}.\) From the condition \(p_{L-1}>q_{L-1}\) in Lemma 2 and the estimates for \(p_{L-1}\) and \(q_{L-1}\) in (2.8) and (2.7) respectively, we get that if \[\frac{(1-p)^{L}}{4}>\frac{L\tau}{m^{1-\gamma}} \tag{2.10}\] then there exists a nice set of size \(L\) in \(G.\) Setting \(L=\min\left(1,\theta\frac{\log m}{|\log(1-p)|}\right)\) with \(0<\theta<1\) and using the inequality \(|\log(1-p)|>p,\) we see that (2.10) is true if \[\frac{1}{4m^{\theta}}>\frac{\theta\tau\log m}{pm^{1-\gamma}}\] or equivalently if \[\log\theta+\theta\log m<\log\left(\frac{p}{4\tau}\right)+(1-\gamma)\log m- \log\log m.\] We set \[\theta:=1-2\gamma+\frac{\log\left(p/4\tau\right)}{\log m}\] where \(\gamma>0\) is chosen such that \(\delta_{1}<2\gamma<1-\delta_{1}-\delta_{2}.\) This is possible by Theorem statement. 
Using the condition \(p\geq\frac{1}{m^{\delta_{1}}},\tau\leq m^{\delta_{2}}\) and the fact that \(\delta_{1}+\delta_{2}<1\) strictly (see Theorem statement), we get that \(\theta>0\) strictly and moreover, \[m(1-p)^{L}=m^{1-\theta}=m^{2\gamma}\cdot\frac{4\tau}{p}\geq m^{2\gamma} \tag{2.11}\] since \(p<1\) and \(\tau\geq 1,\) again by Theorem statement. Also \[L\leq\frac{\log m}{|\log(1-p)|}<\frac{\log m}{p}<m^{\delta_{1}}\log m \tag{2.12}\] and so plugging (2.12) and (2.11) into (2.9), we get \[\zeta\leq m^{\delta_{1}}\cdot\log m\cdot\exp\left(m^{\delta_{1}}(\log m)^{2} \right)\cdot\exp\left(-C_{2}m^{2\gamma}\right)\longrightarrow 0,\] by our choice of \(2\gamma>\delta_{1}.\) Combining the estimate (2.5) for the event \(E_{tau}\) and the estimate (2.9), we therefore get the lower bound in (2.3) and this completes the proof of the Theorem. ### Mutually Good Constrained Sets Let \(\mathcal{U}\) be a finite set containing \(N\) elements and \(2^{\mathcal{U}}\) be the set of all subsets of \(\mathcal{U}.\) We have the following definition. **Definition 1**.: _A map \(f:2^{\mathcal{U}}\to 2^{\mathcal{U}}\) is said to be a goodness function if for any two sets \(\mathcal{S}_{1},\mathcal{S}_{2}\subseteq\mathcal{U}\) we have: \((i)\) The set \(\mathcal{S}_{1}\subseteq f(\mathcal{S}_{2})\) if and only if \(\mathcal{S}_{2}\subseteq f(\mathcal{S}_{1}).\)\((ii)\) The set \(f(\mathcal{S}_{1}\cup\mathcal{S}_{2})=f(\mathcal{S}_{1})\cap f(\mathcal{S}_{2}).\)_ We use the notation \(f(\emptyset)=\mathcal{U}\) and say that \(f(\mathcal{S})\) is the set of elements that are \(f-\)good or simply good with respect to \(\mathcal{S}.\) A set of elements \(\mathcal{S}\subseteq\mathcal{U}\) is said to be _mutually good_ if for any \(\mathcal{I}\subset\mathcal{S},\) we have that \(\mathcal{S}\setminus\mathcal{I}\subseteq f(\mathcal{I}).\) For example, if \(\mathcal{U}\) is the set of vertices in a graph, then the function \(f_{0}(\mathcal{S})\) that determines the set of all vertices not adjacent to any vertex of \(\mathcal{S}\) is a goodness function. A stable set, i.e. a set of vertices no two of which are adjacent to each other, is a mutually good set with respect to the goodness function \(f_{0}\). For a set \(\mathcal{E}\), a \((\mathcal{U},\mathcal{E})-\)constraint or simply a constraint is a map \[g:\mathcal{U}\times 2^{\mathcal{U}}\ \rightarrow\ \mathcal{E}.\] For sets \(\mathcal{I}\subseteq\mathcal{U}\) and \(\mathcal{B}\subseteq\mathcal{E}\), we say that \(x\in\mathcal{U}\) satisfies the \(\mathcal{B}-\)constraint with respect to \(\mathcal{I}\) if \(g(x,\mathcal{I})\in\mathcal{B}.\) We also say that \(\mathcal{I}\) is a \(\mathcal{B}-\)_constrained_ set if each \(y\in\mathcal{I}\) satisfies the \(\mathcal{B}-\)constraint with respect to \(\mathcal{I}\setminus\{y\}.\) Finally, we define \[h(\mathcal{I}):=\{x\in\mathcal{U}:g(x,\mathcal{I})\notin\mathcal{B}\}\] to be the set of all elements that do not satisfy the \(\mathcal{B}-\)constraint with respect to \(\mathcal{I}.\) Continuing with the graph example, let \(\mathcal{E}=\{0,1\}\) and \(\mathcal{B}=\{1\}.\) The map \(g_{0}(x,\mathcal{I})\) which equals \(1\) if \(x\) is not adjacent to any vertex of \(\mathcal{I}\) and zero otherwise, is an example of a \(\mathcal{B}-\)constraint. Any stable set \(\mathcal{I}\) is a constrained set and the set \(h(\mathcal{I})\) is the set of all vertices adjacent to some vertex in \(\mathcal{I}.\) We have the following result regarding size of mutually good sets. 
**Lemma 2**.: _For sets \(\mathcal{U}\) and \(\mathcal{E}\) let \(f\) and \(g\) be the goodness and constraint functions, respectively, as defined above and let \(\mathcal{B}\subseteq\mathcal{E}\) be any subset. For integer \(1\leq i\leq N=\#\mathcal{U}\) let_ \[p_{i}:=\min_{\mathcal{I}\subseteq\mathcal{U}:\#\mathcal{I}\leq i}\frac{\#f( \mathcal{I})}{N} \tag{2.13}\] _be the minimum fraction of elements that are good with respect to \(\mathcal{B}-\)constrained sets of cardinality at most \(i.\) Similarly, let_ \[q_{i}:=\max_{\mathcal{I}\subseteq\mathcal{U}:\#\mathcal{I}\leq i}\frac{\#h \left(\mathcal{I}\right)}{N} \tag{2.14}\] _be the maximum fraction of elements not satisfying the \(\mathcal{B}-\)constraint with respect to \(\mathcal{B}-\)constrained sets of cardinality at most \(i.\) If \(p_{L-1}>q_{L-1},\) then there exists a mutually good \(\mathcal{B}-\)constrained set of cardinality \(L.\)_ Any single set in \(\mathcal{U}\) is assumed to be a mutually good set and so we always set \(p_{1}=1=1-q_{1}.\) In the expressions for \(p_{i}\) and \(q_{i}\) in (2.13) and (2.14), the minimum and maximum are respectively taken over all _constrained_ sets of size at most \(i.\) Therefore a lower bound for \(p_{i}\) and an upper bound for is simply obtained by considering the minimum and maximum, respectively, over _all_ sets (constrained or not) of cardinality at most \(i\). As we see from the graph theory example above, conditions could sometimes be posed both as a goodness function or as a constraint function. We pick the condition occurring with the lowest probability as a goodness function and identify the rest as constraints. We now use the probabilistic method to prove Lemma 2. _Proof of Lemma 2_: Let \(X_{1},\ldots,X_{L}\) be independently and uniformly chosen from \(\mathcal{U}.\) For \(1\leq i\leq L\) let \(E_{i}\) be the event that \(\{X_{1},\ldots,X_{i}\}\) is a mutually good set and let \(H_{i}\) be the event that \(\{X_{1},\ldots,X_{i}\}\) is a \(\mathcal{B}-\)constrained set and set \(J_{i}:=E_{i}\cap H_{i}.\) Clearly \(J_{i+1}\subseteq J_{i}\) for \(1\leq i\leq L-1\) and suppose that the event \(J_{L-1}\) occurs. Given \(\mathcal{F}_{L-1}:=\{X_{1},\ldots,X_{L-1}\}\) we have that \(X_{L}\in f(\{X_{1},\ldots,X_{L-1}\})\) with probability \[\frac{\#f(\{X_{1},\ldots,X_{L-1}\})}{N}\geq p_{L-1} \tag{2.15}\] since \(\{X_{1},\ldots,X_{L-1}\}\) is known to be a \(\mathcal{B}-\)constrained set, due to the occurrence of the event \(J_{L-1}.\) Again due to the event \(J_{L-1},\) we know that \(\{X_{1},\ldots,X_{L-1}\}\) is also a mutually good set. We now use the properties \((i)-(ii)\) in the Definition 1 to show that if \(X_{L}\in f(\{X_{1},\ldots,X_{L-1}\}),\) then \(\mathcal{S}:=\{X_{1},\ldots,X_{L}\}\) is a mutually good set as well. Indeed, let \(\mathcal{I}\subseteq\{X_{1},\ldots,X_{L}\}\) be any set. If \(X_{L}\in\mathcal{S}\setminus\mathcal{I},\) then \[X_{L}\in f(\{X_{1},\ldots,X_{L}\})\subseteq f(\mathcal{I}) \tag{2.16}\] by property \((ii)\) in Definition 1. By the mutual goodness of \(\{X_{1},\ldots,X_{L-1}\},\) we already have that \[\mathcal{S}\setminus(\mathcal{I}\cup\{X_{L}\})\subseteq f(\mathcal{I}) \tag{2.17}\] and so combining (2.16) and (2.17) we get \(\mathcal{S}\setminus\mathcal{I}\subseteq f(\mathcal{I}).\) On the other hand if \(X_{L}\in\mathcal{I},\) then using \(X_{L}\in f(\{X_{1},\ldots,X_{L-1}\}),\) we get that \[\mathcal{S}\setminus\mathcal{I}\subseteq\{X_{1},\ldots,X_{L-1}\}\subseteq f( \{X_{L}\}) \tag{2.18}\] by property \((i)\) in Definition 1. 
As before, by the mutual goodness of the set \(\{X_{1},\ldots,X_{L-1}\},\) we have that \[\mathcal{S}\setminus\mathcal{I}\subseteq f(\mathcal{I}\setminus\{X_{L}\}) \tag{2.19}\] and so combining (2.18) and (2.19) we get \[{\cal S}\setminus{\cal I}\subseteq f({\cal I}\setminus\{X_{L}\})\cap f(\{X_{L}\}) =f({\cal I})\] by property \((ii)\) in Definition 1. Summarizing we have that if \(J_{L-1}\) occurs and \(X_{L}\in f(\{X_{1},\ldots,X_{L-1}\}),\) then \(\{X_{1},\ldots,X_{L}\}\) is a mutually good set and so from the probability estimate (2.15), we get \[{\mathbb{P}}(E_{L}\mid{\cal F}_{L-1})\cdot{\bf 1}(J_{L-1})\geq p_{L-1}\cdot{\bf 1 }(J_{L-1}), \tag{2.20}\] where \({\bf 1}(.)\) refers to the indicator function. Similarly if the event \(J_{L-1}\) occurs, then \(\{X_{1},\ldots,X_{L-1}\}\) is already a constrained set and so the probability that \(\{X_{L}\}\) does not satisfy the \({\cal B}-\)constraint with respect to \(\{X_{1},\ldots,X_{L-1}\}\) is at most \(q_{L-1},\) by (2.14). Consequently, \[{\mathbb{P}}(H_{L}^{c}\mid{\cal F}_{L-1})\cdot{\bf 1}(J_{L-1})\leq q_{L-1} \cdot{\bf 1}(J_{L-1}). \tag{2.21}\] Using \({\mathbb{P}}(A\cap B)\geq{\mathbb{P}}(A)-{\mathbb{P}}(B^{c})\) with \(A=E_{L}\) and \(B=H_{L},\) we get from (2.20) and (2.21) that the conditional probability of both \(E_{L}\) and \(H_{L}\) happening is at least \(p_{L-1}-q_{L-1}.\) In other words, \[{\mathbb{P}}(J_{L}\mid{\cal F}_{L-1})\cdot{\bf 1}(J_{L-1}) = {\mathbb{P}}(E_{L}\cap H_{L}\mid{\cal F}_{L-1})\cdot{\bf 1}(J_{L-1}) \tag{2.22}\] \[\geq (p_{L-1}-q_{L-1})\cdot{\bf 1}(J_{L-1}).\] Taking expectations and using the fact that \(J_{L}\subset J_{L-1}\) we get that \[{\mathbb{P}}(J_{L})\geq(p_{L-1}-q_{L-1})\cdot{\mathbb{P}}(J_{L-1}).\] Continuing iteratively, we get that \[{\mathbb{P}}(J_{L})\geq\prod_{j=2}^{L-1}(p_{j}-q_{j})\cdot{\mathbb{P}}(J_{1}) =\prod_{j=2}^{L-1}(p_{j}-q_{j})\cdot(1-q_{1}) \tag{2.23}\] since \(p_{1}=1\) (see discussion following the statement of Lemma 2). By definition, \(p_{j}\) as defined in (2.13) is decreasing in \(j\) and \(q_{j}\) as defined in (2.14) is increasing in \(j.\) Therefore if \(p_{L-1}>q_{L-1},\) then we get that \({\mathbb{P}}(J_{L})>0\) and this proves the Lemma. _Acknowledgement_: I thank Professors Rahul Roy, Thomas Mountford, Federico Camia, Alberto Gandolfi, Lasha Ephremidze and C. R. Subramanian for crucial comments and also thank IMSc and IISER Bhopal for my fellowships.
2303.00116
Neural Auctions Compromise Bidder Information
Single-shot auctions are commonly used as a means to sell goods, for example when selling ad space or allocating radio frequencies, however devising mechanisms for auctions with multiple bidders and multiple items can be complicated. It has been shown that neural networks can be used to approximate optimal mechanisms while satisfying the constraints that an auction be strategyproof and individually rational. We show that despite such auctions maximizing revenue, they do so at the cost of revealing private bidder information. While randomness is often used to build in privacy, in this context it comes with complications if done without care. Specifically, it can violate rationality and feasibility constraints, fundamentally change the incentive structure of the mechanism, and/or harm top-level metrics such as revenue and social welfare. We propose a method that employs stochasticity to improve privacy while meeting the requirements for auction mechanisms with only a modest sacrifice in revenue. We analyze the cost to the auction house that comes with introducing varying degrees of privacy in common auction settings. Our results show that despite current neural auctions' ability to approximate optimal mechanisms, the resulting vulnerability that comes with relying on neural networks must be accounted for.
Alex Stein, Avi Schwarzschild, Michael Curry, Tom Goldstein, John Dickerson
2023-02-28T22:36:00Z
http://arxiv.org/abs/2303.00116v1
# Neural Auctions Compromise Bidder Information ###### Abstract Single-shot auctions are commonly used as a means to sell goods, for example when selling ad space or allocating radio frequencies, however devising mechanisms for auctions with multiple bidders and multiple items can be complicated. It has been shown that neural networks can be used to approximate optimal mechanisms while satisfying the constraints that an auction be strategyproof and individually rational. We show that despite such auctions maximizing revenue, they do so at the cost of revealing private bidder information. While randomness is often used to build in privacy, in this context it comes with complications if done without care. Specifically, it can violate rationality and feasibility constraints, fundamentally change the incentive structure of the mechanism, and/or harm top-level metrics such as revenue and social welfare. We propose a method that employs stochasticity to improve privacy while meeting the requirements for auction mechanisms with only a modest sacrifice in revenue. We analyze the cost to the auction house that comes with introducing varying degrees of privacy in common auction settings. Our results show that despite current neural auctions' ability to approximate optimal mechanisms, the resulting vulnerability that comes with relying on neural networks must be accounted for. Machine Learning, ICML ## 1 Introduction An auction is an economic mechanism that elicits bids from bidders and allocates goods in exchange for a fee. In the private-value auction setting, bidder valuations are drawn from some known prior probability distribution, while their true valuations are kept private from the mechanism and other bidders. Therefore, when the auctioneer elicits bids, the bidders may bid strategically by misreporting their valuations in order to benefit themselves. The auctioneer's goal is to design an auction that has desirable properties in the face of such strategic behavior. Auctions designers generally want mechanisms to ensure two properties: auctions should be strategyproof and maximize revenue in expectation. An auction mechanism is considered strategyproof when bidders are no better off bidding anything other than their true valuations for the goods. Given this hard constraint of strategyproofness, the auctioneer would want to maximize expected revenue. However, strategyproof auctions inherently leak information about a bidders true valuation; by incentivizing bidders to bid truthfully, it might be possible to learn private bidder information by observing the resulting allocations and payments of a strategyproof auction. Finding optimal auction mechanisms has been a subject of great interest to economists for decades. The Myerson auction (Myerson, 1981) is optimal for selling one item to multiple bidders. Additionally, for selling multiple items to one bidder, optimality is also somewhat well-understood (Manelli & Vincent, 2006; Pavlov, 2011; Daskalakis et al., 2017; Kash & Frongillo, 2016). However, for selling multiple items to multiple bidders, optimal mechanisms are known only in very simple cases (e.g., Yao (2017)). Even for selling two items to two bidders with i.i.d. uniform valuations, the optimal mechanism is unknown. The apparent intractability of finding analytic results is what motivates the work of Dutting et al. (2019). 
Their neural network approximations of optimal auction mechanisms are not provably revenue maximizing or strategyproof, however they approximately recover optimal mechanisms of known auctions and, in settings without known optimal mechanisms, their revenues are higher than existing alternatives and the associated regret (a measurement of strategyproofness) is sufficiently small. The strategyproofness and revenue maximizing properties of these "neural auctions" are reasonably well understood, but bidders may also be concerned with privacy. If the mechanism is strategyproof, bidders are incentivized to reveal their true valuations to the mechanism itself, but they may still want their private valuations not to be revealed to other bidders or outside observers. For example, a business may not want its willingness to pay for different items to be known to competitors or counterparties. Yet if the mechanism's rules, allocations, and payments are made public, it might be possible to infer a large amount of information about a bidder's bid. In this work we analyze neural auctions of several bidder-item sizes and show that they are not private. In particular, using the outputs of the networks - the publicly available allocation and payment information - a malicious actor can invert the models to extract information about agents' valuations. Optimization-based model inversion is a common security vulnerability of neural models used for tasks ranging from facial recognition (Fredrikson et al., 2015) to natural language generation (Pan et al., 2020). In most settings, however, the fear of the practitioner is that the training data is retrievable. Our case deviates slightly from current analysis in that we are concerned with protecting single input examples at test time. After demonstrating that the mechanisms described in prior work are not private in this sense, we propose a technique to introduce privacy. We employ stochasticity by adding noise to the outputs to create private neural auctions, which we show empirically mitigates model inversion. This randomization process is non-trivial, as the resulting network needs to satisfy rationality and strategyproofness constraints. Furthermore, this method introduces a parameter - the standard deviation of the noise - to control the degree of privacy. It is critical to give auction designers this control since privacy comes at a cost to revenue for the auction house. We analyze this cost by characterizing the relationship between privacy and revenue and regret. ## 2 Related Work Differentiable economicsDifferentiable economics is an approach to automated mechanism design that uses rich function approximators to learn good mechanisms (Sandholm, 2003). Dutting et al. (2019) propose RegretNet, a neural auction, and show that neural networks can approximate optimal auction mechanisms. While our work builds directly on theirs, there have been a number of other recent contributions to topics within differentiable economics (many of which also build on the work). Rahme et al. (2021) expand on RegretNet, by proposing a new training algorithm for similar auction networks as well as providing a metric for measuring performance. Feng et al. (2018) extend the RegretNet framework to consider private budget constraints as well as proving optimality in some specific and solved auction settings. Duan et al. (2022) and Ivanov et al. (2022) use attention layers to produce networks which are permutation equivariant, generalize to unseen data, and perform better. 
These extensions focus on revenue maximization for nearly regret-free auctions, but do not consider provably strategyproof settings. Furthermore, Curry et al. (2020) introduce explicit certificates for the degree to which strategyproofness is violated. Some architectures are successful at finding perfectly strategyproof auctions, however each implementation only applies in limited settings. In the same paper that introduces RegretNet, Dutting et al. (2019) propose RochetNet, which is strategyproof by construction but is restricted to the single bidder setting. Similarly, Shen et al. (2019) design MenuNet, a framework which also learns perfectly strategyproof auctions, but is limited to the single agent setting. Additionally, in settings with various bidder demands, mechanisms can be learned with differentiable matching (Curry et al., 2021). While these models consider only the single bidder setting, Curry et al. (2022) designs Affine Maximizer Auctions (AMAs) that are provably strategyproof and handle multi-agent auctions, however they are not necessarily revenue maximizing. Other forms of mechanism design beyond auctions have been explored as well. Early work by Narasimhan et al. (2016) sets the stage for RegretNet by exploring the use of machine learning to learn strategyproof mechanisms that optimize social choice problems. Additionally, they introduce the concept of optimizing stability within two-sided matching problems. Ravindranath et al. (2021) further study the matching problem and quantify the trade-off between strategyproofness and stability. Finally, Golowich et al. (2018) use deep learning to find strategyproof models for minimizing expected social cost within multi-facility location mechanisms. Privacy in mechanism designThere is a longstanding thread of research on privacy in mechanism design theory - see Pai & Roth (2013) for a survey and introduction. The typical aim of privacy tools for auctions is to guarantee a differential privacy (DP) property with respect to the bids. Of particular relevance is work that constructs mechanisms satisfying this property by averaging over random noise, which shows that the resulting smoothness gives both privacy and some notion of strategyproofness (McSherry & Talwar, 2007). We do not directly deal with the differential privacy formalism - instead we view privacy from the perspective of model inversion attacks in deep learning. Importantly, DP is concerned with recovering training data and DP training algorithms aim to find models that would be no different with or without the inclusion of small amounts of data (Abadi et al., 2016). However, in practice, auctions can (and are) trained with synthetic data and the exact training data can be made public without breaching any personal privacy. The issue at hand in this paper is about inferring inputs at test time from publicly available outputs - not a common issue in other domains. We do, however, make use of random noise, which has some connections to DP (Lecuyer et al., 2019). The setting we analyze includes outputs that are publicly available after the completion of the auction. This also stands in contrast to cryptographically secure markets such as dark pools and transactions that rely on secure multi-party computation (MPC). Jutla (2015) uses MPC to implement a framework for stock market participants to avoid a potentially untrustworthy auctioneer. For analysis on using MPC to privately clear transactions in continuous double auctions see (e.g., Cartlidge et al. (2019); Asharov et al. 
(2020)). Model InversionInverting machine learning models to recover data used for training has been studied extensively. Wu et al. (2016) formalize a framework for analyzing model inversion in both the black box (oracle access) and white box (where the adversary has access to the composition of the model) settings. Yang et al. (2019) show that an adversary can (with black box access to the model) train an inversion model to effectively target a machine learning model by augmenting the inversion training with auxiliary data. While these works discuss the inversion of neural networks, the auction setting is unique in that the test-time outputs are public and the adversary is attempting to recover test-time inputs, rather than inverting the model to learn train-time data. In the mechanism design literature, we generally assume white box access to the model for all participants. Additionally, we are unaware of any prior work focusing on understanding the trade-off between model privacy and cost in terms of revenue. ## 3 Problem Setting For ease of discussion, we define the problem of finding an optimal auction and review common notation. Consider an auction consisting of a set of \(n\) bidders \(N=\{1,...,n\}\) and a set of \(m\) items \(M=\{1,...,m\}\) where each bidder \(i\) has some private valuation \(v_{i}\geq 0\) drawn from some distribution \(P_{i}\) over \(V_{i}\) the possible valuations for each of the \(m\) goods. In general, bidders can either have _unit demand_, where their valuation of \(S\) items is \(v_{i}(S)=\max_{j\in S}v_{i}(j)\), or they can have _additive demands_, where each bidder wants as many of the items as possible. The bidders each present bids to the auction mechanism and these bids may or may not truthfully represent their valuations. After eliciting bids from each of the \(n\) agents, the auction allocates the \(m\) goods to the bidders charging each of them some payment. Formally, define an auction as \(f=(g,p)\) or a set of allocation and payment rules \(g(b)\) and \(p(b)\) that take as input the bidders' bids \(b=(b_{1},b_{2},...b_{n})\). An auction acts in its best interest, which is often some form of revenue maximization (more detail below). Each bidder is trying to maximize their own _utility_, \(u_{i}(v_{i};b)=v_{i}(g_{i}(b))-p_{i}(b_{i})\), where \(v_{i}\) is their valuation, \(b\) is the bid profile of all the bids, and \(g_{i},p_{i}\) are the allocations and payments for bidder \(i\) given the bid profile \(b\). Also, let \(b_{-i}\) be the bid profile without bidder \(i\). If bidder utility is maximized by bidding truthfully _even without knowledge of how other bidders are acting_, then this is considered a _dominant strategy incentive compatible_ (DSIC), or strategyproof, auction. Formally, DSIC holds if the following is satisfied. \[\forall i\,\forall v,b:\,u_{i}(v_{i};v_{i},b_{-i})\geq u_{i}(v_{i};b_{i},b_{-i }). \tag{1}\] Identifying a DSIC set of rules \((g,p)\) that maximize revenue in expectation is an open problem for multi-agent multi-item auctions. To address this, Dutting et al. (2019) propose RegretNet - a neural network that can learn a near optimal mechanism from data. Recent advances in deep learning for auctions include ALGNet (Rahme et al., 2021), which uses a modified training algorithm, as well as RegretFormer (Ivanov et al., 2022) and CITransNet (Duan et al., 2022), both of which use self-attention layers to make mechanisms that are expressive and permutation-equivariant. 
What all of these approaches have in common is that they parameterize the auction with a neural network. In this paper, we focus on the original RegretNet approach, where the network backbone is a simple multi-layer perceptron. ### Learning problem To explicitly formulate strategyproof auctions as a deep learning problem in terms of minimizing regret, we must first define the concepts of regret and bidder utility. An auction \(f=(g,p)\) can be parameterized \(f(b;\theta)=[g(b),p(b)]\), where \(\theta\) denotes the trainable parameters. Let bidder \(i\) have a utility function defined as follows - the _welfare_ from items received, minus the _payment_. \[u_{i}^{\theta}(v_{i};b)=v_{i}(g_{i}^{\theta}(b))-p_{i}^{\theta}(b) \tag{2}\] Using this definition of bidder utility, we can define a bidder's regret as the difference in utility between bidding truthfully and bidding to maximize utility. For bidder \(i\), **rgt** as a function of the private valuations \(v\) is defined as follows. \[\textbf{rgt}_{i}(v)=\max_{b_{i}\in V_{i}}u_{i}^{\theta}(v_{i};(b_{i},v_{-i}))- u_{i}^{\theta}(v_{i};(v_{i},v_{-i})) \tag{3}\] Where **rgt** without a specific index is the expected value of \(\textbf{rgt}_{i}\) for all \(i\), which we compute as the average regret value over a large sample of inputs. Also, agents are subject to individually rationality (IR), and would not participate in an auction that could lead to negative utility. Finally, the optimization problem to find a feasible, strategyproof, revenue-maximizing auction can be formulated by minimizing the expected negated total payment subject to no-regret and IR. \[\min_{\theta\in\mathbb{R}^{d}} \quad\operatorname*{\mathbb{E}}_{v\sim P}\left[-\sum_{i\in N}p_{i}^ {\theta}(v)\right] \tag{4}\] \[\text{s.t.}\ \textbf{rgt}_{i}(v) \approx 0,\forall i\in N,v\] \[u_{i}(v) \geq 0\] \[\sum_{i\in N}g_{i,j}(v) \leq 1,\forall j\in M\] The neural architecture we use to solve this problem has three components. The _backbone_ is a multi-layer perceptron with ReLU activations where the depth and width are specified per auction size below. This component takes as input the \(n\times m\) bid array (bids from all bidders for all items) and produces a feature vector. The other two components of the network are output modules, one for allocations and one for payments. The _allocation module_ takes the feature vector and passes it through a single fully connected layer to produce an \((n+1)\times m\) output. We compute an item-wise softmax over the allocations to ensure that no more than one unit of each item is allocated. Note, the extra row in the allocation output allows for allocating less than a whole item to the bidders, accounting for times when the auction house may not sell the entire good. The _payment module_ is similar, consisting of one fully connected layer to map the feature vector to a vector of length \(n\), to which we apply a entry-wise sigmoid. These values correspond to payments, which are a fraction of bidder welfare. The actual fee each agent is charged is computed by multiplying their payment fraction by their welfare. To estimate the degree of regret, there is an inner loop during training, performing gradient steps on the inputs to approximately maximize the utility from misreporting. Given these basic design choices, RegretNet maximizes payment and includes the (estimated) regret in an augmented Lagrangian term to enforce the constraints. For the complete training details, see the original paper (Dutting et al., 2019). 
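To make the architecture described above concrete, the following is a minimal sketch of such an allocation/payment network, assuming a PyTorch implementation; the layer widths and module names are illustrative and are not the exact configuration used in the paper.

```python
# Minimal sketch (not the authors' code) of a RegretNet-style network: an MLP
# backbone over the flattened n x m bids, an allocation head with an item-wise
# softmax over n+1 rows (the extra "ghost" row models unsold fractions), and a
# payment head producing per-bidder payment fractions in [0, 1].
import torch
import torch.nn as nn

class AuctionNet(nn.Module):
    def __init__(self, n_bidders, n_items, hidden=100, depth=2):
        super().__init__()
        layers, width = [], n_bidders * n_items
        for _ in range(depth):
            layers += [nn.Linear(width, hidden), nn.ReLU()]
            width = hidden
        self.backbone = nn.Sequential(*layers)
        self.alloc_head = nn.Linear(hidden, (n_bidders + 1) * n_items)
        self.pay_head = nn.Linear(hidden, n_bidders)
        self.n, self.m = n_bidders, n_items

    def forward(self, bids):                                    # bids: (batch, n, m)
        h = self.backbone(bids.flatten(1))
        logits = self.alloc_head(h).view(-1, self.n + 1, self.m)
        alloc = torch.softmax(logits, dim=1)[:, : self.n, :]    # item-wise softmax, drop ghost row
        welfare = (alloc * bids).sum(dim=2)                     # additive valuations
        pay_frac = torch.sigmoid(self.pay_head(h))
        payments = pay_frac * welfare                           # fraction of welfare, so IR holds for truthful bids
        return alloc, payments
```

The inner misreport-optimization loop used to estimate regret during training, and the augmented Lagrangian terms, are omitted from this sketch.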
After training, we have allocation and payment networks that have very low regret (so the mechanism is very nearly DSIC). ### Performance Baselines To contextualize the benefit of using neural networks, it is critical to study the alternatives. Without learning auctions from data, one might execute independent Myerson auctions. For \(m\) items, it is possible to run \(m\) independent Myerson auctions, preserving strategyproofness. While these are not revenue-optimal, they at least have revenue which is optimized separately for each item, and in the limit of the number of bidders they approach optimality (Palfrey, 1983) so they are useful baseline for learned mechanisms. An alternative baseline for strategyproof auctions is AMAs. Curry et al. (2022) employ Lottery AMAs to improve revenue compared to the item-wise Myerson auction, while remaining fully strategyproof. These auctions are not revenue-optimal but as opposed to other possible baselines (i.e., MenuNet, RochetNet, etc) AMAs are applicable to the multi-agent setting. Table 1 shows the revenue and regret of the mechanisms we study. These figures are consistent with the notion that without considering privacy, RegretNet provides the auction house with more revenue than our baseline auctions and is nearly DSIC. We study the most common setting, where bidders draw valuations independently from \(U[0,1]\). See Appendix A.1 for additional training hyperparameters and model architecture details. For further detail we show the revenue and regret numbers for different baselines for the two agent, two item setting in Table 2. ## 4 Privacy in Auctions The central line of inquiry in our work aims in part to answer the question: Given the auction mechanism and knowledge of the results of a particular auction, how easy is it to recover bidders' private valuations using model inversion techniques? We show that in a neural auction, an outside observer can, in fact, retrieve information about the bids. Furthermore, we show that a participant in the auction, who may use their own bid information in the process, can recover even \begin{table} \begin{tabular}{l l l l} \hline \hline & \multicolumn{2}{c}{RegretNet} & \multicolumn{1}{c}{Myerson} \\ \cline{2-3} Size & Revenue & Regret & Revenue \\ \hline \(1\times 2\) & \(0.5720\) & \(0.0011\) & \(0.500\) \\ \(2\times 2\) & \(0.8818\) & \(0.0008\) & \(0.833\) \\ \(3\times 2\) & \(1.1284\) & \(0.0028\) & \(1.063\) \\ \(2\times 3\) & \(1.3522\) & \(0.0040\) & \(1.250\) \\ \(3\times 3\) & \(1.6837\) & \(0.0046\) & \(1.594\) \\ \hline \hline \end{tabular} \end{table} Table 1: **Average revenue and regret for the mechanisms we consider.** All agents have additive valuations drawn from \(U[0,1]\). Myerson Auctions have 0 regret by construction. For RegretNet metrics, we show averages over ten random seeds. \begin{table} \begin{tabular}{l l l} \hline \hline Auction & Revenue & Regret \\ \hline Item-Wise Myerson & \(0.833\) & 0 \\ Lottery AMA (Curry et al., 2022) & \(0.868\) & 0 \\ ALGNet (Rahme et al., 2021) & \(0.879\) & \(<0.001\) \\ RegretNet (Dutting et al., 2019) & \(0.882\) & \(<0.001\) \\ \hline \hline \end{tabular} \end{table} Table 2: **Revenue for \(2\times 2\) auctions for different baselines as compared to RegretNet.** Baselines with zero regret are zero regret by construction more information about other bidders. This vulnerability renders these mechanisms flawed in settings where bidder information must remain private. 
### Threat Model We consider two cases where the adversary has varying degrees of information at their disposal. First, we treat the case where an outside observer with no knowledge of any bids attempts to invert the model after observing the allocations and payments. We also look at the situation where one of the bidders, who knows their own bid, aims to recover other bidders' information. In both cases, we assume that the mechanism itself (i.e. the network weights) is publicly available - this is a reasonable assumption as it is typically assumed that auction participants know what the mechanism is. An adversary can attempt to guess bids \(x^{*}\) by selecting the inputs that produce payments and allocations close to the true payments and allocations. It is important to note that this strategy assumes that for a given set of payment and allocation outputs, there is only one input - or that the model is a bijective function. This assumption is strong for general mechanisms, but in practice we find _neural_ auctions to be invertible suggesting that it is often the case. Therefore, identifying \(x^{*}\) by solving the optimization problem below is akin to finding the true bids \(b\). \[x^{*}=\operatorname*{arg\,min}_{x\in\text{supp}(P)}\|g(x)-g(b)\|_{\ell_{2}}+ \|p(x)-p(b)\|_{\ell_{2}} \tag{5}\] As is standard in the auction literature, our implementation assumes the adversary knows the distribution \(P\) from which agents draw their valuations - this is utilized by initializing the guess to be a random sample from that distribution (see A.2 for more details). However it is worth mentioning that even in settings without a view about the bid generating distribution, an adversary that initializes their bids with zeros is able to successfully learn private information about bids (see Appendix A.5) Note that in the setting where the adversary is also participating in the mechanism as a bidder, the free variables in the Equation (5) correspond only to the other bidders' bids. ### Privacy Metrics and Baselines In order to quantify how successful an adversary is able to invert an auction we introduce privacy metrics and baselines. We measure the adversary's success in two ways. The first is by computing the percentage of recovered bids that are within some tolerance of the true bids. The second method is to compute the average absolute error in the bid reconstruction over some large set of examples. Recall that our focus is the setting in which the adversary knows the distribution from which agents draw their bids. Complete privacy would ensure that our inversion is more accurate than a random draw from the bid generating distribution. However, a more applicable goal would be to beat the privacy of the itemwise Myerson auctions, since they often serve as a baseline for comparison for learned mechanisms. Because we know that Myerson auctions are exactly strategyproof, any optimal neural auction must be at least as private as a Myerson auction to be strictly more desirable. See Appendix A.3 for a discussion of the privacy of Myerson auctions. ### Neural Auctions Are Not Private We examine auctions of five sizes and for each, we show how much information the adversary can extract. When the mechanism is learned, and therefore differentiable, the adversary uses gradient-based optimization to invert the model. Specifically, we execute gradient descent to solve the problem in Equation (5), projecting the bids onto \([0,1]^{mn}\) at each iteration. Section A.2 gives the model inversion algorithm. 
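A minimal sketch of this inversion procedure is given below, assuming a differentiable PyTorch mechanism that returns allocations and payments; the optimizer, step count, and names are illustrative rather than the exact algorithm of Section A.2. In the case where the adversary is itself a bidder, only the other bidders' entries of the guess would be treated as free variables.

```python
# Sketch of the inversion attack in Equation (5): starting from a random draw
# from the valuation distribution, run projected gradient-based optimization on
# candidate bids so that the mechanism's outputs match the observed allocations
# and payments. `mechanism` is assumed to be a differentiable auction network.
import torch

def invert_bids(mechanism, obs_alloc, obs_pay, n, m, steps=2000, lr=0.01):
    x = torch.rand(1, n, m, requires_grad=True)   # guess initialized from U[0,1]
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        alloc, pay = mechanism(x)
        loss = torch.norm(alloc - obs_alloc) + torch.norm(pay - obs_pay)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0.0, 1.0)                    # project onto the valid bid box [0,1]^{nm}
    return x.detach()
```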
It is important to note that as opposed to most cases of model inversion that focus on the setting where an adversary is trying to learn training inputs, here the adversary cares about learning test inputs. This separates our analysis from prior work on the vulnerabilities of neural networks. Table 3 shows the recovery rates for several sizes of auctions at both levels of information access. We measure the recovery rates here as a percentage of bids that were successfully extracted from the model inversion to within a tolerance of \(\pm 0.02\). This cutoff was chosen arbitrarily but serves mainly as a way to demonstrate being "close" to exact. For a distance measure, we also included the absolute error of the prediction in table 4. These figures do not give the adversary any guarantees. In particular, we do not claim a bound on the error for any given entry in their reconstructed bid matrix. However, this is still a major vulnerability for the auction house since the adversary has a high likelihood that their estimate of any particular bid is correct. While the figures below are for the case where the adversary knows that the bids are drawn from a uniform distribution \(P_{i}\sim U[0,1]\) in Section A.5 we show similar results even when the adversary has no information about \(P\) (except that no types are negative) and instead initializes guesses to zeros. Furthermore, we compare to the recovery rate one might get from randomly guessing bids from the known valuation distribution. With the figures in the last row of Table 3, we highlight just how much more information the adversary has as a direct result of inverting the mechanism. Recovery rates are a relatable figure and for this reason we choose to describe the adversary's success this way, but the results in Table 3 are dependant on the tolerance we set. For metrics that are perhaps less relatable but that do not depend on any hand-picked tolerance, we look at the mean absolute error (MAE). Specifically, Table 4 shows the average distance from the adversary's reconstruction to the true bid value. Whether considering MAE or recovery rates, it is clear that inverting neural auctions to uncover private bidder information is a real privacy vulnerability. ## 5 Non-Determinism for Privacy With a clear picture of how vulnerable neural models are, we propose a method to prevent inversion. Our technique, which employs stochasticity to help obscure the inputs, comes at a cost to the auction house in expected revenue. We describe our defense to inversion attacks and we characterize the relationship between degrees of privacy and the cost to the auction house. ### Stochasticity That Satisfies Auction Constraints Many previous works (e.g. Sandholm, 2003) discuss stochasticity in mechanisms, often for the sake of computational efficiency. We use stochasticity to make inversion more difficult - we perturb the allocations and payments output by the network. To satisfy individual rationality and ensure that at most one unit of each item is allocated, this perturbation requires care as follows. RegretNet, by default, employs several architectural choices to meet the ex post individual rationality criteria. Specifically, the model first allocates all items to \(n+1\) bidders, where the extra "ghost" bidder accounts for portions not sold. Using this ghost bidder allows the network to enforce, via softmax, that the allocations per item sum to one. We refer to the input to this softmax as _allocation logits_ and the output as _allocations_. 
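The defense developed in the remainder of this section adds zero-mean Gaussian noise to these allocation logits before the item-wise softmax. A minimal sketch, assuming PyTorch and an illustrative noise scale \(\sigma\), is the following:

```python
# Sketch of the stochastic defense: zero-mean Gaussian noise (standard deviation
# sigma) is added to the allocation logits of shape (batch, n+1, m) *before* the
# item-wise softmax, so the perturbed allocations stay non-negative and each
# item's allocation still sums to one across bidders plus the ghost row.
import torch

def noisy_allocations(alloc_logits, sigma):
    noise = sigma * torch.randn_like(alloc_logits)
    return torch.softmax(alloc_logits + noise, dim=1)
```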
RegretNet also generates a payment factor, or a positive fraction less than one, and multiplies this by the value of items allocated to each bidder to compute payments that are guaranteed to satisfy ex post individual rationality. With this in mind, we choose to perturb the allocations matrix before the softmax is applied. If we perturb the allocations after the softmax, we risk entries in the allocation matrix being negative or the total allocation of a single item exceeding one unit. We add independent mean zero Gaussian noise to each of the allocation logits and then use softmax to constrain the allocations matrix to have entries between zero and one and column-wise sums equal to one. The magnitude of this noise is controlled by the standard deviation \(\sigma\) and we explore how this parameter affects the privacy metrics. ### Metrics of Randomized Auctions Measuring the payment and regret of a model, for example when evaluating performance after training, requires estimating two expectations. First, we estimate the average regret over bidders (and samples). \[\textbf{rgt}(\theta)=\operatorname*{\mathbb{E}}_{v\sim P}\left[\textbf{rgt}(v)\right] \tag{6}\] \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{5}{c}{Bidders \(\times\) Items} \\ & 1 \(\times\) 2 & 2 \(\times\) 2 & 3 \(\times\) 2 & 2 \(\times\) 3 & 3 \(\times\) 3 \\ \hline W/ bids & – & \(0.02\) & \(0.03\) & \(0.01\) & \(0.021\) \\ W/O any bids & \(0.03\) & \(0.01\) & \(0.03\) & \(0.10\) & \(0.064\) \\ \hline \hline \end{tabular} \end{table} Table 4: **Bid reconstruction error.** Adversary’s MAE for different auctions. Note, for random guess sampled from the same distribution as the valuations, the MAE is \(\frac{1}{3}\) on average. Also, the standard error for every entry in this table is \(<0.01\). Figure 1: **Privacy and revenue in 2 \(\times\) 2 auctions.** The recovery rate measures how often the inverted bids are within \(\pm 0.02\) of the true values. With small \(\sigma\) values that barely affect the expected revenue, we can mitigate model inversion. For example, note that when \(\sigma=0.2\), the revenue is steadily above the Myerson baseline but the recovery rate is cut in half. \begin{table} \begin{tabular}{l c c c c c} \hline \hline & \multicolumn{5}{c}{Bidders \(\times\) Items} \\ & 1 \(\times\) 2 & 2 \(\times\) 2 & 3 \(\times\) 2 & 2 \(\times\) 3 & 3 \(\times\) 3 \\ \hline W/ bids & – & \(84.98\pm 0.50\) & \(82.62\pm 0.39\) & \(91.44\pm 0.45\) & \(82.97\pm 6.58\) \\ W/O any bids & \(77.92\pm 1.47\) & \(93.08\pm 0.64\) & \(78.27\pm 0.60\) & \(25.69\pm 1.40\) & \(53.53\pm 4.15\) \\ \hline \hline \end{tabular} \end{table} Table 3: **Bid recovery rates.** For different auction sizes, we show the portion of the bids (in percentage \(\pm\) standard error) that the adversary can recover within a tolerance of \(\pm 0.02\). Note, for bids drawn i.i.d. from \(U[0,1]\) a random guess sampled from the same distribution is within the tolerance \(3.96\%\) of the time. Whether the adversary has any true bidder information or not, their recovery rate is well above random. Second, we need to estimate the average payment, \[\mathbf{p}(\theta)=\mathop{\mathbb{E}}_{v\sim P}\left[\sum_{i\in N}p_{i}^{\theta}( v)\right]. \tag{7}\] Our models have stochasticity added "inside" the mechanism before the last layer. However, due to the use of linear, additive utilities, the expected welfare from winning items averaged over the stochasticity is the same as the welfare of the expected allocation. 
That is to say that because the welfare is a linear function of the allocation, we can use the average allocation to compute the expected welfare without having to recompute the average welfare directly. Thus, when estimating payments, we can simply average over a large number of sampled bid profiles with sampled noise, as usual. To estimate regret under a single bid profile, we need to compute two utility terms in the regret for a given set of bids. We sample bid profiles and compute optimized misreports for each one. Then, given bids and misreports, we can compute expected allocations/payments in order to compute the two utility terms in Equation (3). ### Cost of Privacy For an auction house interested in carrying out private auctions, our stochastic technique comes with a trade-off. As the magnitude of the noise increases the auction becomes more private but it also realizes less revenue on average. Consider the \(2\times 2\) auction, where RegretNet offers bidders access to nearly all of the other bidders' bids. In Figure 1, we show that by adding a stochastic layer to the network whose noise has standard deviation \(\sigma=0.2\) the auction becomes twice as private (see the decay in the blue curves). Furthermore, with \(\sigma=1.0\), the adversary's bid recovery rate drops to around 10%. By examining the orange curve in Figure 1, we see that the cost to the auction house is about 0.005 units or just over 0.5%. We show similar trends in the \(3\times 2\) and \(2\times 3\) auctions, whose privacy-revenue trade-offs are shown in Figures 3 and 4. From these results, we identify two major take-aways. First, the cost to the auction house to reduce the recovery rates to 10% is less than 1% of the average revenue from unperturbed auctions; and these final reduced revenues are all above the Myerson threshold. Second, the scales on the left-hand vertical axes of Figures 3 and 4 indicate that even with more items these mechanisms leak private information. An auction house that requires complete privacy may be interested in finding a \(\sigma\) value large enough that the adversary is not able to extract more information than a random guess. For example, for \(2\times 2\) auctions, Figure 5 shows that the recovery rate decays exponentially but the revenue can fall below the Myerson revenue. With this in mind, we show that for total privacy, \(\sigma\) must be large enough that the revenue falls by around 20%. In other words, there is a huge cost for complete privacy in these mechanisms. See Appendix A.4 for similar plots from other sizes of auctions. Figure 4: **Privacy and revenue in 3 \(\times\) 3 auctions.** The recovery rate measures how often the inverted bids are within \(\pm 0.02\) of the true values. Figure 3: **Privacy and revenue in 2 \(\times\) 3 auctions.** The recovery rate measures how often the inverted bids are within \(\pm 0.02\) of the true values. Figure 2: **Adversary’s MAE in 2 \(\times\) 2 auctions.** For a metric without a hand-picked parameter, we show MAE. Lower values correspond to more privacy. With this metric, it is clear that a lot of privacy can be gained with very little noise, but ensuring that the inversion is only as good as a random guess requires a large \(\sigma\) value. ## 6 Discussion We illuminate a privacy vulnerability of learned auction mechanisms that warrants attention. Auction houses are interested in offering their bidders privacy and we show that with neural networks this comes at a cost. 
We propose a defense against model inversion attacks that can introduce privacy to already-trained models. Lastly, we analyze how much revenue the auction house can expect to lose for various degrees of privacy, and we find that total privacy comes at an extremely high cost. We believe that privacy in auctions is a critical consideration when evaluating the efficacy of the mechanism. As such, the lack of privacy in neural auctions should be considered a significant flaw and is worth exploring further. In particular, there might be other inversion methods that would pose an even greater threat to the privacy of neural auctions. Additionally, while differential privacy is outside the scope of this paper, given that it is primarily concerned with learning information about training data rather than the mechanism, auctions trained on real data would have the additional requirement of safeguarding training datasets. We see our approach as a proof of concept that neural auctions are not private. To our knowledge, neural auction mechanisms have not yet been deployed in practice, so we do not think that our proof-of-concept attacks pose an immediate risk to privacy. We view thinking through the privacy of neural auctions as being of interest in itself, and as important before they are deployed. Even in the face of a very naive attack, it is quite easy to infer information about bids by looking at the allocations. We present a method that can prevent this; however, our current model of an attacker is relatively naive, and it is possible that a craftier adversary may yet succeed. We do not give any formal guarantees in the spirit of differential privacy, although this might be a fruitful direction for future work (for example, by bounding the Lipschitz constants of the auction networks (Anil et al., 2019; Yoshida and Miyato, 2017)). Future work might also wish to consider larger auctions, and investigate the privacy properties of other neural auction architectures besides RegretNet (e.g., Duan et al. (2022); Ivanov et al. (2022); Rahme et al. (2021)).
2309.14886
Two-Loop Amplitude Reduction with HELAC
We discuss recent progress towards extending the Helac framework to the calculation of two-loop amplitudes. A general algorithm for the automated computation of two-loop integrands is described. The algorithm covers all the steps of the computation, from the generation of loop topologies up to the construction of recursion relations for two loop integrands. Finally, first steps towards the formulation of a new approach for reducing two-loop amplitudes to a basis of master integrals are discussed.
Giuseppe Bevilacqua, Dhimiter Canko, Costas Papadopoulos
2023-09-26T12:38:49Z
http://arxiv.org/abs/2309.14886v4
# Two-Loop Amplitude Reduction with HELAC ###### Abstract: We discuss recent progress towards extending the Helac framework to the calculation of two-loop amplitudes. A general algorithm for the automated computation of two-loop integrands is described. The algorithm covers all the steps of the computation, from the generation of loop topologies up to the construction of recursion relations for two-loop integrands. Finally, first steps towards the formulation of a new approach for reducing two-loop amplitudes to a basis of master integrals are discussed. Introduction Continuous improvements in the precision of the data collected at the LHC are demanding corresponding improvements of theoretical predictions. This is imperative for QCD processes in particular, given that hadronic interactions are ubiquitous at the LHC. The calculation of hard-scattering cross sections is at the core of perturbative QCD and is organised in terms of an expansion in powers of the strong coupling constant. On a general ground, the precision frontier of perturbative calculations to date is set to NNLO1. An up-to-date reference list of processes which are desired with the highest accuracy possible can be found in Ref.[1]. Footnote 1: For \(2\to 2\) processes the current frontier is N\({}^{3}\)LO. Generally speaking, the bottleneck of NNLO calculations is identified in double-virtual corrections, more precisely in the calculation of two-loop amplitudes. Although the workflow of two-loop calculations varies upon the specific method considered, the key aspects can be summarized as follows: 1. _construction_ of two-loop integrands. This can be achieved either computing individual Feynman diagrams or using recursion relations; 2. _reduction_ of two-loop amplitudes in terms of Master Integrals [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21]. This can be achieved at integrand level or at integral level; 3. _calculation_ of Master Integrals [22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40] based on analytical or (semi-)numerical methods. Each of these steps comes with its own challenges, and the developements during the last years have yielded a number of results for \(2\to 3\) cross sections at NNLO accuracy [41, 42, 43, 44, 45]. In this contribution we present steps towards the construction of Helac-2loop, a framework for automated two-loop calculations. We focus in particular on items 1 and 2 in the list above. In Section 2 we illustrate the basic details of the algorithm for the computation of two-loop integrand functions. A few benchmark results are presented in Section 3. Finally, in Section 4 we sketch the approach for the reduction of two-loop amplitudes that we plan to implement in our framework. ## 2 Construction of two-loop integrands Before starting the discussion, it is convenient to introduce some basic notation. We denote a generic scattering process described by \(n\) external particles with the flavors of the latter, \(\{f_{1},f_{2},\ldots,f_{n}\}\). Color degrees of freedom are treated in the _color-flow_ representation [46]. In this representation external gluons are labelled with a doublet of color indices \(\{i,j\}\), while external quarks and anti-quarks receive the labels \(\{i,0\}\) and \(\{0,j\}\) respectively. 
A generic color state is uniquely identified by the product \(\delta_{j_{1}}^{\sigma_{1}}\delta_{j_{2}}^{\sigma_{2}}\cdots\delta_{j_{n}}^{ \sigma_{m}}\), where \(\{\sigma_{1},\sigma_{2},\ldots,\sigma_{n}\}\) denotes a permutation of the set \(\{1,2,\ldots,n\}\). Thus, for a generic process consisting of \(n_{g}\) external gluons and \(n_{q}+n_{\bar{q}}\) quarks, the number of color states is \((n_{g}+(n_{q}+n_{\bar{q}})/2)!\) (see _e.g._ Ref. [47, 48] for more details). The calculation of scattering amplitudes is organised recursively. The building blocks of the recursion are objects named currents, _i.e._ tree-level sub-amplitudes built out of a subset \(S_{B}\subset\{1,\ldots,n\}\) of the external particles. Currents are combined into sub-amplitudes of increasing complexity till the full amplitude is reached (see Ref. [47, 48] for more details). Each current (hereafter also denoted _blob_, or \(B\)) is labeled with a unique integer \(\mathrm{ID(B)}=\sum_{i\in\mathbb{S}_{\mathbf{b}}}2^{i-1}\). The first step towards the construction of two-loop integrands consists in the generation of _loop topologies_. A key observation is that all two-loop topologies describing arbitrary processes in the Standard Model2 (SM) fall into one of the three master categories shown in Figure 1, that we name "_Theta_", "_Infinity_" and "_Dumbbell_" for brevity. The shaded-gray areas \(K_{i}\) appearing therein, that we also refer to as _sectors_, denote schematically a set \(\{B_{1},\ldots,B_{L_{i}}\}\) of tree-level blobs inserting to the \(i\)-th propagator of the loop, where \(K_{i}=\sum_{j=1}^{L_{i}}\mathrm{ID(B_{j})}\). Similarly \(A,B\) denote generically a blob that can possibly insert into each of the loop vertices. The number of blobs appearing in each sector, together with the ordering in which they are displaced, defines a topology. We note, however, that there is no one-to-one correspondence between blob orderings and loop topologies: the reason is that the latter are symmetric under the operations of left/-right and up-down reversal. Furthermore, _Theta_ topologies are symmetric under any permutation of the sectors \(K_{1},K_{2},K_{3}\). Configurations of blob orderings that are left-right or up-down symmetric are identified at this stage in order to avoid double counting. For the same reason, all _Theta_ topologies are generated according to the rule \(K_{1}>K_{2}>K_{3}\). Further details concerning the topological generation phase can be found in Ref. [49]. Footnote 2: More generally, this statement is true for any model whose Feynman rules include up to four-particle vertices The topologies generated at this stage carry no information about flavor and color of the propagators in the loop. This information is generated in the next step, referred as _color-flavor dressing_ (schematically illustrated in Figure 3 in the context of a simple example). The procedure can be summarised in the following steps: 1. we cut one propagator in the two-loop topology (using the conventions shown in Figure 1). In this way we take the perspective of a one-loop topology describing a \((n+2)\)-particle process with flavors \(\{f_{1},f_{2},\ldots,f_{n},F_{n+1},F_{n+2}\}\). We refer to the latter as the _equivalent one-loop configuration_ for brevity. The flavors \(F_{n+1},F_{n+2}\) refer to the two additional particles originated by the cut and can take any value allowed to run into the loop. This step is schematised in Figure 2. 
We draw blobs using two different colors to stress that they are computed using different approaches. The blue blobs denote tree-level currents computed with standard Dyson-Schwinger recursion, _i.e._ all possible sub-amplitudes compatible with the external-particle content are incorporated. By contrast, the recursion defining the red Figure 1: Master categories of two-loop topologies: _Theta_ (a), _Infinity_ (b), _Dumbbell_ (c). The red lines indicate the propagator that is cut in order to express the two-loop topology as an equivalent \((n+2)\)-particle process at one-loop (explained in the text). blobs is partially constrained according to the non-trivial internal structure induced by the loop propagator that has been cut (shown in yellow); 2. we treat \(F_{n+1},F_{n+2}\) as external particles with truly independent degrees of freedom in the context of the one-loop process \(\{f_{1},f_{2},\ldots,f_{n},F_{n+1},F_{n+2}\}\). This allows us to reuse (with proper modifications) the apparatus for one-loop integrand computations already developed in Helac-1loop[50, 51]. Of course, \(F_{n+1},F_{n+2}\) are constrained in both flavor and color as they are originated by cutting a propagator. Thus, in our strategy, a redundant number of degrees of freedom is considered at this intermediate stage. The latter shall be projected into the lower-dimensional space describing the physical \(n\)-particle process3; Footnote 3: For the color degrees of freedom, the projection is understood as a contraction of the form \(\left(\delta_{j_{1}}^{i_{\sigma_{1}}}\delta_{j_{2}}^{i_{\sigma_{2}}}\cdots \delta_{j_{n}}^{i_{\sigma_{n}}}\delta_{j_{n+1}}^{i_{\sigma_{n+2}}}\right) \left(\delta_{i_{n+2}}^{i_{\sigma_{1}}}\delta_{j_{2}}^{i_{\sigma_{2}}}\cdots \delta_{j_{n}}^{i_{\sigma_{n}}}\right)\), where \(\{\sigma_{1}^{\prime},\ldots,\sigma_{n}^{\prime}\}\) (\(\{\sigma_{1},\ldots,\sigma_{n+2}\}\)) is a permutation of \(\{1,\ldots,n\}\) (\(\{1,\ldots,n+2\}\)), \(N_{C}\) is the number of colors, and \(\alpha=0\) or \(1\) depending on the configuration. 3. with reference to the one-loop configuration mentioned in the previous step, the construction of the integrand proceeds following the conceptual design of Helac-1loop. The one-loop propagators are dressed with colors using all possible configurations which conserve color flow at each vertex of the loop. Similarly, propagators are dressed with flavors in all ways that are compatible with the SM. The one-loop topology, in turn, is cut, originating two additional external particles \(F_{n+3},F_{n+4}\). This sets the stage of a tree-level, \((n+4)\)-particle process with flavors \(\{f_{1},f_{2},\ldots,f_{n},F_{n+1},F_{n+2},F_{n+3},F_{n+4}\}\). At this point the computation of one-loop numerator functions can be addressed with fully automated tree-level technology. More technical details can be found in Ref. [50, 51]. In the last step, a set of recursion relations is constructed for each two-loop topology. These encode the instructions to compute numerators that will be used by Helac during numerical evaluations. They can be generated once and stored in a file named _skeleton_. For illustrative purposes, we provide in Figure 4 an example showing how a two-loop numerator is represented Figure 2: Relations among cut two-loop topologies and their equivalent one-loop configuration in \((n+2)\) kinematics: _Theta_ (a), _Infinity_ (b), _Dumbbell_ (c). The blobs denote tree-level currents which are attached to the loop propagators. in our framework. 
The example refers to the 110th numerator out of a total of 332 numerators contributing to a specific color connection stored in the skeleton. Each line reports the instructions to compute a single current involved in the recursive calculation of the numerator. The information concerning the two-loop topology is stored in the last line. We have performed a number of tests at the level of individual two-loop numerators in order to validate our implementation. Our results have been found in excellent agreement with the numerical output generated with the help of FeynArts + FeynCalc[52, 53]. ## 3 Benchmark results To give an idea of the computational cost of two-loop numerator generations, we provide in Table 1 some details concerning the requirements of CPU time and disk occupancy for a few benchmark processes. We focus in particular on four-gluon and five-gluon amplitudes. We consider the Leading Color (LC) approximation as benchmark for our comparisons. For the four-gluon case we also show results based on Full Color (FC). The purpose is to understand how the computational cost scales with increasing multiplicities and/or improved color treatment. In all cases we also report the total number of two-loop numerators as a measure of the complexity of the calculation. Comparing the first two rows of Table 1 we observe that, when going from LC to FC, the memory Figure 3: Schematic illustration of color-flavor dressing for a simple two-loop topology. size increases by one order of magnitude. This is consistent with the larger number of numerators that are generated and stored in FC. Similarly the CPU time scales almost by a factor of 30. A more dramatic impact on scaling is observed, in contrast, when adding one more gluon in the computation while taking the LC viewpoint, as can be inferred by comparing the second and third lines of Table 1. In this case, CPU time and memory scale approximately by factors 90 and 30 respectively. We would like to stress that there is still room for better organisation of numerators and optimisation of the whole generation chain, which is currently under study. This is expected to improve the performance both in terms of computing efficiency and storage. ## 4 Steps towards a new approach for two-loop amplitude reduction Let us consider a two-loop topology characterised by \(N_{P}\) loop propagators for a generic process with \(n\) external particles. Let \(\{D_{1},\ldots,D_{N_{P}}\}\) be the denominators of the loop propagators (we will refer to them as _propagators_ in the following for brevity), \(\{p_{1},\ldots,p_{n}\}\) the four-dimensional momenta of the external particles and \(\{k_{1},k_{2}\}\) the loop momenta. The latter, expressed in \(d\) space-time dimensions, are decomposed as \(k_{i}=\bar{k}_{i}+k_{i}^{*}\) where \(\bar{k}_{i}\) is the four-dimensional and \begin{table} \begin{tabular}{|c|c|c|c|c|c|} \hline **Process** & \(\mathbf{N_{Loops}}\) & **Color** & **Numerators** & **Size [MB]** & **Timing [s]** \\ \hline \hline \(gg\to gg\) & 2 & Full & 89392 & 111 & 415 \\ \hline \(gg\to gg\) & 2 & Leading & 4560 & 9 & 15 \\ \hline \(gg\to ggg\) & 2 & Leading & 81480 & 300 & 1303 \\ \hline \hline \(gg\to gg\) & 1 & Full & 768 & 0.5 & 2 \\ \hline \(gg\to ggg\) & 1 & Full & 11496 & 15 & 533 \\ \hline \end{tabular} \end{table} Table 1: Details of CPU time and disk space required to generate and store all two-loop numerators for a few reference processes. 
_Leading_ refers to leading-color approximation, _Full_ denotes full treatment of color degrees of freedom. All numbers have been obtained on an Intel 1.80 GHz processor with the gfortran compiler using the option -O3. Results of one-loop generations are also reported for comparison. Figure 4: Internal representation of a two-loop numerator in the Helac framework. All lines except the last one are in the same format as in Helac-1Loop[51]. The last line includes in addition information on the two-loop topology [54]: the second and third numbers, 1 and 12, are used to specify the denominator structure of the given numerator. the extra-dimensional part. The scalar product reads \(k_{i}\cdot k_{j}=\bar{k}_{i}\cdot\bar{k}_{j}+\mu_{ij}\) (\(i=1,2\)), where we have defined \(\mu_{ij}=k_{i}^{*}\cdot k_{j}^{*}\). The most general two-loop integrand can be constructed out of the Lorentz scalars \[p_{i}\cdot p_{j}\,,\qquad k_{i}\cdot k_{j}\,,\qquad k_{i}\cdot p_{j}\,,\qquad k _{i}\cdot\eta_{j}\,, \tag{1}\] where \(\eta_{j}\) are _transverse vectors_ defined such that \(\eta_{i}\cdot p_{j}=0\). The integrand is understood to be a rational function of the form \[\mathcal{R}=\frac{\mathcal{N}}{\mathcal{D}}\equiv\frac{\sum_{a}c_{a}(\vec{s}, \varepsilon)\,\left(z_{1}^{(a)}\right)^{\beta_{1}}\cdots\,\left(z_{n_{a}}^{(a )}\right)^{\beta_{n_{a}}}}{D_{1}\cdots D_{N_{P}}}\,, \tag{2}\] where the \(\beta\)'s are integers, \(\vec{s}\) denotes generically scalar products of the form \(p_{i}\cdot p_{j}\) and \(z^{(a)}\in S=\{k_{i}\cdot k_{j},\ k_{i}\cdot p_{j},\ k_{i}\cdot\eta_{j}\}\). Those scalars \(z^{(a)}\) which admit a decomposition in terms of linear combinations of propagators \(D_{i}\) cancel with the denominator. Then, the integrand can be recast in the form \[\mathcal{R}=\sum_{m=0}^{N_{P}}\sum_{\sigma}\frac{\sum_{j}\tilde{c}_{j}^{( \sigma)}(\vec{s},\varepsilon)\,\prod_{k=1}^{n_{\text{TASD}}^{(m)}}\left(\bar{ z}_{k}^{(\sigma)}\right)^{\alpha_{k}^{(j)}}}{D_{\sigma_{1}}\cdots D_{\sigma_{m}}} \tag{3}\] where the \(\alpha\)'s are integers and \(\sigma\) denotes any possible subset of \(\{1,\ldots,N_{P}\}\) consisting of \(m\) elements. The residual scalar products appearing in the numerator (that we label as \(\bar{z}_{k}^{(j)}\) for clarity) include both the transverse ones (T), \(k_{i}\cdot\eta_{j}\), as well as what is known as _irreducible scalar products_ (ISP). Based on Eq.(3), one can express the numerator function \(\mathcal{N}\) using the following equation: \[\mathcal{N}=\sum_{m=0}^{N_{P}}\sum_{\sigma}\sum_{j}\tilde{c}_{j}^{(\sigma)}( \vec{s},\varepsilon)\,\prod_{k=1}^{n_{\text{TASD}}^{(m)}}\left(\bar{z}_{k}^{( \sigma)}\right)^{\alpha_{k}^{(j)}}\prod_{i\notin\sigma}D_{i}\,. \tag{4}\] Our goal is to find a method to extract numerically the coefficients \(\tilde{c}_{j}^{(\sigma)}(\vec{s},\varepsilon)\) at the _integrand_ level. Terms in Eq.(4) which do not depend explicitly on transverse vectors (\(\eta_{j}\)) lead to Feynman integrals which can be addressed using integration-by-part (IBP) techniques [55, 56, 57, 58, 59]. Scalar products of the form \((k_{i}\cdot\eta_{j})^{P}\) (where \(P\) is a positive integer) can be eliminated, if present, using the following considerations: * if \(P\) is _odd_, the corresponding terms vanish upon integration; * if \(P\) is _even_, the corresponding terms can be cast in a form which vanish upon integration using so-called _traceless completion_[2, 60]. 
This is equivalent to a change of basis of the form \[(k_{i}\cdot\eta_{j})\,(k_{l}\cdot\eta_{j})\longrightarrow(k_{i}\cdot\eta_{j}) \,(k_{l}\cdot\eta_{j})-\frac{\mu_{il}}{d-4}\] (5) \[(k_{i}\cdot\eta_{j})^{2}\,(k_{l}\cdot\eta_{j})^{2}\longrightarrow(k_{i} \cdot\eta_{j})^{2}\,(k_{l}\cdot\eta_{j})^{2}-\frac{(k_{i}\cdot\eta_{j})^{2}\mu _{ll}+(k_{l}\cdot\eta_{j})^{2}\mu_{ii}+4(k_{i}\cdot\eta_{j})\,(k_{l}\cdot\eta _{j})\mu_{il}}{2(d-4)}\] (6) Following the paradigm developed for one-loop calculations, and having all the aforementioned in mind, a possible way of implementing the amplitude reduction within Helac-2Loop can be sketched in the following steps: 1. extracting the 4-dimensional part of \(\tilde{c}_{j}^{(\sigma)}(\vec{s},\epsilon)\) using values for the loop-momenta obtained from the cut equations \(D_{\sigma_{1}}=\cdots=D_{\sigma_{m}}=0\) in an OPP-like approach [61, 62]; 2. finding the polynomial dependence on \(\mu_{ij}\) terms (analogously to the treatment of \(\mathcal{R}_{1}\) terms in OPP reduction); 3. determining the missing \(\epsilon\)-dimensional part of the numerator using two-loop rational terms (\(\mathcal{R}_{2}\) terms). 4. reducing the Feynman integrals containing ISP monomials to Master Integrals using IBP reduction. Notice that all four steps described above are universal. The integrand basis (steps 1 and 2), the \(\mathcal{R}_{2}\) terms (step 3) and the IBP reduction (step 4) are independent of the specific process under consideration. On the other hand, we expect that the IBP reduction presents technical challenges when applied to certain two-loop topologies with 5 or more external legs. Moreover, the implementation of \(\mathcal{R}_{2}\) rational terms (following up Ref. [63, 64, 65]) requires further study. ## 5 Summary We have presented an approach for computing two-loop integrands for arbitrary processes in a fully automated way. Furthermore we have sketched a possible strategy for the reduction of two-loop amplitudes to Master Integrals. These results are part of the ongoing efforts to develop Helac-2loop, a framework for automated two-loop amplitude calculation. ## Acknowledgments The work of D.C. is supported by the Hellenic Foundation for Research and Innovation (HFRI) under the HFRI Ph.D. Fellowship grant (Fellowship Number: 554). The work of G.B. and C.P. is supported by the Hellenic Foundation for Research and Innovation (H.F.R.I.) under the "2nd Call for H.F.R.I. Research Projects to support Faculty Members & Researchers" (Project Number: 02674 HOCTools-II).
2309.03322
REBOOT: Reuse Data for Bootstrapping Efficient Real-World Dexterous Manipulation
Dexterous manipulation tasks involving contact-rich interactions pose a significant challenge for both model-based control systems and imitation learning algorithms. The complexity arises from the need for multi-fingered robotic hands to dynamically establish and break contacts, balance non-prehensile forces, and control large degrees of freedom. Reinforcement learning (RL) offers a promising approach due to its general applicability and capacity to autonomously acquire optimal manipulation strategies. However, its real-world application is often hindered by the necessity to generate a large number of samples, reset the environment, and obtain reward signals. In this work, we introduce an efficient system for learning dexterous manipulation skills with RL to alleviate these challenges. The main idea of our approach is the integration of recent advances in sample-efficient RL and replay buffer bootstrapping. This combination allows us to utilize data from different tasks or objects as a starting point for training new tasks, significantly improving learning efficiency. Additionally, our system completes the real-world training cycle by incorporating learned resets via an imitation-based pickup policy as well as learned reward functions, eliminating the need for manual resets and reward engineering. We demonstrate the benefits of reusing past data as replay buffer initialization for new tasks, for instance, the fast acquisition of intricate manipulation skills in the real world on a four-fingered robotic hand. (Videos: https://sites.google.com/view/reboot-dexterous)
Zheyuan Hu, Aaron Rovinsky, Jianlan Luo, Vikash Kumar, Abhishek Gupta, Sergey Levine
2023-09-06T19:05:31Z
http://arxiv.org/abs/2309.03322v1
# REBOOT: Reuse Data for Bootstrapping Efficient Real-World Dexterous Manipulation ###### Abstract Dexterous manipulation tasks involving contact-rich interactions pose a significant challenge for both model-based control systems and imitation learning algorithms. The complexity arises from the need for multi-fingered robotic hands to dynamically establish and break contacts, balance non-prehensile forces, and control large degrees of freedom. Reinforcement learning (RL) offers a promising approach due to its general applicability and capacity to autonomously acquire optimal manipulation strategies. However, its real-world application is often hindered by the necessity to generate a large number of samples, reset the environment, and obtain reward signals. In this work, we introduce an efficient system for learning dexterous manipulation skills with RL to alleviate these challenges. The main idea of our approach is the integration of recent advances in sample-efficient RL and replay buffer bootstrapping. This combination allows us to utilize data from different tasks or objects as a starting point for training new tasks, significantly improving learning efficiency. Additionally, our system completes the real-world training cycle by incorporating learned resets via an imitation-based pickup policy as well as learned reward functions, eliminating the need for manual resets and reward engineering. We demonstrate the benefits of reusing past data as replay buffer initialization for new tasks, for instance, the fast acquisition of intricate manipulation skills in the real world on a four-fingered robotic hand. (Videos: [https://sites.google.com/view/reboot-dexterous](https://sites.google.com/view/reboot-dexterous)) Keywords:Dexterous Manipulation, Reinforcement Learning, Sample-Efficient ## 1 Introduction Dexterous manipulation tasks involving contact-rich interaction, specifically those involving multi-fingered robotic hands and underactuated objects, pose a significant challenge for both model-based Figure 1: REBOOT achieves **2X** sample efficiency boost on learning a variety of contact-rich real-world dexterous manipulation skills on three different objects autonomously by bootstrapping on prior data across different objects and tasks with sample-efficient RL and imitation learning-based reset policies. control systems and imitation learning algorithms. The complexity arises from the need for multi-fingered robotic hands to dynamically establish and break contacts, balance non-prehensile forces, and control a high number of degrees of freedom. Reinforcement learning (RL) offers a promising solution for such settings. In principle, RL enables a robot to refine its manipulation skills through a process of trial-and-error, alleviating the requirement for strong modeling assumptions. However, making RL methods practical for learning such complex behaviors directly in the real world presents a number of obstacles. The main obstacle is sample efficiency: particularly for tasks that require complex interactions with many possibilities for failure (e.g., in-hand reorientation where the robot might drop the object), the number of trials needed for learning a skill with RL from scratch might be very high, requiring hours or even days of training. Additionally, real-world learning outside of the lab requires the robot to perform the entire training process using its own sensors and actuators, evaluating object state and rewards using camera observations, and resetting autonomously between trials. 
Because of these challenges, many prior works on RL for dexterous manipulation have explored alternative solutions, such as sim-to-real transfer [1, 2, 3], imitation learning [4, 5, 6], or the use of tools like motion capture [7, 2] or separately-engineered reset mechanisms [8, 9]. In this paper, we instead propose a system that is designed to make direct RL in the real world practical without these alternatives, so as to take a step toward robots that could one day learn under assumptions that are sufficient for autonomously acquiring new skills in open-world settings, even outside the laboratory. This means that the entire learning process must be conducted using the robot's own sensors and actuators, without simulation or additional instrumentation, and be efficient enough to learn skills quickly. We posit that a key enabling factor for this goal is to reuse data from past skills, and we instantiate this with a simple buffer initialization method, where the replay buffer of each skill is initialized with data from other tasks or even other objects. In combination with a vision-based method for learning reward functions from user-provided images and a learned reset procedure to automatically pick up an object between trials, we demonstrate that our system enables a robotic hand to learn in-hand reorientation skills in just a few hours of fully autonomous training, using only camera observations and joint encoder readings. Our main contribution is **REBOOT**, a system to **Reuse** Data for **Bootstrapping** Real-World Dexterous Manipulation, which we illustrate in Figure 2. By simply preloading the replay buffer using prior data from other objects and tasks, our system avoids starting from scratch for every new task. By combining recent advances in sample-efficient online RL [10] with buffer initialization to bootstrap learning from prior tasks and objects, we show that in-hand manipulation behaviors can be learned in a few hours of autonomous practicing. We additionally use learned reset skills to make training autonomous, and extend adversarially learned rewards to handle our buffer initialization method, allowing users to specify tasks with a few examples of desired object poses and without manual reward engineering. Some of the skills learned by our system, shown in Figure 3, include in-hand reorientation of a three-pronged object, handling a T-shaped pipe, and manipulating a toy football. Figure 2: **REBOOT** System Overview: Our method learns various dexterous manipulation skills in the real world using raw image observations. This is enabled by using sample-efficient RL and bootstrapping with data from other tasks and even other objects, with autonomous resets. ## 2 Related Work A number of designs for anthropomorphic hands have been proposed in prior work [11; 12; 13]. Prior learning-based methods to control such hands utilize trajectory optimization [14; 15], policy search [16; 17; 18], demonstration-based learning [19; 20; 21; 22], simulation to real-world transfer [3; 23; 24; 25], reinforcement learning directly in the real world [26; 8; 27; 28; 29], or a combination of these approaches [30]. Most of the aforementioned works leveraged accurate simulations or engineered real-world state-estimation systems to provide compact state representations.
In contrast, we seek to learn visuomotor policies autonomously and entirely in the real world without access to privileged state information, under assumptions that more closely reflect what robots might encounter when learning "on the job" outside of laboratory settings. Prior work has explored learning these policies in simulation [31; 32], where autonomy is not of concern due to the ability to reset the simulated environment. Most real-world methods either rely on instrumentation for state estimation [28] or deal with simpler robots and tasks [27]. An important consideration in our system is the ability to specify a task without manual reward engineering. Although task specification has been studied extensively, most prior works make a variety of assumptions, ranges from having humans-provided demonstrations for enabling imitation learning [4; 33; 34], using inverse RL [35; 36; 37], active settings where users can provide corrections [38; 39; 40], or ranking-based preferences [41; 42]. Our in-hand RL training phase learns from raw high-dimensional pixel observations in an end-to-end fashion using DrQ[43] and VICE[44], although our system could use any reward inference algorithm based upon success examples [45]. With users defining the manipulation task by providing a small number of image goals instead of full demonstrations, our method not only removes the barrier to orchestrate high-dimensional finger motions [46; 47] but also accelerates robot training progress by offering sufficient reward shaping for RL in real-world scenarios without per-task reward engineering. While AVAIL [29] also learns dexterous manipulation skills from raw images, we show in our comparison that our system is faster, and our buffer initialization approach significantly speeds up the acquisition of in-hand manipulation skills compared to starting from scratch. Buffer initialization has also been employed by Smith et al. [48] in the context of transfer learning for robotic locomotion, where a similar approach was used to create a curriculum for locomotion skills or adapt to walking on new terrains. Our method differs in several significant ways. First, our method learns from raw image observations with learned reward functions defined through a few example images, whereas [48] uses hand-programmed rewards. Second, our focus is on learning intricate dexterous manipulation skills from scratch in the real world, whereas [48] uses initialization in simulation. Although the methodology is closely related, our proposed system extends the methodology in significant ways, enabling the use of vision and learned rewards in a very different domain. Reset-free learning is essential for autonomous real-world training of dexterous skills (see [49] for a review of reset-free methods). Most of the prior works [27; 28; 29; 50; 51; 52; 53; 54] rely on "backward" policies to reset the environment so the "forward" policy can continue learning the task of interest. Similarly, we divide training into two phases due to different skills having unique demands for control complexities and user-provided supervision. Specifically, the skill needed to pick up objects in reset is better studied and developed for immediate usage through imitating user-provided demonstrations [55]. ## 3 Robot Platform and Problem Overview In this work, we use a custom-built, 4-finger, 16-DoF robot hand mounted to a 7-DoF Sawyer robotic arm for dexterous object manipulation tasks. Our platform is shown in Figure 3. 
Our focus is on learning in-hand reorientation skills with reinforcement learning. During the in-hand manipulation phase, the RL policy controls the 16 degrees of freedom of the hand, setting target positions at 10 Hz, with observations provided by the joint encoders in the finger motors and two RGB cameras, one overhead and another embedded in the palm of the hand. To facilitate autonomous training, we also use imitation learning to learn a reset policy to pick up the object from the table in-between in-hand manipulation trials. This imitation policy uses a 19-dimensional action space, controlling the end effector position of the wrist and 16 finger joints to pick up the object from any location. Our tasks are parameterized by images of desired object poses in the palm of the hand. Since the reset policy can grasp the object in a variety of poses, the in-hand policy must learn to rotate and translate the object carefully to achieve the goal pose. We train and evaluate our method entirely in the real world. In the following sections, we describe how data from different objects can be used to bootstrap new manipulation skills for more efficient learning. ## 4 Reinforcement Learning with Buffer Initialization In this work, we propose a system for efficiently learning visuomotor policies for dexterous manipulation tasks via bootstrapping with prior data. We describe our learning method and real-world considerations for our system in the following subsections. Problem setting.Our method leverages the framework of Markov decision processes for reinforcement learning as described in [56]. In RL, the aim is to learn a policy \(\pi(a_{t}|s_{t})\) that obtains the maximum expected discounted sum of rewards \(R(s_{t},a_{t})\) under an initial state distribution \(\rho_{0}\), dynamics \(\mathcal{P}(s_{t+1}|s_{t},a_{t})\), and discount factor \(\gamma\). The formal objective is as follows: \[J(\pi)=\mathbb{E}_{\begin{subarray}{c}s_{0}\sim\rho_{0}\\ a_{t}\sim\pi(a_{t}|s_{t})\\ s_{t+1}\sim\mathcal{P}(s_{t+1}|s_{t},a_{t})\end{subarray}}\left[\sum_{t=0}^{ T}\gamma^{t}R(s_{t},a_{t})\right] \tag{1}\] The particular reinforcement learning algorithm that we build on in this work is RLPD [10], a sample-efficient RL method that combines a design based on soft actor-critic (SAC) [57] with a number of design decisions to enable fast training. This approach trains a value function \(Q^{\pi}(s_{t},a_{t})\) in combination with an actor or policy \(\pi(a_{t}|s_{t})\), though in principle our system could be compatible with a variety of off-policy RL algorithms that utilize a replay buffer. For more details on RLPD, we refer readers to prior work [10]. Reinforcement learning with buffer initialization.While using a sample-efficient RL algorithm such as RLPD to acquire in-hand manipulation skills can be feasible in the real world, the training process can take a very long time (see Section 5). A central component of our system design is to utilize data from other tasks or even other objects to bootstrap the acquisition of new in-hand manipulation skills. In our experiments, we will show that a very simple procedure can make this possible: for every RL update, we sample half the batch from the growing buffer for the current task, and half the batch from a buffer containing the experience from all of the prior tasks. 
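As a concrete illustration of this mixed sampling, the following minimal sketch (ours, assuming a hypothetical buffer API with a `sample(n)` method returning dictionaries of arrays) assembles each update batch half-and-half from the online and prior-data buffers.

```python
import numpy as np

def sample_mixed_batch(online_buffer, prior_buffer, batch_size):
    """Half the batch from the current task's growing replay buffer,
    half from the buffer preloaded with prior-task/object data."""
    n_online = batch_size // 2
    online = online_buffer.sample(n_online)
    prior = prior_buffer.sample(batch_size - n_online)
    # Concatenate field-wise (observations, actions, rewards, next_obs, dones).
    return {k: np.concatenate([online[k], prior[k]], axis=0) for k in online}

# Each gradient step of the off-policy learner then consumes one mixed batch:
#   batch = sample_mixed_batch(online_buffer, prior_buffer, batch_size=256)
#   agent.update(batch)
```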
Thus, if \(n-1\) skills have been learned, to learn a new \(n\)-th skill, we pre-load the replay buffer with trajectories from each of the \(n-1\) prior skills and sample half of each training batch from prior data and the other half from the new agent's online experience. This 50-50 sampling method has been used in some prior works, including RLPD [10, 58], in order to initialize online RL with offline data from _the same task_. However, in our system, we adapt this procedure to bootstrap a behavior from _other_ skills. Since all of the tasks use visual observations, the value function and policy networks can generalize and learn to make use of this prior experience to assist in learning the new task. Note that it is not at all obvious that prior experience like this should be directly useful, as other tasks involve visiting very different states or manipulating different objects. However, if the networks are able to extract, for example, a general understanding of contacts or physical interactions, then we would expect this to accelerate the acquisition of the new task. Figure 3: Depiction of our hardware platform and tasks. **(a)** custom-built 16 DoF robotic hand **(c)** teleoperation using the 3-D mouse, to interact with the following objects in-hand **(b)** blue football, **(d)** 3-pronged valve, **(e)** T-shaped pipe. Demonstration-based reset-free learning. In between in-hand manipulation trials, the robot may drop the object and need to pick it back up to attempt the task again. To automate training, we must also acquire an autonomous pick-up policy to serve as a reset mechanism for the in-hand task, retrieving objects that may have fallen out of the hand during in-hand manipulation. We observe that the reset task is composed of essentially the same reaching, power grasping, and lifting up skills across different objects. Unlike complex manipulation tasks in the in-hand phase, a human operator can provide demonstrations for these skills more conveniently and effectively, overcoming the wide initial state distribution that arises because objects can fall anywhere in the environment. As shown in prior work [27], exploration is especially challenging for RL in such settings. Thus, we use behavioral cloning (BC) to train policies for the reset phase from simple demonstrations provided with a 3D mouse and a discrete finger closing command. Note that no demonstrations are used for the actual in-hand reorientation skill (which is difficult to teleoperate), only for the comparatively simpler reset skill, which only requires picking up the object. Reward learning via goal images with buffer initialization. Our aim is to enable our system to learn under assumptions that are reasonable outside of the laboratory: the robot should use the sensors and actuators available to it for all parts of the learning process, including using an autonomous reset policy and eschewing ground truth state estimation (e.g., motion capture) in favor of visual observations that are used to train an end-to-end visuomotor policy. However, this requires us to be able to evaluate a reward function for the in-hand RL training process from visual observations as well, which is highly non-trivial. We therefore use an automated method that uses goal _examples_ provided by a person (e.g., positioning the object into the desired pose and placing it on the hand) to learn a success classifier, which then provides a reward signal for RL.
Thus, for each in-hand manipulation task \(\mathcal{T}_{i}\), we assume a set \(\mathcal{G}_{i}\) consisting of a few goal images depicting the successful completion of the task. Naively training a classifier and using it as a reward signal is vulnerable to exploitation, as RL can learn to manipulate the object so as to fool the classifier [44]. We therefore adapt VICE [44] to address this challenge, which trains an adversarial discriminator with pre-defined goal images as positives (\(y=1\)) and observation samples from the replay buffer as negatives (\(y=0\)). However, it is necessary to adapt this method to handle our buffer initialization approach, since VICE is by design an on-policy method [44]. We first summarize the VICE algorithm and the regularization techniques we employ to make it practical for vision-based training, and then discuss how we adapt it to handle buffer initialization. Common issues with adversarial methods such as VICE are instability and mode collapse. We found strong regularization techniques based on mixup [59] and gradient penalty [60] to be essential to stabilize VICE for learning image-based tasks, and these regularizers additionally aid the RL process by causing the classifier to produce a smoother, more shaped reward signal. The VICE classifier predicts \(\log p_{\theta}(g|o_{t})\), the log probability that the observation \(o_{t}\) corresponds to the goal \(g\), which can then be used as a reward signal for RL training. The VICE classifier parameterized by \(\theta\), \(D_{\theta}\), is then optimized by minimizing a regularized discriminator loss: \[\mathcal{L}(x;\theta)=\lambda\cdot\mathcal{L}_{\lambda}(x;\theta)+(1-\lambda )\cdot\mathcal{L}_{1-\lambda}(x;\theta)+\alpha(\|\nabla_{x}D_{\theta}(x)\|_{2 }-1)^{2} \tag{2}\] where the input \(x\) is a batch of evenly mixed user-defined goal images and observations collected during training, \(\mathcal{L}_{\lambda}\) and \(\mathcal{L}_{1-\lambda}\) are the Binary Cross Entropy (BCE) loss terms for mixed-up samples and labels, and \(\alpha=10\) is the weight for the gradient penalty loss. Applying this method with buffer initialization, where prior data from other tasks and objects is included in the replay buffer, requires additional care. Naively, if we train a new VICE classifier with user-provided goal images for the current task as positives, then almost all previous experiences from other tasks and objects are likely to be assigned a negligible reward during training, which would not result in beneficial learning signals for the RL agent. Instead, for tasks from other objects in the prior dataset, rewards are labeled using a task-specific VICE classifier which was trained when that data was collected _for its own task_. These classifier rewards are computed and saved prior to training a new skill, and they remain static throughout training, in contrast to the rewards for online data and offline data from the same object, which depend on the changing VICE classifier. We hypothesize that initializing the buffer in this way with data from other objects, or other tasks for the same object, will allow the RL algorithm to learn more quickly by transferring knowledge about finger-object interactions, actuation strategies for the hand, and other structural similarities between the tasks. Of course, the degree to which such transfer can happen depends on the degree of task similarity, but as we will show in the next section, we empirically observe improvement from prior data even when transferring to an entirely new object.
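To make the regularized objective in Equation (2) concrete, here is a hedged PyTorch-style sketch of one discriminator update with mixup between goal images and replay observations plus a gradient penalty; it is our own illustration (the classifier `D` and the image batches are assumed to exist and to share a shape), not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def vice_discriminator_loss(D, goal_obs, replay_obs, alpha=10.0):
    """Mixup + gradient-penalty regularized VICE-style classifier loss."""
    lam = torch.distributions.Beta(1.0, 1.0).sample().item()
    # Mix goal images (label 1) with replay observations (label 0).
    x = (lam * goal_obs + (1.0 - lam) * replay_obs).detach().requires_grad_(True)
    logits = D(x)
    bce = lam * F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits)) \
        + (1.0 - lam) * F.binary_cross_entropy_with_logits(logits, torch.zeros_like(logits))
    # Gradient penalty: push the input-gradient norm of D toward 1.
    grad = torch.autograd.grad(logits.sum(), x, create_graph=True)[0]
    gp = ((grad.flatten(1).norm(dim=1) - 1.0) ** 2).mean()
    return bce + alpha * gp
```

Note that this classifier only labels rewards for the current object's data; as described above, data preloaded from other objects keeps the static rewards computed by each prior task's own classifier.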
## 5 Experimental Results In our experiments, we aim to study the following questions: 1. Can our system learn dexterous manipulation skills autonomously in the real world? 2. Can prior data from one object improve the sample efficiency of learning new skills with the same object? 3. Can data from different objects be transferred to enable faster acquisition of skills with new objects? We perform experiments with 3 objects of various shapes and colors: a purple 3-pronged object, a black T-shaped pipe, and a blue football. For each manipulation task, we collected a set of 400 success example images, as described in the Appendix E. Figure 4: Successful rollouts of in-hand object manipulation policies for the three objects: purple 3-pronged object (Pose B), black T-shaped pipe, and blue football. The boxes on the right (outlined in green) are representative user-provided success state examples for each task. Note that the autonomous pickup policy picks up the object in a variety of different poses across episodes, requiring the in-hand manipulation skill to reorient it into the target pose from many starting configurations. We also provide demonstrations per object for the reset policy to enable in-hand training. We present details of demonstration collection, training procedure, and success rates in Appendix F. Each demonstration takes roughly 30 seconds to collect, totaling less than 2 hours to collect the necessary demonstrations. Please check our website [https://sites.google.com/view/reboot-dexterous](https://sites.google.com/view/reboot-dexterous) for videos and further experimental details. Task transfer.To answer Question 1, we evaluated our method on each of the 3 objects with varying amounts of prior data. We first trained a 3-prong object manipulation policy (for a goal pose we call Pose A, shown in Figure 5) without prior data in order to gather data to initialize training for subsequent objects/tasks. We then trained another 3-prong manipulation policy for a different goal pose (Pose B, shown in Figure 4) as well as a T-pipe manipulation policy, both using prior data from the first 3-prong experiment. Finally, we trained a football manipulation policy using the 3-prong and T-pipe experiments as prior data. Our method's success rate is shown in Figure 6, and film strips of various manipulation policy successes during training are shown in Figure 4. Our behavior-cloned reset policy was sufficient as a reset mechanism for in-hand training. Furthermore, our in-hand policies are able to successfully pose the 3-prong and T-pipe objects more than 50% of the time. To answer Question 2, we consider the Pose B 3-prong experiment described previously. Since reorienting to both Pose A and B uses the same 3-prong object, we expect the task difficulty to be similar for both poses. A comparison between training Pose A from scratch and training Pose B with a pre-loaded replay buffer is shown in Figure 5. The Pose B experiment with our method outperforms the Pose A experiments training from scratch in terms of training time. Our method reaches 80% success in around 6 hours while training from scratch yields poor performance at that point. It takes more than 10 hours for learning from scratch to achieve a comparable success rate. This suggests that our method can significantly reduce training time when using prior data from the same object for a new manipulation task. Object transfer.To answer Question 3, we consider the T-pipe and football experiments described above. 
We compare our method to learning from scratch without prior data and display the results in Figure 8. Our method with prior data from other objects is significantly faster than learning from scratch for both objects. For the T-pipe experiments, our method achieves a 60% success rate at 6 hours compared to 13 hours for training from scratch. Furthermore, the from-scratch runs have absolutely no success in evaluation prior to 5 hours of training, while our method achieves some initial success as early as 1 hour into training. The football task appears to be significantly more challenging than the 3-prong and T-pipe tasks, as shown in Figure 8, with no methods performing above a 30% success rate. However, our method still outperforms learning from scratch, achieving a 30% success rate with 5 hours of training; the from-scratch runs required at least 16 hours of training to achieve a lower 20% success rate. Ablation Studies.Finally, we conduct ablation experiments in both simulation and the real world to compare the effects of varying the initial buffer size, the order in which the buffer is initialized, transfer learning from a trained policy, and training for an extended period of time. Results and in-depth analysis are provided in Appendix C and Appendix D. ## 6 Discussion, Limitations, and Future Work We presented a system for learning in-hand manipulation skills directly by training in the real world with RL, without simulation, and using only onboard sensing from encoders and cameras. Our system enables sample-efficient and autonomous learning by initializing the replay buffer of an efficient online RL method with data from other tasks and even other objects. We extend adversarially-learned classifier-based rewards into this setting to make it possible for users to define tasks with a collection of goal images, and implement automated resets using an imitation-learned reset policy, providing a pipeline for fully autonomous training. The complete system avoids any strong instrumentation assumptions, using the robot's own sensors and actuators for every part of training, providing a proof-of-concept for an efficient real-world RL system that could operate outside of laboratory conditions. **Limitations:** Our experimental evaluation does have a number of limitations. Although we show that reusing data from one or two prior tasks improves learning efficiency, a more practical general-purpose robotic system might use data from tens, hundreds, or even thousands of skills. Evaluating the potential for such methods at scale might require additional technical innovations, as it is unclear if buffer initialization with very large datasets will be as effective. Additionally, our evaluation is limited to in-hand reorientation skills. While such skills exercise the robot's dexterity and physical capabilities, many other behaviors that require forceful interaction with the environment and other manipulation skills could require a different reset process or might require a different method for reward specification (for example to handle occlusions). Exploring these more diverse skills is an exciting direction for future work. The current manipulation setup is training with fairly robust objects where fragility or wear and tear are not major concerns. As we move to more dexterous tasks, a more directed approach may be required to handle fragile objects or perform tasks that require force-sensitive interaction. 
Studying how to integrate our system with tactile sensing is another exciting avenue to explore. Figure 8: Learning curve showing the performance as a function of training time for the T-pipe and football objects. In both cases, buffer initialization is about two times faster than learning from scratch, though particularly the football object is harder to reorient for all methods. #### Acknowledgments This research was partly supported by the Office of Naval Research (N00014-20-1-2383), and ARO W911NF-21-1-0097. We would like to thank Ilya Kostrikov for the initial versions of the simulator and codebase, and everyone at RAIL for their constructive feedback.
2309.11726
Turaco: Complexity-Guided Data Sampling for Training Neural Surrogates of Programs
Programmers and researchers are increasingly developing surrogates of programs, models of a subset of the observable behavior of a given program, to solve a variety of software development challenges. Programmers train surrogates from measurements of the behavior of a program on a dataset of input examples. A key challenge of surrogate construction is determining what training data to use to train a surrogate of a given program. We present a methodology for sampling datasets to train neural-network-based surrogates of programs. We first characterize the proportion of data to sample from each region of a program's input space (corresponding to different execution paths of the program) based on the complexity of learning a surrogate of the corresponding execution path. We next provide a program analysis to determine the complexity of different paths in a program. We evaluate these results on a range of real-world programs, demonstrating that complexity-guided sampling results in empirical improvements in accuracy.
Alex Renda, Yi Ding, Michael Carbin
2023-09-21T01:59:20Z
http://arxiv.org/abs/2309.11726v1
# Turaco: Complexity-Guided Data Sampling for Training Neural Surrogates of Programs ###### Abstract. Programmers and researchers are increasingly developing _surrogates_ of programs, models of a subset of the observable behavior of a given program, to solve a variety of software development challenges. Programmers train surrogates from measurements of the behavior of a program on a dataset of input examples. A key challenge of surrogate construction is determining what training data to use to train a surrogate of a given program. We present a methodology for sampling datasets to train neural-network-based surrogates of programs. We first characterize the proportion of data to sample from each region of a program's input space (corresponding to different execution paths of the program) based on the complexity of learning a surrogate of the corresponding execution path. We next provide a program analysis to determine the complexity of different paths in a program. We evaluate these results on a range of real-world programs, demonstrating that complexity-guided sampling results in empirical improvements in accuracy. Key words and phrases: programming languages, surrogate models, neural networks
### Surrogate Programming Beyond these examples, surrogate programming has developed into a diverse set of techniques and applications across many areas of computing and science. Renda et al. (2021) organize the uses of neural network based surrogates of programs into three categories. _Surrogate compilation._ In _surrogate compilation_, programmers develop a surrogate that replicates the behavior of a program to deploy to end-users in place of that program. For example, in addition to Esmaeilzadeh et al. (2012) (who use a surrogate to speed up numerical programs), Mendis (2020) use a surrogate to speed up compiler autovectorization by replacing an integer linear program (ILP) solver with a surrogate. Key benefits of surrogate compilation include the ability to execute the surrogate on different hardware and the ability to bound or to accelerate the execution time of the surrogate. _Surrogate adaptation._ In _surrogate adaptation_, programmers first develop a surrogate of a program and then further train that surrogate on data from a different task. For example, in addition to Kustowski et al. (2020) (who use a surrogate to improve the accuracy of inertial confinement fusion simulations), Tercan et al. (2018) use this technique to improve the accuracy of computer simulations of plastic injection molding. Key benefits of surrogate adaptation include that it makes it possible to alter the semantics of the program to perform a different task of interest and that it may be more data-efficient or result in higher accuracy than training a model from scratch for the task. _Surrogate optimization._ In _surrogate optimization_ programmers develop a surrogate of a program, use gradient descent against the surrogate to identify program inputs that optimize a downstream objective, then use the inputs for executing the original program. In addition to Renda et al. (2020) (who use this technique to optimize CPU simulation parameters), Tseng et al. (2019) use this technique to find parameters for camera pipelines that lead to the most photorealistic images. She et al. (2019) use this technique to find inputs that trigger branches that may cause bugs in the original program.
The key benefit of surrogate optimization is that it can identify optimal inputs faster than optimizing directly against the program, due to the surrogate's faster execution speed and the fact that the surrogate is differentiable even when the original program is not (allowing the use of gradient descent). ### Dataset Generation In each of these scenarios, training a surrogate of a program requires measuring the behavior of the program on a dataset of input examples. There are three common approaches to collecting this dataset. The first is to use data instrumented from running the original program on a workload of interest (Esmaeilzadeh et al., 2012; Renda et al., 2020). In the absence of an available workload, another is to uniformly sample (or sample using another manually defined distribution) from the input space of the program (Kustowski et al., 2020; Tseng et al., 2019). The third is to use _active learning_ (Settles, 2009), a class of online methods that iteratively query labels for the data points that are most useful (however defined) for further training the surrogate (Ipek et al., 2006; Pestourie et al., 2020; She et al., 2019). Each of these approaches faces challenges on programs with different behaviors in different regions of the input space. For example, Renda et al. (2020, Section IV.A) identify a scenario in which an instrumented dataset does not exercise a set of control flow paths in the program enough times for the surrogate to learn the program's behavior along those paths, resulting in a surrogate that generates highly inaccurate predictions for inputs in the regions of the input space corresponding to those paths. ### Our Approach: Complexity-Guided Sampling Rather than treating the program as a black box, our approach uses the source code and semantics of the program under study to guide dataset generation for training a surrogate of the program. The core concept is to allocate samples based on both the _complexity_ of learning the program's behavior on a given path and the frequency of that path in the input data distribution. _Complexity-guided sampling._ Our objective is to find how many samples to allocate to each region of the input space to minimize the expected error of the resulting surrogate. To reason about the error of a surrogate, we use neural network sample complexity bounds for learning analytic functions (Agarwala et al., 2021; Arora et al., 2019). These bounds give an upper bound on how many samples are required to learn a surrogate of an analytic function to a given error as a function of a _complexity measure_ of that function. Our approach calculates a complexity measure for the function induced by each control flow path in the program and combines that with the frequency of each path according to an input data distribution. The output of the approach is the proportion of samples to allocate to each region of the input space, minimizing an upper bound on the stratified surrogate's error. _Stratified functions._ Our core modeling assumption is to represent the program as a _stratified function_, a piecewise1 function across different regions (_strata_) of the input space. We use _stratified surrogates_ to model such functions. To construct a stratified surrogate, we train independent surrogates of each component of the stratified function. During evaluation, a stratified surrogate uses the original program to check which stratum an input is in, then applies the corresponding surrogate. Footnote 1: We choose the term _stratified_ by analogy with the technique of stratified sampling.
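To make the stratified-surrogate structure concrete, a minimal sketch (ours, purely illustrative) is shown below: each stratum pairs a membership predicate, obtained by replaying the program's branch decisions, with its independently trained surrogate.

```python
class StratifiedSurrogate:
    """Dispatch each input to the surrogate for the stratum containing it."""

    def __init__(self, strata):
        # strata: list of (in_stratum, surrogate) pairs, where in_stratum(x)
        # checks stratum membership by replaying the program's path conditions.
        self.strata = strata

    def __call__(self, x):
        for in_stratum, surrogate in self.strata:
            if in_stratum(x):
                return surrogate(x)
        raise ValueError("strata must cover the entire input space")
```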
_Complexity analysis._ We present a programming language, Turaco, in which programs denote stratified functions with well-defined complexity measures (specifically, stratified analytic functions). We provide a static program analysis for Turaco programs that automatically calculates an upper bound on the complexity of each component of the stratified function that the program denotes. _Evaluation._ To demonstrate that complexity-guided sampling using our complexity analysis improves surrogate error on downstream tasks, we evaluate our approach on a range of programs, finding that across this selection of programs complexity-guided sampling improves error relative to baseline distributions by around 5%. We demonstrate that a 5% improvement in error of a surrogate can result in a 28% improvement in execution speed in an application with a maximum error threshold. We then analyze the classes of problems for which complexity-guided sampling excels, finding potential improvements in error of up to 30%, and the classes of problems for which complexity-guided sampling using Turaco's complexity analysis fails, finding deteriorations in error of up to 500%. _Renderer demonstration._ We further present a case study of learning a surrogate of a renderer in a video game engine. We show that our complexity-guided sampling approach results in between 17% and 44% lower error than training using baseline distributions that do not take into account path complexity. These error improvements correspond with perceptual improvements in the generated renders. _Contributions._ We present the following contributions: * An approach to allocating samples among strata to train stratified neural network surrogates of stratified analytic functions that minimizes an upper bound on the surrogate's error. * A programming language, Turaco, in which all programs are stratified analytic functions, and a program analysis to bound the complexity of learning surrogates of those programs. * Empirical evaluations on real-world programs demonstrating that complexity-guided sampling using Turaco's complexity analysis results in empirical improvements in error, and that these improvements in error result in improvements in downstream applications. * Further empirical evaluations of the classes of problems where complexity-guided sampling using Turaco's complexity analysis succeeds and fails. We lay the groundwork for analyzing complexity-guided sampling approaches for training surrogates of programs. Our results hold out the promise of surrogate training approaches that intelligently use the program's semantics to guide the design and training of surrogates of programs. ## 2. Example Figure 1a presents an example distilled from our evaluation (Section 5.2) that we use to demonstrate how complexity-guided sampling results in a more accurate surrogate than _frequency-based sampling_, sampling according to the frequency of paths alone. Program under study. Figure 1a presents a graphics program that calculates the luminance (i.e., brightness) at a point in a scene as a function of sunPosition, the height of the sun in the sky (i.e., the time of day), and emission, which describes how reflective the material is at that point. The program first checks whether it is nighttime (Line 2), and sets the ambient lighting variable to zero accordingly.
The program next checks whether the sun position is below a threshold indicating direct sunlight (Line 7) and sets the emission variable accordingly. The output is then the sum of the ambient light and the light emitted by the material. Figure 1b presents the output of this program over the valid input range of sunPosition and emission (i.e., between \(-1\) and \(1\) for both variables). The path conditions (Lines 2 and 7) partition the program into three traces: nighttime, when sunPosition is less than 0 (Figure 1c); twilight, when sunPosition is between 0 and 0.1 (Figure 1d); and daytime, when sunPosition is greater than 0.1 (Figure 1e). These paths are separated by dashed black lines in Figure 1b. Figure 1. Example program, outputs, and traces. Complexity. Training a surrogate of this program poses a particular challenge because these traces have not only different behavior but also different relative complexities: when sunPosition is less than 0.1 the function is linear, but when sunPosition is above 0.1 the function is quadratic. This notion of complexity is quantified by the _sample complexity_ of each trace: traces that are more complex require more samples to learn to a given error than traces that are less complex. Figure 2 (left) presents the error as a function of the training dataset size of surrogates of each trace trained in isolation, showing that indeed the quadratic daytime path has the highest error, followed by twilight then nighttime. Complexity-guided sampling. Our objective is to find the number of data points to sample from each path to minimize the expectation of error of a surrogate of the overall program, given a data distribution and a data budget. To accomplish this, our approach leverages the complexity of each path and the frequency of each path in the data distribution, prioritizing sampling paths that are more complex (requiring more samples to learn) and that are more frequent (and thus more important to learn). First, our approach determines the sample complexity of the trace along each path: the number of samples required to learn a surrogate of that trace (in isolation) to a given error. Our approach extends the sample complexity results of Agarwala et al. (2021), who give an upper bound on the number of samples required to learn a neural network approximation of a given analytic function. Using this bound (as implemented by our Turaco analysis described in Section 4), our approach determines that the twilight path takes \(1.4\times\) as many samples to train a surrogate to a given error as the nighttime path, and the daytime path requires \(3.7\times\) as many samples. Then, given a distribution with the frequency of each path, our approach determines the complexity-guided sampling rates for each path. In this example we assume that the data has a uniform distribution over inputs between \(-1\) and \(1\), resulting in path frequencies for the nighttime path (\(\mathsf{sunPosition}<0\)) of \(50\%\), the twilight path (\(0<\mathsf{sunPosition}<0.1\)) of \(10\%\), and the daytime path (\(0.1<\mathsf{sunPosition}\)) of \(40\%\). With this, our approach determines that the nighttime path should be sampled at \(36.9\%\) of the data budget (undersampling relative to its frequency because it is simple to learn), the twilight path at \(14.0\%\), and the daytime path at \(49.1\%\) (oversampling relative to its frequency because it is complex to learn).
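These percentages follow from the sampling rule derived in Section 3 (Theorem 2), which allocates samples proportionally to \((\text{path frequency}\times\sqrt{\text{path complexity}})^{2/3}\). The sketch below is purely illustrative (ours, not part of Turaco's implementation): it folds the \(\log(\delta^{-1})\) term into the relative sample complexities, using the \(1\times\)/\(1.4\times\)/\(3.7\times\) ratios quoted above.

```python
def allocate(freqs, rel_complexities):
    """Per-stratum sampling proportions: (p_i * sqrt(zeta_i))^(2/3), normalized."""
    weights = [(p * c ** 0.5) ** (2.0 / 3.0)
               for p, c in zip(freqs, rel_complexities)]
    total = sum(weights)
    return [w / total for w in weights]

# Nighttime, twilight, daytime: frequencies 50%, 10%, 40% with relative
# complexities 1x, 1.4x, 3.7x give roughly 37%, 14%, and 49% of the budget,
# matching the allocation quoted above.
print(allocate([0.5, 0.1, 0.4], [1.0, 1.4, 3.7]))
```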
Stratified surrogatesThe class of surrogate model for which we derive the above approach is that of a _stratified neural surrogate_ - a set of disjoint neural networks which are applied based on which path the inputs induce in the program. Concretely, this means that we train one surrogate per path, and pick which to apply for each input at evaluation time. For this example program, picking which surrogate to apply just requires comparing \(\mathsf{sunPosition}\) against constant threshold values. Figure 2: Per-path surrogate errors (left) and combined errors (right) for the example. Results.Figure 2 presents the error as a function of the training dataset size of stratified surrogates of the entire program for a baseline of sampling according to path frequency alone and for complexity-guided sampling. Figure 2 shows that the complexity-guided sampling approach results in lower error than sampling according to path frequency alone. For datasets of total size below 70 samples, the surrogate trained with complexity-guided sampling has a geometric mean decrease in error of 27.5%. For datasets of total size above 70 samples, the surrogate trained with complexity-guided sampling has a geometric mean decrease in error of 5.5%. Across the entire range of dataset sizes evaluated in this plot, the surrogate trained with complexity-guided sampling has a geometric mean decrease in error of 15%. In sum, our approach results in a surrogate that produces a more accurate luminance calculation, and therefore a better final output from the graphics program, than a surrogate trained using frequency-based path sampling. ## 3. Complexity-guided sampling In this section we present the stratified surrogate sample allocation problem and derive our solution, complexity-guided stratified surrogate dataset selection. ### The Stratified Surrogate Sample Allocation Problem Our goal is to learn a _stratified surrogate_, \(\hat{f}\), of a _stratified function_, \(f\), constrained by a _sample budget_, \(n\), that defines the number of data samples to be used by the learning algorithm. We approach this problem through the definition of a stratified function as a piecewise function; we term each piece a _stratum_. We then define a stratified surrogate as a stratified function itself, with each stratum a surrogate of a corresponding stratum of the stratified function. Learning a stratified surrogate therefore requires learning a surrogate for each stratum. We assume that we have a technique for learning a surrogate of a function, \(f\), given a sample budget. Our precise goal is thus to partition the overall sample budget, \(n\), into per-stratum sample budgets for each stratum of the stratified function, with the objective of minimizing the overall error of the stratified surrogate. #### 3.1.1. Stratified Functions and Surrogates We define a _stratified function_\(f\) as follows: \[f(x)\triangleq\begin{cases}f_{1}(x)&\text{if }x\in s_{1}\\ \vdots\\ f_{c}(x)&\text{if }x\in s_{c}\end{cases}\] where \(f\) and each \(f_{i}\) is a function from inputs \(x:\mathcal{X}\) to outputs \(y:\mathcal{Y}\), \(c\) is the number of strata, \(\{s_{i}\}_{i=1}^{c}\) are strata, and where \(\cup_{i}s_{i}=\mathcal{X}\) and \(\forall i\neq j\). \(s_{i}\cap s_{j}=\varnothing\). We define a _stratified surrogate_\(\hat{f}\) as a stratified function with components \(\hat{f}_{i}\). #### 3.1.2. The Stratified Surrogate Sample Allocation Problem To restate, our goal is to learn a stratified surrogate \(\hat{f}\) of a stratified function \(f\). 
Formally, we define a _learning algorithm_, a function that learns a surrogate of a given input function, as a random function \(tr:(\mathcal{X}\rightarrow\mathcal{Y})\times\mathcal{D}\times\mathbb{N}\times (\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}_{\geq 0})\rightarrow( \mathcal{X}\rightarrow\mathcal{Y})\) that takes a function \(f:\mathcal{X}\rightarrow\mathcal{Y}\) from inputs \(x:\mathcal{X}\) to outputs \(y:\mathcal{Y}\), a distribution \(D:\mathcal{D}\) over inputs \(x\), a number of training examples \(n:\mathbb{N}\), and a loss function \(\ell:\mathcal{Y}\times\mathcal{Y}\rightarrow\mathbb{R}_{\geq 0}\) which measures the cost of an incorrect prediction, and returns a function (representing the output surrogate) \(\hat{f}:\mathcal{X}\rightarrow\mathcal{Y}\). We also define notation for data distributions \(D\). Let \(D(x)\) be the probability that \(x\) is sampled from \(D\), and \(\int_{x\in s_{i}}D(x)\mathrm{d}x\) be the probability mass of all data points within \(s_{i}\) over \(D\) (reducing to a summation for discrete distributions). Let \(D(x|s_{i})\), the distribution of \(x\) within stratum \(s_{i}\), be defined as: \[D(x|s_{i})\triangleq\begin{cases}\frac{D(x)}{\int_{s^{\prime}\in s_{i}}D(x^{\prime} )\mathrm{d}x^{\prime}}&x\in s_{i}\\ 0&\text{otherwise}\end{cases}\] We next define a _stratified learning algorithm_. A stratified learning algorithm learns a stratified surrogate of a stratified function by learning each component surrogate independently (given their respective dataset budgets). We use the following notation to denote the operation of a stratified learning algorithm, where \(\vec{n}\) is a vector of sample budgets for each stratum: \[\hat{f}\sim tr(f,D,\vec{n},\ell)\triangleq\left\{\hat{f}_{i}\sim tr(f_{i},D(x| s_{i}),\vec{n}_{i},\ell)\right\}\] We formalize stratified surrogate sample allocation with the following optimization problem: \[\operatorname*{arg\,min}_{\vec{n}}\mathbb{E}_{\hat{f}\sim tr(f,D,\vec{n},\ell )}\left[\operatorname*{\mathbb{E}}_{\pi\sim D}\left[\ell\left(\hat{f}(x),f(x) \right)\right]\right]\text{ such that }\sum_{i}\vec{n}_{i}\leq n \tag{1}\] The objective of this problem is to find a vector of per-stratum sample budgets \(\vec{n}\) that in the expectation over the outcomes of the stratified surrogate learning algorithm (the outer expectation) minimize the expected loss over the data distribution (the inner expectation), subject to a constraint that the total number of samples used is no more than \(n\). ### Complexity-Guided Stratified Surrogate Dataset Selection In this section, our goal is to solve Equation (1). To solve this optimization problem, we need to model the relationship between the sample budget afforded to the learning algorithm for each stratum and the error of the resulting surrogate. We leverage the PAC learning framework for neural networks to derive a conservative probabilistic upper bound on the error of the surrogate. We then solve this optimization problem with our derived upper bound in place of our original objective. #### 3.2.1. PAC Learning To reason about the error of a surrogate, we use the _probably approximately correct_ (PAC) learning framework (Valiant, 1984). The PAC learning framework bounds the number of examples needed to learn a surrogate as a function of the allowable error threshold for the surrogate. 
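Read operationally, the stratified learning algorithm defined above is just a loop that trains each stratum's surrogate in isolation on its own sample budget. A minimal sketch (ours; `fit` is a hypothetical stand-in for neural-network training under the loss \(\ell\)) follows.

```python
def train_stratified_surrogate(program, strata, sample_stratum, budgets, fit):
    """Learn one surrogate per stratum independently (sketch of the stratified
    learning algorithm tr(f, D, n, l)).
    - strata: membership predicates s_i derived from the program's paths
    - sample_stratum(s_i, n_i): draws n_i inputs from D(x | s_i)
    - budgets: the per-stratum sample budget vector n
    - fit(xs, ys): trains and returns a surrogate on labeled examples."""
    surrogates = []
    for s_i, n_i in zip(strata, budgets):
        xs = sample_stratum(s_i, n_i)
        ys = [program(x) for x in xs]   # label by running the original program
        surrogates.append((s_i, fit(xs, ys)))
    return surrogates
```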
Equation (2) defines a given function \(f\) as _probably approximately correctly learnable_(Valiant, 1984) (abbreviated as learnable) for a given learning algorithm \(tr\) and loss function \(\ell\) if for all distributions \(D\), with probability \(1-\delta\) the learning algorithm returns a surrogate \(\hat{f}\) that approximately matches the original function \(f\) over the distribution \(D\) (i.e., the expectation of the error is bounded by \(\epsilon\)): \[\forall D,\epsilon\in(0,1),\delta\in(0,1).\ \exists n.\Pr_{\hat{f}\sim tr(f,D,n, \ell)}\left(\operatorname*{\mathbb{E}}_{x\sim D}\left[\ell\left(\hat{f}(x),f( x)\right)\right]\leq\epsilon\right)\geq 1-\delta \tag{2}\] #### 3.2.2. Neural Network Sample Complexity Measures It is an open problem to determine the exact relationship between the number of samples \(n\) and the target error threshold \(\epsilon\) in the PAC bound (Equation (2)) for neural networks on arbitrary target functions \(f\). Rather than use the exact relationship, we use an upper bound on \(\epsilon\) as a function of \(n\), and minimize the induced upper bound. Arora et al. (2019) and Agarwala et al. (2021) present such an upper bound for learning analytic functions \(f\) with neural networks. Agarwala et al. define a _sample complexity measure_\(\zeta(f)\in\mathbb{R}_{\geq 0}\), where higher values denote functions that require more samples \(n\) to learn \(f\) to a given error \(\epsilon\). With this sample complexity measure \(\zeta(f)\), Equation (2) holds for all analytic \(f\), \(n\), \(D\), \(\epsilon\), and \(\delta\) with: \[\exists K.\ \epsilon\leq K\sqrt{\frac{\zeta(f)+\log(\delta^{-1})}{n}} \tag{3}\] where \(K\) is an unknown constant. Agarwala et al. define \(\zeta(f)\) using the _tilde_\(\tilde{f}\) of \(f\), defined as follows for univariate functions: \[f(x)=\sum_{k=0}^{\infty}a_{k}x^{k}\qquad\qquad\tilde{f}(x)\triangleq\sum_{k=0}^{ \infty}|a_{k}|x^{k} \tag{4}\] The tilde measures the magnitude of each coefficient of \(f\)'s analytic representation; this is a measure of the influence of hard-to-model higher-order terms. We work with the following generalization of the tilde to multivariate analytic functions, where \(\vec{x}\|1\) denotes concatenating a \(1\) to \(\vec{x}\): \[f(\vec{x})=\sum_{k=0}^{\infty}\sum_{v\in V_{k}}a_{v}\prod_{i=1}^{k}\left( \beta_{v,i}\cdot\vec{x}\|1\right)\qquad\qquad\tilde{f}(x)=\sum_{k=0}^{\infty} \left(\sum_{v\in V_{k}}|a_{v}|\prod_{i=1}^{k}\left\|\beta_{v,i}\right\|_{2} \right)x^{k} \tag{5}\] Agarwala et al. present the multivariate generalization; we contribute the novel generalization to \(\vec{x}\|1\), which allows us to handle functions that are not analytic around \(0\) such as log. With the definition of the tilde, we now present Agarwala et al.'s core theorem, which says that the tilde induces a sample complexity measure for analytic functions: Theorem 1 ().: _For a sufficiently wide (see Arora et al. (2019, Theorem 5.1)) 2-layer neural network trained with gradient descent for sufficient steps (ibid.), if \(\vec{f}\) is analytic, \(\vec{x}\) is on the \(\hat{d}\)-dimensional unit sphere, and \(\ell\) is 1-Lipschitz, then \(f(\vec{x})\) is learnable in the sense of Equations (2) and (3) with:_ \[\zeta(f)=\tilde{f}^{\prime}(1)^{2}\] We present the proof of this theorem in Appendix B. The proof is a novel extension to inputs \(\vec{x}\|1\) of Agarwala et al. (2021)'s proof. #### 3.2.3. 
Complexity-Guided Stratified Surrogate Dataset Selection Coming back to the stratified surrogate sample allocation problem (Equation (1)), our goal is to find per-stratum sample budgets \(\vec{n}\) that minimize the expectation of error of the stratified surrogate. To help solve this optimization problem, we can refactor Equation (1) to separate out each stratum as follows: \[\operatorname*{arg\,min}_{\vec{n}}\operatorname*{\mathbb{E}}_{s_{i}\sim\int_{x\in s_{i}}D(x)\mathrm{d}x}\left[\operatorname*{\mathbb{E}}_{\hat{f}_{i}\sim tr(f_{i},D(x|s_{i}),\vec{n}_{i},\ell)}\left[\operatorname*{\mathbb{E}}_{x\sim D(x|s_{i})}\left[\ell\left(\hat{f}_{i}(x),f_{i}(x)\right)\right]\right]\right]\text{ such that }\sum_{i}\vec{n}_{i}\leq n \tag{6}\] This refactoring exploits that a stratified learning algorithm learns the surrogate for each stratum independently:2 we can decompose the expected loss of the learning algorithm into the expectation over strata (the outermost expectation in Equation (6)) of the expectation over the outcomes of the surrogate learning algorithm on that stratum (the middle expectation) of the expected loss over the data distribution on that stratum (the inner expectation). Footnote 2: Specifically, we decompose the innermost expectation in Equation (1) over strata using the law of total expectation, move the expectation over strata to the outside using that expectation is linear, then rewrite the expectation over the stratified learning algorithm to be the expectation over the single stratum under consideration, using that the stratified learning algorithm learns the surrogate for each stratum independently. #### 3.2.4. Predicted Error of a Surrogate Instead of optimizing Equation (6) directly, our approach is to optimize the conservative probabilistic upper bound \(\epsilon_{i}\) given by the PAC framework for each surrogate. We define the _predicted error_ \(\hat{\epsilon}_{f_{i},n_{i},\delta_{i}}\) of a stratified surrogate component to be the upper bound (with probability \(1-\delta_{i}\)) of the error of the surrogate \(\hat{f}_{i}\) against the function \(f_{i}\). Concretely, the predicted error is the error for a given \(n_{i}\) and \(\delta_{i}\) assuming that Equation (3) is tight with \(K=1\) (the value of \(K\) cancels out in our analysis, so this choice is just for notational convenience): \[\hat{\epsilon}_{f_{i},n_{i},\delta_{i}}\triangleq\sqrt{\frac{\zeta(f_{i})+ \log(\delta_{i}^{-1})}{n_{i}}} \tag{7}\] We then replace the expectation of error in Equation (6) in each stratum with the predicted error of that stratum, resulting in the objective that our approach optimizes: \[\operatorname*{arg\,min}_{\vec{n}}\operatorname*{\mathbb{E}}_{s_{i}\sim\int_{x\in s_{i}}D(x)\mathrm{d}x}\left[\hat{\epsilon}_{f_{i},\vec{n}_{i},\delta_{i}}\right]\text{ such that }\sum_{i}\vec{n}_{i}\leq n \tag{8}\] The objective of this problem is to find a vector of per-stratum sample budgets \(\vec{n}\) that in the expectation over strata (the outer expectation) minimize the predicted error of the surrogate for that stratum, subject to a constraint that the total number of samples used is no more than \(n\). Finally, we can solve this optimization problem.
For a given stratified function \(f\), sample budget \(n\), and per-stratum failure probabilities \(\delta_{i}\): **Theorem 2**.: _Equation (8) is minimized at:_ \[\vec{n}_{i}=n\frac{\left(\left(\int_{x\in s_{i}}D(x)\mathrm{d}x\right)\sqrt{ \zeta(f_{i})+\log(\delta_{i}^{-1})}\right)^{\frac{2}{3}}}{\sum_{j=1}^{c}\left( \left(\int_{x\in s_{j}}D(x)\mathrm{d}x\right)\sqrt{\zeta(f_{j})+\log(\delta_{ j}^{-1})}\right)^{\frac{2}{3}}} \tag{9}\] Theorem 2 defines how much data our complexity-guided sampling approach samples from each stratum. Specifically, data is sampled from each stratum proportionally to: \[\left(\left(\int_{x\in s_{i}}D(x)\mathrm{d}x\right)\sqrt{\zeta(f_{i})+\log( \delta_{i}^{-1})}\right)^{\frac{2}{3}} \tag{10}\] This term incorporates the frequency of that stratum (\(\int_{x\in s_{i}}D(x)\mathrm{d}x\)), the complexity of that stratum (\(\zeta(f_{i})\)), and a term from the failure probability \(\delta_{i}\). We present the proof of Theorem 2 in Appendix A. For convenience, throughout the rest of this paper we assume that all \(\delta_{i}\) are set to be equal (\(\forall i\), \(j\). \(\delta_{i}=\delta_{j}\)). Because each surrogate training is independent, this induces an overall PAC failure probability \(\delta=1-\prod_{i}(1-\delta_{i})\). #### 3.2.5. Tightness of the Predicted Error Optimization We note that the optimal solution to Equation (8) is not necessarily the optimal solution to Equation (1). First, optimizing the predicted error is not the same as optimizing the expectation of error: specifically, there is a gap between the optimal solution to Equation (8) and the optimal solution to Equation (1). We note that assuming that the per-example loss is bounded by some value \(L\), the expectation of error found by the optimal \(\vec{n}\) for Equation (8) is bounded by: \[\operatorname*{\mathbb{E}}_{s_{i}\sim\int_{x\in s_{i}}D(x)\mathrm{d}x}\left[( 1-\delta_{i})\sqrt{\frac{\zeta(f_{i})+\log(\delta_{i}^{-1})}{\vec{n}_{i}}}+ \delta_{i}L\right] \tag{10}\] Second, the bound on the predicted error itself may be loose. We note that while the predicted error itself may be a loose bound on the error, our approach does not require exact values from these bounds, but instead compares the predicted error of each different component of the stratified function to minimize the overall predicted error. ## 4. Turaco: Programs as Stratified Functions In this section we present Turaco, a programming language in which programs denote learnable stratified functions. We provide a program analysis for Turaco programs that calculates an upper bound on the complexity of each component of the stratified function that the program denotes. ### Syntax and Standard Interpretation Figure 3 presents the syntax of Turaco, a loop-free language similar to IMP (Winskel, 1993). A Turaco program \(p\) takes a list of inputs \(x\), executes a top-level statement \(s\), and returns a single variable \(x\). Statements \(s\) are skips, sequences, assignments, or if statements. Expressions \(e\) are variables \(x\), floating-point values \(v\), binary operations, or unary operations. Turaco supports analytic functions (e.g., sin, exp), including those which are analytic only on a subset of the reals (e.g., log). We restrict the supported operations to those required to implement the evaluation in Section 5. Standard execution semantics.Figure 4 presents the big-step evaluation relation for expressions in Turaco. 
The expression relation \(\langle\sigma,e\rangle\Downarrow v\) says that under variable store \(\sigma\) (assigning values to all variables in \(e\)), the expression \(e\) evaluates to value \(v\). These semantics are standard to IMP-like languages with the exception of that for \(\texttt{log}\{b\}\{e\}\): note that the expression \(\texttt{log}\{b\}\{e\}\) takes an additional parameter \(b\) and requires \(|b-v|<b\). We discuss this requirement in Section 4.2 and Appendix E. The big-step evaluation relation for statements and full Turaco programs are standard and are presented in Appendix C. ### Complexity Analysis We now present a program analysis that gives an upper bound on the complexity of _traces_ of Turaco programs, sequences of statements without if statements. The analysis uses two core concepts: a _complexity interpretation_ of expressions to calculate an upper bound on the tilde of expressions (Section 3.2.2), and a standard _dual-number execution_(Griewank and Walther, 2008; Wengert, 1964) of the complexity interpretation to calculate the derivative of the upper bound on the tilde, which as we show below is also an upper bound on the derivative of the tilde. The result of the dual-number execution allows us to upper bound the complexity of a trace of a Turaco program. #### 4.2.1. Program Analysis First we walk through the rules of the program analysis, presented as a big-step evaluation relation. Figure 5 presents the relation used to calculate the tilde for expressions in Turaco. The relation \(\langle\tilde{\sigma},e\rangle\Downarrow(\tilde{v},\tilde{v}^{\prime})\) says that under the variable complexity mapping \(\tilde{\sigma}\) (mapping variables to tuples with their respective tilde and tilde derivative), the expression \(e\) has \(\tilde{e}\leq\tilde{v}\) and \(\tilde{e}^{\prime}\leq\tilde{v}^{\prime}\). Broadly, we define the rules in Figure 5 using the definition of the tilde, the fact that the tilde is compositional (as we prove in Section 4.2.2) and the definition of a dual-number execution. For Figure 4. Big-step evaluation relation for expressions in Turaco. Figure 3. Syntax of Turaco. instance, the tilde of a constant \(v\) is the absolute value \(|v|\) of that constant with a derivative of 0, and the tilde of \(e_{1}+e_{2}\) is the sum of the tilde of each expression with a derivative of the sum of their derivatives. A slightly more complex rule is that of \(\sin(e)\), which computes \(\sin(x)=\sum_{n=0}^{\infty}\frac{(-1)^{n}}{(2n+1)!}x^{2n+1}\). Thus, \(\widetilde{\sin}(x)=\sum_{n=0}^{\infty}\left|\frac{(-1)^{n}}{(2n+1)!}\right|x^ {2n+1}=\sum_{n=0}^{\infty}\frac{x^{2n+1}}{(2n+1)!}=\sinh(x)\). The derivative is then \(\sinh^{\prime}(x)=x^{\prime}\cosh(x)\). Because we know \(\langle\tilde{\sigma},e\rangle\ \tilde{\Downarrow}\ (\tilde{v},\tilde{v}^{\prime})\) and because the tilde is compositional (Lemma 4.1), we can plug in \(\tilde{v}\) and \(\tilde{v}^{\prime}\) to get \((\sinh(\tilde{v}),\tilde{v}^{\prime}\cosh(\tilde{v}))\). The most complex rule is the rule for \(\log\{b\}\langle e\rangle\). To handle that \(\log(x)\) is not analytic around 0, \(\log\{b\}\langle e\rangle\) expands \(\log(x)\) around \(x=b\). The value of \(b\) is a nuisance parameter that must be set to allow the expansion around \(x=b\) to converge for all inputs (inducing the \(|b-v|<b\) requirement in Figure 4) while minimizing the overall program complexity. 
Note that the condition \(b>\tilde{v}\sqrt{b^{2}+1}\) can always be satisfied by applying the identity \(\log(x)=\log\bigl{(}\frac{x}{c}\bigr{)}+\log(c)\). Figure 6 presents the relation for calculating the tilde of all variables computed by traces (branch-free statements) in Turaco. The relation \(\langle\tilde{\sigma},t\rangle\ \tilde{\Downarrow}\ \tilde{\sigma}^{\prime}\) says that under the variable complexity mapping \(\tilde{\sigma}\), executing the trace \(t\) computes variables with tildes and tilde derivatives upper-bounded by those of \(\tilde{\sigma}^{\prime}\). \[\frac{\langle\{x_{i}\mapsto(1,1)\},t\rangle\ \tilde{\Downarrow}\ \tilde{ \sigma}\qquad\tilde{\sigma}(x)=(\tilde{v},\tilde{v}^{\prime})}{\zeta_{\tilde{ \Downarrow}}\ (t,x)\leq\tilde{v}^{\prime 2}}\] Figure 5: Tilde relation for expressions in Turaco. Figure 6: Tilde relation for traces in Turaco. Figure 7: Complexity relation for traces in Turaco. Figure 7 presents the complexity relation for traces in Turaco. The relation \(\zeta_{\hat{\|}}\left(t,x\right)\leq z\) says that the trace \(t\) has complexity upper bounded by \(z\) for computing variable \(x\) (under the assumptions in Agarwala et al. (2021)). Figure 8 presents the trace collection relation for Turaco statements. The relation \(\langle\tau,s\rangle\leadsto\tau^{\prime}\) says that under the trace mapping \(\tau\) (mapping paths that reach this statement to the trace of statements executed thus far), executing the statement \(s\) can result in possible paths and corresponding traces \(\tau^{\prime}\). Figure 9 presents the trace collection relation for Turaco programs. The trace collection relation \(\langle\fun\left(x_{0},x_{1}\ldots,x_{n}\right)\left\{s\;;\return\;x\right\}\rangle\leadsto\tau\) says that executing the program can result in possible paths and corresponding traces \(\tau\). #### 4.2.2. Tilde Calculus This section presents the core lemma stating that the upper bound on the tilde is compositional, and that the derivative of the upper bound is an upper bound on the derivative. The bounds on the tilde are from Agarwala et al. (2021); we extend these bounds to also bound the derivative of the tilde. The tilde and its derivative have upper bounds that are compositional with respect to \(f\): \[f(\vec{x})=g(\vec{x})+h(\vec{x}) \Rightarrow\forall x\geq 0.\,\tilde{f}(x)\leq\tilde{g}(x)+ \tilde{h}(x)\wedge\tilde{f}^{\prime}(x)\leq\tilde{g}^{\prime}(x)+\tilde{h}^{ \prime}(x)\] \[f(\vec{x})=g(\vec{x})\cdot h(\vec{x}) \Rightarrow\forall x\geq 0.\,\tilde{f}(x)\leq\tilde{g}(x)\cdot \tilde{h}(x)\wedge\tilde{f}^{\prime}(x)\leq\tilde{g}^{\prime}(x)\tilde{h}(x)+ \tilde{g}(x)\tilde{h}^{\prime}(x)\] \[f(\vec{x})=g(h(\vec{x})) \Rightarrow\forall x\geq 0.\,\tilde{f}(x)\leq\tilde{g}\Big{(} \tilde{h}(x)\Big{)}\wedge\tilde{f}^{\prime}(x)\leq\tilde{g}^{\prime}\Big{(} \tilde{h}(x)\Big{)}\cdot\tilde{h}^{\prime}(x)\] (when \[\tilde{h}(x)\] is in the radius of convergence of \[g\] ) The proof of this lemma is presented in Appendix B. ### Soundness This section proves that the Turaco complexity analysis is sound: that it computes an upper bound on the true complexity of learning a trace. We prove this by induction on expressions and traces. Our approach is based on the observation that at a given program point the value of each variable was computed by some function \(f_{x}\) applied to the input. We use the notation \(\{f_{x}\}\) as shorthand for \(\{f_{x}\mid x\in\sigma\}\), a set of functions indexed by \(x\in\sigma\). 
the tilde and tilde derivative of each of these functions (evaluated at 1, as in Theorem 1).

Figure 8. Trace collection relation for Turaco statements, using \(\cdot\) to denote string concatenation. Figure 9. Trace collection relation for Turaco programs, using \(\cdot\) to mean the empty string.

We use the notation \(\tilde{\sigma}\vdash\{f_{x}\}\) to denote the predicate that each \(f_{x}\) has tilde and tilde derivative bounded by \(\tilde{\sigma}\):
\[\tilde{\sigma}\vdash\{f_{x}\}\Leftrightarrow\forall x\in\tilde{\sigma}.\left(\tilde{\sigma}(x)=(\tilde{v},\tilde{v}^{\prime})\Rightarrow\left(0\leq\tilde{f}_{x}(1)\leq\tilde{v}\wedge 0\leq\tilde{f}_{x}^{\prime}(1)\leq\tilde{v}^{\prime}\right)\right)\]
We also note that the standard execution semantics big-step relation \(\Downarrow\), both for expressions and for traces, is a function. We use \(\llbracket\cdot\rrbracket\) to refer to that function for expressions and \(\llbracket\cdot\rrbracket_{x}\) to refer to that function for traces followed by taking the value of the variable \(x\):
\[\llbracket e\rrbracket(\sigma)=v\Leftrightarrow\langle\sigma,e\rangle\Downarrow v\]
\[\llbracket t\rrbracket_{x}(\sigma)=v\Leftrightarrow\langle\sigma,t\rangle\Downarrow\sigma^{\prime}\wedge\sigma^{\prime}(x)=v\]
We use the notation \(\circ\) to denote function composition. The functions denoted by expressions and traces have multiple inputs; in this context, composition with a set of functions \(\{f_{x}\}\) is defined as follows:
\[(\llbracket\cdot\rrbracket\circ\{f_{x}\})(\sigma)\triangleq\llbracket\cdot\rrbracket(\{x\mapsto f_{x}(\sigma)\})\]
Now we state the core lemmas and theorem of correctness for the Turaco analysis:

Lemma 4.2.: _The tilde big-step relation for expressions upper bounds the tilde and tilde derivative:_
\[\left(\langle\tilde{\sigma},e\rangle\ \tilde{\Downarrow}\ (\tilde{v},\tilde{v}^{\prime})\wedge\tilde{\sigma}\vdash\{f_{x}\}\right)\Rightarrow\left(\widetilde{\llbracket e\rrbracket\circ\{f_{x}\}}(1)\leq\tilde{v}\wedge\widetilde{\llbracket e\rrbracket\circ\{f_{x}\}}^{\prime}(1)\leq\tilde{v}^{\prime}\right)\]

Lemma 4.3.: _The tilde big-step relation for traces upper bounds the tilde and tilde derivative:_
\[\left(\langle\tilde{\sigma},t\rangle\ \tilde{\Downarrow}\ \tilde{\sigma}^{\prime}\wedge\tilde{\sigma}\vdash\{f_{x}\}\right)\Rightarrow\tilde{\sigma}^{\prime}\vdash\left\{\llbracket t\rrbracket_{y}\circ\{f_{x}\}\right\}\]

Theorem 4.4.: _The complexity relation computes an upper bound on the true complexity:_
\[\zeta_{\tilde{\Downarrow}}\ (t,x)\leq z\Rightarrow\zeta_{\left(\llbracket t\rrbracket_{x}\right)}\leq z\]

The proofs of Lemmas 4.2 and 4.3 and Theorem 4.4 are presented in Appendix D.

### Precision

We note that the analysis is a sound but imprecise approximation of complexity, in that the upper bound it computes is not tight. For example, consider the expression \(x\)+\((-x)\): under the Turaco analysis, \(\langle\{x\mapsto(1,1)\},x\)+\((-x)\rangle\ \tilde{\Downarrow}\ (2,2)\) even though \(\llbracket x\)+\((-x)\rrbracket(\sigma)=0\).

### Extensions

Our implementation extends Turaco to support vector-valued variables, applying all operations elementwise. Following Agarwala et al. (2021), we define the complexity of learning a vector-valued function to be the sum of the complexity of learning each output component. Our implementation also supports other syntactic sugar including a minus operation and division by constants.
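To make the analysis concrete, here is a minimal Python sketch of the tilde interpretation as dual-number arithmetic over (tilde, tilde-derivative) pairs. This is our own illustration rather than the Turaco implementation, it covers only a handful of the rules from Figure 5, and the helper names are ours. As a sanity check, it reproduces the \((2,2)\) result for \(x\)+\((-x)\) from the Precision discussion above and the path complexities of the synthetic \(\sin(5y)\) vs. \(\sin(2y)\) example used later in Section 5.1.2.

```python
import math

# Dual-number tilde interpretation (Section 4.2): each value is a pair (t, dt)
# bounding the tilde of the computed function and the derivative of that bound,
# both evaluated at 1. Program inputs are initialized to (1, 1).
def var():      return (1.0, 1.0)
def const(v):   return (abs(v), 0.0)
def add(a, b):  return (a[0] + b[0], a[1] + b[1])
def mul(a, b):  return (a[0] * b[0], a[1] * b[0] + a[0] * b[1])
def neg(a):     return (a[0], a[1])      # |-1| leaves the tilde bound unchanged (minus is sugar)
def sin_(a):    return (math.sinh(a[0]), a[1] * math.cosh(a[0]))

def complexity(val):
    """Complexity bound of the trace computing `val` (Figure 7): the square of
    the tilde derivative."""
    return val[1] ** 2

x, y = var(), var()
print(add(x, neg(x)))                        # (2.0, 2.0): the imprecision example above
print(complexity(sin_(mul(const(5.0), y))))  # ~137678: the l path of the Section 5.1.2 example
print(complexity(sin_(mul(const(2.0), y))))  # ~57:     the corresponding r path
```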
_Loops._ Our implementation of Turaco also supports fixed- and bounded-length loops, though they are not required for any case study in this paper (thus we do not present them here). However, unbounded loops pose a challenge because our approach trains a distinct surrogate per path, which is not possible with unbounded loops. This restriction to statically bounded-length loops is a common feature of analyses that reason about numerical approximation, including reliability analyses (Boston et al., 2015; Carbin et al., 2013; Misailovic et al., 2014) and floating-point error analyses (Darulova and Kuncak, 2014; Magron et al., 2017; Solovyev et al., 2018). Reasoning about loops with dynamic, input-dependent bounds requires separate techniques (e.g., Boston et al. (2015)).

## 5. Evaluation

In this section we evaluate our complexity-guided sampling approach, using Turaco's complexity analysis to determine sampling rates for a range of benchmark programs. We demonstrate that complexity-guided sampling consistently results in more accurate surrogates than those trained using baseline distributions (the frequency distribution of paths and the uniform distribution of paths). We also demonstrate that such an improvement in surrogate error can result in an improvement in execution speed in an application with a maximum error threshold. In Section 5.1 we first evaluate across both a set of real-world programs, showing expected error improvements in practice, and also a set of synthetic programs, showing cases where the complexity-guided sampling approach shines and cases where it fails. Then in Section 5.2 we dive into a case study on a specific large-scale program, a demonstration 3D renderer (Lettier, 2019), such as forms the core of a graphics rendering pipeline for a movie or 3D game engine (Christensen et al., 2018; Tatarchuk, 2006).

### Evaluation Across Programs

In this section we evaluate our complexity-guided sampling approach, using Turaco's complexity analysis to determine sampling rates, on a range of benchmark programs. We evaluate both a set of real-world programs, showing expected error improvements in practice, and a set of synthetic programs, showing cases where the complexity-guided sampling approach shines and cases where it fails.

_Methodology._ Following the input scale assumptions from Agarwala et al. (2021), we sample each input variable uniformly from \([-1,1]\) or \([0,1]\) as appropriate for the program. We insert scale factors as appropriate given the expected data distribution of the original program. We then uniformly sample inputs from these ranges. This induces both a data distribution over inputs and a path frequency distribution. For all benchmarks other than the Jmeint benchmark, we evaluate using training data budgets at 10 points logarithmically spaced between 10 and 1000. For the Jmeint benchmark, which is more data intensive, we evaluate using training data budgets at 10 points logarithmically spaced between 1000 and 10000. When computing the complexity-guided sampling distribution, we use \(\delta=0.1\) (the sketch below illustrates how this distribution is computed). For each path in each benchmark, we train a 1-hidden-layer MLP with 1024 hidden units and a ReLU activation, using 10,000 steps of Adam with learning rate 0.0005 and batch size 128. We run the training for 5 trials.
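As a sketch of how the complexity-guided sampling distribution is computed in this methodology, the following Python snippet implements the closed-form allocation of Theorem 2 (Equation (9)). The function name is ours, and splitting the overall failure probability \(\delta=0.1\) evenly across the \(c\) strata (\(\delta_i = 1-(1-\delta)^{1/c}\)) is our assumption; with that assumption, plugging in the Luminance path frequencies and complexities yields sampling rates close to the complexity-guided distribution reported in Table 2.

```python
import math

def complexity_guided_rates(freqs, zetas, delta=0.1):
    """Per-stratum sampling rates from Theorem 2 (Equation (9)): each stratum's
    share of the budget is proportional to
    (frequency * sqrt(complexity + log(1/delta_i)))^(2/3)."""
    c = len(freqs)
    delta_i = 1.0 - (1.0 - delta) ** (1.0 / c)  # assumed even split of the overall delta
    weights = [(p * math.sqrt(z + math.log(1.0 / delta_i))) ** (2.0 / 3.0)
               for p, z in zip(freqs, zetas)]
    total = sum(weights)
    return [w / total for w in weights]

# Luminance benchmark: path frequencies and complexities for paths ll, rl, rr.
rates = complexity_guided_rates([0.50, 0.10, 0.40], [0.01, 1.21, 9.00])
print([round(100 * r, 1) for r in rates])  # ~[36.9, 14.0, 49.1]; cf. Table 2
# Multiplying these rates by the training data budget n gives the per-path sample counts.
```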
\begin{table} \begin{tabular}{c c c|c c|c c} \hline \hline \multicolumn{2}{c|}{**Benchmark**} & \multicolumn{5}{c}{**Baseline**} \\ **Program** & **LoC** & **Paths** & **Frequency** & **Frequency** & **Uniform** & **Uniform** \\ & & & **(Predicted)** & **(Empirical)** & **(Predicted)** & **(Empirical)** \\ \hline Luminance & 14 & 3 & 2.58\% & 15.01\% & 6.97\% & 15.17\% \\ Huber & 13 & 3 & 0.49\% & 8.15\% & 1.93\% & 9.54\% \\ BlackScholes & 15 & 2 & 4.43\% & 3.61\% & 1.30\% & 4.00\% \\ Camera & 69 & 3 & 2.83\% & 0.56\% & 0.22\% & 1.36\% \\ EQuake & 34 & 2 & 7.45\% & 2.25\% & 7.45\% & 2.25\% \\ Jmeint & 176 & 18 & 2.34\% & 0.01\% & 8.44\% & 1.02\% \\ \hline Geomean & & & 3.33\% & 4.81\% & 4.33\% & 5.43\% \\ \hline \hline \end{tabular} \end{table} Table 1. Average change in error across all budgets from using complexity-guided sampling compared to baselines on each benchmark (higher values means complexity-guided sampling has lower error). We report both the predicted error (Equation (7)) improvement and the empirical improvement, the geometric mean improvement in error across trials. As in Section 5.2, improvement is defined as the mean percentage error between the predicted error for complexity-guided sampling and the baseline sampling method. #### 5.1.1. Results Table 1 presents the results of the evaluation across 6 benchmark programs: Luminance, Huber, BlackScholes, Camera, Equake, and Jmeint. Table 2 presents path and distribution statistics for each benchmark program. \begin{table} \begin{tabular}{c c c|c c c} \hline \hline \multirow{2}{*}{**Benchmark**} & \multirow{2}{*}{**Path**} & \multirow{2}{*}{**Complexity**} & \multicolumn{2}{c}{**Frequency**} & \multicolumn{1}{c}{**Uniform**} & \multirow{2}{*}{**Complexity**} \\ & & & **Distribution** & & **Distribution** \\ \hline \multirow{4}{*}{**Luminance**} & l1 & 0.01 & 50.00\% & 33.33\% & 36.94\% \\ & r1 & 1.21 & 10.00\% & 33.33\% & 13.98\% \\ & rr & 9.00 & 40.00\% & 33.33\% & 49.07\% \\ \hline \multirow{4}{*}{**Huber**} & l1 & 9.00 & 50.00\% & 33.33\% & 44.25\% \\ & lr & 9.00 & 25.00\% & 33.33\% & 27.88\% \\ & r & 9.00 & 25.00\% & 33.33\% & 27.88\% \\ \hline \multirow{2}{*}{**BlackScholes**} & l & 165.72 & 75.00\% & 50.00\% & 59.34\% \\ & r & 485.23 & 25.00\% & 50.00\% & 40.66\% \\ \hline \multirow{4}{*}{**Camera**} & l1 & 0.86 & 44.54\% & 33.33\% & 36.96\% \\ & lrl & 0.81 & 35.48\% & 33.33\% & 31.63\% \\ & rrr & 9.53 & 19.98\% & 33.33\% & 31.41\% \\ \hline \multirow{2}{*}{**EQuake**} & l & 56.29 & 50.00\% & 50.00\% & 26.99\% \\ & r & 1169.50 & 50.00\% & 50.00\% & 73.01\% \\ \hline \multirow{4}{*}{**EQuake**} & l1 & 7236100.00 & 18.74\% & 5.56\% & 13.25\% \\ & l1rrrrl & 7236100.00 & 5.31\% & 5.56\% & 5.72\% \\ & l1rrrrl & 7236100.00 & 5.31\% & 5.56\% & 5.71\% \\ & l1rrrrl & 7236100.00 & 5.30\% & 5.56\% & 5.71\% \\ & l1rrrrrl & 7236100.00 & 2.52\% & 5.56\% & 3.48\% \\ & l1rrrrrlrl & 7236100.00 & 2.52\% & 5.56\% & 3.48\% \\ & l1rrrrrrl & 7236100.00 & 2.52\% & 5.56\% & 3.48\% \\ & l1rrrrrrlrl & 7236100.00 & 18.67\% & 5.56\% & 13.22\% \\ & rrrrrrrl & 7236100.00 & 5.29\% & 5.56\% & 5.71\% \\ & rrrrrrrlrl & 7236100.00 & 5.29\% & 5.56\% & 5.70\% \\ & rrrrrrrrl & 7236100.00 & 5.34\% & 5.56\% & 5.74\% \\ & rrrrrrrrlrl & 7236100.00 & 2.52\% & 5.56\% & 3.48\% \\ & rrrrrrrrlrl & 7236100.00 & 2.51\% & 5.56\% & 3.47\% \\ & rrrrrrrrl & 7236100.00 & 5.29\% & 5.56\% & 5.70\% \\ & rrrrrrrrlrl & 7236100.00 & 2.53\% & 5.56\% & 3.49\% \\ & rrrrrrrrrlrl & 7236100.00 & 2.52\% & 5.56\% & 3.48\% \\ & rrrrrrrrlrl & 7236100.00 & 2.51\% & 5.56\% & 3.47\% \\ & 
rrrrrrrrrrl & 7236100.00 & 5.29\% & 5.56\% & 5.70\% \\ & rrrrrrrrrlrl & 7236100.00 & 2.53\% & 5.56\% & 3.49\% \\ & rrrrrrrrrlrl & 7236100.00 & 2.52\% & 5.56\% & 3.48\% \\ \hline \hline \end{tabular} \end{table} Table 2. Benchmark statistics. We find that across this selection of programs, from predicted error improvements of 3.33% against the frequency sampling baseline, complexity-guided sampling results in an empirical improvement of 4.81%; from predicted error improvements of 4.33% against the uniform sampling baseline, complexity-guided sampling results in an empirical improvement of 5.43%. Such a magnitude of decrease in error can significantly affect a system end-to-end. For example, consider Table 3. This table shows the results of a hyperparameter search to choose the fastest-to-execute neural network that meets an error threshold of 10%3 (this table is a replication of Table 4 in Appendix A of Renda et al. (2021), with an added column of "Error - 5%"). Renda et al. chose the network with size 64, which has a 1.57\(\times\) speedup; however, a decrease in error of 5% would result instead in choosing the network with size 32, which has a 2.01\(\times\) speedup, a 28% improvement in application performance. Footnote 3: This methodology is also used by Esmaeilzadeh et al. (2012). _Luminance_. The luminance benchmark is that of Section 2, and is presented in Figure 0(a). This benchmark has 3 paths: when \(\mathsf{sunPosition}<0\) (path 1l with complexity 0.01), when \(0<\mathsf{sunPosition}<0.1\) (path \(r\)1 with complexity 1.2), and when \(\mathsf{sunPosition}>0.1\) (path \(r\)r with complexity 9). \begin{table} \begin{tabular}{c|c c c} \hline \hline **Embedding Width** & **Error** & **Error - 5\%** & **Speedup over W-128** \\ \hline 128 & 8.9\% & 8.5\% & 1\(\times\) \\ 64 & 9.5\% & 9.0\% & 1.57\(\times\) \\ 32 & 10.1\% & 9.6\% & 2.01\(\times\) \\ 16 & 10.8\% & 10.3\% & 2.22\(\times\) \\ \hline \hline \end{tabular} \begin{tabular}{c|c} \hline \hline fun(rate,time,sptprice,strike, \\ \(\mathtt{\mathtt{\mathtt{\mathtt{\mathtt{\mathtt{\mathtt{\mathtt{\mathtt{\mathtt{\mathtt{ \mathtt{\mathtt{\mathtt{\mathtt{\mathtt{\mathtt{\mathtt{\mathtt{\mathtt{ }}}}}}}}{}}{}}{{}}{{}{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{{ }}}}} } } }}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}}\) \\ \\ Against the frequency baseline, compared to a predicted error improvement of 2.58%, complexity-guided sampling results in an improvement of 15.01%. Against the uniform baseline, compared to a predicted error improvement of 6.97%, complexity-guided sampling results in an improvement of 15.17%. _Huber._ Figure 10a presents the Huber benchmark, which calculates the Huber loss for \(x\in[-1,1]\) and \(\mathsf{delta}\in[0,1]\). This benchmark has 3 paths: when \(-\mathsf{delta}<x<\mathsf{delta}\) (path 11 with complexity 9), when \(x<-\mathsf{delta}\) (path 1r with complexity 9), and when \(x>\mathsf{delta}\) (path \(r\) with complexity 9). Against the frequency baseline, compared to a predicted error improvement of 0.49%, complexity-guided sampling results in an improvement of 8.15%. Against the uniform baseline, compared to a predicted error improvement of 1.93%, complexity-guided sampling results in an improvement of 9.54%. 
_BlackScholes._ Figure 10b presents the BlackScholes benchmark, which performs a part of the Black Scholes option pricing model (with \(\mathsf{otype}\) positive for puts and negative for calls), for inputs uniform in \([0,1]\) (other than \(\mathsf{otype}\), which is uniform in \([-1,1]\)). This benchmark is a fragment of the BlackScholes benchmark in the AxBench benchmark suite [Yazdanbakhsh et al., 2017]. This benchmark has 2 paths: when \(\mathsf{otype}>0\) (path 1 with complexity 165.72; for puts) and when \(\mathsf{otype}<0\) (path \(r\) with complexity 485.23; for calls). Against the frequency baseline, compared to a predicted error improvement of 4.43%, complexity-guided sampling results in an improvement of 3.61%. Against the uniform baseline, compared to a predicted error improvement of 1.30%, complexity-guided sampling results in an improvement of 4.00%. _Camera._ Figure 18 in Appendix F presents the Camera benchmark, which performs a part of the conversion from blackbody radiator color temperature to the CIE 1931 x,y chromaticity approximation function, for inputs \(x\in[-1,1]\), \(y\in[-1,1]\), \(\mathsf{invKilok}\in[0,1]\), and \(T\in[0.1,0.5]\) (note that \(T\) is used exclusively to determine the path). This benchmark is included in the Frankencamera platform [Adams et al., 2010], and is based off of an implementation by Kang et al. [2002]. This benchmark has three paths: when \(T<0.2222\) (path 11 with complexity 0.86), when \(0.2222<T<0.4\) (path 1r1 with complexity 0.81), and when \(0.4<T\) (path _rrr_ with complexity 9.53). Against the frequency baseline, compared to a predicted error improvement of 2.83%, complexity-guided sampling results in an improvement of 0.56%. Against the uniform baseline, compared to a predicted error improvement of 0.22%, complexity-guided sampling results in an improvement of 1.36%. _EQuake._ Figure 19 in Appendix F presents the EQuake benchmark, which computes the displacement of an object after one timestep in an earthquake simulation. This benchmark is a fragment of the 183.equake benchmark in the SPECfp2000 benchmark suite [Henning, 2000]. This benchmark has 2 paths: when \(t>0.5\) (path 1 with complexity 56.29) and when \(t<0.5\) (path \(r\) with complexity 1169.50). Against both the frequency and uniform baselines, compared to a predicted error improvement of 7.45%, complexity-guided sampling results in an improvement of 2.25%. _Jmeint._ Figure 20 in Appendix F presents the Jmeint benchmark, which calculates whether two 3D triangles intersect, and several auxiliary variables related to their intersection. All inputs are sampled from \([-1,1]\). This benchmark is a fragment of the Jmeint benchmark in the AxBench benchmark suite [Yazdanbakhsh et al., 2017]. This benchmark has 18 paths; each path has the same complexity of 72361000, but with different frequencies. Against the frequency baseline, compared to a predicted error improvement of 2.34%, complexity-guided sampling results in an improvement of 0.01%, a negligible change in error. Against the uniform baseline, compared to a predicted error improvement of 8.44%, complexity-guided sampling results in an improvement of 1.02%. We note that this benchmark has the highest complexity of any evaluated program (requiring more samples and still resulting in higher overall errors), and also that empirically some paths do appear to be significantly easier to learn despite the matching complexities. #### 5.1.2. 
Analysis: Complexity-Guided Sampling Successes

In this section, we demonstrate examples of where the complexity-guided sampling technique results in significantly better error than baselines.

_Complex paths._ The first case is when some paths are significantly more complex than others: neither the frequency baseline nor the uniform baseline takes path complexity into account, so we expect both baselines to undersample the complex path. Figure 11a presents an example of such a case. In this example, the complexity of the \(x<0.5\) path (\(l\)) is \(137677\), while the complexity of the \(x\geq 0.5\) path (\(r\)) is \(57\). The frequency of both the \(l\) and the \(r\) paths is \(50\%\). The complexity-guided sampling approach samples the \(l\) path with probability \(93\%\) and the \(r\) path with probability \(7\%\). Against both the frequency and uniform baselines, compared to a predicted error improvement of \(22.72\%\), complexity-guided sampling results in an improvement of \(10.9\%\).

_Skewed frequency distribution._ The second case is when some paths are significantly more frequent than others. This confers advantages over the uniform baseline, which does not take path frequency into account, and also over the frequency baseline, which does not take into account the functional form of the learning bound in Equation (3) (i.e., that error decreases proportionally to the square root of the number of samples). Figure 11b presents an example of such a case. In this example, all paths have a complexity of \(14\), while the frequencies are either \(10\%\) (for paths \(l\), \(rl\), and \(rrl\)) or \(70\%\) (for path \(rrr\)). The complexity-guided sampling approach samples each of the \(10\%\)-frequency paths with probability \(15\%\), and the \(70\%\)-frequency path with probability \(55\%\). Against the frequency baseline, compared to a predicted error improvement of \(3.75\%\), complexity-guided sampling results in an improvement of \(28.38\%\). Against the uniform baseline, compared to a predicted error improvement of \(14.08\%\), complexity-guided sampling results in an improvement of \(20.76\%\). Note that this type of skewed path distribution matches the distribution of paths in the renderer evaluation in Section 5.2 (which has rare paths below \(1\%\) of the input data distribution), and there too the empirical improvement significantly exceeds the predicted improvement.

#### 5.1.3. Analysis: Complexity-Guided Sampling Failures

In this section, we demonstrate core examples of where the complexity-guided sampling technique results in significantly worse error than baselines.

_Complexity imprecision._ The first case is when the complexity is too loose an upper bound on the resulting error of a surrogate of that function. In this case, the complexity-guided sampling approach can oversample from the corresponding path. Figure 11c presents an example of such a case. In this example, the complexity of the \(l\) path is \(18638\) and the complexity of the \(r\) path is \(16\). Though we are not aware of any tighter bounds on the complexity of learning \(\sin(4x)\) for \(x\in[0.5,1]\), in practice we find that neural networks are able to learn this function to low error with relatively few samples. The frequency of each path is \(50\%\). The complexity-guided sampling approach samples the \(l\) path with probability \(90.9\%\) and the \(r\) path with probability \(9.1\%\).
Against both the frequency and uniform baselines, compared to a predicted error improvement of \(20.88\%\), complexity-guided sampling results in a change of \(-92.59\%\), a significant increase in error.

_Nonuniform complexity._ The second case is when the complexity of a learned function varies significantly across different scales. In this case, the complexity-guided sampling approach can oversample from the corresponding path.

```
fun(x, y) {
  if (x < 0.5) {
    res = sin(5 * y);
  } else {
    res = sin(2 * y);
  }
  return res;
}
```
_(a)_ Success: synthetic example with skewed path complexities (137678 vs. 57) where complexity-guided sampling significantly improves error.

```
fun(x, y) {
  if (x > 0.5) {
    y = sin(4 * x);
  } else {
    y = y * 4;
  }
  return y;
}
```
_(c)_ Failure: synthetic example with a function on which the complexity bound is imprecise.

Figure 11: Examples of complexity-guided sampling successes and failures.

Figure 11d presents an example of such a case. In this example, the complexity of the \(l\) path is 12129 and the complexity of the \(r\) path is 2.38. This causes an issue for complexity-guided sampling because with a large target error (e.g., \(\epsilon>0.01\)), the \(l\) path is essentially zero (and therefore should have low complexity). However, with a small target error (e.g., \(\epsilon<0.00001\)), the \(l\) path is very complex. Because the sample complexity bounds themselves are scale-independent upper bounds, they do not by default incorporate this knowledge. The frequency of each path in this example is \(50\%\). The complexity-guided sampling approach samples the \(l\) path with probability \(92.9\%\) and the \(r\) path with probability \(7.1\%\). Against both the frequency and the uniform baselines, compared to a predicted error improvement of \(22.7\%\), complexity-guided sampling results in a change of \(-351\%\), a \(3.5\times\) increase in error.

_Analysis imprecision._ The third case is when Turaco's analysis of the complexity is imprecise: Theorem 4.4 proves that Turaco's complexity analysis computes an upper bound on the true complexity, but this upper bound may also be loose (as discussed in Section 4.4). In this case, the complexity-guided sampling approach can oversample from the corresponding path. Figure 11e presents an example of such a case. In this example, the calculated complexity of the \(l\) path is \(161604\) and the calculated complexity of the \(r\) path is \(56.6\). If we were to perform algebraic simplification (which Turaco does not), we would find the complexity of the \(l\) path to instead be \(4\). The frequency of each path is 50%. With Turaco's computed complexities, the complexity-guided sampling approach samples the \(l\) path with probability \(93.3\%\) and the \(r\) path with probability \(7.7\%\). Against both the frequency and the uniform baselines, compared to a predicted error improvement of 23.03%, complexity-guided sampling results in a change of \(-491.22\%\), a \(5\times\) increase in error.

### Renderer Demonstration

In this section we present a case study of our complexity-guided sampling approach and complexity analysis. The program under study is a demonstration 3D renderer (Lettier, 2019), such as forms the core of a graphics rendering pipeline for a movie or 3D game engine (Christensen et al., 2018; Tatarchuk, 2006). Figures 12a and 12b show scenes that the renderer generates.
We demonstrate that the sampling and analysis techniques in Sections 3 and 4 consistently result in more accurate surrogates than those trained using baseline distributions (the frequency distribution of paths and the uniform distribution). Figure 12. Ground-truth (top) and surrogate renderings (bottom) of scenes generated by the renderer. Compared to training surrogates on the frequency distribution of paths, complexity-guided sampling decreases error by 17%. Compared to training on the uniform distribution of paths, complexity-guided sampling decreases error by 44%. These improvements in error correspond to perceptual improvements in the generated images, as shown in Figures 11(c) to 11(e). #### 5.2.1. Program Under Study The full renderer program is a 2750 lines-of-code C++ program, which invokes 38 different GLSL shader programs totaling 2446 lines of code. We learn a surrogate of a \begin{table} \begin{tabular}{c c|c c c c c c c c} \hline \hline **Path** & **Irrllr** & **Irrlrlr** & **Irrlrlr** & **Irrrrlr** & **Irrrrl** & **Irrrrrrr** & **rrrrlr** & **rrrrlr** & **rrrrlr** & **rrrrlrlr** \\ \hline **Lines of Code** & 17 & 17 & 17 & 18 & 18 & 18 & 17 & 17 & 17 \\ **Complexity** & 6115 & 5806 & 6272 & 6401 & 6084 & 6562 & 8804 & 8433 & 8993 \\ **Description** & Twilight & Twilight & Twilight & Nighttime & Nighttime & Nighttime & Daytime & Daytime & Daytime \\ & Water & Smoke & Solids & Water & Smoke & Solids & Water & Smoke & Solids \\ \hline **Dataset** & **Distr.** & **Irrllr** & **Irrlrlr** & **Irrrrlr** & **Irrrrrl** & **Irrrrrrr** & **rrrrrrlr** & **rrrrrrlr** & **rrrrrrlr** & **rrrrrrlr** \\ \hline Front Day & Freq. & & & & & & 5.0\% & 7.9\% & 87.1\% \\ & Com. & & & & & & 11.0\% & 14.7\% & 74.3\% \\ \hline Front Night & Freq. & & & & & 5.0\% & 7.9\% & 51.4\% & & & 35.6\% \\ & Com. & & & & 8.9\% & 11.9\% & 42.4\% & & & 36.9\% \\ \hline Top Day & Freq. & & & & & & & 6.7\% & 13.1\% & 80.1\% \\ & Com. & & & & & & 12.9\% & 19.8\% & 67.4\% \\ \hline Top Night & Freq. & 0.16\% & 0.06\% & 1.2\% & 0.3\% & 0.1\% & 2.4\% & 6.3\% & 12.9\% & 76.5\% \\ & Com. & 0.87\% & 0.45\% & 3.3\% & 1.4\% & 0.7\% & 5.3\% & 11.1\% & 17.7\% & 59.2\% \\ \hline Front & Freq. & & & & 2.5\% & 4.0\% & 25.7\% & 2.5\% & 4.0\% & 61.4\% \\ & Com. & & & & 5.2\% & 7.0\% & 24.9\% & 5.8\% & 7.8\% & 49.4\% \\ \hline Top & Freq. & 0.08\% & 0.03\% & 0.6\% & 0.2\% & 0.1\% & 1.2\% & 6.5\% & 13.0\% & 78.3\% \\ & Com. & 0.56\% & 0.29\% & 2.1\% & 0.9\% & 0.5\% & 3.5\% & 11.7\% & 18.4\% & 62.1\% \\ \hline Day & Freq. & & & & & & & 5.9\% & 10.5\% & 83.6\% \\ & Com. & & & & & & & 11.9\% & 17.4\% & 70.7\% \\ \hline Night & Freq. & 0.08\% & 0.03\% & 0.6\% & 2.7\% & 4.0\% & 26.9\% & 3.1\% & 6.5\% & 56.1\% \\ & Com. & 0.50\% & 0.26\% & 1.9\% & 5.2\% & 6.7\% & 24.4\% & 6.4\% & 10.3\% & 44.3\% \\ \hline All & Freq. & 0.04\% & 0.02\% & 0.3\% & 1.3\% & 2.0\% & 13.5\% & 4.5\% & 8.5\% & 69.8\% \\ & Com. & 0.33\% & 0.17\% & 1.2\% & 3.4\% & 4.4\% & 16.0\% & 8.5\% & 12.8\% & 53.2\% \\ \hline \hline \end{tabular} \end{table} Table 4. Top: the identifier, lines of code, complexity, and description of each path present in our datasets. Bottom: the distribution (abbreviated distr.) of each path across each dataset: the frequency (Freq.) of each observed path, and the complexity-guided sampling rate (Com.) of that path. 
\begin{table} \begin{tabular}{c|c c c c c c c c c|c} \hline \hline **Baseline** & **Front** & **Front** & **Top** & **Top** & **Top** & **Top** & **Day** & **Night** & **All** & **Mean** \\ \hline Frequency & 5\% & -3\% & -1\% & 48\% & 3\% & 31\% & 2\% & 21\% & 27\% & 17\% \\ Uniform & 39\% & 31\% & 36\% & 40\% & 42\% & 61\% & 34\% & 52\% & 52\% & 44\% \\ \hline \hline \end{tabular} \end{table} Table 5. Average decrease in error across all budgets from using complexity-guided sampling compared to baselines on each dataset (higher values means complexity-guided sampling has lower error). section of one core shader, totaling 60 lines of code.4 Figure 21 in Appendix G presents the code for the renderer case study. Footnote 4: Lines 278 through 337 of [https://github.com/lettier/3d-game-shaders-for-beginners/blob/29700/demonstration/shaders/fra](https://github.com/lettier/3d-game-shaders-for-beginners/blob/29700/demonstration/shaders/fra) mgment/base.frag. _Input-output specification._ This program is a shader which assigns colors to pixels in the image based on the scene geometry, materials, lights, and other properties. The program is called for each pixel that is rendered in the image. Each invocation the of program takes as input a set of 11 fixed-size vectors, totaling 35 inputs. The program returns as output a set of 4 fixed-size vectors, totaling 8 outputs. These outputs represent two RGBA colors, the first representing the base color of the pixel, and the second representing the color and intensity of a specular map at that pixel. _Scenes and datasets._ We evaluate the renderer on four different scenes, which we combine into nine different datasets. Figures 11(a) and 11(b) present two of the four different scenes under consideration; the four scenes are all combinations of views from the front and top, during the day and night. We combine these scenes into nine datasets: a dataset with each scene, a dataset combining each scene from each angle (front day and front night, top day and top night), a dataset combining each scene from each time of day, and a dataset combining all scenes. Figure 22 in Appendix G presents the full set of scenes under study. _Paths._ The program is a conjunction of 48 different paths, 9 of which are exercised by the renderer. The top part of Table 4 presents statistics about the paths under study, showing the identifier (a trace of 1 and \(r\) characters denoting which branch of each if statement the path takes), the lines of code in the corresponding trace, and the complexity of the corresponding trace according to the analysis in Section 4.2. The paths are broken up into a path for rendering smoke particles from the chimney, water particles in the river, and the solids of the ground and house. Each set of paths is duplicated for twilight, nighttime, and daytime. Within each time of day, the smoke paths are the least complex, followed by water then solids. Across different times, twilight paths are the least complex, followed by nighttime then daytime. Figure 13 shows side-by-side comparisons of the three classes of paths: water, smoke, and solids. In each of these images, one path returns red for all pixels while the other paths return black for all pixels. The base scene is the front daytime scene in Figure 11(a). Table 4 also presents the observed distribution and the complexity-guided distribution of paths for each dataset. 
In general, the twilight paths are rarer than the nighttime paths, which are rarer than the daytime paths: this is because data collection for the nighttime scenes Figure 13. Daytime scene with each different path highlighted red, and all others black. and into the morning. For all datasets, the smoke paths are rarer than the water paths, which are in turn rarer than the solids paths; this is purely due to the scene geometry. #### 5.2.2. Surrogate Training and Deployment Methodology To create and deploy a surrogate of the renderer, we train a surrogate of each path, then create a stratified surrogate which branches on the set of path conditions and applies the corresponding surrogate. Our goal is to compare the errors achieved by training on the complexity-guided distribution of paths against those of baseline distributions of paths. We compare the approaches across different training datasets, different total numbers of training data points, and evaluating across different evaluation sets, all with multiple trials. For the design of each surrogate, we use a simple MLP architecture with a single hidden layer of 512 units and a ReLU activations. This architecture matches that of Agarwala et al. (2021), except using 512 rather than 1000 hidden units (we found that the accuracy of each path surrogate plateaued by 512 hidden units). We train the surrogate using the Adam optimizer with a learning rate of 0.0001 and a batch size of 256 for 50,000 steps. We run 5 trials of all experiments, and report the error as an arithmetic mean when reported in isolation for a given surrogate (e.g., as in Figure 15) and a geometric mean error when comparing relative error rates across different settings (e.g., as in the headline error improvement numbers in Table 5). #### 5.2.3. Surrogate Errors Table 5 presents the geometric mean decrease in error of using complexity-guided path sampling compared to each baseline, on each dataset. Across most datasets, complexity-guided path sampling results in lower error than both frequency-based path sampling and uniform path sampling. On datasets with few paths (Front Day) and in which all paths are well represented (minimum 5% frequency), the gap is minimal, and frequency-based path sampling matches or outperforms complexity-guided path sampling. On datasets with more and rarer paths (e.g., Top Night), the gap widens and complexity-guided path sampling outperforms frequency-based path sampling; we discuss this phenomenon in Section 5.1.2. On all datasets, complexity-guided path sampling outperforms uniform path sampling. Figure 14 presents the correlation between the predicted error (Equation (7)) and the observed empirical error for each dataset, showing a strong correlation. The left plot shows this correlation for surrogates trained with frequency-based path sampling, and the right plot shows this correlation for surrogates trained with uniform path sampling. The x axis is the decrease in predicted error (specifically, the mean percentage error between the predicted error for complexity-guided sampling Figure 14. Correlation between predicted and empirical surrogate error decreases for the renderer case study. and the sampling method in the plot), and the y axis is the decrease in empirical error (the mean percentage error between the error observed from complexity-guided sampling and the sampling method in the plot). Each point represents a different dataset (e.g., front-day, top-night, etc.). The red dotted line shows the line of best fit. 
For the frequency-based surrogates, the Pearson correlation is \(r=0.86\) and the Spearman correlation is \(\rho=0.97\). For the uniform surrogates, the Pearson correlation is \(r=0.82\) and the Spearman correlation is \(\rho=0.87\). Figure 15 presents the error of surrogates on each dataset. Each plot shows the error for a different dataset. Each plot has three different lines, respectively showing the error of each surrogate training distribution (complexity-guided, frequency-based, and uniform). Each x axis is the total training data budget. Each y axis is the error of the resulting stratified surrogate. For a given dataset budget, our complexity-guided sampling approach results in lower error than baseline sampling approaches. Generally, increasing this dataset budget also results in lower error for all approaches. These two approaches to decreasing surrogate error (better sampling techniques and sampling more data) are not in conflict with each other. In Figure 15, the sampling approaches converge in error with large dataset budgets. This convergence is due to our evaluation methodology: following the convention of prior work which established these bounds (Agarwala et al., 2021; Arora et al., 2019), we use a fixed width for all neural Figure 15. Errors of stratified surrogates of each dataset. networks, resulting in neural networks that saturate in error with large datasets. An alternative methodology would be to grow the width of the neural network along with the size of the dataset, requiring a full hyperparameter search at each network scale. With such a methodology, the errors would not plateau in the way that they do in Figure 15. #### 5.2.4. Visualization Figure 12 presents the renderings generated by the surrogates for the Front Day and the Top Night scene. These budgets correspond to the smallest budget that lead to a validation error less than 2%, which was qualitatively chosen as a threshold around which surrogate renders converge on the ground truth (i.e., the rendered scenes visually approach the quality of the original scene). The top row shows the Front Day scene using surrogates trained on the Front Day dataset. In this scene, the complexity-guided and frequency-based surrogates result in similarly accurate renders, with the primary difference being that the frequency-based surrogate rendering has slightly darker green shadows on the front of the house. This similarity is expected given the similar errors observed in Table 5. Uniform sampling results in an inaccurate render, as expected given its high error. The bottom row show the Top Night scene using surrogates trained on the dataset combining all scenes. In this scene the complexity-guided surrogate has the most accurate render, as expected given the errors observed in Table 5. The frequency-trained surrogate colors everything much darker purple. The uniform-trained surrogate in contrast colors everything much more tan. In sum, the error improvements in Table 5 correspond with improvements in the rendered images. ## 6. Related Work In this section we survey related work for each contribution. Optimal stratified samplingOptimal stratified sampling is a classic area in statistics (Thompson, 2012). Most work in this domain focuses on optimal parameter estimation, and uses stratified sampling to reduce the variance of estimates by ensuring sufficient independent samples are taken from each subpopulation. 
Our approach is novel in the assumptions we make for training stratified surrogates of programs, and in the specific sample complexity bounds we base our results on. Santner et al. (Santner et al., 2018) survey sampling techniques for computer experiments. Chapter 5.2.3 discusses stratified random sampling in particular, showing optimality criteria for sampling for unbiased estimators. These approaches are generic for minimizing the variance of estimators, and do not consider specifically training a neural network. These approaches also do not consider the different complexity of different strata. Cortes et al. (Cortes et al., 2019) present an active learning approach for learning in the regime where the input space is partitioned into separate regions (strata, using our terminology) and a separate hypothesis (surrogate) is trained of each, and derive a similar allocation of data points. This approach has several differences from our approach. First, it assumes a different form for sample complexity and derives correspondingly different sampling bounds than ours. The definition of complexity (\(\zeta\) in our formalism) that Cortes et al. use is a function of the number of hypotheses in the hypothesis class, the total number of data points used, and the number of data points for a given stratum that have been queried thus far; it is not a function of any complexity metric of the function being learned. More concretely, Cortes et al.'s approach assumes a small, finite hypothesis class (the set of possible outputs of the training algorithm) of binary classifiers, and has runtime proportional to the size of the hypothesis class, requires samples proportional to the log of the size of the hypothesis class, and bounds the error of the result as a function of the log of the size of the hypothesis class. In their evaluation, Cortes et al. use a hypothesis class of a set of 3000 random hyperplanes. However, this approach is not tractable when using a neural network as the hypothesis class: a neural network with 43,000 32-bit floating point weights (as in the case study in Section 5.2) induces a hypothesis class of size \(10^{414,217}\). This results in intractable runtime and large or meaningless bounds. Beyond these distinctions, Cortes et al.'s approach is also an active learning approach that determines whether or not to query a label of a given data point for an input stream of data points, whereas our approach operates offline. Cortes et al.'s approach is thus a better fit when learning stratified functions of unknown complexity (e.g., non-analytic functions) using a finitely sized hypothesis class (not a neural network), and is targeted at the online setting when given a sampler of the overall data distribution but not one for each stratum. Our approach is a better fit when learning _a priori_ known stratified analytic functions with neural networks, and is targeted at the offline setting when given a sampler for each stratum. Sample complexity program analysisProgram analysis is a broad set of techniques to determine properties of programs (Cousot and Cousot, 1977; Nielson et al., 1999). Our analysis in Section 4.2 is a novel nonstandard interpretation calculating the tilde, combined with a standard implementation of forward-mode automatic differentiation (Griewank and Walther, 2008; Wengert, 1964) and a standard symbolic execution which executes all paths in the program (Cadar et al., 2008; King, 1976). Bao et al. 
(2012) present a program analysis that decomposes programs into continuous regions, with the goal of characterizing the sensitivity of each continuous region to input noise. This analysis computes a different notion of complexity than ours, and does not represent the sample complexity of learning a surrogate of each region. Hoffmann and Hofmann (2010) present a program analysis that calculates the algorithmic complexity of a program. This complexity again does not lead to bounds on the sample complexity of learning a surrogate of the program. ## 7. Assumptions and Limitations Our contributions in Sections 3 and 4 make assumptions about the programs under study, the functions that those programs denote, and the surrogate training algorithms. Here we document these assumptions and note possible failure modes for our techniques. Assumptions imported from prior workOur sample complexity results are subject to all assumptions from the prior work that gives the sample complexity bounds for neural networks that we use (Agarwala et al., 2021). These sample complexity bounds only apply to analytic functions. They further assume that inputs come from the unit sphere; this does not match many practical applications, including those in Section 5. Finally, these sample complexity bounds assume that the neural network under study is a 2-layer, sufficiently wide neural network trained with SGD with an infinitely small step size, using a 1-Lipschitz loss function. Despite these assumptions, Agarwala et al. (2021, Appendix B.2) empirically verify that the sample complexity bounds hold. We also show in Sections 2 and 5.2 that the theoretical sample complexity bounds correlate with empirical sample complexity results. Complexity-guided samplingThe first assumption is that we know the distribution of inputs ahead of time \(D(x)\), both in terms of the distribution of strata \(\int_{x\in s_{i}}D(x)\mathrm{d}x\) and the distribution of inputs within a given stratum \(D(x|s_{i})\). The second assumption is that optimizing the upper bound of the per-stratum loss results in a reasonable optimum for the combined surrogate. The third is the assumption we make that \(\forall i,j\). \(\delta_{i}=\delta_{j}\), which we make to ensure a closed-form solution; this is not guaranteed to be optimal. Convergence in the limit of strataIn the limit of infinite strata (\(\lim_{c\to\infty}\)), the complexity-guided sampling approach induced by Theorem 2 converges to sampling each stratum with probability proportional to \(\left(\int_{x\in s_{i}}D(x)\mathrm{d}x\right)^{\frac{2}{3}}\) (see Section A.1). In the limit, this distribution does not account for complexity. However in practice the complexity still guides sampling. Section 5.2 evaluates an example with all complexities \(\zeta(f_{i})\geq 5899\) and \(\delta=0.01\). For the \(\log(\delta_{i}^{-1})\) term to match the contribution of the complexity term there would need to be \(\approx 10^{2600}\) strata; this example only has 9. Thus while in the infinite limit of strata our approach is complexity-agnostic, in practice it is dominated by the complexity. We note that this property necessarily occurs with any underlying PAC-style bound with a term that sums complexity and \(\log\bigl{(}\frac{1}{\delta}\bigr{)}\) (e.g., those of Vapnik and Chervonenkis (1971) and Valiant (1984)): almost surely, paths are sampled with probability that does not depend on their complexity (see Note A.2). Program analysisThe main assumption here is that Agarwala et al. 
(2021)'s algebra on tilde functions results in a sufficiently precise upper bound on the tilde. This is not always the case, as discussed in Section 4.4. We also note that Turaco's program analysis could be make more precise in multiple well-known ways. For instance, we could perform constant propagation, algebraic simplification, and automatic inference of constraints (which would be useful for log expressions). We have excluded such extensions for the sake of simplicity of presentation. Analysis compute costFor a given path, computing the tilde and its derivative has essentially the same cost as executing the path twice. Thus in the most pessimistic case this would allocate 2 more samples per path to the baseline approaches. However, this pessimistic case assumes that sampling a program input is free, which it may not be: for example the renderer case study in Section 5.2 requires executing the video game engine (including running physics simulations) to get a program input for the shader of which we learn a surrogate. Stratified surrogatesWe provide sample complexity bounds for constructing stratified surrogates, assuming that for a given program every path is a different function. This assumes both that it is tractable to compute which stratum a given input resides in before applying the surrogate. This evaluation-time stratum check must not preclude the use of the surrogate for its downstream task. We therefore adopt a standard modeling assumption in the approximate computing literature: that precisely determining paths is an acceptable cost during approximate program execution (Carbin et al., 2013; Sampson et al., 2011).5;6 Footnote 5: “Enref... prohibit[s] approximate values in conditions that affect control flow.” (Sampson et al., 2011). This also assumes that there are a tractable number of paths, which excludes programs with a large number of if statements or loops. The assumption that there are a tractable number of paths is a common assumption among techniques like concolic testing (Cadar et al., 2008; King, 1976; Sen et al., 2005). Similar to prior work, we find that in practice the programs we evaluate only use a fraction of the syntactically possible paths (e.g., the Jmeint benchmark in Section 5 uses 18 out of 1728 possible paths). Empirical evaluation limitationsWe note two limitations in our empirical evaluations. The first is that some evaluations are in the ultra-low-data regime, where rounding to an integer number of data samples affects the accuracy. The second is that the \(\delta\) parameter is set to an arbitrary value. ## 8. Conclusion We present an approach to allocating samples among strata to train stratified neural network surrogates of stratified functions. We also present a programming language, Turaco, in which all programs are learnable stratified functions and a program analysis to determine the complexity of learning surrogates of those programs. Our results take a step towards a cohesive, end-to-end methodology for programming using surrogates of programs. ## Acknowledgments We would like to thank Alana Marzoev, Cambridge Yang, Charles Yuan, Ellie Cheng, Eric Atkinson, Jesse Michel, and the anonymous reviewers for their helpful comments and suggestions. This work was supported by the National Science Foundation (CCF-1918839 and CCF-2217064), and the Defense Advanced Research Projects Agency (#HR00112190046). Yi Ding is supported by the National Science Foundation under Grant No. 2030859 to the Computing Research Association for the CIFellows Project. 
Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the sponsors.
2309.09169
Throughput Analysis of IEEE 802.11bn Coordinated Spatial Reuse
Multi-Access Point Coordination (MAPC) is becoming the cornerstone of the IEEE 802.11bn amendment, alias Wi-Fi 8. Among the MAPC features, Coordinated Spatial Reuse (C-SR) stands as one of the most appealing due to its capability to orchestrate simultaneous access point transmissions at a low implementation complexity. In this paper, we contribute to the understanding of C-SR by introducing an analytical model based on Continuous Time Markov Chains (CTMCs) to characterize its throughput and spatial efficiency. Applying the proposed model to several network topologies, we show that C-SR opportunistically enables parallel high-quality transmissions and yields an average throughput gain of up to 59% in comparison to the legacy 802.11 Distributed Coordination Function (DCF) and up to 42% when compared to the 802.11ax Overlapping Basic Service Set Packet Detect (OBSS/PD) mechanism.
Francesc Wilhelmi, Lorenzo Galati-Giordano, Giovanni Geraci, Boris Bellalta, Gianluca Fontanesi, David Nuñez
2023-09-17T05:46:12Z
http://arxiv.org/abs/2309.09169v2
# Throughput Analysis of IEEE 802.11bn Coordinated Spatial Reuse ###### Abstract Multi-Access Point Coordination (MAPC) is becoming the cornerstone of the IEEE 802.11bn amendment, alias Wi-Fi 8. Among the MAPC features, Coordinated Spatial Reuse (C-SR) stands as one of the most appealing due to its capability to orchestrate simultaneous access point transmissions at a low implementation complexity. In this paper, we contribute to the understanding of C-SR by introducing an analytical model based on Continuous Time Markov Chains (CTMCs) to characterize its throughput and spatial efficiency. Applying the proposed model to several network topologies, we show that C-SR opportunistically enables parallel high-quality transmissions and yields an average throughput gain of up to 59% in comparison to the legacy 802.11 Distributed Coordination Function (DCF) and up to 42% when compared to the 802.11ax Overlapping Basic Service Set Packet Detect (OBSS/PD) mechanism. ## I Introduction As the IEEE 802.11be amendment reaches its final stages and the advent of commercial Wi-Fi 7 certified products in early 2024 approaches, the groundwork is being laid for the next phase of Wi-Fi development, IEEE 802.11bn [1, 2, 3, 4]. This new standard will pave the way for Wi-Fi 8 devices and marks a significant milestone in the evolution of Wi-Fi by targeting ultra-high reliability and opening doors to a range of challenging and emerging use cases [5, 6]. Before the emergence of 802.11bn, Wi-Fi focused on enhancing throughput, spectral efficiency, and reducing latency [7, 8, 9, 10, 11]. However, these improvements lacked advanced coordination mechanisms among Access Points (APs). To address the issues arising from this coordination gap and enhance reliability, Wi-Fi 8 will pioneer Multi-Access Point Coordination (MAPC) [12], enabling APs across different Basic Service Sets (BSSs) to explicitly collaborate and optimize spectrum resource utilization. One notable feature within the MAPC framework is Coordinated Spatial Reuse (C-SR), which enables concurrent operations of multiple devices from distinct BSSs through adjusted transmit power management. This feature represents an improvement over its predecessor, the 802.11ax Overlapping BSS Packet Detect (OBSS/PD) SR mechanism [13, 14].1 In 802.11ax SR, devices can utilize more aggressive Clear Channel Assessment (CCA) policies, albeit at the cost of limiting their power during SR-based transmission opportunities (TXOPs). Unlike 802.11ax SR, which disregards interference at the receiver, C-SR leverages the exchanged information among APs to determine the best transmit power for both concurrent transmitters, based on the anticipated signal quality of their transmissions. Furthermore, the synchronization of simultaneous transmissions within the same TXOP grants enhanced control over power adjustment. Footnote 1: For convenience, we refer to ‘802.11ax OBSS/PD SR’ as ‘802.11ax SR’. In this paper, we introduce a new model based on Continuous Time Markov Chains (CTMCs) to analyze the throughput and spectral efficiency of IEEE 802.11bn C-SR. We further apply our model to several two-BSS topologies and explore the potential performance enhancements that C-SR offers compared to \((i)\) the legacy 802.11 Distributed Coordination Function (DCF) operation with Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) and \((ii)\) 802.11ax SR. 
Our main takeaways can be summarized as follows:

* In all the scenarios considered, C-SR yields a higher mean throughput than legacy DCF operation (up to 59%) and 802.11ax SR (up to 42%).
* When BSSs are clearly separated (e.g., with each BSS occupying a separate cubicle), C-SR provides the highest gains in throughput and spectral efficiency when transmission contenders are in close proximity. This can be attributed to C-SR's capacity to facilitate parallel transmissions while controlling interference.
* In more challenging deployments, such as those where transmitters and receivers are randomly positioned within the same cubicle, C-SR opportunistically leverages either alternate or parallel transmissions to achieve higher throughput than both legacy DCF and 802.11ax SR.

## II Related Work and Contribution

Performance studies have remained a constant throughout the evolution of Wi-Fi, evaluating potential features for IEEE
2309.15568
Regulating spin dynamics in magnetic nanomaterials
Magnetic nanomaterials can be used in the construction of devices for information processing and memory storage. For this purpose, they have to combine two contradictory properties: on the one hand, the magnetization must stay frozen for a long time, so that the stored information is preserved; on the other hand, the magnetization must allow quick changes, as required for fast erasing of memory and rewriting of new information. Methods for resolving this dilemma are suggested, based on triggering resonance, dynamic resonance tuning, and the quadratic Zeeman effect. These methods make it possible to effectively regulate spin dynamics in such materials as magnetic nanomolecules and magnetic nanoclusters.
V. I. Yukalov, E. P. Yukalova
2023-09-27T10:48:14Z
http://arxiv.org/abs/2309.15568v1
# Regulating spin dynamics in magnetic nanomaterials

###### Abstract

Magnetic nanomaterials can be used in the construction of devices for information processing and memory storage. For this purpose, they have to combine two contradictory properties: on the one hand, the magnetization must stay frozen for a long time, so that the stored information is preserved; on the other hand, the magnetization must allow quick changes, as required for fast erasing of memory and rewriting of new information. Methods for resolving this dilemma are suggested, based on triggering resonance, dynamic resonance tuning, and the quadratic Zeeman effect. These methods make it possible to effectively regulate spin dynamics in such materials as magnetic nanomolecules and magnetic nanoclusters.

Magnetic nanomaterials find wide application in numerous spintronic devices. There are several types of such materials. Among the best known are magnetic nanomolecules [1, 2, 3, 4, 5, 6, 7, 8, 9] and magnetic nanoclusters [10, 11, 12, 13, 14]. There is also the so-called magnetic graphene, represented by graphene flakes containing defects [15, 16], including various magnetic defects [17, 18, 19, 20]. Trapped atoms, interacting through dipolar and spinor forces, form clouds possessing effective spins [21, 22, 23, 24, 25, 26, 27]. Quantum dots, often called artificial nanomolecules [28], can also have magnetization [29, 30, 31]. There are, in addition, nanomolecules, such as propanediol C\({}_{3}\)H\({}_{8}\)O\({}_{2}\) and butanol C\({}_{4}\)H\({}_{9}\)OH, that have no magnetization in their ground state but can be polarized and can then keep the magnetization for very long times, from hours to months, depending on temperature [32, 4]. To be concrete, in the present paper we consider magnetic nanomolecules and magnetic nanoclusters, although a similar treatment applies to other nanomaterials.

Magnetic nanomolecules have a degenerate ground state in which the molecular spin can point either up or down. These directions are separated by strong magnetic anisotropy, with an anisotropy barrier of \(10-100\) K. Below the blocking temperature of \(1-10\) K, the spin is frozen in one of the directions. The total molecular spin can take different values, between \(1/2\) and \(27/2\). Magnetic nanoclusters have many properties similar to magnetic nanomolecules. A magnetic nanocluster behaves as a magnetic object with a large spin that sums the spins of the particles composing the cluster. The number of particles forming a cluster can be up to \(10^{5}\). The magnetization blocking temperature is \(10-100\) K. A cluster radius is limited by the _coherence radius_, which lies between 1 nm and 100 nm. To form a single-domain magnet, the cluster radius must not exceed the coherence radius; otherwise the sample splits into several domains.

To be used in memory devices, magnetic nanomolecules or nanoclusters need to possess two mutually contradictory properties. On the one hand, to keep the memory for a long time, the spin has to be well frozen, which can be achieved with strong magnetic anisotropy. On the other hand, to quickly change the magnetization, as is necessary for erasing memory or rewriting the stored information, the magnetic anisotropy that hinders spin motion should be absent. Thus magnetic anisotropy leads to a dilemma: it is necessary for keeping the memory, but it is an obstacle to spin regulation.
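For a rough orientation of these numbers (the values in this estimate are illustrative and not taken from the paper): the easy-axis anisotropy term \(-DS_{z}^{2}\) of Eq. (1) below lowers the energy of the spin projection \(m\) by \(Dm^{2}\), so the barrier separating the up and down orientations is of order \[U\simeq DS^{2}\;.\] For instance, a spin \(S=10\) with a single-ion anisotropy \(D/k_{B}\approx 0.6\) K gives \(U/k_{B}\approx 60\) K, within the quoted range of \(10-100\) K.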
The goal of the paper is to suggest ways of resolving this dilemma. Let us first consider a single nanomagnet, either a nanomolecule or a nanocluster, whose Hamiltonians have practically the same form and differ only in the values of the corresponding parameters. The typical Hamiltonian of a nanomagnet reads \[\hat{H}=-\mu_{S}{\bf B}\cdot{\bf S}-DS_{z}^{2}+E(S_{x}^{2}-S_{y}^{2})\;, \tag{1}\] where \(D\) and \(E\) are the anisotropy parameters. The total magnetic field \[{\bf B}=(B_{0}+\Delta B){\bf e}_{z}+H{\bf e}_{x}+B_{1}{\bf e}_{y} \tag{2}\] contains an external constant magnetic field \(B_{0}\), an additional magnetic field \(\Delta B\) that can be regulated, a feedback magnetic field \(H\) created by the magnetic coil of an electric circuit, and a transverse anisotropy field \(B_{1}\). The electric circuit with a magnetic coil, into which the sample is inserted, is the principal part of the suggested setup. The action of a feedback field, created by the moving spin itself, is the most efficient way of regulating the spin [3, 4, 33, 34, 35].

Let the sample initially be magnetized in the spin-up direction. At low temperature, below the blocking temperature, the spin direction is frozen, even if the external magnetic field is turned so that the sample ends up in a metastable state. The feedback-field equation, obtained from the Kirchhoff equation [33, 34, 35], has the form \[\frac{dH}{dt}+2\gamma H+\omega^{2}\int_{0}^{t}H(t^{\prime})\;dt^{\prime}=-4\pi\eta_{res}\;\frac{dm_{x}}{dt}\;, \tag{3}\] where \(\gamma\) is the circuit attenuation, \(\omega\) the resonator natural frequency, \(\eta_{res}\approx V/V_{res}\) the filling factor, \(V\) the sample volume, and \(V_{res}\) the volume of the resonator coil; the electromotive force is produced by the moving average spin \[m_{x}=\frac{\mu_{S}}{V}\;\langle\;S_{x}\;\rangle\;. \tag{4}\] The spin operators satisfy the Heisenberg equations of motion. Averaging these equations, we look for the time dependence of the average spin components \[x\equiv\frac{\langle S_{x}\rangle}{S}\;,\qquad y\equiv\frac{\langle S_{y}\rangle}{S}\;,\qquad z\equiv\frac{\langle S_{z}\rangle}{S}\;. \tag{5}\] The magnetic anisotropy gives rise, in the equations of spin motion, to terms bilinear in the spin operators, which need to be decoupled. The standard mean-field approximation cannot be applied here, being incorrect for low spins. We use [35] the _corrected mean-field approximation_ \[\langle\ S_{\alpha}S_{\beta}+S_{\beta}S_{\alpha}\:\rangle=\left(2\;-\;\frac{1}{S}\right)\;\langle\ S_{\alpha}\:\rangle\langle\ S_{\beta}\:\rangle \tag{6}\] that is exact for \(S=1/2\) and asymptotically exact for large spins \(S\gg 1\), where it is understood that \(\alpha\neq\beta\). To write the equations of spin motion in a compact form, we introduce the notation for the Zeeman frequency \(\omega_{0}\), the dimensionless regulated field \(b\), the coupling attenuation \(\gamma_{0}\), and the dimensionless feedback field \(h\): \[\omega_{0}\equiv-\;\frac{\mu_{S}}{\hbar}\ B_{0}\;,\qquad b\equiv-\;\frac{\mu_{S}\Delta B}{\hbar\omega_{0}}\;,\qquad\gamma_{0}\equiv\pi\;\frac{\mu_{S}^{2}S}{\hbar V_{res}}\;,\qquad h\equiv-\;\frac{\mu_{S}H}{\hbar\gamma_{0}}\;. \tag{7}\] Also, we define the anisotropy frequencies \[\omega_{D}\equiv(2S-1)\;\frac{D}{\hbar}\;,\qquad\omega_{E}\equiv(2S-1)\;\frac{E}{\hbar}\;,\qquad\omega_{1}\equiv-\;\frac{\mu_{S}}{\hbar}\;B_{1}\;, \tag{8}\] and the anisotropy parameter \[A\equiv\frac{\omega_{D}+\omega_{E}}{\omega_{0}}\;. \tag{9}\]
Then the spin equations acquire the structure \[\frac{dx}{dt}=-\omega_{0}(1+b-Az)y+\omega_{1}z\;,\] \[\frac{dy}{dt}=\omega_{0}(1+b-Az)x-\gamma_{0}hz\;,\qquad\frac{dz}{dt}=2\omega_{E}xy-\omega_{1}x+\gamma_{0}hy\;, \tag{10}\] \[\frac{dh}{dt}+2\gamma h+\omega^{2}\int_{0}^{t}h(t^{\prime})\;dt^{\prime}=4\;\frac{dx}{dt}\;. \tag{11}\] As is seen from these equations, the effective Zeeman frequency is \[\omega_{eff}=\omega_{0}(1+b-Az)\;. \tag{12}\] This frequency is not constant but depends on time, since the term \(Az\) varies with time. The spin polarization varies in the range \(-1<z<1\), hence the term \(Az\) varies in the interval \([-A,A]\). This implies that the detuning varies and can be large, \[\frac{\omega_{eff}-\omega}{\omega_{0}}=b-Az\;. \tag{13}\]

Suppose the initial setup is with the spin up, \(z_{0}=1\), and the external field \(B_{0}\) is turned so that the spin-up direction corresponds to a metastable state. Nevertheless, the spin can be kept in this state for a very long time, being protected by the magnetic anisotropy. Spin reversal can start only when the effective Zeeman frequency is in resonance with the resonator natural frequency \(\omega\). However, the large detuning (13) does not allow the resonance to occur. Assume that we need to reverse the spin at time \(\tau\). To start the spin motion, we can organize resonance at this time \(t=\tau\) by switching on the regulated field \(b=b(t)\) and setting \(b(\tau)=Az_{0}\), so that the detuning (13) is zero at this initial time. This initial resonance triggers the spin reversal, which is why it can be called _triggering resonance_ [36]. The reversal of the longitudinal spin polarization for the triggering resonance, realized at different times \(\tau\), is shown in Fig. 2. Frequencies are measured in units of \(\gamma_{0}\) and time in units of \(1/\gamma_{0}\). Although the triggering resonance quickly initiates the spin reversal, at the last stage, when resonance is lost, long tails appear that slow down the reversal process. The reversal would be much faster if the resonance could be maintained during the whole process of spin reversal. This can be achieved by switching on the regulated field and supporting the resonance by varying \(b(t)\), \[b(t)=\left\{\begin{array}{ll}0,&\quad t<\tau\\ Az_{reg},&\quad t\geq\tau\end{array}\right.. \tag{14}\] Then, until the time \(t=\tau\), the spin is frozen by the anisotropy. Starting from the time \(\tau\), the field \(b(t)\) is varied by tuning \(z_{reg}\) in such a way that \(z_{reg}\) stays close to \(z\), thus diminishing the detuning, \[b(t)-Az(t)=A[\ z_{reg}(t)-z(t)\ ]\to 0\;. \tag{15}\] The time dependence of \(z_{reg}\) can be defined by the spin dynamics in a sample without anisotropy. This method is called _dynamic resonance tuning_ [37]. The spin reversal under this method is faster than under the triggering resonance, and there are no tails of spin polarization, as is seen from Fig. 2.

In the case of a sample containing many nanomagnets, it is necessary to take into account their interactions through dipolar forces. Then the system Hamiltonian \[\hat{H}=-\mu_{S}\sum_{j}{\bf B}\cdot{\bf S}_{j}+\hat{H}_{A}+\hat{H}_{D} \tag{16}\] contains the Zeeman term, the magnetic-anisotropy term \[\hat{H}_{A}=-\sum_{j}D(S_{j}^{z})^{2}\;, \tag{17}\] and the energy of dipolar interactions \[\hat{H}_{D}=\frac{1}{2}\sum_{i\neq j}\;\sum_{\alpha\beta}D_{ij}^{\alpha\beta}S_{i}^{\alpha}S_{j}^{\beta}\;, \tag{18}\] where \(D_{ij}^{\alpha\beta}\) is the dipolar tensor.
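For readers who wish to experiment with the single-nanomagnet dynamics, the following self-contained Python sketch integrates Eqs. (10)-(11) numerically with the triggering-resonance schedule for \(b(t)\) switched on at \(t=\tau\). The integral in Eq. (11) is handled by introducing the auxiliary variable \(u(t)=\int_{0}^{t}h(t^{\prime})dt^{\prime}\). All parameter values are illustrative assumptions, expressed in the units used in the text (frequencies in \(\gamma_{0}\), time in \(1/\gamma_{0}\)); they are not the values behind the figures, and whether and how fast a reversal occurs depends on these choices.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumed for demonstration only).
gamma0 = 1.0                      # spin-resonator coupling attenuation
gamma = 1.0                       # resonator circuit attenuation
omega0 = 10.0                     # Zeeman frequency
omega = 10.0                      # resonator natural frequency, tuned to omega_0
omegaD, omegaE = 25.0, 5.0        # anisotropy frequencies, Eq. (8)
omega1 = 0.1                      # transverse anisotropy frequency (seeds the motion)
A = (omegaD + omegaE) / omega0    # anisotropy parameter, Eq. (9)
tau = 5.0                         # time at which the triggering resonance is switched on
z0 = 1.0                          # initial longitudinal polarization (spin up)

def b(t):
    # Triggering resonance: b jumps to A*z0 at t = tau, so the detuning (13) vanishes then.
    return A * z0 if t >= tau else 0.0

def rhs(t, s):
    # State: (x, y, z, h, u), with u(t) the running integral of h (handles Eq. (11)).
    x, y, z, h, u = s
    w_eff = omega0 * (1.0 + b(t) - A * z)              # Eq. (12)
    dx = -w_eff * y + omega1 * z                        # Eq. (10)
    dy = w_eff * x - gamma0 * h * z
    dz = 2.0 * omegaE * x * y - omega1 * x + gamma0 * h * y
    dh = -2.0 * gamma * h - omega**2 * u + 4.0 * dx     # Eq. (11) solved for dh/dt
    return [dx, dy, dz, dh, h]

sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0, z0, 0.0, 0.0],
                max_step=1e-2, rtol=1e-8, atol=1e-10)
# A reversal would show up as z decreasing from +1 toward -1 after t = tau
# (parameter dependent with the illustrative values above).
print("z(t_final) =", sol.y[2, -1])
```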
The total magnetic field is \({\bf B}=(B_{0}+\Delta B){\bf e}_{z}+H{\bf e}_{x}\;.\) The methods of triggering resonance and dynamic resonance tuning regulate spin dynamics in a system of many nanomagnets in the same way as for a single nanomagnet. The presence of dipolar interactions produces the dephasing width \(\gamma_{2}\), hence the dephasing time is \(1/\gamma_{2}\). However, the spin motion induced by the resonance coupling with a resonant electric circuit is coherent, and the spin reversal happens in a time much shorter than the dephasing time. The reversal time can be as short as \(10^{-11}\) s. Thus dipolar interactions do not hinder the possibility of very fast spin reversal when the suggested methods are used. Moreover, in the presence of dipolar interactions, there appear dipolar spin waves that trigger the initial spin motion and facilitate the spin reversal [4, 33, 34, 35].

One more method allowing for the regulation of spin dynamics is based on the use of the alternating-current quadratic Zeeman effect [26, 27, 38, 39]. Then the Hamiltonian for a system of nanomagnets, \(\hat{H}=\hat{H}_{Z}+\hat{H}_{A}+\hat{H}_{D}\), contains the same anisotropy term (17) and dipolar term (18), but the Zeeman term \[\hat{H}_{Z}=-\mu_{S}\sum_{j}{\bf B}\cdot{\bf S}_{j}+q_{Z}\sum_{j}(S_{j}^{z})^{2} \tag{19}\] includes, in addition to the linear part, the quadratic Zeeman term. The external magnetic field can be taken in the form \({\bf B}=B_{0}{\bf e}_{z}+H{\bf e}_{x}\). Writing down the equations of spin motion shows that the effective Zeeman frequency becomes \(\omega_{eff}=\omega_{0}(1+Az)\), with the effective anisotropy parameter \[A=(2S-1)\;\frac{q_{Z}-D}{\hbar\omega_{0}}\;. \tag{20}\] The coefficient of the alternating-current quadratic Zeeman effect \[q_{Z}(t)=-\;\frac{\hbar\Omega^{2}(t)}{4\Delta_{res}(t)} \tag{21}\] can be varied in time by varying the Rabi frequency \(\Omega(t)\) and the detuning from an internal resonance \(\Delta_{res}(t)\). Changing the quadratic Zeeman-effect coefficient \(q_{Z}(t)\) according to the rule \[q_{Z}(t)=\left\{\begin{array}{ll}0,&\quad t<\tau\\ D,&\quad t\geq\tau\end{array}\right. \tag{22}\] makes it possible to freeze the spin before the time \(\tau\) and, when necessary, to suppress the effective anisotropy term, thus realizing resonance and fast spin reversal. In conclusion, we have suggested several ways of regulating spin dynamics in magnetic nanomaterials, such as magnetic nanomolecules and magnetic nanoclusters. These methods can be employed in spintronics, e.g., for creating memory devices.
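As a brief worked example of how the switching rule (22) can be realized through Eq. (21): to obtain \(q_{Z}=D>0\), the detuning must be negative, \(\Delta_{res}<0\), and the Rabi frequency must satisfy \[\Omega=\sqrt{\frac{4\,|\Delta_{res}|\,D}{\hbar}}\;,\] which follows from inverting Eq. (21); switching this drive on at \(t=\tau\) then implements rule (22). Which of the two quantities, \(\Omega(t)\) or \(\Delta_{res}(t)\), is actually varied in an experiment is an implementation detail left open here.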